LLM Proxy Observability

Powered by Langfuse

  • Total Requests (24h): 1.2M (↑ 12%)
  • API Costs (24h): $4,250 (↓ 8%)
  • Avg Latency: 245ms (↑ 5%)
  • Error Rate: 0.8% (↑ 2%)
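The card metrics above can all be derived from raw proxy request logs. A minimal stdlib-only sketch (the `RequestLog` fields and sample values are illustrative, not the proxy's actual log schema):

```python
from dataclasses import dataclass

@dataclass
class RequestLog:
    latency_ms: float
    cost_usd: float
    ok: bool

# Illustrative sample of proxy request logs
logs = [
    RequestLog(210.0, 0.004, True),
    RequestLog(350.0, 0.012, True),
    RequestLog(175.0, 0.002, False),
]

total_requests = len(logs)
total_cost = sum(r.cost_usd for r in logs)
avg_latency = sum(r.latency_ms for r in logs) / total_requests
error_rate = sum(1 for r in logs if not r.ok) / total_requests

print(total_requests, round(total_cost, 3), round(avg_latency, 1), round(error_rate, 3))
```

The same aggregation, run over a 24-hour window of real logs, yields the four numbers shown on the cards.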

📈 Request Volume (Last 7 days)

  Model          Requests   Tokens   Cost     Status
  gpt-4-turbo    45,231     12.5M    $2,150   Active
  claude-3-opus  32,180     8.2M     $1,420   Active
  gemini-pro     18,500     5.1M     $480     Active
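The table's totals hide how differently the models are priced; a quick per-million-token check using only the numbers from the table above:

```python
# (tokens, cost in USD), taken from the Request Volume table
models = {
    "gpt-4-turbo": (12.5e6, 2150.0),
    "claude-3-opus": (8.2e6, 1420.0),
    "gemini-pro": (5.1e6, 480.0),
}

# Effective blended cost per 1M tokens for each model
cost_per_million = {
    name: cost / (tokens / 1e6) for name, (tokens, cost) in models.items()
}

for name, value in cost_per_million.items():
    print(f"{name}: ${value:.2f} per 1M tokens")
```

This kind of derived metric is exactly what cost attribution (below) automates per request.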

🔔 Recent Alerts

  • Rate Limit Warning: OpenAI API approaching rate limit threshold (2 minutes ago)
  • Error Spike: error rate exceeded 1% threshold (15 minutes ago)
  • Cache Hit Rate: cache hit rate reached 75% (1 hour ago)
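Alerts like these are, at heart, simple threshold checks over the live metrics. A sketch, where the 1% and 75% thresholds come from the alert text above and the 90% rate-limit proximity threshold is a hypothetical choice:

```python
def check_alerts(error_rate: float, rate_limit_used: float, cache_hit_rate: float) -> list[str]:
    """Return the names of alerts that should fire for the given metrics."""
    alerts = []
    if error_rate > 0.01:          # "error rate exceeded 1% threshold"
        alerts.append("Error Spike")
    if rate_limit_used > 0.9:      # hypothetical: warn at 90% of the rate limit
        alerts.append("Rate Limit Warning")
    if cache_hit_rate >= 0.75:     # "cache hit rate reached 75%"
        alerts.append("Cache Hit Rate")
    return alerts

print(check_alerts(error_rate=0.012, rate_limit_used=0.95, cache_hit_rate=0.75))
```

In production these checks would run on a schedule against the windowed metrics rather than on single values.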

Langfuse Integration Features

🔍 Trace-Level Observability

Deep visibility into every LLM request with complete trace data including prompts, responses, and metadata.

  • Full request/response capture
  • Latency breakdown per operation
  • Custom metadata tagging
  • Session grouping
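Metadata tagging and session grouping from the list above can be sketched with plain dict-based trace records (a hypothetical in-memory structure, not Langfuse's wire format):

```python
from collections import defaultdict

# Illustrative trace records: each carries a session_id and custom metadata
traces = [
    {"session_id": "s1", "name": "chat-turn", "latency_ms": 240, "metadata": {"team": "search"}},
    {"session_id": "s1", "name": "chat-turn", "latency_ms": 310, "metadata": {"team": "search"}},
    {"session_id": "s2", "name": "summarize", "latency_ms": 180, "metadata": {"team": "docs"}},
]

# Session grouping: collect traces that belong to the same conversation
sessions = defaultdict(list)
for t in traces:
    sessions[t["session_id"]].append(t)

print({sid: len(ts) for sid, ts in sessions.items()})
```

Grouping by `session_id` is what lets a dashboard show a whole conversation as one unit rather than isolated requests.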

💰 Cost Attribution

Track costs down to individual requests, users, or teams with detailed token counting and cost allocation.

  • Per-request cost tracking
  • User-level attribution
  • Budget alerts
  • Cost forecasting
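Per-request cost tracking with user-level attribution reduces to summing request costs by user and comparing against a budget. A minimal sketch with illustrative users and costs:

```python
from collections import Counter

# Illustrative per-request cost records
requests = [
    {"user": "alice", "cost_usd": 0.012},
    {"user": "bob", "cost_usd": 0.004},
    {"user": "alice", "cost_usd": 0.020},
]

# User-level attribution: sum costs per user
costs = Counter()
for r in requests:
    costs[r["user"]] += r["cost_usd"]

# Budget alert: flag users over a hypothetical per-user budget
BUDGET_USD = 0.025
over_budget = [user for user, total in costs.items() if total > BUDGET_USD]

print(dict(costs), over_budget)
```

The same aggregation keyed on a team or project tag gives team-level attribution.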

📊 Quality Scoring

Evaluate response quality with automated scoring, user feedback collection, and A/B testing support.

  • Response quality metrics
  • User feedback integration
  • Custom evaluation pipelines
  • Model comparison
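Model comparison on top of quality scores is a small aggregation step: average the scores per model and rank. A stdlib-only sketch with illustrative score values:

```python
from statistics import mean

# Illustrative (model, quality score) pairs, e.g. from automated scoring or user feedback
scores = [
    ("gpt-4-turbo", 0.92),
    ("claude-3-opus", 0.95),
    ("gpt-4-turbo", 0.88),
    ("claude-3-opus", 0.91),
]

# Group scores by model
by_model: dict[str, list[float]] = {}
for model, s in scores:
    by_model.setdefault(model, []).append(s)

# Rank models by average score, best first
ranking = sorted(((mean(v), m) for m, v in by_model.items()), reverse=True)
for avg, model in ranking:
    print(f"{model}: {avg:.2f}")
```

With enough scored traffic this ranking doubles as a lightweight A/B comparison between models.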

⚡ Real-Time Dashboards

Interactive dashboards with real-time metrics, customizable views, and export capabilities.

  • Live request monitoring
  • Custom visualizations
  • Export to BI tools
  • Sharing capabilities
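Live request monitoring typically keeps a rolling window of the most recent requests rather than the full history. A minimal sketch using a fixed-size deque (window size and latencies are illustrative):

```python
from collections import deque

WINDOW = 5  # keep only the most recent 5 requests
latencies = deque(maxlen=WINDOW)

# Simulate a stream of request latencies (ms) arriving over time
for latency in [200, 250, 300, 220, 260, 245]:
    latencies.append(latency)  # oldest entry is evicted automatically
    current_avg = sum(latencies) / len(latencies)

print(list(latencies), round(current_avg, 1))
```

Because `deque(maxlen=...)` evicts the oldest entry on append, the average always reflects only the current window, which is what keeps a live dashboard responsive to recent changes.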

Integration Setup

Python SDK
    # Install the Langfuse SDK:
    #   pip install langfuse

    from langfuse import Langfuse

    # Configure Langfuse for the LLM proxy
    langfuse = Langfuse(
        public_key="pk-xxx",
        secret_key="sk-xxx",
        host="https://cloud.langfuse.com",
    )

    # Trace an LLM call routed through the proxy
    # (`client` is an OpenAI client pointed at the proxy's base URL)
    trace = langfuse.trace(name="llm-proxy-request")
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    trace.event(
        name="response",
        metadata={"model": response.model, "usage": response.usage},
    )

🔗 Related Resources

Proxy Tools Comparison | Enterprise LLM Proxy | Cost Optimization | Free Proxy Tools