🔍 Developer Tools

LLM Proxy Debugging & Tracing

Master the art of debugging LLM proxy requests. Learn request inspection, response logging, error diagnosis, and performance analysis to build reliable AI-powered applications.

llm-proxy-debug.log
10:23:45.123 INFO Request received: POST /v1/chat/completions
10:23:45.124 INFO Routing to provider: openai (model: gpt-4)
10:23:47.892 SUCCESS Response received (2.77s, 847 tokens)
10:23:47.893 INFO Cache miss - response stored for future requests

Debugging Tools

Essential tools for proxy troubleshooting

📊

Request Inspector

View full request details including headers, body, and routing decisions.

📝

Response Logger

Capture and analyze API responses with timing and token metrics.

🔍

Trace Viewer

Visualize request flow through proxy layers and providers.

⚠️

Error Analyzer

Diagnose errors with stack traces and suggested fixes.

📈

Performance Profiler

Identify bottlenecks and optimize proxy latency.

🔄

Request Replay

Replay failed requests for debugging and testing.

Request Tracing

Follow requests through the proxy pipeline

1
Request Received
POST /v1/chat/completions • 1.2 KB payload
0.12ms
2
Authentication Verified
API key validated • Rate limit check passed
0.34ms
3
Routing Decision
Model: gpt-4 → Provider: openai
0.08ms
4
Provider Request
Streaming enabled • 847 tokens
2.77s
5
Response Returned
Status: 200 • Cache: stored
0.05ms
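The five stages above can be sketched as a small tracing wrapper. This is a minimal illustration, not any particular proxy's implementation — the `Trace` class, the stage names, and `handle_request` are all hypothetical.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Trace:
    """Collects (stage, duration) pairs as a request moves through the pipeline."""
    stages: list = field(default_factory=list)

    def record(self, name):
        # Context-manager factory: times the wrapped block and stores the result.
        trace = self

        class _Span:
            def __enter__(self):
                self.start = time.perf_counter()
            def __exit__(self, *exc):
                trace.stages.append((name, time.perf_counter() - self.start))
                return False
        return _Span()

def handle_request(trace):
    # Hypothetical pipeline mirroring the five stages shown above.
    with trace.record("request_received"):
        pass  # parse the POST /v1/chat/completions body
    with trace.record("auth_verified"):
        pass  # validate API key, check rate limit
    with trace.record("routing_decision"):
        pass  # map model -> provider (e.g. gpt-4 -> openai)
    with trace.record("provider_request"):
        pass  # forward upstream; this step dominates total latency
    with trace.record("response_returned"):
        pass  # write the response and store it in the cache

trace = Trace()
handle_request(trace)
for name, duration in trace.stages:
    print(f"{name}: {duration * 1000:.2f}ms")
```

Recording per-stage durations rather than a single end-to-end time is what makes the 2.77s provider call stand out against the sub-millisecond proxy overhead.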

Logging Configuration

Set up comprehensive logging for debugging

📋 Structured Logging

Configure JSON-formatted logs for easy parsing and analysis in log aggregation systems.

config.yaml
logging:
  format: json
  level: debug
  output: stdout
  fields:
    - request_id
    - duration
    - tokens
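As a sketch of what the config above produces, Python's standard `logging` module can be pointed at a small JSON formatter. The `JsonFormatter` class is illustrative; only the field names (`request_id`, `duration`, `tokens`) come from the config.

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line with the fields from the config."""
    def format(self, record):
        payload = {
            "level": record.levelname,
            "message": record.getMessage(),
            # Extra fields are attached per-call via the `extra=` argument.
            "request_id": getattr(record, "request_id", None),
            "duration": getattr(record, "duration", None),
            "tokens": getattr(record, "tokens", None),
        }
        return json.dumps(payload)

logger = logging.getLogger("llm-proxy")
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

logger.info("request complete",
            extra={"request_id": "req-123", "duration": 2.77, "tokens": 847})
```

One JSON object per line is what log aggregation systems expect: each field becomes a queryable attribute instead of text to grep.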
🔍 Request Sampling

Enable sampling for high-traffic proxies to reduce log volume while maintaining visibility.

sampling
sampling:
  enabled: true
  rate: 0.1           # 10% of requests
  errors: 1.0         # log all errors
  slow_requests: 1.0  # log all slow requests
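The sampling policy above reduces to a small decision function: errors and slow requests are always logged, everything else is sampled at `rate`. A minimal sketch — the function name, the `slow_threshold` parameter, and the injectable `rng` are assumptions for illustration:

```python
import random

def should_log(is_error, duration, rate=0.1, slow_threshold=5.0, rng=random.random):
    """Mirror the sampling config: always keep errors and slow requests,
    otherwise sample at `rate` (0.1 = 10% of requests)."""
    if is_error:
        return True                  # errors: 1.0
    if duration >= slow_threshold:
        return True                  # slow_requests: 1.0
    return rng() < rate              # rate: 0.1

print(should_log(True, 0.2))    # True: errors are always logged
print(should_log(False, 7.5))   # True: slow requests are always logged
```

Passing `rng` explicitly also makes the sampling decision deterministic in tests.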

Common Error Patterns

Diagnose and fix frequent proxy issues

🔌 Connection Errors

Timeout, connection refused, and network issues when connecting to AI providers.
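Distinguishing these failure modes is easier when the proxy classifies them at the socket level. A minimal sketch using only the standard library — `classify_connection` is a hypothetical helper, and a real proxy would reuse pooled connections rather than dial per check:

```python
import socket

def classify_connection(host, port, timeout=5.0):
    """Attempt a TCP connection and name the failure mode, if any."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "ok"
    except TimeoutError:
        return "timeout"              # provider too slow to accept
    except ConnectionRefusedError:
        return "connection refused"   # provider down or wrong port
    except OSError as exc:
        return f"network error: {exc}"  # DNS failure, host unreachable, etc.
```

Logging the classified result ("timeout" vs. "connection refused") turns a generic connection error into an actionable signal: the first suggests raising timeouts or checking provider load, the second a misconfigured endpoint.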

🔑 Authentication Failures

Invalid API keys, expired tokens, and permission denied errors.

⏱️ Rate Limiting

Provider rate limits exceeded. Configure backoff and retry strategies.
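A common retry strategy for rate-limit (HTTP 429) responses is exponential backoff with full jitter. A sketch of the delay schedule only — the function name and defaults are illustrative, and actually sleeping and re-sending is left out:

```python
import random

def backoff_delays(max_retries=5, base=0.5, cap=30.0, rng=random.random):
    """Exponential backoff with full jitter: the n-th delay is drawn
    uniformly from [0, min(cap, base * 2**n))."""
    return [min(cap, base * 2 ** n) * rng() for n in range(max_retries)]

# With jitter disabled (rng always 1.0) the schedule is the raw exponential curve:
print(backoff_delays(rng=lambda: 1.0))  # [0.5, 1.0, 2.0, 4.0, 8.0]
```

The jitter spreads retries out in time so that many clients rate-limited at the same moment do not all retry in lockstep and trigger the limit again.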

📦 Payload Errors

Invalid request format, missing parameters, or unsupported model names.
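Many payload errors can be rejected at the proxy before a provider round trip. A minimal pre-flight check — the required fields and the model list here are illustrative, not any provider's actual schema:

```python
KNOWN_MODELS = {"gpt-4", "gpt-3.5-turbo"}  # illustrative, not exhaustive

def validate_payload(payload):
    """Return a list of problems with a chat-completion request body."""
    if not isinstance(payload, dict):
        return ["body must be a JSON object"]
    errors = []
    if "model" not in payload:
        errors.append("missing parameter: model")
    elif payload["model"] not in KNOWN_MODELS:
        errors.append(f"unsupported model: {payload['model']}")
    messages = payload.get("messages")
    if not isinstance(messages, list) or not messages:
        errors.append("messages must be a non-empty list")
    return errors

ok = {"model": "gpt-4", "messages": [{"role": "user", "content": "hi"}]}
print(validate_payload(ok))  # []
```

Rejecting these requests locally returns a precise 400 error in microseconds instead of burning a provider call (and possibly a rate-limit slot) on a request that cannot succeed.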

Master Proxy Debugging

Learn to diagnose and fix issues quickly with comprehensive debugging tools and best practices for LLM proxy operations.