LLM Proxy Request Logging

Comprehensive logging for every API call. Track requests, debug issues, analyze patterns, and maintain complete audit trails for your LLM applications with real-time log streaming.

📝 Full Logging 🔍 Search & Filter 📊 Analytics 🔒 Secure Storage
📋 Live Request Log
2025-03-17 14:23:45.123 POST /v1/chat/completions 200
2025-03-17 14:23:44.892 GET /v1/models 200
2025-03-17 14:23:43.567 POST /v1/chat/completions 429
2025-03-17 14:23:42.234 POST /v1/embeddings 500
2025-03-17 14:23:41.901 POST /v1/chat/completions 200

Logging Features

Everything you need for comprehensive request tracking and debugging

📝

Request Logging

Log complete request details including headers, body, parameters, and timestamps for every API call.

📤

Response Logging

Capture full responses including status codes, headers, body content, and timing information.

🔍

Search & Filter

Powerful search capabilities to find specific requests by user, endpoint, status, time range, or custom fields.
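A filter over stored records might look like the following sketch. This is illustrative only: the record field names (`path`, `status`, `timestamp`) and the `filter_logs` helper are assumptions, not the proxy's actual API.

```python
from datetime import datetime

def filter_logs(records, endpoint=None, status=None, since=None):
    """Return records matching any combination of endpoint, status, and time range.
    Each criterion is optional; None means "don't filter on this field"."""
    out = []
    for r in records:
        if endpoint is not None and r["path"] != endpoint:
            continue
        if status is not None and r["status"] != status:
            continue
        if since is not None and datetime.fromisoformat(r["timestamp"]) < since:
            continue
        out.append(r)
    return out
```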

📊

Log Analytics

Analyze logging patterns with dashboards showing request volumes, error rates, and performance trends.

🔒

Secure Storage

Encrypted log storage with configurable retention policies and compliance-ready archiving.

Real-time Streaming

Stream logs in real-time to external systems like Elasticsearch, Splunk, or custom endpoints.
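A streaming exporter typically buffers records and ships them in batches. Here is a minimal sketch of that pattern; the `LogStreamer` class is hypothetical, and the `send` callable (in practice an HTTP POST to the configured webhook or an Elasticsearch bulk request) is left abstract.

```python
import json
from typing import Callable

class LogStreamer:
    """Buffer log records and flush them in batches to an export endpoint.

    Batching mirrors the export settings in the configuration below
    (batch_size); transport is injected so any backend can be plugged in.
    """
    def __init__(self, send: Callable[[str], None], batch_size: int = 100):
        self.send = send
        self.batch_size = batch_size
        self.buffer = []

    def add(self, record: dict) -> None:
        """Queue one record; flush automatically when the batch is full."""
        self.buffer.append(record)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        """Serialize and send the buffered batch as one JSON payload."""
        if self.buffer:
            self.send(json.dumps(self.buffer))
            self.buffer = []
```

A production exporter would also flush on a timer (the `flush_interval` setting) and retry failed sends; both are omitted here for brevity.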

Log Levels

Granular control over what gets logged

D

Debug

Detailed diagnostic information for troubleshooting

I

Info

General operational information about requests

W

Warning

Potential issues that don't stop execution

E

Error

Failed requests and error conditions
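The four levels above form an ordered severity scale, so a record is emitted only when it meets the configured minimum. A minimal sketch, assuming the conventional debug < info < warning < error ordering:

```python
# Severity ranks for the proxy's four log levels (ordering is an assumption).
LEVELS = {"debug": 0, "info": 1, "warning": 2, "error": 3}

def should_log(record_level: str, min_level: str) -> bool:
    """Return True if a record at record_level passes the min_level threshold."""
    return LEVELS[record_level] >= LEVELS[min_level]
```

With `level: "info"` as in the configuration below, debug records are dropped while info, warning, and error records pass through.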

Logged Information

Complete details captured for every request

📥 Request Details

  • Timestamp: ISO 8601 format
  • HTTP Method: GET/POST/PUT
  • Endpoint Path: /v1/chat/completions
  • Headers: Sanitized
  • Request Body: Full capture
  • Client IP: Anonymized option

📤 Response Details

  • Status Code: 200/400/500
  • Response Time: Milliseconds
  • Response Size: Bytes
  • Tokens Used: Prompt + Completion
  • Model Version: gpt-4-0125
  • Cache Status: HIT/MISS
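Combined, the request and response details above form one structured record per call. The following is an illustrative sketch of such a record; the field names and the `build_log_record` helper are assumptions, not the proxy's actual schema.

```python
import json
from datetime import datetime, timezone

def build_log_record(method, path, status, duration_ms,
                     tokens_prompt, tokens_completion):
    """Assemble one structured log record (field names are illustrative)."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # ISO 8601
        "method": method,
        "path": path,
        "status": status,
        "response_time_ms": duration_ms,
        "tokens_used": tokens_prompt + tokens_completion,  # prompt + completion
    }

record = build_log_record("POST", "/v1/chat/completions", 200, 312, 128, 56)
print(json.dumps(record))
```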

Logging Configuration

logging_config.yaml
# Request logging configuration
logging:
  enabled: true
  level: "info"  # debug, info, warning, error
  
  request:
    log_headers: true
    log_body: true
    sanitize_headers: ["authorization", "api-key"]
    max_body_size: 1048576  # 1MB
  
  response:
    log_headers: true
    log_body: true
    log_timing: true
  
  storage:
    backend: "elasticsearch"
    retention_days: 90
    compress: true
    encrypt: true
  
  export:
    webhook_url: "https://logs.company.com/ingest"
    batch_size: 100
    flush_interval: 5  # seconds
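The `sanitize_headers` setting redacts sensitive header values before anything is written to storage. A minimal sketch of that step, assuming case-insensitive header matching and a hypothetical `sanitize_headers` helper:

```python
def sanitize_headers(headers: dict, sensitive: list) -> dict:
    """Replace the values of sensitive headers (case-insensitive) before logging."""
    lowered = {name.lower() for name in sensitive}
    return {
        k: ("[REDACTED]" if k.lower() in lowered else v)
        for k, v in headers.items()
    }

clean = sanitize_headers(
    {"Authorization": "Bearer sk-...", "Content-Type": "application/json"},
    ["authorization", "api-key"],  # matches the sanitize_headers list above
)
```

Only the listed header values are replaced; all other headers pass through to the log unchanged.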

Get Complete API Visibility

Implement comprehensive request logging and never lose track of what's happening with your LLM APIs.