Aggregate multiple LLM APIs behind a single unified endpoint. Consolidate OpenAI, Anthropic, Google, and local models with simplified request handling.
Multiple providers, one endpoint
Simplify your AI infrastructure
Simple code, multiple models
```python
import requests

# Single endpoint for all providers
response = requests.post(
    "https://gateway.example.com/v1/chat",
    headers={"Authorization": "Bearer YOUR_KEY"},
    json={
        # Gateway routes to best available model
        "model": "auto",
        "messages": [
            {"role": "user", "content": "Hello!"}
        ],
        # Optional: specify provider preference
        "provider_preference": ["openai", "anthropic"],
        # Optional: fallback chain
        "fallback": ["gpt-4", "claude-3", "llama-3"]
    }
)

print(response.json()["model"])  # Shows which model was used
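The fallback chain in the request above can be sketched as a simple resolver. This is a hypothetical illustration of the routing logic, not the gateway's actual implementation: it assumes the gateway tries each model in order until it finds one that is available.

```python
def resolve_model(fallback, available):
    """Return the first model in the fallback chain that is available.

    Hypothetical sketch of gateway-side routing; a real gateway may also
    weigh provider preference, latency, and cost when choosing a model.
    """
    for model in fallback:
        if model in available:
            return model
    raise RuntimeError("no model in the fallback chain is available")

# Example: if "gpt-4" is unavailable, the request falls back to "claude-3"
chosen = resolve_model(["gpt-4", "claude-3", "llama-3"], {"claude-3", "llama-3"})
print(chosen)  # claude-3
```

The same chain the client sends as `"fallback"` drives this selection, which is why the response reports which model actually handled the request.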
Why aggregate your LLM APIs?