LiteLLM Self-Hosted
LiteLLM is one of the most widely used self-hosted LLM proxies, exposing more than 100 providers through a single OpenAI-compatible API. Self-hosting it gives you full control over your infrastructure, data privacy, and customization, which makes it a good fit for organizations that want multi-provider support without depending on an external gateway service. An active community and comprehensive documentation keep it approachable even for teams new to LLM infrastructure.
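Because the proxy speaks the OpenAI wire format, any OpenAI-compatible client can point at it just by changing the base URL. A minimal sketch using curl, assuming the proxy is already running locally on its default port 4000 with a model named gpt-3.5-turbo configured:
# Call the proxy exactly as you would the OpenAI API, just with a local base URL.
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello from LiteLLM"}]
  }'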
Multi-Provider Support
Connect to 100+ LLM providers including OpenAI, Azure, Anthropic, and local models.
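A common way to wire up several providers at once is a config.yaml handed to the proxy at startup. The sketch below is illustrative rather than exhaustive; the aliases and environment variable names are assumptions to adapt to your own accounts:
# Illustrative config.yaml: one client-facing alias per provider.
# The os.environ/ prefix tells LiteLLM to read each key from the environment.
cat > config.yaml <<'EOF'
model_list:
  - model_name: gpt-4o                 # alias clients will request
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
      api_key: os.environ/ANTHROPIC_API_KEY
EOF

litellm --config config.yaml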
Cost Tracking
Built-in usage tracking and cost allocation across teams and projects.
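Cost allocation is usually handled through virtual keys issued by the proxy, with spend recorded per key. A rough sketch, assuming a master key is set and a database is attached so usage can be persisted; the budget figure and team name are placeholders:
# Issue a virtual key capped at a $25 budget, tagged with a team for reporting.
curl http://localhost:4000/key/generate \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY" \
  -H "Content-Type: application/json" \
  -d '{"max_budget": 25, "metadata": {"team": "search-infra"}}'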
Enterprise Security
API key management, rate limiting, and access control features.
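Rate limits can ride on the same virtual keys. A hedged sketch along the lines of the budget example above; the request and token limits are placeholders:
# Issue a key limited to 60 requests and 100k tokens per minute.
curl http://localhost:4000/key/generate \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY" \
  -H "Content-Type: application/json" \
  -d '{"rpm_limit": 60, "tpm_limit": 100000}'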
High Performance
Async support, caching, and load balancing for production workloads.
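Caching and load balancing are both driven from the proxy config. A sketch under two assumptions: a local Redis is available for the response cache, and two deployments share one alias so the router can balance across them (the Azure deployment name is hypothetical):
# Additions to config.yaml: Redis response caching plus two deployments
# behind one alias so the router spreads load between them.
litellm_settings:
  cache: true
  cache_params:
    type: redis
    host: localhost
    port: 6379
router_settings:
  routing_strategy: least-busy        # route to the deployment with the fewest in-flight requests
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: gpt-4o                # same alias as above, so requests are balanced
    litellm_params:
      model: azure/my-gpt4o-deployment      # hypothetical Azure deployment name
      api_base: os.environ/AZURE_API_BASE
      api_key: os.environ/AZURE_API_KEY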
Quick Start
# Quick Docker deployment
docker run -d \
-p 4000:4000 \
-e OPENAI_API_KEY=your_key \
ghcr.io/berriai/litellm:main-latest
# Or install via pip (quotes keep shells like zsh from globbing the extras)
pip install 'litellm[proxy]'
litellm --model gpt-3.5-turbo   # serves the proxy on port 4000 by default
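Either path serves the proxy on port 4000 by default. A quick sanity check is to list the models the proxy exposes (add an Authorization header here if you have set a master key):
# Verify the proxy is up by listing its configured models.
curl http://localhost:4000/v1/models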