Framework Integration

AI API Gateway
for LangChain

Integrate AI API Gateway with the LangChain framework. Build scalable LLM applications with unified API management, intelligent caching, and comprehensive monitoring.

LangChain Integration
📦
LLM Wrapper GatewayLLM
🔗
Chain Support Fully Compatible
💾
Cache Layer Redis/Memory
📊
Tracing LangSmith Ready

Why Use Gateway with LangChain?

Enhance your LangChain applications with enterprise features

🔄
Unified LLM Interface
Access multiple LLM providers through a single interface. Switch between OpenAI, Anthropic, and others without code changes.
⚡
Intelligent Caching
Cache LLM responses to reduce costs and latency. Supports semantic similarity matching for cache hits.
🛡️
Rate Limiting
Protect your LLM budget with configurable rate limits. Prevent runaway chains from consuming all tokens.
📊
Observability
Full tracing integration with LangSmith. Monitor chain execution, token usage, and performance metrics.
🔐
Secure API Keys
Centralize API key management. Rotate keys without redeploying applications.
🎯
Fallback Support
Configure automatic fallbacks when primary LLM fails. Ensure high availability for critical chains.
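The caching behavior described above can be understood with a minimal sketch in plain Python: an in-memory TTL cache keyed by model and prompt. This is illustrative only; the gateway's real cache (and any semantic similarity matching) runs server-side, and the `TTLCache` class here is not part of the client library.

```python
import time

class TTLCache:
    """Minimal in-memory response cache keyed by (model, prompt)."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._store = {}  # (model, prompt) -> (response, expiry timestamp)

    def get(self, model, prompt):
        entry = self._store.get((model, prompt))
        if entry is None:
            return None
        response, expires_at = entry
        if time.time() >= expires_at:
            del self._store[(model, prompt)]  # drop the expired entry
            return None
        return response

    def set(self, model, prompt, response):
        self._store[(model, prompt)] = (response, time.time() + self.ttl)

# A cache hit avoids a second (billed) LLM call; a miss means you call the LLM.
cache = TTLCache(ttl_seconds=3600)
cache.set("gpt-4", "Write a joke about programming.", "Why do programmers prefer dark mode? ...")
print(cache.get("gpt-4", "Write a joke about programming."))  # cached response
print(cache.get("gpt-4", "A different prompt."))              # None -> call the LLM
```

Exact-match caching like this only helps on repeated prompts; semantic matching (mentioned above) extends hits to similar prompts at the cost of an embedding lookup.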

Integration Steps

Get started in minutes with simple configuration

1
Install Dependencies
pip install langchain ai-gateway-client
2
Configure Gateway
Set up your gateway endpoint and API credentials
3
Create LLM Instance
Use GatewayLLM wrapper instead of direct LLM classes
4
Build Chains
Create LangChain chains as usual - the gateway handles the rest
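Step 2 (Configure Gateway) typically means loading the endpoint and credentials from the environment so API keys never live in source code. A minimal sketch, assuming environment variable names of your choosing (`GATEWAY_ENDPOINT` and `GATEWAY_API_KEY` are illustrative, not fixed by the client library):

```python
import os

# Illustrative configuration values for GatewayLLM; the variable names
# GATEWAY_ENDPOINT and GATEWAY_API_KEY are assumptions, not a fixed contract.
gateway_config = {
    "endpoint": os.environ.get("GATEWAY_ENDPOINT", "https://gateway.example.com"),
    "api_key": os.environ.get("GATEWAY_API_KEY", "your-gateway-key"),
    "model": "gpt-4",
}

print(gateway_config["endpoint"])
```

Keeping the key in the environment also pairs well with the centralized key rotation described above: rotate the key at the gateway, update the environment, and no application code changes.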

Code Examples

Quick start guide for LangChain integration

🦜 LangChain with Gateway Python
# Import LangChain and Gateway components
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from ai_gateway.langchain import GatewayLLM

# Create Gateway LLM instance
llm = GatewayLLM(
    endpoint="https://gateway.example.com",
    api_key="your-gateway-key",
    model="gpt-4",
    
    # Enable caching
    cache=True,
    cache_ttl=3600,
    
    # Rate limiting
    max_requests_per_minute=60,
    
    # Fallback model
    fallback_model="gpt-3.5-turbo"
)

# Create a chain as usual
prompt = PromptTemplate(
    input_variables=["topic"],
    template="Write a short joke about {topic}."
)

chain = LLMChain(llm=llm, prompt=prompt)

# Run the chain - gateway handles everything
result = chain.run("programming")
print(result)

# Streaming support - stream tokens directly from the LLM
for chunk in llm.stream("Tell me a fun fact about AI."):
    print(chunk, end="", flush=True)
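The `fallback_model` option above amounts to try-the-primary-then-retry-on-the-fallback logic. A rough sketch in plain Python, where `call_model` is a stand-in for a real gateway request (not a function in the client library):

```python
def call_model(model, prompt):
    """Stand-in for a real gateway request; simulates a primary-model outage."""
    if model == "gpt-4":
        raise RuntimeError("primary model unavailable")
    return f"[{model}] response to: {prompt}"

def complete_with_fallback(prompt, primary="gpt-4", fallback="gpt-3.5-turbo"):
    try:
        return call_model(primary, prompt)
    except Exception:
        # Primary failed: retry once against the configured fallback model.
        return call_model(fallback, prompt)

print(complete_with_fallback("Write a short joke about programming."))
```

In production the gateway would also distinguish retryable failures (timeouts, 429s, 5xx) from non-retryable ones (invalid request), but the control flow is the same.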