Integrate the AI API Gateway with the LangChain framework to build scalable LLM applications with unified API management, intelligent caching, and comprehensive monitoring.
Enhance your LangChain applications with enterprise features
Get started in minutes with simple configuration
Quick start guide for LangChain integration
```python
# Import LangChain and Gateway components
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

from ai_gateway.langchain import GatewayLLM

# Create a Gateway LLM instance
llm = GatewayLLM(
    endpoint="https://gateway.example.com",
    api_key="your-gateway-key",
    model="gpt-4",
    # Enable caching
    cache=True,
    cache_ttl=3600,
    # Rate limiting
    max_requests_per_minute=60,
    # Fallback model
    fallback_model="gpt-3.5-turbo",
)

# Create a chain as usual
prompt = PromptTemplate(
    input_variables=["topic"],
    template="Write a short joke about {topic}.",
)
chain = LLMChain(llm=llm, prompt=prompt)

# Run the chain - the gateway handles caching, rate limiting, and fallback
result = chain.run("programming")
print(result)

# Streaming support: stream tokens from the LLM directly
# (streaming the chain itself yields dict chunks rather than raw tokens)
for chunk in llm.stream("Write a short joke about AI."):
    print(chunk, end="", flush=True)
```
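To build intuition for what `cache=True` and `cache_ttl=3600` imply, here is a minimal sketch of a gateway-side TTL cache keyed on the model and prompt. This is purely illustrative; the `TTLCache` class and its keying scheme are assumptions for this sketch, not the gateway's actual implementation, whose storage and key format are not specified in this guide.

```python
import hashlib
import time


class TTLCache:
    """Illustrative TTL cache keyed on (model, prompt) - an assumption
    about gateway behavior, not the gateway's real storage layer."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, cached_value)

    def _key(self, model: str, prompt: str) -> str:
        # Hash model + prompt so identical requests share one cache entry
        return hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()

    def get(self, model: str, prompt: str):
        key = self._key(model, prompt)
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() > expires_at:
            # Entry older than the TTL: evict and treat as a miss
            del self._store[key]
            return None
        return value

    def set(self, model: str, prompt: str, value: str):
        self._store[self._key(model, prompt)] = (time.monotonic() + self.ttl, value)


# A repeated request within the TTL window is served from the cache
cache = TTLCache(ttl_seconds=3600)
cache.set("gpt-4", "Write a short joke about programming.", "a cached completion")
print(cache.get("gpt-4", "Write a short joke about programming."))
```

In this model, the `cache_ttl=3600` setting in the quick start would mean identical prompts to the same model are answered from the cache for up to an hour, skipping an upstream LLM call.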