Connect the LlamaIndex data framework to the API Gateway proxy. Build powerful RAG applications with unified LLM access, intelligent caching, and comprehensive monitoring.
Enterprise features for your RAG applications
How LlamaIndex connects through the gateway
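In outline, LlamaIndex routes every LLM and embedding call to a single gateway endpoint, which can answer repeated requests from its cache and record usage for monitoring. The dependency-free sketch below models that routing; the class and method names are illustrative stand-ins, not part of the Gateway or LlamaIndex APIs:

```python
import hashlib

class GatewaySketch:
    """Illustrative stand-in for the gateway: caches responses and counts forwarded calls."""

    def __init__(self, endpoint):
        self.endpoint = endpoint        # single URL all LLM traffic flows through
        self.cache = {}                 # request-hash -> cached response
        self.calls_forwarded = 0        # requests that actually reached the upstream model

    def complete(self, model, prompt):
        key = hashlib.sha256(f"{model}:{prompt}".encode()).hexdigest()
        if key in self.cache:           # cache hit: the upstream provider is never called
            return self.cache[key]
        self.calls_forwarded += 1       # cache miss: forward to the provider
        response = f"[{model} response to: {prompt}]"  # placeholder for a real API call
        self.cache[key] = response
        return response

gateway = GatewaySketch("https://gateway.example.com")
first = gateway.complete("gpt-4", "What is the main topic?")
second = gateway.complete("gpt-4", "What is the main topic?")  # served from cache
print(first == second, gateway.calls_forwarded)  # → True 1
```

The repeated query is answered from the cache, so only one call is forwarded upstream; this is the behavior the `cache=True` option in the quick start enables for real traffic.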
Quick start with LlamaIndex and Gateway
```python
# Import LlamaIndex and Gateway components
from llama_index import VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms import GatewayLLM
from llama_index.embeddings import GatewayEmbedding

# Configure Gateway LLM
llm = GatewayLLM(
    endpoint="https://gateway.example.com",
    api_key="your-gateway-key",
    model="gpt-4",
    cache=True,
)

# Configure Gateway Embeddings
embed_model = GatewayEmbedding(
    endpoint="https://gateway.example.com",
    model="text-embedding-3-small",
)

# Load documents and build index
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(
    documents,
    llm=llm,
    embed_model=embed_model,
)

# Create query engine
query_engine = index.as_query_engine()

# Query your data
response = query_engine.query("What is the main topic?")
print(response)
```
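Behind `query_engine.query(...)`, LlamaIndex embeds the question, retrieves the most similar document chunks, and asks the LLM to answer from them. The following is a minimal, dependency-free sketch of that retrieve-then-read loop; the toy bag-of-words embedding and cosine similarity are stand-ins for the real embedding model and vector store, not LlamaIndex internals:

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy bag-of-words embedding; the real index calls the embedding model instead."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

documents = [
    "The gateway caches repeated LLM calls to cut latency and cost.",
    "LlamaIndex splits documents into chunks before indexing them.",
]
index = [(doc, embed(doc)) for doc in documents]  # embeddings computed at build time

def retrieve(question, top_k=1):
    """Rank stored chunks by similarity to the question and keep the top_k."""
    q = embed(question)
    return sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)[:top_k]

context = retrieve("How does caching reduce cost?")[0][0]
prompt = f"Answer from context:\n{context}\n\nQuestion: How does caching reduce cost?"
# In the real flow, this prompt is sent through the configured GatewayLLM.
print(context)
```

The key design point is that only the retrieved chunks, not the whole corpus, are placed in the LLM prompt, which keeps each gateway request small and cacheable.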
Build powerful applications with LlamaIndex and Gateway