OpenAI API Gateway Setup Guide
Complete step-by-step guide for setting up OpenAI API Gateway in production environments. Estimated time: 60 minutes.
🎯 Quick Setup Overview
This guide will walk you through the complete setup process, from prerequisites to production deployment. Follow each step sequentially for best results.
Prerequisites & Environment Setup
Estimated time: 10 minutes
Before installing the OpenAI API Gateway, ensure your environment meets all requirements. This includes software dependencies, API access, and network configuration.
✅ Prerequisites Checklist
Active OpenAI account with API key access (minimum GPT-4 access)
Linux server (Ubuntu 22.04+ or Debian 11+) with 2+ GB RAM, 20+ GB disk
Node.js 18+, npm 9+, Docker (optional), Redis (optional for caching)
Outbound access to api.openai.com (TCP 443), inbound access for your API (TCP 3000+)
Environment Verification
Verify your environment meets all requirements by running these commands:
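A minimal verification pass might look like the following sketch (the tool list comes from the checklist above; exact commands may differ on your distribution):

```shell
# Check required tool versions (Node.js 18+, npm 9+; Docker and Redis optional)
for tool in node npm docker redis-server; do
  if command -v "$tool" >/dev/null 2>&1; then
    printf '%-14s %s\n' "$tool" "$("$tool" --version 2>/dev/null | head -n 1)"
  else
    printf '%-14s not installed\n' "$tool"
  fi
done

# Confirm outbound HTTPS access to the OpenAI API
# (any HTTP status, e.g. 401 without a key, means the network path is fine)
curl -s -o /dev/null -w 'api.openai.com HTTP %{http_code}\n' \
  https://api.openai.com/v1/models || echo 'api.openai.com unreachable'

# Check RAM and free disk space against the 2 GB / 20 GB minimums
free -h | head -n 2
df -h .
```

Any tool reported as "not installed" that you intend to use should be installed before continuing.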
Create or retrieve your OpenAI API key from the OpenAI Platform. Store it securely:
# Export as an environment variable
export OPENAI_API_KEY="sk-your-api-key-here"

# Or create a .env file
echo "OPENAI_API_KEY=sk-your-api-key-here" > .env
Security Note: Never commit API keys to version control. Use environment variables or secret management systems.
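If you do keep the key in a .env file, restrict who can read it and make sure it never reaches git. A typical hardening step (illustrative; the placeholder key matches the example above):

```shell
# Create .env only if it does not exist yet (placeholder key shown)
[ -f .env ] || echo 'OPENAI_API_KEY=sk-your-api-key-here' > .env

# Owner-only read/write, and keep the file out of version control
chmod 600 .env
grep -qxF '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore

ls -l .env
```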
Installation & Configuration
Estimated time: 20 minutes
Install the OpenAI API Gateway using your preferred method. Choose from Docker (recommended), npm package, or source installation.
🐳 Docker Installation (Recommended)
Simplest method with containerized deployment and built-in dependency management.
# Pull the Docker image
docker pull openai/gateway:latest

# Run the gateway
docker run -d \
  --name openai-gateway \
  -p 3000:3000 \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  openai/gateway:latest
📦 npm Package
Direct npm installation for Node.js environments with full control over configuration.
# Install the package
npm install @openai/gateway

# Create configuration file
npx openai-gateway init
Source Installation
Clone and build from source for custom modifications and development.
# Clone the repository
git clone https://github.com/openai/gateway.git
cd gateway

# Install dependencies
npm install

# Build the gateway
npm run build
Basic Configuration
Configure the gateway with basic settings for your environment:
# OpenAI Gateway Configuration
server:
  port: 3000
  host: 0.0.0.0
  timeout: 30000

openai:
  apiKey: ${OPENAI_API_KEY}
  organization: ${OPENAI_ORG_ID}
  timeout: 60000

rateLimiting:
  enabled: true
  windowMs: 900000  # 15 minutes
  maxRequests: 100
  skipSuccessfulRequests: false

caching:
  enabled: true
  ttl: 300  # 5 minutes
  redis:
    host: localhost
    port: 6379

logging:
  level: info
  format: json
  file: /var/log/openai-gateway.log

security:
  cors:
    origin: "*"
    methods: ["GET", "POST", "PUT", "DELETE"]
  headers:
    - "X-Content-Type-Options: nosniff"
    - "X-Frame-Options: DENY"
Configure sensitive settings using environment variables:
# Required settings
OPENAI_API_KEY=sk-your-key-here
OPENAI_ORG_ID=org-your-org-id

# Optional settings
GATEWAY_PORT=3000
GATEWAY_HOST=0.0.0.0
LOG_LEVEL=info
CACHE_ENABLED=true
RATE_LIMIT_ENABLED=true
Deployment & Scaling
Estimated time: 15 minutes
Deploy the gateway to your production environment and configure scaling for high availability.
Docker Compose Deployment
Use Docker Compose for easy deployment with additional services like Redis and monitoring:
version: '3.8'
services:
  openai-gateway:
    image: openai/gateway:latest
    # container_name omitted so "docker-compose up --scale" can run multiple replicas
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - OPENAI_ORG_ID=${OPENAI_ORG_ID}
      - LOG_LEVEL=info
    volumes:
      - ./logs:/app/logs
      - ./config:/app/config
    depends_on:
      - redis
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  redis:
    image: redis:7-alpine
    container_name: gateway-redis
    restart: unless-stopped
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data
    command: redis-server --appendonly yes

volumes:
  redis-data:
# Start the gateway with Docker Compose
docker-compose up -d

# Check deployment status
docker-compose ps

# View logs
docker-compose logs -f openai-gateway

# Scale the gateway (for load balancing)
docker-compose up -d --scale openai-gateway=3

# Update to latest version
docker-compose pull openai-gateway
docker-compose up -d openai-gateway
Kubernetes Deployment
For Kubernetes environments, use this deployment configuration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openai-gateway
spec:
  replicas: 3
  selector:
    matchLabels:
      app: openai-gateway
  template:
    metadata:
      labels:
        app: openai-gateway
    spec:
      containers:
        - name: gateway
          image: openai/gateway:latest
          ports:
            - containerPort: 3000
          env:
            - name: OPENAI_API_KEY
              valueFrom:
                secretKeyRef:
                  name: openai-secrets
                  key: apiKey
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: openai-gateway
spec:
  selector:
    app: openai-gateway
  ports:
    - port: 80
      targetPort: 3000
  type: LoadBalancer
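Applying the manifests might look like this sketch (the manifest file name openai-gateway.yaml is assumed; the Secret name and key match the Deployment's secretKeyRef above):

```shell
# Requires kubectl configured against your cluster; skipped otherwise
if command -v kubectl >/dev/null 2>&1; then
  # Store the API key in the Secret referenced by the Deployment
  kubectl create secret generic openai-secrets \
    --from-literal=apiKey="$OPENAI_API_KEY"

  # Apply the manifests and watch the rollout
  kubectl apply -f openai-gateway.yaml
  kubectl rollout status deployment/openai-gateway

  # The Service exposes the gateway on port 80 behind a cloud load balancer
  kubectl get service openai-gateway
else
  echo 'kubectl not found; install it and point it at your cluster first'
fi
```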
Testing & Validation
Estimated time: 10 minutes
Verify your OpenAI API Gateway installation is working correctly with comprehensive testing.
✅ Testing Checklist
Verify the gateway health endpoint responds correctly
Test connectivity to OpenAI API through the gateway
Verify rate limiting is working as configured
Test caching functionality if enabled
Basic Testing Commands
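A few quick curl checks cover the first items on the checklist (BASE_URL and the endpoint paths are assumed from the configuration above; the rate-limit expectation refers to the 100 requests / 15 minutes example config):

```shell
BASE_URL="${BASE_URL:-http://localhost:3000}"

# 1. Health endpoint: expect HTTP 200 when the gateway is up
curl -s -o /dev/null -w 'health: HTTP %{http_code}\n' "$BASE_URL/health" \
  || echo 'health: gateway not reachable'

# 2. List models through the gateway (uses your real key)
curl -s "$BASE_URL/v1/models" \
  -H "Authorization: Bearer $OPENAI_API_KEY" | head -c 300
echo

# 3. Repeat a request; past the configured limit you should start
#    seeing HTTP 429 responses once rate limiting kicks in
for i in $(seq 1 5); do
  curl -s -o /dev/null -w '%{http_code} ' "$BASE_URL/health" || printf '000 '
done
echo
```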
Automated Testing Script
// Requires the axios package: npm install axios
const axios = require('axios');

const BASE_URL = 'http://localhost:3000';
const API_KEY = process.env.OPENAI_API_KEY;

async function testGateway() {
  console.log('🧪 Testing OpenAI API Gateway...\n');

  // 1. Test health endpoint
  try {
    const healthResponse = await axios.get(`${BASE_URL}/health`);
    console.log('✅ Health check passed:', healthResponse.data);
  } catch (error) {
    console.error('❌ Health check failed:', error.message);
    return false;
  }

  // 2. Test models endpoint
  try {
    const modelsResponse = await axios.get(`${BASE_URL}/v1/models`, {
      headers: { Authorization: `Bearer ${API_KEY}` }
    });
    console.log('✅ Models endpoint working, found', modelsResponse.data.data.length, 'models');
  } catch (error) {
    console.error('❌ Models endpoint failed:', error.message);
    return false;
  }

  // 3. Test chat completion
  try {
    const chatResponse = await axios.post(
      `${BASE_URL}/v1/chat/completions`,
      {
        model: 'gpt-3.5-turbo',
        messages: [{ role: 'user', content: 'Say hello!' }],
        max_tokens: 10
      },
      {
        headers: {
          Authorization: `Bearer ${API_KEY}`,
          'Content-Type': 'application/json'
        }
      }
    );
    console.log('✅ Chat completion working:', chatResponse.data.choices[0].message.content);
  } catch (error) {
    console.error('❌ Chat completion failed:', error.message);
    return false;
  }

  console.log('\n🎉 All tests passed! Gateway is working correctly.');
  return true;
}

testGateway();
Related Setup Guides
Explore these related guides for comprehensive OpenAI infrastructure knowledge.
AI API Gateway Tutorial
Complete tutorial for building custom AI API gateways
AI API Proxy Documentation
Complete API reference for AI proxy implementations
OpenAI vs Anthropic Gateway
Comparison of different AI gateway implementations
API Gateway Proxy Guide
General guide to API gateway proxy implementation