Making Your AI API Gateway OpenAI-Compatible

Complete guide to making your AI API Gateway OpenAI-compatible. Learn how to use OpenAI SDK with your own backend, migrate from OpenAI, and maintain compatibility with existing code.

OpenAI Compatibility Overview

Making your API OpenAI-compatible allows you to use the official OpenAI SDK, client libraries, and existing code with your own AI backend. This provides flexibility, cost savings, and vendor independence.

Why Make Your API OpenAI-Compatible?

  • Drop-in Replacement: Switch from OpenAI to your backend with minimal code changes
  • SDK Support: Use official OpenAI client libraries and tools
  • Migration Ease: Migrate existing applications without rewriting
  • Vendor Independence: Avoid lock-in to a single provider
  • Cost Control: Use your own infrastructure or alternative providers

Compatible Endpoints

Implement these OpenAI-compatible endpoints in your API Gateway:

POST /v1/chat/completions

Chat completions endpoint compatible with OpenAI's Chat API.

POST /v1/chat/completions

POST /v1/completions

Legacy text completions endpoint for prompt-in, text-out models.

POST /v1/completions
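For illustration, a request like `{ "model": "gpt-3.5-turbo-instruct", "prompt": "Say hello", "max_tokens": 16 }` should produce a response shaped as below (IDs, timestamps, and token counts are placeholders). Note that each choice carries `text` rather than a `message` object:

```json
{
    "id": "cmpl-123",
    "object": "text_completion",
    "created": 1677652288,
    "model": "gpt-3.5-turbo-instruct",
    "choices": [{
        "index": 0,
        "text": "Hello! How can I help?",
        "logprobs": null,
        "finish_reason": "stop"
    }],
    "usage": {
        "prompt_tokens": 3,
        "completion_tokens": 7,
        "total_tokens": 10
    }
}
```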

POST /v1/embeddings

Text embeddings endpoint for vector generation.

POST /v1/embeddings
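A request like `{ "model": "text-embedding-ada-002", "input": "The quick brown fox" }` returns a list of embedding objects. The vector below is truncated to three dimensions for illustration; real models return hundreds or thousands:

```json
{
    "object": "list",
    "data": [{
        "object": "embedding",
        "index": 0,
        "embedding": [0.0023, -0.0091, 0.0154]
    }],
    "model": "text-embedding-ada-002",
    "usage": {
        "prompt_tokens": 4,
        "total_tokens": 4
    }
}
```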

GET /v1/models

List available models endpoint.

GET /v1/models
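The response is a list envelope; the `id` values should be whatever model names your gateway actually serves (timestamps here are placeholders):

```json
{
    "object": "list",
    "data": [
        {
            "id": "gpt-4",
            "object": "model",
            "created": 1687882411,
            "owned_by": "your-gateway"
        },
        {
            "id": "gpt-3.5-turbo",
            "object": "model",
            "created": 1677610602,
            "owned_by": "your-gateway"
        }
    ]
}
```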

GET /v1/models/{model}

Retrieve specific model information.

GET /v1/models/gpt-4


SDK Implementation

OpenAI SDK Configuration

Configure OpenAI SDK to use your custom API Gateway:

const OpenAI = require('openai');

// Point the SDK at your gateway instead of api.openai.com
const client = new OpenAI({
    apiKey: 'your-api-key',  // key issued by your gateway, not OpenAI
    baseURL: 'https://your-gateway.com/v1'
});

// Use the same API as OpenAI!
async function callChatAPI() {
    const chatCompletion = await client.chat.completions.create({
        model: 'gpt-4',
        messages: [{
            role: 'user',
            content: 'Hello!'
        }]
    });
    
    console.log(chatCompletion.choices[0].message.content);
}
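Streaming works over server-sent events: the gateway emits each incremental delta as a `data:` line containing a `chat.completion.chunk` object, and terminates the stream with a literal `data: [DONE]` marker. A minimal serialization sketch (the `sseChunk` helper is hypothetical):

```javascript
// Hypothetical helper: serializes one incremental delta as an SSE event
// in the chat.completion.chunk shape the OpenAI SDK's streaming client expects.
function sseChunk(id, model, delta, finishReason = null) {
    const payload = {
        id,
        object: 'chat.completion.chunk',
        created: Math.floor(Date.now() / 1000),
        model,
        choices: [{ index: 0, delta, finish_reason: finishReason }]
    };
    return 'data: ' + JSON.stringify(payload) + '\n\n';
}

// One token's worth of output...
const event = sseChunk('chatcmpl-123', 'gpt-4', { content: 'Hel' });
// ...and the marker that ends the stream.
const done = 'data: [DONE]\n\n';
```

With this in place, clients can pass `stream: true` to `client.chat.completions.create()` and consume deltas with `for await (const chunk of stream)`, reading `chunk.choices[0]?.delta?.content`.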

Request/Response Format

Match OpenAI's request and response format:

// Request format (OpenAI-compatible)
{
    model: "gpt-4",
    messages: [
        { role: "system", content: "You are helpful." },
        { role: "user", content: "What is AI?" }
    ],
    temperature: 0.7,
    max_tokens: 100
}

// Response format (OpenAI-compatible)
{
    id: "chatcmpl-123",
    object: "chat.completion",
    created: 1677652288,
    model: "gpt-4",
    choices: [{
        index: 0,
        message: {
            role: "assistant",
            content: "AI is..."
        },
        finish_reason: "stop"
    }],
    usage: {
        prompt_tokens: 20,
        completion_tokens: 50,
        total_tokens: 70
    }
}

Best Practices

  • Match OpenAI's error response format for easy debugging
  • Implement streaming responses for real-time output
  • Support all major OpenAI API parameters
  • Include usage information in responses
  • Maintain backward compatibility as your API evolves
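For the first point above, OpenAI clients expect errors wrapped in an `error` object with `message`, `type`, `param`, and `code` fields, for example:

```json
{
    "error": {
        "message": "The model `gpt-5-ultra` does not exist",
        "type": "invalid_request_error",
        "param": "model",
        "code": "model_not_found"
    }
}
```

Pairing this body with the matching HTTP status (400 for invalid requests, 401 for authentication failures, 429 for rate limits) lets the SDK raise the appropriate exception class on the client.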