🔧 Function Calling Support

LLM Proxy Tool Calling Support

Enable powerful function calling capabilities through your proxy layer. Let AI models execute real-world actions like API calls, database queries, and external integrations with full security and control.

Tool Calling Flow
Multi-Step Process (sketched end to end after this list):

1. Client Request with Tool Definition: Client sends prompt + available tool schemas to the proxy
2. Proxy Forwards to LLM: Request routed to OpenAI/Anthropic with tool definitions
3. LLM Returns Tool Call: Model decides which tool to call and with which parameters
4. Proxy Executes Tool: Tool executed server-side with results returned to the LLM
5. Final Response: LLM generates the final response incorporating tool results
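A minimal sketch of this flow end to end, assuming an OpenAI-style Node.js chat completions client passed in as openai and the handleToolCall helper shown later in the Implementation Example; the model name and message contents are illustrative only.

// Sketch: one full tool-calling round trip through the proxy (OpenAI-style messages)
async function runToolCallingFlow(openai, userPrompt, tools) {
  // Step 1: client request with prompt + tool schemas
  const messages = [{ role: "user", content: userPrompt }];

  // Step 2: proxy forwards the request to the LLM with tool definitions attached
  const first = await openai.chat.completions.create({
    model: "gpt-4-turbo", // illustrative model name
    messages,
    tools
  });

  // Step 3: the model either answers directly or asks for tool calls
  const reply = first.choices[0].message;
  if (!reply.tool_calls) return reply.content;

  // Step 4: proxy executes each requested tool server-side
  messages.push(reply);
  for (const toolCall of reply.tool_calls) {
    messages.push(await handleToolCall(toolCall)); // defined in the Implementation Example below
  }

  // Step 5: send tool results back so the model can produce the final response
  const second = await openai.chat.completions.create({
    model: "gpt-4-turbo",
    messages,
    tools
  });
  return second.choices[0].message.content;
}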

Supported Tool Types

Enable various tool types through your proxy for extended AI capabilities.

🌐 API Calls

Let LLMs call external REST APIs to fetch real-time data, submit forms, or integrate with third-party services (see the sketch after this list).

  • GET/POST/PUT/DELETE requests
  • Custom headers support
  • Response parsing
  • Error handling
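As a concrete illustration, here is a minimal sketch of a proxy-side REST tool; the base URL and the EXTERNAL_API_KEY environment variable are placeholders, not part of any particular proxy.

// Sketch: a REST-backed tool the proxy executes on the model's behalf
async function callExternalApi({ path, method = "GET", body }) {
  const response = await fetch(`https://api.example.com${path}`, { // placeholder base URL
    method,
    headers: {
      "Content-Type": "application/json",
      // Custom headers: credentials stay on the proxy, never in the prompt
      Authorization: `Bearer ${process.env.EXTERNAL_API_KEY}`
    },
    body: body ? JSON.stringify(body) : undefined
  });

  // Error handling: surface HTTP failures to the LLM as structured tool errors
  if (!response.ok) {
    return { error: `Upstream API returned ${response.status}` };
  }

  // Response parsing: hand the LLM plain JSON
  return await response.json();
}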
🗄️ Database Queries

Execute database queries safely through parameterized functions. Read data without exposing connection strings (a sketch follows the list below).

  • SQL query execution
  • NoSQL document queries
  • Parameterized inputs
  • Result formatting
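A minimal sketch of such a parameterized, read-only query tool, assuming the node-postgres (pg) client and an illustrative orders table; only the parameters come from the model, never the SQL itself.

import pg from "pg";

// Connection string stays on the proxy and is never exposed to the model
const pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });

// Sketch: a fixed query the LLM can invoke with parameters but cannot rewrite
async function getOrdersForCustomer({ customerId, limit = 10 }) {
  const result = await pool.query(
    "SELECT id, status, total FROM orders WHERE customer_id = $1 LIMIT $2", // illustrative schema
    [customerId, limit] // parameterized inputs
  );
  return result.rows; // result formatting: plain rows the LLM can summarize
}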
📧 Communication

Send emails, SMS, or push notifications. Let AI agents communicate with users through multiple channels (example sketch below).

  • Email sending
  • SMS integration
  • Webhook triggers
  • Notification queues
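One way such a tool can look, sketched under the assumption that the proxy forwards notifications to a pre-configured webhook (NOTIFY_WEBHOOK_URL is a placeholder) rather than talking to email or SMS providers directly.

// Sketch: a notification tool that enqueues a message via a webhook trigger
async function sendNotification({ channel, recipient, message }) {
  // The webhook URL is proxy configuration, never supplied by the model
  const response = await fetch(process.env.NOTIFY_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ channel, recipient, message })
  });
  return { queued: response.ok };
}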
📊 Data Processing

Process and transform data through custom functions. Calculate metrics, format outputs, or aggregate results (sketch after this list).

  • Custom calculations
  • Data transformations
  • Format conversion
  • Validation logic
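For example, a purely local tool with validation and formatting might be sketched like this; the metric it computes is illustrative.

// Sketch: a local data-processing tool with validation logic and format conversion
function summarizeNumbers({ values, unit = "" }) {
  // Validation: reject empty input or anything that is not a finite number
  if (!Array.isArray(values) || values.length === 0 ||
      values.some(v => typeof v !== "number" || !Number.isFinite(v))) {
    return { error: "values must be a non-empty array of finite numbers" };
  }

  const sum = values.reduce((a, b) => a + b, 0);
  const mean = sum / values.length;

  // Formatting: return strings the LLM can quote directly in its answer
  return {
    count: values.length,
    sum: `${sum}${unit}`,
    mean: `${mean.toFixed(2)}${unit}`
  };
}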
🔍 Search & Retrieval

Enable web search, document retrieval, or knowledge base queries for up-to-date information access (see the sketch after this list).

  • Web search integration
  • Document retrieval
  • RAG support
  • Vector search
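A minimal retrieval sketch, assuming the proxy already holds pre-computed document embeddings and an embed(text) helper that returns a vector; a real vector database would replace the in-memory ranking shown here.

// Sketch: rank pre-embedded documents by cosine similarity to the query
function cosine(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

async function searchDocuments({ query, topK = 3 }, { embed, documents }) {
  const queryVector = await embed(query); // assumed embedding helper
  return documents
    .map(doc => ({
      title: doc.title,
      snippet: doc.text.slice(0, 200),
      score: cosine(queryVector, doc.embedding)
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}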

Automation

Trigger automated workflows, schedule tasks, or execute business logic through custom tool functions (sketch after the list below).

  • Workflow triggers
  • Task scheduling
  • Business rules
  • System integration
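Sketched below is one possible shape for a workflow-trigger tool; the allow-listed workflow ids and in-memory queue are illustrative stand-ins for a real scheduler or job system.

// Sketch: the model requests a workflow run; the proxy only enqueues it
const jobQueue = [];

function triggerWorkflow({ workflowId, payload, runAt }) {
  // Business rule: only allow-listed workflows may be triggered by the model
  const allowedWorkflows = ["send_invoice", "sync_crm"]; // illustrative ids
  if (!allowedWorkflows.includes(workflowId)) {
    return { error: `Workflow ${workflowId} is not permitted` };
  }

  // Task scheduling: record when the job should run, defaulting to now
  jobQueue.push({ workflowId, payload, runAt: runAt ?? new Date().toISOString() });
  return { queued: true, position: jobQueue.length };
}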

Provider Support

Unified tool calling interface across major LLM providers; a normalization sketch follows the provider list below.

OpenAI

Function Calling API

  • ✓ Parallel function calls
  • ✓ Structured outputs
  • ✓ JSON Schema validation
  • ✓ GPT-4 Turbo support
Anthropic

Tool Use API

  • ✓ Extended thinking
  • ✓ Multi-tool selection
  • ✓ Claude 3.5 support
  • ✓ Complex tool chains
Google

Function Calling

  • ✓ Gemini integration
  • ✓ Parameter validation
  • ✓ Multiple tools
  • ✓ Vertex AI support
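A sketch of how a proxy can keep one internal tool definition and translate it per provider; the wrapper field names match the OpenAI tools format and Anthropic's input_schema convention, while the internal shape is an assumption of this example.

// Internal definition written once, translated at request time
const weatherTool = {
  name: "get_weather",
  description: "Get current weather for a location",
  schema: {
    type: "object",
    properties: { location: { type: "string" } },
    required: ["location"]
  }
};

// OpenAI expects tools wrapped in { type: "function", function: {...} }
function toOpenAiTool(tool) {
  return {
    type: "function",
    function: { name: tool.name, description: tool.description, parameters: tool.schema }
  };
}

// Anthropic's Messages API expects the JSON Schema under input_schema
function toAnthropicTool(tool) {
  return { name: tool.name, description: tool.description, input_schema: tool.schema };
}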

Implementation Example

How to implement tool calling through your proxy layer.

Tool Definition Schema (JSON Schema)
// Define available tools for the LLM
const tools = [
  {
    type: "function",
    function: {
      name: "get_weather",
      description: "Get current weather for a location",
      parameters: {
        type: "object",
        properties: {
          location: {
            type: "string",
            description: "City name, e.g. San Francisco"
          },
          unit: {
            type: "string",
            enum: ["celsius", "fahrenheit"]
          }
        },
        required: ["location"]
      }
    }
  }
];

// Proxy handles tool execution
async function executeToolCall(toolName, args) {
  if (toolName === 'get_weather') {
    // Execute weather API call
    return await fetchWeather(args.location, args.unit);
  }
  // Reject tools that are not explicitly supported
  throw new Error(`Unknown tool: ${toolName}`);
}
Proxy Tool Execution Flow (Node.js)
async function handleToolCall(toolCall) {
  const { name, arguments: args } = toolCall.function;
  
  // Validate tool name against allowed list
  if (!isAllowedTool(name)) {
    throw new Error(`Tool ${name} not permitted`);
  }
  
  // Execute tool with rate limiting
  await checkRateLimit(toolCall.user_id);
  
  // Run tool function
  const result = await toolRegistry.execute(name, JSON.parse(args));
  
  // Return result for LLM to process
  return {
    tool_call_id: toolCall.id,
    role: "tool",
    content: JSON.stringify(result)
  };
}
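The isAllowedTool and toolRegistry helpers above are not defined in this example; a minimal registry, assuming tools are plain async functions such as the fetchWeather call from the schema example, could look like this.

// Sketch: a minimal registry backing isAllowedTool and toolRegistry.execute
const toolRegistry = {
  tools: new Map([
    // Each entry maps a tool name to a server-side function
    ["get_weather", (args) => fetchWeather(args.location, args.unit)]
  ]),
  async execute(name, args) {
    return await this.tools.get(name)(args);
  }
};

function isAllowedTool(name) {
  return toolRegistry.tools.has(name);
}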
  • 50+ Tool Types
  • 3 LLM Providers
  • <100ms Tool Latency
  • Auto Validation

Implementation Benefits

Key advantages of enabling tool calling through your proxy.

🔐 Secure Execution

All tool calls execute server-side, protecting your API keys, database credentials, and internal systems from exposure.

📊 Usage Tracking

Monitor which tools are called, how often, and by whom. Track costs and optimize tool usage patterns.
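A small sketch of how the proxy might record each call; the log shape and console output are placeholders for whatever metrics pipeline is in use.

// Sketch: wrap tool execution with per-call usage logging
async function executeWithTracking(userId, name, args, execute) {
  const startedAt = Date.now();
  try {
    return await execute(name, args);
  } finally {
    // Swap console.log for a real metrics or billing pipeline
    console.log(JSON.stringify({
      event: "tool_call",
      userId,
      tool: name,
      durationMs: Date.now() - startedAt
    }));
  }
}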

⚖️ Rate Limiting

Apply rate limits per tool, per user, or globally. Prevent abuse and manage costs across tool executions.
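As a sketch of what per-user, per-tool limiting might look like inside the proxy (a naive in-memory fixed window; the limit and window values are arbitrary):

// Sketch: naive fixed-window rate limiting keyed by user and tool
const usageWindows = new Map(); // key -> { count, resetAt }

function checkToolRateLimit(userId, toolName, limit = 30, windowMs = 60000) {
  const key = `${userId}:${toolName}`;
  const now = Date.now();
  const entry = usageWindows.get(key);

  // Start a fresh window if none exists or the old one has expired
  if (!entry || now >= entry.resetAt) {
    usageWindows.set(key, { count: 1, resetAt: now + windowMs });
    return;
  }

  if (entry.count >= limit) {
    throw new Error(`Rate limit exceeded for tool ${toolName}`);
  }
  entry.count += 1;
}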

🔄 Provider Abstraction

Write tools once, use with any LLM provider. Unified interface for OpenAI, Anthropic, and Google function calling.

Enable Tool Calling Today

Extend your LLM proxy with powerful tool calling capabilities. Let AI execute real-world actions with full security and control.