Deploy API gateway proxies as sidecar containers co-located with your applications. Achieve ultra-low latency through local routing, shared memory caching, and simplified networking without centralized bottlenecks.
Deploy API gateways alongside your applications for optimal performance and simplified operations.
Zero external network hops between application and gateway: all routing decisions happen locally within the same pod, and localhost communication delivers sub-millisecond latency for routing and caching operations.
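The local routing decision a sidecar makes can be sketched as a longest-prefix match of the request path against a route table. The routes and upstream addresses below are hypothetical examples, not any real gateway's configuration:

```python
# Minimal sketch of a sidecar's local routing decision: longest-prefix
# match of the request path against an in-memory route table.
# Route paths and upstream addresses are illustrative assumptions.
ROUTES = {
    "/api/orders": "localhost:3000",   # main application container
    "/api/cache":  "localhost:8080",   # handled by the gateway itself
    "/":           "localhost:3000",   # default upstream
}

def resolve_upstream(path: str) -> str:
    """Return the upstream for the longest matching route prefix."""
    best = max((p for p in ROUTES if path.startswith(p)), key=len)
    return ROUTES[best]

print(resolve_upstream("/api/orders/42"))  # localhost:3000
```

Because the table lives in the sidecar's memory and the upstream is reached over localhost, no external network round trip is involved in the decision.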
Leverage a shared volume between application and sidecar (memory-backed via `emptyDir` with `medium: Memory` if needed) for near-instant cache access. Cached responses are served without a network call, and cache invalidation and updates propagate automatically through the shared filesystem.
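A file-based cache over the shared volume can be sketched like this. The directory layout, file naming, JSON entry format, and TTL handling are all illustrative assumptions; a real sidecar would define its own scheme:

```python
import hashlib
import json
import tempfile
import time
from pathlib import Path

# Sketch of a cache shared between sidecar and application through a
# common volume (e.g. the emptyDir mounted at /var/cache). A temp
# directory stands in for /var/cache/gateway so the sketch is runnable.
CACHE_DIR = Path(tempfile.mkdtemp())

def cache_path(url: str) -> Path:
    """One file per cached URL, named by a hash of the URL."""
    return CACHE_DIR / (hashlib.sha256(url.encode()).hexdigest() + ".json")

def cache_put(url: str, body: str, ttl: float = 60.0) -> None:
    entry = {"expires": time.time() + ttl, "body": body}
    cache_path(url).write_text(json.dumps(entry))

def cache_get(url: str):
    path = cache_path(url)
    if not path.exists():
        return None
    entry = json.loads(path.read_text())
    if time.time() > entry["expires"]:
        path.unlink()  # expired entries are removed on read
        return None
    return entry["body"]

cache_put("/api/orders/42", '{"status": "shipped"}')
print(cache_get("/api/orders/42"))  # {"status": "shipped"}
```

Either container can read or delete entries, which is what allows invalidation to propagate through the filesystem rather than over the network.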
Network isolation between application containers and external networks. All outbound traffic flows through the sidecar proxy with centralized security policies. Secrets and credentials managed within the pod boundary.
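One common way to force all outbound traffic through the sidecar, used by service meshes such as Istio, is an init container that installs an iptables REDIRECT rule. The image name and proxy port below are assumptions for illustration:

```yaml
# Illustrative init container that redirects all outbound TCP traffic
# to the sidecar proxy's port. The image and port (8080) are assumed;
# production setups (e.g. Istio) also exempt the proxy's own UID so
# the proxy's egress traffic is not redirected back to itself.
initContainers:
- name: traffic-redirect
  image: iptables-init:latest
  securityContext:
    capabilities:
      add: ["NET_ADMIN"]
  command:
  - sh
  - -c
  - iptables -t nat -A OUTPUT -p tcp -j REDIRECT --to-port 8080
```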
Deploy gateway capabilities alongside applications without separate infrastructure. Sidecar configuration managed through Kubernetes manifests. Automatic scaling of gateway capacity with application instances.
Update gateway configuration without touching application code. Roll out new gateway features independently of application deployments. Version-specific gateway policies per deployment.
Collect metrics and logs from both application and gateway in one place. Correlate application performance with gateway behavior easily. Local tracing without distributed sampling complexity.
The sidecar pattern deploys the API gateway proxy as a second container within the same Kubernetes pod as your application. Both containers share the same network namespace, enabling communication over localhost.
This architecture eliminates the need for a centralized gateway cluster, reducing latency and simplifying infrastructure. Each application instance has its own dedicated gateway proxy, ensuring consistent performance regardless of overall system load.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  # Main application container
  - name: application
    image: myapp:latest
    ports:
    - containerPort: 3000
    volumeMounts:
    - name: shared-cache
      mountPath: /var/cache
  # Sidecar gateway proxy
  - name: gateway-proxy
    image: gateway-proxy:v2
    ports:
    - containerPort: 8080
    env:
    - name: CACHE_DIR
      value: /var/cache/gateway
    volumeMounts:
    - name: shared-cache
      mountPath: /var/cache
  volumes:
  - name: shared-cache
    emptyDir: {}
```
Ideal scenarios for co-located gateway deployment.
Ultra-low latency requirements demand zero network hops. Sidecar gateway ensures sub-millisecond routing for time-critical trading operations.
Deploy applications at the edge with local gateway capabilities. No dependency on centralized gateway infrastructure in remote locations.
Isolate tenant traffic at the pod level with dedicated sidecar proxies. Per-tenant rate limiting and caching without shared resource contention.
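Per-tenant rate limiting inside a sidecar can be sketched as a token bucket keyed by tenant ID. Capacity, refill rate, and tenant names are illustrative values, and each pod's sidecar keeps its own buckets with no shared state:

```python
import time
from collections import defaultdict

# Sketch of per-tenant rate limiting as each sidecar would apply it:
# one token bucket per tenant ID, local to the pod, so tenants never
# contend for a shared limiter. Limits here are illustrative.
class TenantRateLimiter:
    def __init__(self, capacity: float = 10, refill_per_sec: float = 5):
        self.capacity = capacity
        self.refill = refill_per_sec
        self.tokens = defaultdict(lambda: capacity)
        self.stamp = defaultdict(time.monotonic)

    def allow(self, tenant: str) -> bool:
        """Spend one token for this tenant if available."""
        now = time.monotonic()
        elapsed = now - self.stamp[tenant]
        self.stamp[tenant] = now
        self.tokens[tenant] = min(self.capacity,
                                  self.tokens[tenant] + elapsed * self.refill)
        if self.tokens[tenant] >= 1:
            self.tokens[tenant] -= 1
            return True
        return False

limiter = TenantRateLimiter(capacity=2, refill_per_sec=0)
print([limiter.allow("acme") for _ in range(3)])  # [True, True, False]
print(limiter.allow("globex"))  # True: each tenant has its own bucket
```

Because the limiter lives in the sidecar, exhausting one tenant's bucket in one pod has no effect on any other tenant or pod.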
Each microservice instance includes its own gateway. Simplified service mesh with consistent routing policies across all services.
Add gateway capabilities to legacy applications without code changes. Sidecar handles routing, caching, and security transparently.
Maintain strict data isolation with sidecar proxies. All outbound traffic audited and controlled at the pod level for compliance.
Related deployment patterns and architecture guides.
Long-duration streaming with sidecar deployment pattern.
Service mesh integration with sidecar proxies.
Microservices architecture with sidecar deployment.
API mesh topology with sidecar components.