Output Quality Assurance
LLM API Gateway: Response Validation

Validate LLM responses for quality, safety, and format compliance before returning them to clients, so that every output meets your standards.

Validation Checks

Comprehensive output validation for LLM responses

📋 Format Validation: Verify that the response format matches the expected schema (JSON, XML, Markdown, etc.). Accuracy: 99.8% · Latency: <2ms

🔒 Safety Filtering: Detect and filter harmful, toxic, or inappropriate content in outputs. Detection: 99.5% · False positives: <0.1%

🎯 Schema Compliance: Validate response structure against OpenAPI or JSON Schema definitions. Coverage: 100% · Errors found: 12%

📏 Length Constraints: Enforce min/max token limits, character counts, and array sizes. Truncated: 8% · Expanded: 2%

🔐 PII Detection: Identify and redact personally identifiable information in outputs (a minimal sketch follows this list). Detected: 15% · Redacted: 100%

Quality Scoring: Score response quality using relevance, coherence, and accuracy metrics. Avg score: 4.2/5 · Rejection: 5%
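As a rough illustration of the PII stage, here is a minimal regex-based redactor in Python. The patterns and the redact_pii helper are assumptions for this sketch; production systems pair regexes like these with an NER model (see the table at the end of this section).

import re

# Hypothetical patterns for this sketch; real deployments need broader,
# locale-aware coverage plus NER for names and addresses.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text):
    """Replace matched PII spans with [REDACTED:<type>] placeholders."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[REDACTED:{label}]", text)
    return text, found

# redact_pii("Reach me at jane@example.com")
# -> ("Reach me at [REDACTED:email]", ["email"])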

Validation Flow

Multi-stage validation pipeline for LLM responses

📥 Receive (LLM response) → 📋 Parse (extract content) → 🔒 Safety check (content filter) → 🎯 Schema (validate structure) → Approve (return to client)
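In code, the pipeline is a fail-fast chain: each stage either passes the response along or rejects it, reporting which stage failed. A minimal Python sketch with stand-in stage functions (the real safety and schema checks would delegate to validators like the examples below):

import json

def is_safe(text):
    """Stand-in safety check; a real gateway calls a moderation service."""
    blocked = ("hate", "violence")  # placeholder terms for this sketch
    return not any(term in text.lower() for term in blocked)

def check_schema(data, schema):
    """Stand-in schema check; see the jsonschema example below."""
    return all(key in data for key in schema.get("required", []))

def validate_pipeline(raw_response, schema):
    """Receive -> parse -> safety check -> schema -> approve."""
    try:
        data = json.loads(raw_response)      # parse stage
    except json.JSONDecodeError as e:
        return {"approved": False, "stage": "parse", "error": str(e)}

    if not is_safe(raw_response):            # safety stage
        return {"approved": False, "stage": "safety"}

    if not check_schema(data, schema):       # schema stage
        return {"approved": False, "stage": "schema"}

    return {"approved": True, "data": data}  # approve: return to client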

Implementation Examples

Code samples for response validation

Schema Validator (Python)
import json

from jsonschema import validate, ValidationError

def validate_response(response, schema):
    """Validate an LLM response against a JSON Schema."""
    try:
        # Parse the response if it arrived as a raw string
        if isinstance(response, str):
            data = json.loads(response)
        else:
            data = response

        # Validate the parsed payload against the schema
        validate(instance=data, schema=schema)

        return {
            "valid": True,
            "data": data
        }

    except json.JSONDecodeError as e:
        return {
            "valid": False,
            "error": f"invalid JSON: {e}",
            "path": []
        }

    except ValidationError as e:
        return {
            "valid": False,
            "error": e.message,
            "path": list(e.path)
        }
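For example, validating a structured answer against a simple illustrative schema:

schema = {
    "type": "object",
    "properties": {"answer": {"type": "string"}},
    "required": ["answer"],
}

result = validate_response('{"answer": "42"}', schema)
# -> {"valid": True, "data": {"answer": "42"}}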
🔒 Safety Filter (JavaScript)
async function filterSafety(content) {
  // Categories of harmful content to screen for
  const categories = [
    'hate_speech',
    'violence',
    'sexual_content',
    'self_harm'
  ];

  // Run all moderation checks in parallel; moderationAPI is the
  // gateway's moderation client
  const results = await Promise.all(
    categories.map(c => moderationAPI.check(content, c))
  );

  // Collect the categories that were flagged
  const reasons = categories.filter((_, i) => results[i].flagged);

  if (reasons.length > 0) {
    return { safe: false, reasons };
  }

  return { safe: true, content };
}
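Running the checks with Promise.all keeps the added latency close to that of the slowest single category rather than their sum, and the function fails closed: any flagged category blocks the response and reports why. moderationAPI here stands in for whatever moderation client the gateway uses.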

Response Validation Types

Configure validation for different response types

Validation Type | Description                             | Check Method  | Priority
JSON Structure  | Validate JSON syntax and structure      | Schema Match  | Critical
Content Safety  | Filter harmful or inappropriate content | ML Classifier | Critical
PII Detection   | Identify and mask personal data         | Regex + NER   | High
Token Limits    | Enforce min/max token counts            | Counter       | Medium
Format Check    | Verify Markdown, code blocks, etc.      | Parser        | Medium
Quality Score   | Rate response quality                   | ML Model      | Low
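In practice these checks are driven by configuration rather than hard-coded. A minimal sketch in Python, with hypothetical field and rule names mirroring the table above:

from dataclasses import dataclass

@dataclass
class ValidationRule:
    name: str
    check_method: str
    priority: str       # "critical" rules block the response on failure
    enabled: bool = True

# Hypothetical rule set mirroring the table above
VALIDATION_RULES = [
    ValidationRule("json_structure", "schema_match", "critical"),
    ValidationRule("content_safety", "ml_classifier", "critical"),
    ValidationRule("pii_detection", "regex_ner", "high"),
    ValidationRule("token_limits", "counter", "medium"),
    ValidationRule("format_check", "parser", "medium"),
    ValidationRule("quality_score", "ml_model", "low"),
]

Critical rules would reject the response outright, while lower-priority failures might only be logged or reflected in a quality score.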