Validate LLM responses for quality, safety, and format compliance before returning them to clients, so that every output meets your standards.
A comprehensive, multi-stage validation pipeline for LLM responses: schema validation for format compliance, safety filtering for harmful content, and per-type configuration.
Code samples for response validation
```python
import json

from jsonschema import validate, ValidationError

def validate_response(response, schema):
    """Validate an LLM response against a JSON schema."""
    try:
        # Parse the response if it arrived as a raw string
        if isinstance(response, str):
            data = json.loads(response)
        else:
            data = response

        # Validate the parsed payload against the schema
        validate(instance=data, schema=schema)
        return {"valid": True, "data": data}
    except json.JSONDecodeError as e:
        return {"valid": False, "error": f"Response is not valid JSON: {e}", "path": []}
    except ValidationError as e:
        return {"valid": False, "error": e.message, "path": list(e.path)}
```
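For example, with a simple schema (the schema and values below are illustrative, not part of any library):

```python
# Hypothetical schema for a sentiment-analysis response
schema = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string", "enum": ["positive", "negative", "neutral"]},
        "confidence": {"type": "number", "minimum": 0, "maximum": 1},
    },
    "required": ["sentiment", "confidence"],
}

result = validate_response('{"sentiment": "positive", "confidence": 0.93}', schema)
print(result["valid"])  # True
```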
```javascript
async function filterSafety(content) {
  // Categories of harmful content to screen for
  const checks = [
    'hate_speech',
    'violence',
    'sexual_content',
    'self_harm'
  ];

  // Run all moderation checks in parallel
  const results = await Promise.all(
    checks.map(c => moderationAPI.check(content, c))
  );

  // Reject the response if any category was flagged
  const flagged = results.filter(r => r.flagged);
  if (flagged.length > 0) {
    return { safe: false, reasons: flagged.map(f => f.category) };
  }

  return { safe: true, content };
}
```
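A minimal sketch of how the two stages might be chained, reusing validate_response from above; check_safety here is a hypothetical async Python counterpart to the filterSafety example, not an existing API:

```python
import json

async def validate_pipeline(response, schema):
    """Run schema validation, then safety filtering; fail fast on the first error."""
    # Stage 1: format compliance (validate_response is defined above)
    result = validate_response(response, schema)
    if not result["valid"]:
        return {"ok": False, "stage": "schema", "error": result["error"]}

    # Stage 2: safety filtering; check_safety is a hypothetical Python
    # counterpart to the filterSafety example above
    text = response if isinstance(response, str) else json.dumps(response)
    safety = await check_safety(text)
    if not safety["safe"]:
        return {"ok": False, "stage": "safety", "reasons": safety["reasons"]}

    return {"ok": True, "data": result["data"]}
```

Failing fast keeps the cheap structural check ahead of the slower moderation calls, so malformed responses never reach the moderation API.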
Configure validation for different response types
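One hypothetical way to wire this up is a registry keyed by response type; the type names, schemas, and check lists below are illustrative, not prescribed:

```python
# Hypothetical per-type validation registry
VALIDATION_CONFIG = {
    "chat": {
        "schema": {
            "type": "object",
            "properties": {"reply": {"type": "string"}},
            "required": ["reply"],
        },
        "safety_checks": ["hate_speech", "violence", "sexual_content", "self_harm"],
    },
    "extraction": {
        "schema": {
            "type": "object",
            "properties": {"entities": {"type": "array", "items": {"type": "string"}}},
            "required": ["entities"],
        },
        # Structured extraction output may warrant a lighter moderation pass
        "safety_checks": [],
    },
}

def get_validation_config(response_type):
    """Look up validation settings, failing loudly on unknown types."""
    try:
        return VALIDATION_CONFIG[response_type]
    except KeyError:
        raise ValueError(f"No validation configured for response type: {response_type}")
```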