AI Confidence Score
An AI confidence score is a metric (typically 0-1 or 0-100%) that indicates how certain the AI system is about the accuracy and relevance of its generated response. In customer support, confidence scores are critical for determining when AI should handle a query autonomously versus when it should escalate to a human agent.
How confidence is measured
Confidence in RAG systems is typically derived from the relevance scores of retrieved documents. If the vector search finds highly similar documents (high cosine similarity scores), the system is more confident the answer will be grounded and accurate. Low similarity scores suggest the knowledge base doesn't contain relevant information for that query.
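The idea above can be sketched in a few lines. This is a minimal illustration, not any particular vendor's implementation: it assumes query and documents are already embedded as vectors, and takes the best cosine similarity among retrieved documents as the confidence signal.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors (range -1 to 1)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieval_confidence(query_vec: list[float],
                         doc_vecs: list[list[float]]) -> float:
    """Use the best similarity among retrieved docs as a confidence proxy."""
    return max(cosine_similarity(query_vec, d) for d in doc_vecs)
```

Production systems often refine this proxy (e.g. averaging the top-k scores, or combining retrieval similarity with the model's own token probabilities), but the top-match similarity is the common starting point.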
Threshold-based escalation
Support AI systems set a confidence threshold — answers above the threshold are served to customers, while answers below it trigger escalation to human agents. Setting this threshold requires balancing automation rate (higher threshold = fewer AI answers = more human tickets) against answer quality (lower threshold = more AI answers = risk of inaccurate responses).
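A threshold check like this is straightforward to express; the sketch below uses an illustrative threshold of 0.75 (a hypothetical value — real deployments tune it empirically against automation-rate and quality targets).

```python
CONFIDENCE_THRESHOLD = 0.75  # illustrative value; tune per deployment

def route_response(answer: str, confidence: float,
                   threshold: float = CONFIDENCE_THRESHOLD) -> dict:
    """Serve the AI answer if confident enough, else flag for escalation."""
    if confidence >= threshold:
        return {"action": "serve", "answer": answer}
    return {
        "action": "escalate",
        "reason": f"confidence {confidence:.2f} below threshold {threshold}",
    }
```

Raising `CONFIDENCE_THRESHOLD` shifts more queries to human agents; lowering it increases automation at the cost of more marginal AI answers reaching customers.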
EchoSDK's confidence handling
EchoSDK evaluates confidence based on the relevance of retrieved documentation to the customer's query. When confidence falls below the threshold, the system creates a support ticket with full context and routes it to your Slack channel, ensuring no customer gets an unreliable AI response.
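The flow described above might look roughly like the following. All function names here (`create_ticket`, `post_to_slack`) are hypothetical stubs for illustration, not EchoSDK's actual API.

```python
def create_ticket(subject: str, context: dict) -> dict:
    # Stub: a real system would call a help-desk API here.
    return {"id": "TICKET-1", "subject": subject, "context": context}

def post_to_slack(channel: str, ticket_id: str) -> str:
    # Stub: a real system would call Slack's chat.postMessage here.
    return f"Posted {ticket_id} to {channel}"

def handle_low_confidence(query: str, retrieved_docs: list[str],
                          draft_answer: str) -> dict:
    """Escalate a low-confidence query with full context attached."""
    ticket = create_ticket(
        subject=f"Escalated: {query[:60]}",
        context={
            "query": query,
            "retrieved_docs": retrieved_docs,
            "draft_answer": draft_answer,  # kept for the agent, not the customer
        },
    )
    post_to_slack(channel="#support-escalations", ticket_id=ticket["id"])
    return ticket
```

The key design point is that the customer never sees the low-confidence draft; the agent receives it alongside the retrieved context so they can respond without re-researching the query.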
Related terms
Ticket Escalation
The process of routing a customer support query from AI or first-line support to a human agent when the issue cannot be resolved automatically.
Retrieval-Augmented Generation (RAG)
An AI technique that combines a language model with a retrieval system to generate answers grounded in specific documents or data sources.
AI Hallucination
When an AI model generates a response that sounds plausible but is factually incorrect or fabricated, not grounded in actual data.