Vector Embeddings
Vector embeddings are dense numerical representations (arrays of floating-point numbers) that encode the semantic meaning of text. When text is converted into an embedding, similar concepts end up close together in the vector space — allowing systems to find relevant content by measuring mathematical distance rather than matching keywords.
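The "distance" idea above can be made concrete with cosine similarity, the most common closeness measure for embeddings. This is a minimal sketch using tiny made-up 4-dimensional vectors; real embeddings have hundreds or thousands of dimensions and come from a trained model:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: 1.0 = pointing the same way, 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for three pieces of text (values invented for illustration).
password_reset   = [0.9, 0.1, 0.0, 0.3]
account_security = [0.8, 0.2, 0.1, 0.4]
shipping_policy  = [0.1, 0.9, 0.8, 0.0]

print(cosine_similarity(password_reset, account_security))  # high: related concepts
print(cosine_similarity(password_reset, shipping_policy))   # low: unrelated concepts
```

Texts about related concepts score close to 1.0; unrelated texts score near 0.0, regardless of which words they use.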
Why embeddings matter for support AI
The question "how do I change my password?" and a doc titled "Account Security Settings" share almost no words but carry closely related meaning. Keyword search would miss the match. Vector embeddings capture the semantic relationship, so the AI retrieves the right documentation however the customer phrases the question.
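You can see the keyword-search failure directly: a naive word-overlap matcher finds zero shared words between that question and that doc title. A quick toy check:

```python
def keyword_overlap(query, doc_title):
    """Naive keyword matching: count words the query and title share."""
    q = set(query.lower().replace("?", "").split())
    d = set(doc_title.lower().split())
    return len(q & d)

# The question and the relevant doc share no words at all...
print(keyword_overlap("how do I change my password", "Account Security Settings"))  # 0

# ...while a near-duplicate phrasing matches trivially.
print(keyword_overlap("reset password", "Password Reset Guide"))  # 2
```

An embedding-based comparison would still place the question near the "Account Security Settings" doc, because their meanings are close even though their words are not.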
How vector search works
Your documentation is split into chunks, each converted into a vector embedding. When a query comes in, it's also converted to an embedding. The system then performs a nearest-neighbor search in the vector space to find the most semantically similar document chunks. These chunks become the context for RAG-powered answer generation.
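The query flow above can be sketched end to end. This uses a deterministic bag-of-words "embedder" over a tiny fixed vocabulary as a stand-in for a real embedding model, and a brute-force nearest-neighbor scan; production systems use a learned model and an approximate index to stay fast at scale:

```python
import math

# Toy fixed vocabulary; a real embedding model maps text into hundreds of
# learned dimensions. This stand-in just keeps the example deterministic.
VOCAB = ["password", "reset", "security", "refund", "account"]

def embed(text):
    """Bag-of-words vector over VOCAB (stand-in for a real embedding model)."""
    words = [w.strip(".,?").lower() for w in text.split()]
    return [float(words.count(v)) for v in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k_chunks(query, chunks, k=2):
    """Brute-force nearest-neighbor search over chunk embeddings."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "To reset your password, open Account Security Settings.",
    "Refunds are processed within 5 business days.",
    "Enable two-factor authentication under Security Settings.",
]
print(top_k_chunks("password reset help", chunks, k=1))
```

The top-ranked chunks are exactly what gets handed to the LLM as context in a RAG pipeline.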
EchoSDK's vector pipeline
EchoSDK uses Firestore Vector Search to store and query embeddings. When you ingest documentation (via URL or text), EchoSDK automatically chunks the content, generates embeddings, and indexes them. At query time, the system finds the top matching chunks in milliseconds and passes them to the LLM for answer generation.
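EchoSDK's internal chunker isn't documented here, but the "split into chunks" step of ingestion can be illustrated with a simple fixed-size splitter. The function below is a hypothetical stand-in, not EchoSDK's API; real pipelines often split on headings or sentences and keep some overlap between chunks so context isn't cut mid-thought:

```python
def chunk_text(text, max_words=40, overlap=10):
    """Split text into word-count chunks, with each chunk repeating the
    last `overlap` words of the previous one to preserve context."""
    words = text.split()
    step = max_words - overlap
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), step)]

doc = " ".join(f"word{i}" for i in range(100))  # stand-in for a real document
for chunk in chunk_text(doc):
    print(len(chunk.split()), "words")
```

Each resulting chunk is then embedded and written to the vector index, so a single long document can match a query on just the one passage that's actually relevant.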
Related terms
Retrieval-Augmented Generation (RAG)
An AI technique that combines a language model with a retrieval system to generate answers grounded in specific documents or data sources.
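The "grounded in specific documents" part of RAG usually comes down to prompt assembly: retrieved chunks are placed into the prompt as context. A minimal sketch, with an illustrative prompt format that is not EchoSDK's actual one:

```python
def build_rag_prompt(question, retrieved_chunks):
    """Assemble a grounded prompt from retrieved document chunks."""
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(retrieved_chunks))
    return (
        "Answer using only the context below. Cite sources like [1].\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "How do I reset my password?",
    ["To reset your password, open Account Security Settings."],
)
print(prompt)
```

The LLM then answers from the supplied context rather than from its general training data, which is what keeps RAG answers tied to your documentation.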
Semantic Search
A search technique that understands the meaning and context of a query rather than just matching keywords, using vector embeddings to find conceptually related results.
Knowledge Base
A structured collection of documentation, FAQs, and guides that serves as the source of truth for customer support — both for human agents and AI systems.