Build Scalable Search Systems with Haystack and Milvus
Milvus is a vector database optimized for scalable similarity search.
Most AI-powered applications today rely on vector embeddings, which let us represent data of any type in a uniform, high-dimensional numerical format. However, common vector operations, such as nearest-neighbor search over millions of embeddings, require a lot of computational power, making it difficult to build systems that scale well. Enter Milvus: a database made specifically for vector embeddings.
Whereas traditional databases operate on raw data representations, Milvus goes a step further. Its vector-native architecture is built for high-efficiency vector operations, which allows Milvus to stay fast even on massive datasets. The database comes with all the features required to harness the power of vector embeddings, including vector indexing and vector arithmetic. But what are vectors, and why are they so popular for data representation?
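To give a concrete picture of what vector-native means in practice, here is a minimal sketch of creating a collection and a vector index with the pymilvus client. It assumes a Milvus 2.x server reachable on localhost:19530 and 768-dimensional embeddings; the collection name and index parameters are illustrative, not prescriptive.

```python
# Minimal sketch, assuming a Milvus 2.x server on localhost:19530
# and 768-dimensional embeddings (e.g. from a BERT-base model).
from pymilvus import (
    connections, FieldSchema, CollectionSchema, DataType, Collection,
)

connections.connect(host="localhost", port="19530")

# Define a collection: an auto-generated primary key plus a vector field.
fields = [
    FieldSchema(name="id", dtype=DataType.INT64, is_primary=True, auto_id=True),
    FieldSchema(name="embedding", dtype=DataType.FLOAT_VECTOR, dim=768),
]
collection = Collection("documents", CollectionSchema(fields))

# Build a vector index so that similarity search stays fast at scale.
collection.create_index(
    field_name="embedding",
    index_params={
        "index_type": "IVF_FLAT",   # inverted-file index over vector clusters
        "metric_type": "IP",        # inner product as the similarity metric
        "params": {"nlist": 128},   # number of clusters to partition into
    },
)
```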
A vector is a sequence of numbers, or dimensions, that encodes the semantics of the original raw data. Here’s the fun part: when we encode raw data into vectors, similar data points end up with vectors of similar values. And unlike raw data, vectors can be easily compared with one another. This allows us to perform content-based search, make recommendations, and find similar data points, all with staggering accuracy!
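As a toy illustration of what comparing vectors means, the snippet below computes cosine similarity with plain NumPy. The four-dimensional vectors are made up for readability; real embeddings have hundreds of dimensions.

```python
# Toy illustration (plain NumPy, not Milvus): cosine similarity scores
# how closely two embedding vectors point in the same direction.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-dimensional embeddings, invented for this example.
cat    = np.array([0.9, 0.1, 0.3, 0.0])
kitten = np.array([0.8, 0.2, 0.35, 0.05])
car    = np.array([0.0, 0.9, 0.1, 0.7])

print(cosine_similarity(cat, kitten))  # ~0.99: semantically similar
print(cosine_similarity(cat, car))     # ~0.11: semantically unrelated
```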
Nowadays, we can represent just about anything as a vector: images, audio, video, language, geospatial information, you name it. But until recently, analyzing unstructured data was difficult, because unstructured data does not necessarily adhere to any predefined format or follow common patterns. This is why the recent wave of vector-optimized databases like Milvus is so exciting.
Natural language processing (NLP) in particular has made great strides thanks to vector embeddings, which now enable NLP systems to generate and answer questions, perform semantic search, rank documents, translate, and summarize text.
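For instance, a semantic search pipeline over Milvus takes only a few lines in Haystack. The sketch below assumes the Haystack 1.x API (import paths and class names have shifted between versions) and a Milvus instance running locally; the embedding model name is just one example.

```python
# Sketch of a semantic search pipeline, assuming the Haystack 1.x API
# and a Milvus instance running locally.
from haystack.document_stores import MilvusDocumentStore
from haystack.nodes import EmbeddingRetriever
from haystack.pipelines import DocumentSearchPipeline

document_store = MilvusDocumentStore()

# A Sentence-BERT model turns documents and queries into dense vectors.
retriever = EmbeddingRetriever(
    document_store=document_store,
    embedding_model="sentence-transformers/multi-qa-mpnet-base-dot-v1",
    model_format="sentence_transformers",
)

document_store.write_documents([{"content": "Milvus is a vector database."}])
document_store.update_embeddings(retriever)  # embed and store all documents

pipeline = DocumentSearchPipeline(retriever)
results = pipeline.run(query="What is Milvus?", params={"Retriever": {"top_k": 3}})
```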
Frequently asked questions
Do I need a GPU to use Milvus?

The language models that we use to embed natural language into vectors are based on the Transformer architecture. These models are computationally demanding, so indexing a large database of documents requires a lot of computational power. Because of this, you will need a GPU to index documents into a Milvus database in a reasonable amount of time. However, you can use either a GPU or a CPU to retrieve documents once they have been indexed.
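As a rough sketch of how this plays out in Haystack (1.x API assumed), indexing runs the model over the entire corpus, while querying only embeds a single question:

```python
# Sketch, assuming Haystack 1.x and a local Milvus instance.
from haystack.document_stores import MilvusDocumentStore
from haystack.nodes import DensePassageRetriever

document_store = MilvusDocumentStore()

# Indexing: every document in the corpus is run through the model,
# so this is the step where a GPU pays off.
indexing_retriever = DensePassageRetriever(document_store=document_store, use_gpu=True)
document_store.update_embeddings(retriever=indexing_retriever)

# Retrieval: only the query needs to be embedded, which a CPU handles fine.
query_retriever = DensePassageRetriever(document_store=document_store, use_gpu=False)
results = query_retriever.retrieve(query="What is a vector database?", top_k=5)
```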
Which retrieval methods work best with Milvus?

Milvus is optimized for high-performance vector operations, so it is best suited for dense embedding methods that rely on Transformer-based models. In Haystack, we recommend using Sentence-BERT or Dense Passage Retrieval (DPR). Should you want to use sparse indexing methods such as BM25 or TF-IDF, you may want to choose a different database that is better optimized for sparse retrieval. Check out our documentation for an overview of different indexing techniques and a list of supported databases.
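To make the dense vs. sparse distinction concrete, here is what a dense Sentence-BERT embedding looks like when produced with the sentence-transformers library directly; the model name is one example among many:

```python
# Sketch using the sentence-transformers library; "all-MiniLM-L6-v2" is
# one example of a Sentence-BERT model, chosen here for its small size.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
vectors = model.encode([
    "Milvus stores dense embedding vectors.",
    "BM25, by contrast, scores documents on sparse term statistics.",
])
print(vectors.shape)  # (2, 384): two fixed-length dense vectors
```

Every input, regardless of length, maps to a vector of the same fixed dimensionality, which is exactly the representation that Milvus is built to index and search.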