"king − man + woman ≈ queen"
</code>

This result suggests that the difference between "king" and "man" captures some notion of "royalty", which applies analogously to "queen" minus "woman". Concepts such as "woman", "girl", and "boy" can each be vectorized into an array of numbers; the individual entries of such an array are often referred to as dimensions. These arrays can be visualized and correlated with familiar words, giving insight into their meaning.
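As a toy illustration of this arithmetic, the sketch below uses made-up three-dimensional vectors (real models such as word2vec use hundreds of dimensions) and cosine similarity to find the word whose vector lies closest to "king" − "man" + "woman":

```python
import numpy as np

# Toy 3-dimensional embeddings with made-up values chosen for illustration;
# real models learn these values from large text corpora.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
    "queen": np.array([0.9, 0.0, 0.9]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between vectors a and b."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# "king" - "man" + "woman" lands closest to "queen" in this toy space.
target = embeddings["king"] - embeddings["man"] + embeddings["woman"]
nearest = max(embeddings, key=lambda w: cosine_similarity(embeddings[w], target))
print(nearest)  # -> queen
```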
Learning the network weights (i.e., the embedding model) requires a large set of labeled images. The weights are optimized so that images with the same label have closer embeddings than images with different labels. Once the CNN embedding model is learned, images can be transformed into vectors and stored in a k-nearest-neighbor index. A new, unseen image can then be transformed with the same CNN model, its k most similar vectors retrieved, and the corresponding similar images identified.
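The retrieval step described above can be sketched as a brute-force k-nearest-neighbor lookup. The embeddings below are hypothetical stand-ins for CNN outputs; a production system would use an approximate index rather than a full scan:

```python
import numpy as np

# Hypothetical CNN embeddings for four indexed images (made-up values).
index_vectors = np.array([
    [1.0, 0.0, 0.0],   # image 0
    [0.9, 0.1, 0.0],   # image 1
    [0.0, 1.0, 0.0],   # image 2
    [0.0, 0.0, 1.0],   # image 3
])

def knn(query, vectors, k):
    """Return the indices of the k vectors nearest to query (Euclidean)."""
    distances = np.linalg.norm(vectors - query, axis=1)
    return np.argsort(distances)[:k].tolist()

# Embedding of a new, unseen image (would come from the same CNN).
query = np.array([0.95, 0.02, 0.0])
print(knn(query, index_vectors, k=2))  # -> [0, 1]
```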
==Using Vector Embeddings==
Vector embeddings' ability to represent objects as dense vectors containing their semantic information makes them highly valuable for a wide array of machine learning applications.
One of the most popular uses of vector embeddings is similarity search. Search algorithms such as KNN and ANN determine similarity by computing distances between vectors, and vector embeddings supply the vectors over which those distances are computed. Nearest-neighbor search can then power tasks such as deduplication, recommendation, anomaly detection, and reverse image search.
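As a minimal sketch of one of these tasks, deduplication can be implemented by flagging pairs of embeddings whose cosine distance falls below a threshold. Both the vectors and the cutoff here are illustrative assumptions; in practice the threshold is tuned per application:

```python
import numpy as np

def cosine_distance(a, b):
    """1 minus cosine similarity: 0 for identical directions."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical embeddings: items 0 and 1 are near-duplicates.
items = [
    np.array([0.70, 0.71, 0.05]),
    np.array([0.71, 0.70, 0.06]),
    np.array([0.05, 0.10, 0.99]),
]

DUPLICATE_THRESHOLD = 0.01  # assumed cutoff, tuned per application in practice

# Flag every pair closer than the threshold.
duplicates = [
    (i, j)
    for i in range(len(items))
    for j in range(i + 1, len(items))
    if cosine_distance(items[i], items[j]) < DUPLICATE_THRESHOLD
]
print(duplicates)  # -> [(0, 1)]
```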
Even if embeddings are not directly used for an application, many popular machine learning models and methods rely on them internally. For instance, in encoder-decoder architectures, the embeddings generated by the encoder contain the required information for the decoder to produce a result. This architecture is widely employed in applications like machine translation and caption generation.
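A minimal sketch of this data flow, with random (untrained) weights purely to show how the encoder's embedding is the only information the decoder receives:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy encoder-decoder: the encoder compresses a 6-dim input into a
# 2-dim embedding; the decoder works from that embedding alone.
# Weights are random here just to show the shapes and data flow;
# a real model (e.g., for translation) would learn them from data.
W_enc = rng.normal(size=(6, 2))   # encoder weights
W_dec = rng.normal(size=(2, 6))   # decoder weights

def encode(x):
    return np.tanh(x @ W_enc)     # the embedding: all the decoder will see

def decode(z):
    return z @ W_dec              # generation from the embedding

x = rng.normal(size=6)            # some input representation
embedding = encode(x)
output = decode(embedding)
print(embedding.shape, output.shape)  # -> (2,) (6,)
```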
==Vector Embeddings with Weaviate==
Weaviate is designed to support a wide range of vectorizer models and vectorizer service providers. Users can bring their own vectors, for example, if they already have a vectorization pipeline available or if none of the publicly available models are suitable.
Weaviate supports any Hugging Face model through the text2vec-huggingface module, allowing users to choose from the many sentence transformers published on Hugging Face. Other popular vectorization APIs, such as OpenAI or Cohere, can be used through the text2vec-openai or text2vec-cohere modules. Users can also run transformer models locally with text2vec-transformers, and modules like multi2vec-clip can convert images and text to vectors using a CLIP model.
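For illustration, a class definition along these lines (the class and property names are hypothetical) tells Weaviate to vectorize objects at import time with the text2vec-openai module:

```json
{
  "class": "Article",
  "vectorizer": "text2vec-openai",
  "properties": [
    { "name": "content", "dataType": ["text"] }
  ]
}
```

Swapping the "vectorizer" value for another module name, such as text2vec-huggingface or text2vec-cohere, switches the embedding provider without changing how objects are queried.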
All of these models perform the same core task, which is to represent the "meaning" of the original data as a set of numbers, enabling the effective implementation of semantic search. Vector embeddings can be generated from any data object, including text, images, audio, time series data, 3D models, video, and molecules. Embeddings are constructed such that two objects with similar semantics have vectors that are "close" to each other in vector space, with a "small" distance between them.
In conclusion, vector embeddings are numerical representations of various data types, facilitating machine learning applications by capturing semantic similarity. They play a vital role in natural language processing, recommendation systems, and search algorithms. By representing data as dense vectors, they enable the quantification of semantic similarity and allow for efficient similarity search and other machine learning tasks. With the development of more advanced models like transformer-based architectures and support from platforms like Weaviate, vector embeddings continue to be a cornerstone of modern machine learning applications.