{{see also|ChatGPT|ChatGPT Plugins}}
[https://github.com/openai/chatgpt-retrieval-plugin GitHub]
==Introduction==
The [[ChatGPT Retrieval Plugin]] is a powerful addition to [[OpenAI]]'s [[ChatGPT]], enabling the [[AI]] to access and utilize data stored in a [[vector database]]. This capability not only enhances ChatGPT's performance by providing access to customized data but also addresses its long-standing limitation of lacking long-term memory. The plugin is part of the broader [[ChatGPT plugins]] ecosystem, which allows ChatGPT to interact with various apps and services, transforming it from a conversation tool to an AI capable of taking actions in the real world.
==Background Information==
Recently, OpenAI introduced plugins for ChatGPT, which have significantly expanded the [[chatbot]]'s capabilities. These plugins act as a bridge between the chatbot and a range of third-party resources, enabling it to leverage these resources to perform tasks based on user conversations. The Retrieval Plugin, specifically, has the potential to be widely used, as it allows users to create customized versions of ChatGPT tailored to their own data.
 
With the integration of plugins, ChatGPT can now perform various tasks such as ordering groceries, booking restaurants, and organizing vacations by utilizing services like [[Instacart]], [[OpenTable]], and [[Expedia]]. Moreover, the Zapier plugin allows ChatGPT to connect with thousands of other applications, from [[Google Sheets]] to [[Salesforce]], thus broadening its reach.

The Retrieval Plugin enables ChatGPT to accomplish tasks grounded in the knowledge stored in a connected vector database, such as Weaviate. The process consists of two steps: first, the user's question prompts ChatGPT to craft a query that is sent to the vector database; second, the relevant information and context returned by the database are used to formulate an appropriate response.
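
As a rough illustration of this two-step flow, the sketch below reproduces it manually against a self-hosted plugin instance. The request and response shapes follow the /query schema documented in the plugin repository; the address, bearer token, and question are placeholders. When the plugin is enabled inside ChatGPT, both steps happen automatically.

<syntaxhighlight lang="python">
import requests

PLUGIN_URL = "http://localhost:8000"          # illustrative address of a self-hosted plugin
HEADERS = {"Authorization": "Bearer <your-plugin-bearer-token>"}

question = "What is our policy on remote work?"

# Step 1: ask the plugin to search the connected vector database (e.g. Weaviate)
response = requests.post(
    f"{PLUGIN_URL}/query",
    headers=HEADERS,
    json={"queries": [{"query": question, "top_k": 3}]},
)
chunks = response.json()["results"][0]["results"]

# Step 2: hand the retrieved text back to the model as grounding context
context = "\n\n".join(chunk["text"] for chunk in chunks)
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)
</syntaxhighlight>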
==ChatGPT Retrieval Plugin: Connecting to a Vector Database==
[[File:chatgpt retrieval plugin1.png|400px|right]]
The [[ChatGPT Retrieval Plugin]] enables users to connect [[ChatGPT]] to an instance of a [[vector database]], allowing any information stored in the connected database to be used to answer questions and provide responses based on the details stored in the database. Additionally, the vector database can be utilized as a long-term storage solution for ChatGPT, allowing it to persist and store portions of user conversations beyond the short-lived memory of a browser tab.


===Plugin Functionality===
The main functions of the ChatGPT Retrieval Plugin include:
*Connecting a vector database with proprietary data to ChatGPT, allowing it to answer specific questions based on that data
*Persisting personal documents and details to provide a personalized touch to ChatGPT's responses
*Storing conversations with ChatGPT in the attached vector database, enabling continued conversations across multiple sessions


This functionality allows for regular updates to content stored in connected vector databases, giving the model awareness of new information without the need for costly and time-consuming retraining of the [[large language model]] ([[LLM]]).
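
For instance, newly published or updated documents can be pushed into the connected database through the plugin's /upsert endpoint instead of retraining the model. The sketch below assumes a self-hosted plugin instance and follows the document schema described in the plugin repository; the address, token, document ID, and text are illustrative.

<syntaxhighlight lang="python">
import requests

PLUGIN_URL = "http://localhost:8000"          # illustrative address of a self-hosted plugin
HEADERS = {"Authorization": "Bearer <your-plugin-bearer-token>"}

# New or updated content to make available to ChatGPT
documents = [
    {
        "id": "benefits-2023",                # illustrative document ID
        "text": "Health benefits enrollment for 2023 opens on November 1st ...",
        "metadata": {"source": "file", "author": "HR"},
    }
]

# The plugin chunks and embeds the text, then stores the vectors in the
# configured datastore, so subsequent /query calls can retrieve the new content.
requests.post(f"{PLUGIN_URL}/upsert", headers=HEADERS, json={"documents": documents})
</syntaxhighlight>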
 
==Overview of Vector Database Providers==
This section presents an overview of the vector database providers supported by the plugin, highlighting their unique features, performance, and pricing. The choice of provider depends on specific use cases and requirements, and each provider requires its own Dockerfile and environment variables. Detailed instructions for setting up and using each provider can be found in the respective documentation at /docs/providers/<datastore_name>/setup.md.
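
As an illustration of that configuration step, the sketch below sets the core environment variables named in the plugin repository (DATASTORE, BEARER_TOKEN, OPENAI_API_KEY) together with placeholder Weaviate connection settings, then launches the server. The provider-specific variable names and the launch command may differ from what is shown here; the respective setup.md files are authoritative.

<syntaxhighlight lang="python">
import os
import subprocess

# Core variables read by the plugin regardless of which datastore is chosen
os.environ["DATASTORE"] = "weaviate"            # which provider implementation to load
os.environ["BEARER_TOKEN"] = "<plugin-token>"   # token clients must send with each request
os.environ["OPENAI_API_KEY"] = "<openai-key>"   # used to embed documents and queries

# Provider-specific settings (illustrative names; see /docs/providers/weaviate/setup.md)
os.environ["WEAVIATE_HOST"] = "http://localhost"
os.environ["WEAVIATE_PORT"] = "8080"

# Start the FastAPI server, as described in the repository's README (command may differ)
subprocess.run(["poetry", "run", "start"], check=True)
</syntaxhighlight>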
 
===Pinecone===
[[Pinecone]] is a managed vector database engineered for rapid deployment, speed, and scalability. It uniquely supports hybrid search and is the sole datastore that natively accommodates SPLADE sparse vectors. For comprehensive setup guidance, refer to /docs/providers/pinecone/setup.md.
 
===Weaviate===
[[Weaviate]] is an open-source vector search engine designed to scale effortlessly to billions of data objects. Its out-of-the-box support for hybrid search makes it ideal for users who need efficient keyword searches. Weaviate can be self-hosted or managed, offering flexible deployment options. For extensive setup guidance, refer to /docs/providers/weaviate/setup.md.
 
===Zilliz===
[[Zilliz]] is a managed, cloud-native vector database tailored for billion-scale data. It boasts a plethora of features, including numerous indexing algorithms, distance metrics, scalar filtering, time-travel searches, rollback with snapshots, full RBAC, 99.9% uptime, separated storage and compute, and multi-language SDKs. For comprehensive setup guidance, refer to /docs/providers/zilliz/setup.md.
 
===Milvus===
[[Milvus]] is an open-source, cloud-native vector database that scales to billions of vectors. As the open-source variant of Zilliz, Milvus shares many features with it, such as various indexing algorithms, distance metrics, scalar filtering, time-travel searches, rollback with snapshots, multi-language SDKs, storage and compute separation, and cloud scalability. For extensive setup guidance, refer to /docs/providers/milvus/setup.md.
 
===Qdrant===
[[Qdrant]] is a vector database that can store documents and vector embeddings. It provides self-hosted and managed Qdrant Cloud deployment options, catering to users with diverse requirements. For comprehensive setup guidance, refer to /docs/providers/qdrant/setup.md.
 
===Redis===
[[Redis]] is a real-time data platform suitable for an array of applications, including AI/ML workloads and everyday use. A Redis database created with the Redis Stack Docker container can be employed as a low-latency vector engine. For a hosted or managed solution, Redis Cloud is available. For extensive setup guidance, refer to /docs/providers/redis/setup.md.
 
===LlamaIndex===
[[LlamaIndex]] serves as a central interface to connect your LLMs with external data. It offers a collection of in-memory indices over structured and unstructured data for use with ChatGPT. Unlike conventional vector databases, LlamaIndex supports a wide array of indexing strategies (e.g., tree, keyword table, knowledge graph) optimized for various use cases. It is lightweight, user-friendly, and requires no additional deployment. Users need only specify a few environment variables and, optionally, point to an existing saved Index JSON file. However, metadata filters in queries are not yet supported. For comprehensive setup guidance, refer to /docs/providers/llama/setup.md.


==Weaviate Retrieval Plugin in Action==
The Weaviate Retrieval Plugin can be used in various applications, such as creating a private, customized version of ChatGPT tailored to an organization's internal documents or personalizing ChatGPT based on individual user details. By connecting the plugin to Weaviate, users can make ChatGPT more useful and relevant to their specific needs.

===Using ChatGPT on Proprietary Company Documents===
The Weaviate Retrieval Plugin can be used to create a customized version of ChatGPT grounded in a company's internal documents, enabling it to act as a human resources chatbot. This can provide employees with easy access to information about onboarding processes, health benefits, and more.
===Personalizing ChatGPT===
The plugin also enables the customization of ChatGPT around personal details, such as information about friends or languages spoken by the user. By storing these details in Weaviate, ChatGPT can provide more tailored and personalized responses.

===Helping ChatGPT Remember===
One of the most powerful applications of the Weaviate Retrieval Plugin is its ability to store and reference previous conversations with ChatGPT. By persisting these conversations in Weaviate, ChatGPT can recall past interactions and provide more contextually relevant responses.
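
A minimal sketch of this pattern, again assuming a self-hosted plugin instance: a noteworthy exchange is persisted with metadata identifying it as a chat turn, and a later session recalls it through a metadata-filtered query. The filter fields follow the metadata schema described in the plugin repository; the address, token, ID, and text are illustrative.

<syntaxhighlight lang="python">
import requests

PLUGIN_URL = "http://localhost:8000"          # illustrative address of a self-hosted plugin
HEADERS = {"Authorization": "Bearer <your-plugin-bearer-token>"}

# Session 1: persist part of the conversation in the vector database
requests.post(f"{PLUGIN_URL}/upsert", headers=HEADERS, json={
    "documents": [{
        "id": "chat-2023-04-08-001",          # illustrative ID
        "text": "The user mentioned they are planning a trip to Japan in October.",
        "metadata": {"source": "chat", "author": "user"},
    }]
})

# Session 2 (later): recall what was said, restricted to stored chat turns
response = requests.post(f"{PLUGIN_URL}/query", headers=HEADERS, json={
    "queries": [{
        "query": "Where is the user planning to travel?",
        "filter": {"source": "chat"},
        "top_k": 3,
    }]
})
print(response.json()["results"][0]["results"][0]["text"])
</syntaxhighlight>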


While still in its Alpha stage, the ChatGPT Retrieval Plugin offers a promising solution for enhancing ChatGPT's capabilities, overcoming its memory limitations, and creating a more personalized and engaging user experience. As the plugin continues to develop and becomes more accessible, it will likely play a significant role in the evolution of ChatGPT and its real-world applications. The potential use cases for this technology are vast, from customized customer service chatbots to more efficient knowledge management systems. By leveraging the power of vector databases like Weaviate, the ChatGPT Retrieval Plugin is poised to bring a new level of versatility and utility to the world of generative AI.
 
[[Category:Plugins]] [[Category:ChatGPT Plugins]]
