MVI: the most productive, easiest-to-use serverless vector index for your data. To get started with MVI, simply sign up for an account. There's no need to handle infrastructure, manage servers, or worry about scaling. MVI is a service that scales automatically to meet your needs. To sign up and access MVI, visit the Momento Console.
Setup
Install prerequisites
You will need:

- the momento package for interacting with MVI,
- the openai package for interacting with the OpenAI API, and
- the tiktoken package for tokenizing text.
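With pip, the prerequisites can be installed in one step. This is a sketch: the examples that follow also assume the langchain package, which provides the MomentoVectorIndex integration.

```shell
pip install momento openai tiktoken langchain
```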
Enter API keys
Momento: for indexing data. Visit the Momento Console to get your API key.

OpenAI: for generating text embeddings. You will also need an OpenAI API key.
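One way to provide both keys is via environment variables. A minimal sketch; the variable names MOMENTO_API_KEY and OPENAI_API_KEY follow common SDK conventions and are assumptions here, not something this guide mandates:

```python
import os

# Placeholder values -- replace with your real keys, or export them in
# your shell before running. The variable names are a common convention,
# not a requirement of either SDK.
os.environ.setdefault("MOMENTO_API_KEY", "<YOUR_MOMENTO_API_KEY>")
os.environ.setdefault("OPENAI_API_KEY", "<YOUR_OPENAI_API_KEY>")
```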
Load your data
Here we use the example dataset from LangChain: the State of the Union address. First we load the relevant modules and the document.

Index your data

Indexing your data is as simple as instantiating the MomentoVectorIndex object. Here we use the from_documents helper to both instantiate the index and add the data:
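Putting the loading and indexing steps together, a sketch might look like the following. The file path, the index name ("sotu"), and the client configuration are assumptions; adapt them to your setup.

```python
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import MomentoVectorIndex
from momento import (
    CredentialProvider,
    PreviewVectorIndexClient,
    VectorIndexConfigurations,
)

# Load and chunk the example document. The file path is an assumption --
# point it at wherever you saved the State of the Union text.
loader = TextLoader("state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)

# Instantiate the index and embed/upsert the chunks in one call.
vector_db = MomentoVectorIndex.from_documents(
    docs,
    OpenAIEmbeddings(),
    client=PreviewVectorIndexClient(
        VectorIndexConfigurations.Default.latest(),
        credential_provider=CredentialProvider.from_environment_variable(
            "MOMENTO_API_KEY"
        ),
    ),
    index_name="sotu",
)
```

Because MVI is serverless, there is no cluster to provision first: from_documents creates the index if it does not already exist.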
Query your data
Ask a question directly against the index
The most direct way to query the data is to search against the index. We can do that as follows using the VectorStore API:
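For example, a sketch that reconnects to the index by name and runs a similarity search. The handle name vector_db, the index name "sotu", and the sample question are illustrative assumptions:

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import MomentoVectorIndex
from momento import (
    CredentialProvider,
    PreviewVectorIndexClient,
    VectorIndexConfigurations,
)

# Reconnect to the previously created index (index name is an assumption).
vector_db = MomentoVectorIndex(
    embedding=OpenAIEmbeddings(),
    client=PreviewVectorIndexClient(
        VectorIndexConfigurations.Default.latest(),
        credential_provider=CredentialProvider.from_environment_variable(
            "MOMENTO_API_KEY"
        ),
    ),
    index_name="sotu",
)

# Embed the question and retrieve the most similar chunks.
query = "What did the president say about Ketanji Brown Jackson?"
docs = vector_db.similarity_search(query)
print(docs[0].page_content)
```

This returns the raw document chunks most similar to the question; the next section shows how to turn those chunks into a fluent answer.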
Use an LLM to generate fluent answers
With the data indexed in MVI, we can integrate with any chain that leverages vector similarity search. Here we use the RetrievalQA chain to demonstrate how to answer questions from the indexed data.
First we load the relevant modules:
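A sketch of the modules and the chain itself; the model name, the vector_db handle, and the index name are illustrative assumptions carried over from the earlier steps:

```python
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import MomentoVectorIndex
from momento import (
    CredentialProvider,
    PreviewVectorIndexClient,
    VectorIndexConfigurations,
)

# Reconnect to the existing index (index name is an assumption).
vector_db = MomentoVectorIndex(
    embedding=OpenAIEmbeddings(),
    client=PreviewVectorIndexClient(
        VectorIndexConfigurations.Default.latest(),
        credential_provider=CredentialProvider.from_environment_variable(
            "MOMENTO_API_KEY"
        ),
    ),
    index_name="sotu",
)

# The chain retrieves the most relevant chunks from the index and asks
# the LLM to compose an answer grounded in them.
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
qa_chain = RetrievalQA.from_chain_type(llm, retriever=vector_db.as_retriever())
answer = qa_chain.run("What did the president say about Ketanji Brown Jackson?")
print(answer)
```

Any retriever-compatible chain can be swapped in here; RetrievalQA is just the simplest demonstration.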
Next Steps
That’s it! You’ve now indexed your data and can query it using the Momento Vector Index. You can use the same index to query your data from any chain that supports vector similarity search. With Momento you can not only index your vector data, but also cache your API calls and store your chat message history. Check out the other Momento LangChain integrations to learn more. To learn more about the Momento Vector Index, visit the Momento Documentation.