Momento Cache is a truly serverless caching service, offering instant elasticity, scale-to-zero capability, and low-latency performance. Momento Vector Index is a fully serverless vector index designed for ease of use. For both services, simply grab the SDK, obtain an API key, add a few lines to your code, and you're set. Together, they provide a comprehensive solution for your LLM data needs.

This page covers how to use the Momento ecosystem within LangChain.
## Installation and Setup
- Sign up for a free account here to get an API key
- Install the Momento Python SDK:

```bash
pip install momento
```
## Cache
Use Momento as a serverless, distributed, low-latency cache for LLM prompts and responses. The standard cache is the primary use case for Momento users in any environment, and Momento Cache can be integrated into your application with a few lines of code.

## Vector Store
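As a rough illustration, the wiring might look like the sketch below. It is a non-authoritative example that assumes the `momento` and `langchain` packages are installed, that the `MOMENTO_API_KEY` environment variable holds a valid API key, and that the `MomentoCache` LLM-cache class and its constructor arguments match your installed LangChain version (the cache name `"langchain"` and the helper function are illustrative).

```python
# Hedged sketch: routing LangChain's LLM prompt/response caching through
# Momento Cache. Assumes `momento` and `langchain` are installed and that
# MOMENTO_API_KEY is set; names here are illustrative, not canonical.
from datetime import timedelta


def build_momento_llm_cache(cache_name: str = "langchain"):
    # Imports are deferred so this sketch can be read without the SDKs installed.
    import langchain
    from langchain.cache import MomentoCache
    from momento import CacheClient, Configurations, CredentialProvider

    # Create the Momento cache client; the Laptop configuration is a
    # reasonable default for development workloads.
    cache_client = CacheClient(
        Configurations.Laptop.v1(),
        CredentialProvider.from_environment_variable("MOMENTO_API_KEY"),
        default_ttl=timedelta(days=1),
    )

    # Point LangChain's global LLM cache at Momento. Newer LangChain
    # versions may expose `set_llm_cache` instead of this attribute.
    langchain.llm_cache = MomentoCache(cache_client, cache_name)
    return langchain.llm_cache
```

After this setup, repeated identical prompts to an LLM wrapped by LangChain are served from the cache instead of re-invoking the model.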
Momento Vector Index (MVI) can be used as a vector store. See this notebook for a walkthrough of how to use MVI as a vector store.
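For orientation before reading the notebook, a minimal construction might look like the sketch below. This is an assumption-laden example: it presumes `momento` and `langchain` are installed, `MOMENTO_API_KEY` is set, the `MomentoVectorIndex` class and the SDK's `PreviewVectorIndexClient` match your installed versions, and the index name and choice of `OpenAIEmbeddings` are purely illustrative.

```python
# Hedged sketch: wrapping Momento Vector Index (MVI) in LangChain's vector
# store interface. Class names and signatures are assumptions based on the
# preview SDK; verify against the notebook and your installed versions.


def build_mvi_vector_store(index_name: str = "my-index"):
    # Imports are deferred so this sketch reads without the SDKs installed.
    from langchain.embeddings import OpenAIEmbeddings  # any Embeddings works
    from langchain.vectorstores import MomentoVectorIndex
    from momento import (
        CredentialProvider,
        PreviewVectorIndexClient,
        VectorIndexConfigurations,
    )

    # Create the (preview) vector index client.
    client = PreviewVectorIndexClient(
        VectorIndexConfigurations.Default.latest(),
        CredentialProvider.from_environment_variable("MOMENTO_API_KEY"),
    )

    # Expose MVI through LangChain's standard vector store API.
    return MomentoVectorIndex(
        embedding=OpenAIEmbeddings(),
        client=client,
        index_name=index_name,
    )
```

Once constructed, the store supports the usual LangChain operations such as `add_texts([...])` and `similarity_search(query)`.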