Model caches
Caching LLM calls can be useful for testing, cost savings, and speed.
The integrations below let you cache the results of individual LLM calls using different backends and caching strategies.
- Azure Cosmos DB NoSQL Semantic Cache: View guide