This page will help you get started with the Elasticsearch key-value store. For detailed documentation of all ElasticsearchEmbeddingsCache features and configurations, head to the API reference.
Overview
The ElasticsearchEmbeddingsCache is a ByteStore implementation that uses your Elasticsearch instance for efficient storage and retrieval of embeddings.
Integration details
| Class | Package | Local | JS support |
|---|---|---|---|
| ElasticsearchEmbeddingsCache | langchain-elasticsearch | ✅ | ❌ |
Setup
To create an ElasticsearchEmbeddingsCache byte store, you'll need an Elasticsearch cluster. You can set one up locally or create an Elastic account.
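For local development, you might point the examples below at a cluster running on localhost. The ES_URL environment variable here is just a convention used by the later snippets in this guide, not something the package reads automatically:

```python
import os

# Convention used by the examples below (not read automatically by the package):
# where to reach your Elasticsearch cluster. For a secured or Elastic Cloud
# deployment you would instead pass credentials such as an API key or cloud ID
# to the store when you instantiate it.
os.environ.setdefault("ES_URL", "http://localhost:9200")
```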
Installation
The LangChain ElasticsearchEmbeddingsCache integration lives in the langchain-elasticsearch package:
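For example, installing with pip (shown here as a notebook magic; from a shell, drop the leading `%`):

```python
%pip install -qU langchain-elasticsearch
```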
Instantiation
Now we can instantiate our byte store:
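A minimal sketch, assuming the local cluster configured above; the index_name and namespace values are illustrative, and the full set of constructor options (API keys, cloud IDs, an existing client, and so on) is listed in the API reference:

```python
import os

from langchain_elasticsearch import ElasticsearchEmbeddingsCache

kv_store = ElasticsearchEmbeddingsCache(
    es_url=os.environ.get("ES_URL", "http://localhost:9200"),
    index_name="llm-chat-cache",  # index used to store cached entries
    namespace="my_project",       # optional prefix to keep projects separate
)
```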
Usage

You can set data under keys like this using the mset method:
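For example (the keys and bytes values below are illustrative); the companion mget method retrieves them again:

```python
kv_store.mset(
    [
        ("key1", b"value1"),
        ("key2", b"value2"),
    ]
)

# Retrieve the stored values again with mget.
kv_store.mget(["key1", "key2"])
# -> [b'value1', b'value2']
```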
And you can delete data using the mdelete method:
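Continuing the example above:

```python
kv_store.mdelete(["key1", "key2"])

kv_store.mget(["key1", "key2"])
# -> [None, None]
```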
Use as an embeddings cache
Like other ByteStores, you can use an ElasticsearchEmbeddingsCache instance for persistent caching in document ingestion for RAG.
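A sketch of the usual pattern with CacheBackedEmbeddings; the OpenAIEmbeddings model is only an example here, and any Embeddings implementation works as the underlying model:

```python
from langchain.embeddings import CacheBackedEmbeddings
from langchain_openai import OpenAIEmbeddings  # example model; any Embeddings works

underlying_embeddings = OpenAIEmbeddings(model="text-embedding-3-small")

cached_embedder = CacheBackedEmbeddings.from_bytes_store(
    underlying_embeddings,
    kv_store,                              # the ElasticsearchEmbeddingsCache from above
    namespace=underlying_embeddings.model, # avoid collisions between different models
)

# Repeated calls for the same text are served from the Elasticsearch cache
# instead of re-calling the embeddings model.
vectors = cached_embedder.embed_documents(["hello world", "hello world"])
```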
However, cached vectors won't be searchable by default. The developer can customize how the Elasticsearch document is built in order to add an indexed vector field.
This can be done by subclassing ElasticsearchEmbeddingsCache and overriding methods:
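A minimal sketch of that approach. The mapping property and build_document method names, as well as the 1536-dimension dense_vector mapping, are assumptions about the package's cache implementation, so check the API reference before relying on them:

```python
from typing import Any, Dict, List

from langchain_elasticsearch import ElasticsearchEmbeddingsCache


class SearchableElasticsearchCache(ElasticsearchEmbeddingsCache):
    """Variant that also stores the embedding in an indexed dense_vector field."""

    @property
    def mapping(self) -> Dict[str, Any]:
        # Extend the default index mapping with an indexed vector field.
        mapping = super().mapping
        mapping["mappings"]["properties"]["vector"] = {
            "type": "dense_vector",
            "dims": 1536,
            "index": True,
            "similarity": "dot_product",
        }
        return mapping

    def build_document(self, llm_input: str, vector: List[float]) -> Dict[str, Any]:
        # Copy the vector into the indexed field when building the document.
        body = super().build_document(llm_input, vector)
        body["vector"] = vector
        return body
```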
API reference
For detailed documentation of all ElasticsearchEmbeddingsCache features and configurations, head to the API reference: python.langchain.com/api_reference/elasticsearch/cache/langchain_elasticsearch.cache.ElasticsearchEmbeddingsCache.html