Prerequisites
- A deployment (see how to set up an application for deployment, plus the available hosting options).
- API keys for your embedding provider (in this case, OpenAI).
- `langchain >= 0.3.8` (if you specify the embedding model using the string format below).
Steps
- Update your `langgraph.json` configuration file to include the store configuration:
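For example, a configuration along these lines (the `dependencies` and `graphs` entries are placeholders for your own project; the `store.index` block is what enables semantic search):

```json
{
  "dependencies": ["."],
  "graphs": {
    "my_graph": "./graph.py:graph"
  },
  "store": {
    "index": {
      "embed": "openai:text-embedding-3-small",
      "dims": 1536,
      "fields": ["$"]
    }
  }
}
```

This configuration: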
- Uses OpenAI’s text-embedding-3-small model for generating embeddings
- Sets the embedding dimension to 1536 (matching the model’s output)
- Indexes all fields in your stored data (`["$"]` means index everything, or specify particular fields like `["text", "metadata.title"]`)
- To use the string embedding format above, make sure your dependencies include `langchain >= 0.3.8`:
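For example, assuming pip-style dependency management, your `requirements.txt` would contain:

```
langchain>=0.3.8
```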
Usage
Once configured, you can use semantic search in your nodes. The store requires a namespace tuple to organize memories:
Custom Embeddings
If you want to use custom embeddings, you can pass a path to a custom embedding function:
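A sketch of what such a function might look like. The file name, and the MD5-based toy vectors, are assumptions for illustration; a real implementation would call your embedding model and return vectors matching your configured dimensions. In `langgraph.json` you would then reference it with something like `"embed": "./embeddings.py:embed_texts"`:

```python
import hashlib

# embeddings.py (hypothetical module referenced from langgraph.json)
def embed_texts(texts: list[str]) -> list[list[float]]:
    """Return one fixed-size vector per input text.

    Toy deterministic implementation for illustration: it hashes each
    text and rescales the 16 digest bytes into floats in [0, 1].
    """
    vectors = []
    for text in texts:
        digest = hashlib.md5(text.encode("utf-8")).digest()
        vectors.append([byte / 255.0 for byte in digest])
    return vectors
```

The contract to preserve is simple: one vector per input, all vectors the same length, and the length must match the `dims` value in your store configuration.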
Querying via the API
You can also query the store using the LangGraph SDK. Since the SDK uses async operations, be sure to await each call:
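A sketch of such a query (the URL, namespace, and query values are placeholders, and this assumes a running deployment, so it is not runnable standalone):

```python
import asyncio
from langgraph_sdk import get_client

async def main():
    client = get_client(url="http://localhost:2024")  # placeholder URL
    # Search items within a namespace; note the await, since the SDK is async.
    results = await client.store.search_items(
        ("user-123", "memories"),
        query="favorite foods",
        limit=3,
    )
    for item in results["items"]:
        print(item["value"])

asyncio.run(main())
```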