Prediction Guard is a secure, scalable GenAI platform that safeguards sensitive data, prevents common AI malfunctions, and runs on affordable hardware.
Overview
Integration details
This page shows how to use the Prediction Guard embeddings integration with LangChain. The integration supports text and images, separately or together in matched pairs.

Setup
To access Prediction Guard models, contact us here to get a Prediction Guard API key and get started.

Credentials
Once you have a key, you can set it as an environment variable, as shown in the sketch below.

Installation
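A minimal sketch of the setup, assuming the integration is provided by the langchain-community package and that the key is read from a PREDICTIONGUARD_API_KEY environment variable; confirm both the package name and the variable name against the API reference linked at the bottom of this page.

```python
# Install the integration package first, e.g.:
#   pip install --upgrade --quiet langchain-community
import getpass
import os

# The environment variable name is an assumption; verify it against
# the PredictionGuardEmbeddings API reference.
if "PREDICTIONGUARD_API_KEY" not in os.environ:
    os.environ["PREDICTIONGUARD_API_KEY"] = getpass.getpass(
        "Enter your Prediction Guard API key: "
    )
```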
Instantiation
First, install the Prediction Guard and LangChain packages. Then, set the required environment variables and set up the package imports; instantiation appears at the top of the indexing and retrieval sketch below.

Indexing and Retrieval
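A sketch of instantiating the embeddings and using them for indexing and retrieval. The import path follows the API reference linked below; the model identifier and the choice of InMemoryVectorStore are illustrative assumptions, not requirements.

```python
from langchain_community.embeddings.predictionguard import PredictionGuardEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore

# Instantiate the embeddings; the model name is an example and may differ
# for your account.
embeddings = PredictionGuardEmbeddings(model="bridgetower-large-itm-mlm-itc")

# Index a sample document, then retrieve it with the same embeddings.
text = "LangChain is the framework to build context-aware reasoning applications."
vectorstore = InMemoryVectorStore.from_texts([text], embedding=embeddings)

retriever = vectorstore.as_retriever()
retrieved = retriever.invoke("What is LangChain?")
print(retrieved[0].page_content)
```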
Direct Usage
The vector store and retriever implementations call embeddings.embed_documents(...) and embeddings.embed_query(...) to create embeddings for the texts used in the from_texts and retrieval invoke operations.
These methods can be directly called with the following commands.
Embed single texts
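A minimal sketch, reusing the embeddings object instantiated above:

```python
# Create an embedding for a single piece of text with embed_query.
text = "This is a test document."
single_vector = embeddings.embed_query(text)
print(len(single_vector))  # dimensionality of the embedding
```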
Embed multiple texts
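A minimal sketch for batching several texts in one call:

```python
# Create embeddings for multiple texts at once with embed_documents.
texts = ["This is a test document.", "This is another test document."]
vectors = embeddings.embed_documents(texts)
print(len(vectors), len(vectors[0]))  # number of vectors, dimensionality
```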
Embed single images
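A sketch for image input. The embed_images method name is an assumption to verify against the API reference below; the image path is a hypothetical placeholder (a URL may also work).

```python
# Embed a single image; the embed_images method name is assumed here.
image = ["path/to/image.jpg"]  # hypothetical local path or URL
single_image_vector = embeddings.embed_images(image)
```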
Embed multiple images
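The same assumed method, sketched for a batch of images:

```python
# Embed multiple images in one call; embed_images is assumed here.
images = ["path/to/image1.jpg", "path/to/image2.jpg"]  # hypothetical paths
image_vectors = embeddings.embed_images(images)
```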
Embed single text-image pairs
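A sketch for matched text-image input; the embed_image_text_pairs method name and the list-of-dicts input shape are assumptions to confirm against the API reference below.

```python
# Embed one matched text-image pair; method name and input format assumed.
inputs = [{"text": "A photo of a cat.", "image": "path/to/cat.jpg"}]
pair_vector = embeddings.embed_image_text_pairs(inputs)
```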
Embed multiple text-image pairs
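The same assumed call, sketched for several matched pairs at once:

```python
# Embed several matched pairs in one call; method name and format assumed.
inputs = [
    {"text": "A photo of a cat.", "image": "path/to/cat.jpg"},
    {"text": "A photo of a dog.", "image": "path/to/dog.jpg"},
]
pair_vectors = embeddings.embed_image_text_pairs(inputs)
```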
API reference
For detailed documentation of all PredictionGuardEmbeddings features and configurations, check out the API reference: python.langchain.com/api_reference/community/embeddings/langchain_community.embeddings.predictionguard.PredictionGuardEmbeddings.html