Prediction Guard is a secure, scalable GenAI platform that safeguards sensitive data, prevents common AI malfunctions, and runs on affordable hardware.
Overview
Integration details
This integration utilizes the Prediction Guard API, which includes various safeguards and security features.Setup
To access Prediction Guard models, contact us here to get a Prediction Guard API key and get started.Credentials
Once you have a key, you can set it withInstallation
Instantiation
Invocation
Process Input
With Prediction Guard, you can guard your model inputs for PII or prompt injections using one of our input checks. See the Prediction Guard docs for more information.PII
Prompt Injection
Output Validation
With Prediction Guard, you can check validate the model outputs using factuality to guard against hallucinations and incorrect info, and toxicity to guard against toxic responses (e.g. profanity, hate speech). See the Prediction Guard docs for more information.Toxicity
Factuality
Chaining
API reference
python.langchain.com/api_reference/community/llms/langchain_community.llms.predictionguard.PredictionGuard.htmlConnect these docs programmatically to Claude, VSCode, and more via MCP for real-time answers.