Eden AI goes beyond mere model invocation. It empowers you with advanced features, including:
- Multiple Providers: Gain access to a diverse range of language models offered by various providers, giving you the freedom to choose the best-suited model for your use case.
- Fallback Mechanism: Set a fallback mechanism to ensure seamless operation even if the primary provider is unavailable; you can easily switch to an alternative provider.
- Usage Tracking: Track usage statistics on a per-project and per-API key basis. This feature allows you to monitor and manage resource consumption effectively.
- Monitoring and Observability: Eden AI provides comprehensive monitoring and observability tools on the platform. Monitor the performance of your language models, analyze usage patterns, and gain valuable insights to optimize your applications.
Streaming and Batching
ChatEdenAI supports streaming and batching. Below is an example.
Fallback Mechanism

With Eden AI you can set a fallback mechanism to ensure seamless operation even if the primary provider is unavailable: you can easily switch to an alternative provider.

Chaining Calls
Tools
bind_tools()
With ChatEdenAI.bind_tools, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model.
with_structured_output()
The BaseChatModel.with_structured_output interface makes it easy to get structured output from chat models. You can use ChatEdenAI.with_structured_output, which uses tool-calling under the hood, to get the model to more reliably return an output in a specific format.

Passing Tool Results to model
Here is a full example of how to use a tool: pass the tool's output back to the model, and get the final result from the model.

Streaming
Eden AI does not currently support streaming tool calls. Attempting to stream will yield a single final message.