- Example 1: a simple example of how to use MariTalk to perform a task (pet name suggestions).
- Example 2: LLM + RAG: answering a question whose answer is found in a long document that does not fit within MariTalk's token limit. We first use a simple searcher (BM25) to retrieve the most relevant sections of the document, then feed them to MariTalk to produce the answer.
## Installation
First, install the LangChain library (and all its dependencies) using the following command:
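The MariTalk integration lives in the community package; a typical install looks like the following (exact package names may vary with your LangChain version):

```shell
pip install langchain langchain-core langchain-community httpx
```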
## API Key

You will need an API key, which can be obtained from chat.maritaca.ai ("Chaves da API" section).

### Example 1 - Pet Name Suggestions
Let's define our language model, ChatMaritalk, and configure it with your API key.
#### Stream Generation

For tasks involving the generation of long text, such as creating an extensive article or translating a large document, it can be advantageous to receive the response in parts, as the text is generated, rather than waiting for the complete text. This makes the application more responsive and efficient, especially when the generated text is long. We offer two approaches to meet this need: one synchronous and one asynchronous.

##### Synchronous
##### Asynchronous
### Example 2 - RAG + LLM: UNICAMP 2024 Entrance Exam Question Answering System
For this example, we need to install some extra libraries:
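The package list below is a best guess for a BM25 + PDF pipeline (adjust to your environment):

```shell
pip install rank_bm25 pypdf unstructured
```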
#### Loading the database

The first step is to create a database with the information from the notice. For this, we will download the notice from the COMVEST website and segment the extracted text into 500-character windows.
#### Creating a Searcher

Now that we have our database, we need a searcher. For this example, we will use a simple BM25 as the search system, but it could be replaced by any other searcher (such as search via embeddings).
#### Combining Search System + LLM

Now that we have our searcher, we just need to implement a prompt specifying the task and invoke the chain.