Maritalk
Introduction
MariTalk is an assistant developed by the Brazilian company Maritaca AI. It is based on language models that have been specially trained to understand Portuguese well.
This notebook demonstrates how to use MariTalk with LlamaIndex through two examples:
- Get pet name suggestions with the chat method;
- Classify film reviews as negative or positive using few-shot examples with the complete method.
Installation
If you're opening this notebook on Colab, you will probably need to install LlamaIndex.
```python
!pip install llama-index
!pip install llama-index-llms-maritalk
# asyncio is part of the Python standard library, so it does not need to be installed
```
API Key
You will need an API key, which can be obtained from chat.maritaca.ai (under the "Chaves da API", i.e. "API Keys", section).
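If you prefer not to hard-code the key, you can instead expose it through the MARITALK_API_KEY environment variable, which the Maritalk class falls back to when no api_key argument is passed (as the comment in Example 1 notes). A minimal sketch:

```python
import os

# Make the key available to Maritalk without passing api_key explicitly;
# the class reads MARITALK_API_KEY from the environment when api_key is omitted.
os.environ["MARITALK_API_KEY"] = "<your_maritalk_api_key>"
```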
Example 1 - Pet Name Suggestions with Chat
```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.maritalk import Maritalk

import asyncio
```
```python
# To customize your API key, do this;
# otherwise it will be looked up from the MARITALK_API_KEY env variable.
llm = Maritalk(api_key="<your_maritalk_api_key>", model="sabia-2-medium")
```
```python
# Call chat with a list of messages
messages = [
    ChatMessage(
        role="system",
        content="You are an assistant specialized in suggesting pet names. Given the animal, you must suggest 4 names.",
    ),
    ChatMessage(role="user", content="I have a dog."),
]
```
```python
# Sync chat
response = llm.chat(messages)
print(response)
```
```python
# Async chat
async def get_dog_name(llm, messages):
    response = await llm.achat(messages)
    print(response)


asyncio.run(get_dog_name(llm, messages))
```
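Note that asyncio.run() starts a new event loop, so in environments like Colab or Jupyter, where a loop is already running, it may raise a RuntimeError. In a notebook cell you can usually await the coroutine directly instead:

```python
# In a notebook cell (IPython supports top-level await),
# call the coroutine directly instead of using asyncio.run():
await get_dog_name(llm, messages)
```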
Stream Generation
For tasks that generate long text, such as writing an extensive article or translating a large document, it can be advantageous to receive the response in parts, as the text is generated, instead of waiting for the complete text. This makes the application more responsive and efficient, especially when the generated text is long. We offer two approaches to meet this need: one synchronous and one asynchronous.
```python
# Sync streaming chat
response = llm.stream_chat(messages)
for chunk in response:
    print(chunk.delta, end="", flush=True)
```
```python
# Async streaming chat
async def get_dog_name_streaming(llm, messages):
    async for chunk in await llm.astream_chat(messages):
        print(chunk.delta, end="", flush=True)


asyncio.run(get_dog_name_streaming(llm, messages))
```
Example 2 - Few-shot Examples with Complete
We recommend using the llm.complete() method when using the model with few-shot examples. The prompt below is in Portuguese, the language the model is specialized in: it asks the model to classify a movie review as "positiva" (positive) or "negativa" (negative).
prompt = """Classifique a resenha de filme como "positiva" ou "negativa".
Resenha: Gostei muito do filme, é o melhor do ano!Classe: positiva
Resenha: O filme deixa muito a desejar.Classe: negativa
Resenha: Apesar de longo, valeu o ingresso..Classe:"""
```python
# Sync complete
response = llm.complete(prompt)
print(response)
```
```python
# Async complete
async def classify_review(llm, prompt):
    response = await llm.acomplete(prompt)
    print(response)


asyncio.run(classify_review(llm, prompt))
```
```python
# Sync streaming complete
response = llm.stream_complete(prompt)
for chunk in response:
    print(chunk.delta, end="", flush=True)
```
```python
# Async streaming complete
async def classify_review_streaming(llm, prompt):
    async for chunk in await llm.astream_complete(prompt):
        print(chunk.delta, end="", flush=True)


asyncio.run(classify_review_streaming(llm, prompt))
```