Chat Memory Buffer
NOTE: ChatMemoryBuffer is deprecated in favor of the newer and more flexible Memory class. See the latest docs.
The ChatMemoryBuffer is a memory buffer that simply stores the last X messages that fit within a token limit.
%pip install llama-index-core
from llama_index.core.memory import ChatMemoryBuffer
memory = ChatMemoryBuffer.from_defaults(token_limit=40000)
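from_defaults can also be seeded with an existing conversation via its chat_history argument. A minimal sketch (the seed message is invented for illustration):
from llama_index.core.llms import ChatMessage

# hypothetical prior conversation used to seed the buffer
seeded_memory = ChatMemoryBuffer.from_defaults(
    chat_history=[ChatMessage(role="user", content="My name is Ada.")],
    token_limit=40000,
)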
Using Standalone
from llama_index.core.llms import ChatMessage
chat_history = [
    ChatMessage(role="user", content="Hello, how are you?"),
    ChatMessage(role="assistant", content="I'm doing well, thank you!"),
]
# put a list of messages
memory.put_messages(chat_history)

# put one message at a time
# memory.put_message(chat_history[0])

# get the last X messages that fit into the token limit
history = memory.get()

# get all messages
all_history = memory.get_all()

# clear the memory
memory.reset()
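To see the token limit in action, create a buffer with a deliberately small limit: get() drops the oldest messages until the remainder fits, while get_all() still returns everything stored. A minimal sketch (the 50-token limit and the messages are arbitrary):
small_memory = ChatMemoryBuffer.from_defaults(token_limit=50)

for i in range(20):
    small_memory.put(ChatMessage(role="user", content=f"Message number {i}"))

# only the most recent messages that fit under the 50-token limit
print(len(small_memory.get()))

# all 20 messages are still stored
print(len(small_memory.get_all()))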
Using with Agents
You can set the memory on any agent by passing it to the .run() method.
import os
os.environ["OPENAI_API_KEY"] = "sk-proj-..."
from llama_index.core.agent.workflow import ReActAgent, FunctionAgent
from llama_index.core.workflow import Context
from llama_index.llms.openai import OpenAI
memory = ChatMemoryBuffer.from_defaults(token_limit=40000)
agent = FunctionAgent(tools=[], llm=OpenAI(model="gpt-4o-mini"))
# context to hold the chat history/state
ctx = Context(agent)
resp = await agent.run("Hello, how are you?", ctx=ctx, memory=memory)
print(memory.get_all())
[ChatMessage(role=<MessageRole.USER: 'user'>, additional_kwargs={}, blocks=[TextBlock(block_type='text', text='Hello, how are you?')]), ChatMessage(role=<MessageRole.ASSISTANT: 'assistant'>, additional_kwargs={}, blocks=[TextBlock(block_type='text', text="Hello! I'm just a program, so I don't have feelings, but I'm here and ready to help you. How can I assist you today?")])]
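Since both turns are now stored in the buffer, passing the same memory (and ctx) to a follow-up .run() call lets the agent answer with the earlier exchange in view. A minimal sketch (the follow-up question is made up):
# the agent sees the earlier turns stored in the same buffer
resp = await agent.run("What was the first thing I said to you?", ctx=ctx, memory=memory)
print(resp)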