
Google AlloyDB for PostgreSQL - `AlloyDBChatStore`

AlloyDB is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. AlloyDB is 100% compatible with PostgreSQL. Extend your database application to build AI-powered experiences leveraging AlloyDB’s LlamaIndex integrations.

This notebook goes over how to use AlloyDB for PostgreSQL to store chat history with the AlloyDBChatStore class.

Learn more about the package on GitHub.


To run this notebook, you will need to do the following:

Install the integration library, llama-index-alloydb-pg, along with the libraries for the LLM and embedding services, llama-index-llms-vertex and llama-index-embeddings-vertex.

%pip install --upgrade --quiet llama-index-alloydb-pg llama-index-llms-vertex llama-index-embeddings-vertex llama-index

Colab only: Uncomment the following cell to restart the kernel, or use the button to restart it. For Vertex AI Workbench, you can restart the terminal using the button at the top.

# # Automatically restart kernel after installs so that your environment can access the new packages
# import IPython
# app = IPython.Application.instance()
# app.kernel.do_shutdown(True)

Authenticate to Google Cloud as the IAM user logged into this notebook in order to access your Google Cloud Project.

  • If you are using Colab to run this notebook, use the cell below and continue.
  • If you are using Vertex AI Workbench, check out the setup instructions here.

from google.colab import auth
auth.authenticate_user()

Set your Google Cloud project so that you can leverage Google Cloud resources within this notebook.

If you don’t know your project ID, try the following:

# @markdown Please fill in the value below with your Google Cloud project ID and then run the cell.
PROJECT_ID = "my-project-id" # @param {type:"string"}
# Set the project id
!gcloud config set project {PROJECT_ID}

Find your database values on the AlloyDB Instances page.

# @title Set Your Values Here { display-mode: "form" }
REGION = "us-central1" # @param {type: "string"}
CLUSTER = "my-cluster" # @param {type: "string"}
INSTANCE = "my-primary" # @param {type: "string"}
DATABASE = "my-database" # @param {type: "string"}
TABLE_NAME = "chat_store" # @param {type: "string"}
VECTOR_STORE_TABLE_NAME = "vector_store" # @param {type: "string"}
USER = "postgres" # @param {type: "string"}
PASSWORD = "my-password" # @param {type: "string"}

One of the requirements and arguments to establish AlloyDB as a chat store is an AlloyDBEngine object. The AlloyDBEngine configures a connection pool to your AlloyDB database, enabling successful connections from your application and following industry best practices.

To create an AlloyDBEngine using AlloyDBEngine.from_instance(), you need to provide only five things:

  1. project_id: Project ID of the Google Cloud project where the AlloyDB instance is located.
  2. region: Region where the AlloyDB instance is located.
  3. cluster: The name of the AlloyDB cluster.
  4. instance: The name of the AlloyDB instance.
  5. database: The name of the database to connect to on the AlloyDB instance.

By default, IAM database authentication will be used as the method of database authentication. This library uses the IAM principal belonging to the Application Default Credentials (ADC) sourced from the environment.
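For example, a minimal sketch of the default IAM flow (assuming Application Default Credentials are already configured in this environment):

from llama_index_alloydb_pg import AlloyDBEngine

# With IAM database authentication (the default), no user or password is
# passed; the IAM principal from ADC is used instead.
iam_engine = AlloyDBEngine.from_instance(
    project_id=PROJECT_ID,
    region=REGION,
    cluster=CLUSTER,
    instance=INSTANCE,
    database=DATABASE,
)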

Optionally, built-in database authentication using a username and password can be used instead. Just provide the optional user and password arguments to AlloyDBEngine.from_instance():

  • user: Database user to use for built-in database authentication and login.
  • password: Database password to use for built-in database authentication and login.

Note: This tutorial demonstrates the async interface. All async methods have corresponding sync methods.

from llama_index_alloydb_pg import AlloyDBEngine

engine = await AlloyDBEngine.afrom_instance(
    project_id=PROJECT_ID,
    region=REGION,
    cluster=CLUSTER,
    instance=INSTANCE,
    database=DATABASE,
    user=USER,
    password=PASSWORD,
)

To create an AlloyDBEngine for AlloyDB Omni, you will need a connection URL. AlloyDBEngine.from_connection_string first creates an async engine and then turns it into an AlloyDBEngine. Here is an example connection with the asyncpg driver:

# Replace with your own AlloyDB Omni info
OMNI_USER = "my-omni-user"
OMNI_PASSWORD = ""
OMNI_HOST = "127.0.0.1"
OMNI_PORT = "5432"
OMNI_DATABASE = "my-omni-db"
connstring = f"postgresql+asyncpg://{OMNI_USER}:{OMNI_PASSWORD}@{OMNI_HOST}:{OMNI_PORT}/{OMNI_DATABASE}"
engine = AlloyDBEngine.from_connection_string(connstring)

The AlloyDBChatStore class requires a database table. The AlloyDBEngine has a helper method, ainit_chat_store_table(), that creates a table with the proper schema for you.

await engine.ainit_chat_store_table(table_name=TABLE_NAME)

You can also specify a schema name by passing schema_name wherever you pass table_name.

SCHEMA_NAME = "my_schema"

await engine.ainit_chat_store_table(
    table_name=TABLE_NAME,
    schema_name=SCHEMA_NAME,
)

Next, initialize an AlloyDBChatStore:

from llama_index_alloydb_pg import AlloyDBChatStore

chat_store = await AlloyDBChatStore.create(
    engine=engine,
    table_name=TABLE_NAME,
    # schema_name=SCHEMA_NAME,
)

Then create a ChatMemoryBuffer that persists messages for a given user key through the chat store:

from llama_index.core.memory import ChatMemoryBuffer

memory = ChatMemoryBuffer.from_defaults(
    token_limit=3000,
    chat_store=chat_store,
    chat_store_key="user1",
)

You can use any of the LLMs compatible with LlamaIndex. You may need to enable the Vertex AI API to use Vertex.

from llama_index.llms.vertex import Vertex
llm = Vertex(model="gemini-1.5-flash-002", project=PROJECT_ID)

Use the AlloyDBChatStore without a storage context

from llama_index.core.chat_engine import SimpleChatEngine
chat_engine = SimpleChatEngine(memory=memory, llm=llm, prefix_messages=[])
response = chat_engine.chat("Hello.")
print(response)
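
Because messages are persisted in AlloyDB, you can read them back directly through the chat store. A minimal sketch, using the async accessor from LlamaIndex's chat store interface and the "user1" key set earlier:

# Fetch the messages persisted under the "user1" key.
stored_messages = await chat_store.aget_messages("user1")
for message in stored_messages:
    print(message.role, ":", message.content)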

Use the AlloyDBChatStore with a storage context


Find a detailed guide on how to use the AlloyDBVectorStore here.

You can also use the AlloyDBDocumentStore and AlloyDBIndexStore to persist documents and index metadata. For a detailed Python notebook on this, see the LlamaIndex Doc Store Guide.
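
As a rough sketch of that pattern (assuming the document and index store helpers mirror the ainit_chat_store_table() convention shown above; check the package docs for the exact names):

from llama_index_alloydb_pg import AlloyDBDocumentStore, AlloyDBIndexStore

# Table names below are illustrative placeholders.
await engine.ainit_doc_store_table(table_name="doc_store")
await engine.ainit_index_store_table(table_name="index_store")

doc_store = await AlloyDBDocumentStore.create(engine=engine, table_name="doc_store")
index_store = await AlloyDBIndexStore.create(engine=engine, table_name="index_store")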

from llama_index_alloydb_pg import AlloyDBVectorStore

await engine.ainit_vector_store_table(
    table_name=VECTOR_STORE_TABLE_NAME,
    vector_size=768,  # vector size for the Vertex AI model (textembedding-gecko@latest)
)

vector_store = await AlloyDBVectorStore.create(
    engine=engine,
    table_name=VECTOR_STORE_TABLE_NAME,
)

You can use any LlamaIndex embeddings model. You may need to enable the Vertex AI API to use VertexTextEmbedding. For production, we recommend pinning the embedding model's version; learn more about the text embeddings models.

# Enable the Vertex AI API
!gcloud services enable aiplatform.googleapis.com

import google.auth

from llama_index.core import Settings
from llama_index.embeddings.vertex import VertexTextEmbedding
from llama_index.llms.vertex import Vertex

credentials, project_id = google.auth.default()

Settings.embed_model = VertexTextEmbedding(
    model_name="textembedding-gecko@003",
    project=PROJECT_ID,
    credentials=credentials,
)
Settings.llm = Vertex(model="gemini-1.5-flash-002", project=PROJECT_ID)
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
from llama_index.core import SimpleDirectoryReader
documents = SimpleDirectoryReader("./data/paul_graham").load_data()
print("Document ID:", documents[0].doc_id)

Create a VectorStoreIndex with a storage context

from llama_index.core import StorageContext, VectorStoreIndex

storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context, show_progress=True
)

chat_engine = index.as_chat_engine(llm=llm, chat_mode="context", memory=memory)
response = chat_engine.chat("What did the author do?")
print(response)
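
Because the memory is backed by AlloyDB, a fresh ChatMemoryBuffer bound to the same chat_store_key picks up the persisted conversation. A minimal sketch, reusing the "user1" key from earlier:

# A new buffer reads the stored history for "user1" from the chat store.
restored_memory = ChatMemoryBuffer.from_defaults(
    token_limit=3000,
    chat_store=chat_store,
    chat_store_key="user1",
)
print(restored_memory.get_all())  # previously persisted ChatMessage objects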