Awadb Vector Store
If you’re opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.
%pip install llama-index-embeddings-huggingface
%pip install llama-index-vector-stores-awadb
!pip install llama-index
Creating an Awadb index
import logging
import sys
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
Load documents, build the VectorStoreIndex
from llama_index.core import (
    SimpleDirectoryReader,
    VectorStoreIndex,
    StorageContext,
)
from IPython.display import Markdown, display
import openai
openai.api_key = ""  # paste your OpenAI API key here
INFO:numexpr.utils:Note: NumExpr detected 12 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
INFO:numexpr.utils:NumExpr defaulting to 8 threads.
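Rather than hardcoding the key, you can read it from an environment variable. This is a minimal sketch using the standard library; the OPENAI_API_KEY variable name is an assumption about your shell setup:

import os

# Assumes OPENAI_API_KEY is exported in the environment; falls back to "".
openai.api_key = os.environ.get("OPENAI_API_KEY", "")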
Download Data
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
Load Data
# load documents
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.vector_stores.awadb import AwaDBVectorStore
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
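As a quick sanity check that the embedding model loaded correctly, you can embed a short string and inspect the vector size; this is an optional sketch, and sample_vector is just an illustrative name:

# Embed a sample string and check its dimensionality.
sample_vector = embed_model.get_text_embedding("Hello, world!")
print(len(sample_vector))  # bge-small-en-v1.5 produces 384-dimensional vectors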
vector_store = AwaDBVectorStore()
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
    documents, storage_context=storage_context, embed_model=embed_model
)
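Before moving on to the query engine, you can also pull raw nodes straight out of the index to see what AwaDB returns without any LLM synthesis. A minimal sketch using the standard as_retriever API (similarity_top_k=2 is an arbitrary choice):

# Retrieve the two most similar nodes for a query, with no LLM involved.
retriever = index.as_retriever(similarity_top_k=2)
nodes = retriever.retrieve("What did the author do growing up?")
for node_with_score in nodes:
    print(node_with_score.score, node_with_score.node.get_text()[:100])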
Query Index
# set Logging to DEBUG for more detailed outputs
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
display(Markdown(f"<b>{response}</b>"))
# set Logging to DEBUG for more detailed outputs
query_engine = index.as_query_engine()
response = query_engine.query(
    "What did the author do after his time at Y Combinator?"
)
display(Markdown(f"<b>{response}</b>"))
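Each response also carries the source nodes it was synthesized from, so you can check which chunks of the essay ground the answer; a minimal sketch:

# Inspect the retrieved chunks and their similarity scores.
for source_node in response.source_nodes:
    print(source_node.score, source_node.node.get_text()[:100])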