ChatGPT
If you’re opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.
%pip install llama-index-llms-openai
!pip install llama-index
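The OpenAI LLM reads your API key from the OPENAI_API_KEY environment variable, so set it before running the cells below (the key shown here is a placeholder):

import os

os.environ["OPENAI_API_KEY"] = "sk-..."  # replace with your actual key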
import logging
import sys

# Log to stdout at INFO level. Note that basicConfig already attaches a
# stream handler, so the extra addHandler call below causes each record to
# print twice (visible in the doubled log lines later in this notebook).
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI
from IPython.display import Markdown, display
Download Data
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
Load documents, build the VectorStoreIndex
# load documents
documents = SimpleDirectoryReader("./data/paul_graham").load_data()
# set global settings config
llm = OpenAI(temperature=0, model="gpt-3.5-turbo")
Settings.llm = llm
Settings.chunk_size = 512
index = VectorStoreIndex.from_documents(documents)
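Building the index embeds every chunk, so for anything beyond a toy corpus it is worth persisting it to disk and reloading on later runs. A minimal sketch, assuming the default local storage backend (the ./storage directory name is arbitrary):

from llama_index.core import StorageContext, load_index_from_storage

# save the index (vector store, docstore, and index store) to disk
index.storage_context.persist(persist_dir="./storage")

# later: rebuild the index object from the persisted storage
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)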
Query Index
By default, with the help of LangChain’s PromptSelector abstraction, we use a modified refine prompt tailored for chat models whenever a ChatGPT model is used.
query_engine = index.as_query_engine(
    similarity_top_k=3,
    streaming=True,
)
response = query_engine.query(
    "What did the author do growing up?",
)
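To check which templates the selector actually resolved to, you can list the prompts attached to the query engine. A quick sketch, assuming the default response synthesizer (the exact key names may vary slightly across versions):

# list the prompt slots on the engine; with a chat model, the refine slot
# should resolve to the chat-style refine prompt
prompts_dict = query_engine.get_prompts()
print(list(prompts_dict.keys()))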
INFO:llama_index.token_counter.token_counter:> [retrieve] Total LLM token usage: 0 tokens
> [retrieve] Total LLM token usage: 0 tokens
INFO:llama_index.token_counter.token_counter:> [retrieve] Total embedding token usage: 8 tokens
> [retrieve] Total embedding token usage: 8 tokens
INFO:llama_index.token_counter.token_counter:> [get_response] Total LLM token usage: 0 tokens
> [get_response] Total LLM token usage: 0 tokens
INFO:llama_index.token_counter.token_counter:> [get_response] Total embedding token usage: 0 tokens
> [get_response] Total embedding token usage: 0 tokens
response.print_response_stream()
Before college, the author worked on writing short stories and programming on an IBM 1401 using an early version of Fortran. They also worked on programming with microcomputers and eventually created a new dialect of Lisp called Arc. They later realized the potential of publishing essays on the web and began writing and publishing them. The author also worked on spam filters, painting, and cooking for groups.
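print_response_stream consumes the underlying token generator; if you want the tokens yourself (for example, to forward them to a UI), iterate over response_gen instead. A minimal sketch (the generator is exhausted after one pass, so re-run the query first):

response = query_engine.query("What did the author do growing up?")
for token in response.response_gen:
    print(token, end="")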
query_engine = index.as_query_engine(
    similarity_top_k=5,
    streaming=True,
)
response = query_engine.query(
    "What did the author do during his time at RISD?",
)
INFO:llama_index.token_counter.token_counter:> [retrieve] Total LLM token usage: 0 tokens
> [retrieve] Total LLM token usage: 0 tokens
INFO:llama_index.token_counter.token_counter:> [retrieve] Total embedding token usage: 12 tokens
> [retrieve] Total embedding token usage: 12 tokens
INFO:llama_index.token_counter.token_counter:> [get_response] Total LLM token usage: 0 tokens
> [get_response] Total LLM token usage: 0 tokens
INFO:llama_index.token_counter.token_counter:> [get_response] Total embedding token usage: 0 tokens
> [get_response] Total embedding token usage: 0 tokens
response.print_response_stream()
The author attended RISD and took classes in fundamental subjects like drawing, color, and design. They also learned a lot in the color class they took, but otherwise, they were basically teaching themselves to paint. The author dropped out of RISD in 1993.
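Since similarity_top_k=5, five chunks were retrieved to ground this answer. You can inspect them, along with their similarity scores, via source_nodes; a quick sketch:

# each entry is a NodeWithScore: the retrieved chunk plus its similarity score
for node_with_score in response.source_nodes:
    print(node_with_score.score, node_with_score.node.get_content()[:80])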
Refine Prompt
Here is the chat refine prompt:
from llama_index.core.prompts.chat_prompts import CHAT_REFINE_PROMPT
dict(CHAT_REFINE_PROMPT.prompt)
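If dict() on the prompt object errors in your version of LlamaIndex, you can inspect the underlying chat messages directly instead. A sketch, assuming the ChatPromptTemplate interface where each entry is a ChatMessage with a role and content:

# print each message template that makes up the chat refine prompt
for message in CHAT_REFINE_PROMPT.message_templates:
    print(message.role)
    print(message.content)
    print("---")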
Query Index (Using the standard Refine Prompt)
If we use the “standard” refine prompt (where the prompt is a single text template instead of multiple chat messages), we find that the results with ChatGPT are worse.
from llama_index.core.prompts.default_prompts import DEFAULT_REFINE_PROMPT
query_engine = index.as_query_engine(
    refine_template=DEFAULT_REFINE_PROMPT,
    similarity_top_k=5,
    streaming=True,
)
response = query_engine.query(
    "What did the author do during his time at RISD?",
)
response.print_response_stream()
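The same refine_template hook also accepts your own template. A minimal sketch of a hypothetical custom refine prompt, assuming the standard refine template variables ({query_str}, {existing_answer}, {context_msg}):

from llama_index.core import PromptTemplate

# hypothetical custom refine template using the standard refine variables
custom_refine_prompt = PromptTemplate(
    "The original question is: {query_str}\n"
    "We have an existing answer: {existing_answer}\n"
    "Refine the existing answer (only if needed) using the context below.\n"
    "------------\n"
    "{context_msg}\n"
    "------------\n"
)

query_engine = index.as_query_engine(
    refine_template=custom_refine_prompt,
    streaming=True,
)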