Psychic Reader
Demonstrates the Psychic data connector. Used to query data from many SaaS tools from a single LlamaIndex-compatible API.
Prerequisites
Connections must first be established from the Psychic dashboard or React hook before documents can be loaded. Refer to https://docs.psychic.dev/ for more info.
If you’re opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.
%pip install llama-index-readers-psychic
!pip install llama-index
import logging
import sys
import os
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
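Note that re-running a notebook cell that adds a handler to the root logger will stack handlers, so each log record prints once per handler. A minimal sketch of an idempotent setup (`setup_logging` is a hypothetical helper, not part of LlamaIndex):

```python
import logging
import sys


def setup_logging(level: int = logging.INFO) -> logging.Logger:
    # Hypothetical helper: configure root logging to stdout without
    # adding a second StreamHandler when the cell is re-run.
    root = logging.getLogger()
    root.setLevel(level)
    if not any(isinstance(h, logging.StreamHandler) for h in root.handlers):
        root.addHandler(logging.StreamHandler(stream=sys.stdout))
    return root


root = setup_logging()
```

Re-running `setup_logging()` leaves the handler list unchanged, which keeps log output from being duplicated.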
from llama_index.core import SummaryIndex
from llama_index.readers.psychic import PsychicReader
from IPython.display import Markdown, display
# Get Psychic API key from https://dashboard.psychic.dev/api-keys
psychic_key = "PSYCHIC_API_KEY"
# Connector ID and Account ID are typically set programmatically based on the application state.
account_id = "ACCOUNT_ID"
connector_id = "notion"
documents = PsychicReader(psychic_key=psychic_key).load_data(
    connector_id=connector_id, account_id=account_id
)
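Rather than hardcoding placeholder strings like `"PSYCHIC_API_KEY"`, credentials can be read from the environment so a missing key fails fast instead of being sent to the API. A small sketch, where `get_required_env` is a hypothetical helper (not part of the Psychic SDK or LlamaIndex):

```python
import os


def get_required_env(name: str) -> str:
    # Hypothetical helper: fail fast with a clear message instead of
    # passing an empty or placeholder credential to the API.
    value = os.environ.get(name, "").strip()
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value


# Usage sketch:
# psychic_key = get_required_env("PSYCHIC_API_KEY")
```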
# set Logging to DEBUG for more detailed outputs
os.environ["OPENAI_API_KEY"] = "OPENAI_API_KEY"
index = SummaryIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What is Psychic's privacy policy?")
display(Markdown(f"<b>{response}</b>"))
INFO:llama_index.token_counter.token_counter:> [build_index_from_nodes] Total LLM token usage: 0 tokens
INFO:llama_index.token_counter.token_counter:> [build_index_from_nodes] Total embedding token usage: 0 tokens
INFO:llama_index.token_counter.token_counter:> [get_response] Total LLM token usage: 2383 tokens
INFO:llama_index.token_counter.token_counter:> [get_response] Total embedding token usage: 0 tokens