Installation
Quick Start
Install the core package:
npm i llamaindex
In most cases, you'll also need an LLM provider and the Workflow package:
npm i @llamaindex/openai @llamaindex/workflow
Environment Setup
API Keys
Most LLM providers require API keys. Set your OpenAI key (or that of another provider):
export OPENAI_API_KEY=your-api-key
Or use a .env file:
echo "OPENAI_API_KEY=your-api-key" > .env
Loading Environment Variables
For Node.js applications:
node --env-file .env your-script.js
For other environments, see the deployment-specific guides below.
TypeScript Configuration
LlamaIndex.TS is built with TypeScript and provides excellent type safety. Add these settings to your tsconfig.json:
{
  "compilerOptions": {
    // Essential for module resolution
    "moduleResolution": "bundler", // or "nodenext" | "node16" | "node"
    // Required for Web Stream API support
    "lib": ["DOM.AsyncIterable"],
    // Recommended for better compatibility
    "target": "es2020",
    "module": "esnext"
  }
}
Running your first agent
If you don't already have a project, you can create a new one in a new folder:
npm init
npm i -D typescript @types/node
npm i @llamaindex/openai @llamaindex/workflow llamaindex zod
Run the agent
Create the file example.ts. This code will:
- Create two tools for use by the agent:
  - A sumNumbers tool that adds two numbers
  - A divideNumbers tool that divides numbers
- Give an example of the data structure we wish to generate
- Prompt the LLM with instructions and the example, plus a sample transcript
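Putting those steps together, example.ts might look like the sketch below. It assumes the `tool` helper from llamaindex and the `agent` factory from @llamaindex/workflow; the model name is illustrative, and the exact response shape may vary between versions:

```typescript
import { openai } from "@llamaindex/openai";
import { agent } from "@llamaindex/workflow";
import { tool } from "llamaindex";
import { z } from "zod";

// A sumNumbers tool that adds two numbers
const sumNumbers = tool({
  name: "sumNumbers",
  description: "Use this function to sum two numbers",
  parameters: z.object({
    a: z.number().describe("The first number"),
    b: z.number().describe("The second number"),
  }),
  execute: ({ a, b }: { a: number; b: number }) => `${a + b}`,
});

// A divideNumbers tool that divides numbers
const divideNumbers = tool({
  name: "divideNumbers",
  description: "Use this function to divide two numbers",
  parameters: z.object({
    a: z.number().describe("The dividend"),
    b: z.number().describe("The divisor"),
  }),
  execute: ({ a, b }: { a: number; b: number }) => `${a / b}`,
});

// Wire the tools into an agent backed by an LLM
// (model name is an assumption — use any model your provider offers)
const mathAgent = agent({
  tools: [sumNumbers, divideNumbers],
  llm: openai({ model: "gpt-4o-mini" }),
});

// Ask a question that requires both tools
const response = await mathAgent.run("Add 5 and 5, then divide by 2");
console.log(response.data);
console.log("Done");
```

Running this requires OPENAI_API_KEY to be set in your environment, as described above.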
To run the code:
npx tsx example.ts
You should see output something like:
{
  result: '5 + 5 is 10. Then, 10 divided by 2 is 5.',
  state: {
    memory: Memory {
      messages: [Array],
      tokenLimit: 30000,
      shortTermTokenLimitRatio: 0.7,
      memoryBlocks: [],
      memoryCursor: 0,
      adapters: [Object]
    },
    scratchpad: [],
    currentAgentName: 'Agent',
    agents: [ 'Agent' ],
    nextAgentName: null
  }
}
Done
Performance Optimization
Tokenization Speed
Section titled âTokenization SpeedâInstall gpt-tokenizer
for 60x faster tokenization (Node.js environments only):
npm i gpt-tokenizer
LlamaIndex will automatically use this when available.
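No code changes are needed, but if you want to sanity-check the install, gpt-tokenizer can also be used standalone; a minimal sketch using its `encode` and `decode` exports:

```typescript
import { encode, decode } from "gpt-tokenizer";

const tokens = encode("Hello, LlamaIndex!");
console.log(tokens.length); // number of tokens for this string
console.log(decode(tokens)); // round-trips back to the original text
```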
Deployment Guides
Choose your deployment target:
Serverless Functions
Next.js Applications
Troubleshooting
LLM/Embedding Providers
Go to LLM APIs and Embedding APIs to find out how to use LLM and embedding providers beyond OpenAI.
What's Next?
Learn LlamaIndex.TS
Show me code examples