Installation
How to install and set up LlamaIndex.TS for your project.
Quick Start
Install the core package:
npm i llamaindex

In most cases, you’ll also need an LLM provider and the Workflow package:
npm i @llamaindex/openai @llamaindex/workflow

Environment Setup
API Keys
Most LLM providers require an API key. Set your OpenAI key (or the key for another provider):
export OPENAI_API_KEY=your-api-key

Or use a .env file:
echo "OPENAI_API_KEY=your-api-key" > .env

Loading Environment Variables
For Node.js applications:
node --env-file .env your-script.js

For other environments, see the deployment-specific guides below.
TypeScript Configuration
LlamaIndex.TS is built with TypeScript and provides excellent type safety. Add these settings to your tsconfig.json:
{
  "compilerOptions": {
    // Essential for module resolution
    "moduleResolution": "bundler", // or "nodenext" | "node16" | "node"
    // Required for Web Stream API support
    "lib": ["DOM.AsyncIterable"],
    // Recommended for better compatibility
    "target": "es2020",
    "module": "esnext"
  }
}

Running your first agent
Set up
If you don’t already have a project, you can create a new one in a new folder:
npm init
npm i -D typescript @types/node
npm i @llamaindex/openai @llamaindex/workflow llamaindex zod

Run the agent
Create the file example.ts. This code will:
- Create two tools for use by the agent:
  - A sumNumbers tool that adds two numbers
  - A divideNumbers tool that divides numbers
- Give an example of the data structure we wish to generate
- Prompt the LLM with instructions and the example, plus a sample transcript
To run the code:
npx tsx example.ts

You should expect output something like:
{
  result: '5 + 5 is 10. Then, 10 divided by 2 is 5.',
  state: {
    memory: Memory {
      messages: [Array],
      tokenLimit: 30000,
      shortTermTokenLimitRatio: 0.7,
      memoryBlocks: [],
      memoryCursor: 0,
      adapters: [Object]
    },
    scratchpad: [],
    currentAgentName: 'Agent',
    agents: [ 'Agent' ],
    nextAgentName: null
  }
}

Done

Performance Optimization
Tokenization Speed
Install gpt-tokenizer for 60x faster tokenization (Node.js environments only):
npm i gpt-tokenizer

LlamaIndex will automatically use this when available.
Deployment Guides
Choose your deployment target:
Serverless Functions
Next.js Applications
Troubleshooting
LLM/Embedding Providers
Go to LLM APIs and Embedding APIs to find out how to use different LLM and embedding providers beyond OpenAI.
What’s Next?
Learn LlamaIndex.TS
Show me code examples