Installation

Install the core package:

npm i llamaindex

In most cases, you’ll also need an LLM provider and the Workflow package:

npm i @llamaindex/openai @llamaindex/workflow

Most LLM providers require API keys. Set your OpenAI key (or other provider):

export OPENAI_API_KEY=your-api-key

Or use a .env file:

echo "OPENAI_API_KEY=your-api-key" > .env

For Node.js applications:

node --env-file .env your-script.js
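Providers read the key from the environment at runtime. A quick sanity check you can drop into a script (a sketch, nothing LlamaIndex-specific — it only inspects `process.env`):

```typescript
// Check that the OpenAI key is visible to the process before making any LLM calls
const key = process.env.OPENAI_API_KEY;
console.log(key ? "OPENAI_API_KEY is set" : "OPENAI_API_KEY is missing");
```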

For other environments, see the deployment-specific guides below.

LlamaIndex.TS is built with TypeScript and provides excellent type safety. Add these settings to your tsconfig.json:

{
  "compilerOptions": {
    // Essential for module resolution
    "moduleResolution": "bundler", // or "nodenext" | "node16" | "node"
    // Required for Web Stream API support
    "lib": ["DOM.AsyncIterable"],
    // Recommended for better compatibility
    "target": "es2020",
    "module": "esnext"
  }
}

If you don’t already have a project, you can create a new one in a new folder:

npm init
npm i -D typescript @types/node
npm i @llamaindex/openai @llamaindex/workflow llamaindex zod

Create the file example.ts. This code will:

  • Create two tools for use by the agent:
    • A sumNumbers tool that adds two numbers
    • A divideNumbers tool that divides numbers
  • Give an example of the data structure we wish to generate
  • Prompt the LLM with instructions and the example, plus a sample transcript
The full example lives at ../../examples/agents/agent/openai.ts in the LlamaIndex.TS repository.
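If the linked file isn't rendered above, here is a minimal sketch of such an agent using the `tool` helper from llamaindex and the `agent` factory from @llamaindex/workflow. The model name and prompt are illustrative, not prescribed by the original:

```typescript
import { openai } from "@llamaindex/openai";
import { agent } from "@llamaindex/workflow";
import { tool } from "llamaindex";
import { z } from "zod";

// Tool that adds two numbers
const sumNumbers = tool({
  name: "sumNumbers",
  description: "Use this function to sum two numbers",
  parameters: z.object({
    a: z.number().describe("The first number"),
    b: z.number().describe("The second number"),
  }),
  execute: ({ a, b }) => `${a + b}`,
});

// Tool that divides two numbers
const divideNumbers = tool({
  name: "divideNumbers",
  description: "Use this function to divide two numbers",
  parameters: z.object({
    a: z.number().describe("The dividend"),
    b: z.number().describe("The divisor"),
  }),
  execute: ({ a, b }) => `${a / b}`,
});

// Create an agent with both tools and ask it a multi-step question
const mathAgent = agent({
  tools: [sumNumbers, divideNumbers],
  llm: openai({ model: "gpt-4o-mini" }), // model choice is an assumption
});

const response = await mathAgent.run("Add 5 and 5, then divide the result by 2.");
console.log(response.data);
```

Note that top-level `await` requires an ES module context, which `tsx` provides out of the box.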

To run the code:

npx tsx example.ts

You should see output similar to:

{
  result: '5 + 5 is 10. Then, 10 divided by 2 is 5.',
  state: {
    memory: Memory {
      messages: [Array],
      tokenLimit: 30000,
      shortTermTokenLimitRatio: 0.7,
      memoryBlocks: [],
      memoryCursor: 0,
      adapters: [Object]
    },
    scratchpad: [],
    currentAgentName: 'Agent',
    agents: [ 'Agent' ],
    nextAgentName: null
  }
}
Done

Install gpt-tokenizer for 60x faster tokenization (Node.js environments only):

npm i gpt-tokenizer

LlamaIndex will automatically use this when available.

Choose your deployment target:

Server APIs & Backends

Serverless Functions

Next.js Applications

Troubleshooting

See LLM APIs and Embedding APIs to learn how to use LLM and embedding providers beyond OpenAI.

Learn LlamaIndex.TS

Show me code examples