Vercel
LlamaIndex provides integration with Vercel’s AI SDK, allowing you to create powerful search and retrieval applications. You can:
- Use any of Vercel AI’s model providers as LLMs in LlamaIndex
- Use indexes (e.g. VectorStoreIndex, LlamaCloudIndex) from LlamaIndexTS in your Vercel AI applications
First, install the required dependencies:

```shell
npm i @llamaindex/vercel ai
```
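The examples below call a model provider (and, later, LlamaCloud), which read their API keys from environment variables. A minimal setup sketch — variable names follow each provider's conventions, values are placeholders:

```shell
# Assumed environment setup for the examples below (placeholder values):
export OPENAI_API_KEY="..."        # read by the @ai-sdk/openai provider
export LLAMA_CLOUD_API_KEY="..."   # used by the LlamaCloud example below
```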
Using Vercel AI’s Model Providers
Using the `VercelLLM` adapter, it's easy to use any of Vercel AI's model providers as LLMs in LlamaIndex. Here's an example of how to use OpenAI's GPT-4o model:
```ts
import { openai } from "@ai-sdk/openai";
import { VercelLLM } from "@llamaindex/vercel";

const llm = new VercelLLM({ model: openai("gpt-4o") });

const result = await llm.complete({
  prompt: "What is the capital of France?",
  stream: false, // Set to true if you want streaming responses
});
console.log(result.text);
```
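When `stream` is set to `true`, `complete` returns an async iterable of chunks rather than a single response. A hedged sketch of consuming it, reusing the `llm` instance from above (this assumes each chunk's `text` field carries the incremental delta, per the LlamaIndexTS `LLM` interface):

```ts
// Streaming sketch: `llm` is the VercelLLM instance created above.
const stream = await llm.complete({
  prompt: "What is the capital of France?",
  stream: true, // request an async iterable of chunks
});
for await (const chunk of stream) {
  process.stdout.write(chunk.text); // write each delta as it arrives
}
```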
Use Indexes

Using VectorStoreIndex

Here's how to create a simple vector store index and query it using Vercel's AI SDK:
```ts
import { openai } from "@ai-sdk/openai";
import { llamaindex } from "@llamaindex/vercel";
import { streamText } from "ai";
import { Document, VectorStoreIndex } from "llamaindex";

// Create an index from your documents
const document = new Document({ text: yourText, id_: "unique-id" });
const index = await VectorStoreIndex.fromDocuments([document]);

// Create a query tool
const queryTool = llamaindex({
  model: openai("gpt-4"),
  index,
  description: "Search through the documents", // optional
});

// Use the tool with Vercel's AI SDK
streamText({
  model: openai("gpt-4"),
  prompt: "Your question here",
  tools: { queryTool },
  onFinish({ response }) {
    console.log("Response:", response.messages); // log the response
  },
}).toDataStream();
```
Note: the Vercel AI model referenced in the `llamaindex` function is used by the response synthesizer to generate a response for the tool call.
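In a Next.js app, the same tool can be exposed from a route handler. Here is a sketch under the assumption that an `index` has been built as shown above — the file path and handler shape follow the AI SDK's usual route-handler pattern, and are not prescribed by this integration:

```ts
// app/api/chat/route.ts — sketch; assumes `index` was built as shown above,
// e.g. with VectorStoreIndex.fromDocuments.
import { openai } from "@ai-sdk/openai";
import { llamaindex } from "@llamaindex/vercel";
import { streamText } from "ai";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: openai("gpt-4"),
    messages,
    tools: {
      queryTool: llamaindex({
        model: openai("gpt-4"),
        index, // the index built earlier
        description: "Search through the documents",
      }),
    },
  });
  // Stream the tool-augmented response back to the client.
  return result.toDataStreamResponse();
}
```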
Using LlamaCloud

For production deployments, you can use LlamaCloud to store and manage your documents:
```ts
import { LlamaCloudIndex } from "llama-cloud-services";

// Create a LlamaCloud index
const index = await LlamaCloudIndex.fromDocuments({
  documents: [document],
  name: "your-index-name",
  projectName: "your-project",
  apiKey: process.env.LLAMA_CLOUD_API_KEY,
});

// Use it the same way as VectorStoreIndex
const queryTool = llamaindex({
  model: openai("gpt-4"),
  index,
  description: "Search through the documents",
  options: { fields: ["sourceNodes", "messages"] },
});

// Use the tool with Vercel's AI SDK
streamText({
  model: openai("gpt-4"),
  prompt: "Your question here",
  tools: { queryTool },
}).toDataStream();
```
Next Steps

- Explore LlamaCloud for managed document storage and retrieval
- Join our Discord community for support and discussions