Agent Workflows
Agent Workflows are a powerful system that lets you create and orchestrate one or more agents with tools to perform specific tasks. It's built on top of the base Workflow system and provides a streamlined interface for agent interactions.
Single Agent Workflow
The simplest use case is creating a single agent with specific tools. Here's an example of creating an assistant that tells jokes:
```typescript
import { tool } from "llamaindex";
import { agent } from "@llamaindex/workflow";
import { openai } from "@llamaindex/openai";

// Define a joke-telling tool
const jokeTool = tool(() => "Baby Llama is called cria", {
  name: "joke",
  description: "Use this tool to get a joke",
});

// Create a single-agent workflow with the tool
const jokeAgent = agent({
  tools: [jokeTool],
  llm: openai({ model: "gpt-4o-mini" }),
});

// Run the workflow
const result = await jokeAgent.run("Tell me something funny");
console.log(result.data.result); // Baby Llama is called cria
console.log(result.data.message); // { role: 'assistant', content: 'Baby Llama is called cria' }
```
Structured Output
You can extract structured data from agent responses by providing a responseFormat with a Zod schema. This is useful when you need the agent's response in a specific format for further processing:
```typescript
import { z } from "zod";
import { tool } from "llamaindex";
import { agent } from "@llamaindex/workflow";
import { openai } from "@llamaindex/openai";

// Define a weather tool
const weatherTool = tool({
  name: "weatherTool",
  description: "Get weather information",
  parameters: z.object({
    location: z.string(),
  }),
  execute: ({ location }) => {
    return `The weather in ${location} is sunny. The temperature is 72 degrees. The humidity is 50%. The wind speed is 10 mph.`;
  },
});

// Define the structure you want for the response
const responseSchema = z.object({
  temperature: z.number(),
  humidity: z.number(),
  windSpeed: z.number(),
});

// Create the agent
const weatherAgent = agent({
  name: "weatherAgent",
  tools: [weatherTool],
  llm: openai({ model: "gpt-4.1-mini" }),
});

// Run with structured output
const result = await weatherAgent.run("What's the weather in Tokyo?", {
  responseFormat: responseSchema,
});

console.log("Natural language result:", result.data.result);
console.log("Structured data:", result.data.object);
// Output: { temperature: 72, humidity: 50, windSpeed: 10 }
```
The agent will:
- Use the weather tool to get the raw weather information
- Process that information through the LLM
- Extract structured data according to your schema
- Return both the natural language response and the structured object
Event Streaming
Agent Workflows provide a unified interface for event streaming, making it easy to track and respond to different events during execution:
```typescript
import { agentToolCallEvent, agentStreamEvent } from "@llamaindex/workflow";

// Run the workflow and get a stream of events
const events = jokeAgent.runStream("Tell me something funny");

// Stream and handle events
for await (const event of events) {
  if (agentToolCallEvent.include(event)) {
    console.log(`Tool being called: ${event.data.toolName}`);
  }
  if (agentStreamEvent.include(event)) {
    process.stdout.write(event.data.delta);
  }
}
```
Multi-Agent Workflow
An Agent Workflow can orchestrate multiple agents, enabling complex interactions and task handoffs. Each agent in a multi-agent workflow requires:
- name: Unique identifier for the agent
- description: Purpose description used for task routing
- tools: Array of tools the agent can use
- canHandoffTo (optional): Array of agent names or agent instances that this agent can delegate tasks to
Here’s an example of a multi-agent system that combines joke-telling and weather information:
```typescript
import { tool } from "llamaindex";
import { multiAgent, agent } from "@llamaindex/workflow";
import { openai } from "@llamaindex/openai";
import { z } from "zod";

// Create a weather agent
const weatherAgent = agent({
  name: "WeatherAgent",
  description: "Provides weather information for any city",
  tools: [
    tool({
      name: "fetchWeather",
      description: "Get weather information for a city",
      parameters: z.object({
        city: z.string(),
      }),
      execute: ({ city }) => `The weather in ${city} is sunny`,
    }),
  ],
  llm: openai({ model: "gpt-4o-mini" }),
});

// Create a joke-telling agent
const jokeAgent = agent({
  name: "JokeAgent",
  description: "Tells jokes and funny stories",
  tools: [jokeTool], // Using the joke tool defined earlier
  llm: openai({ model: "gpt-4o-mini" }),
  canHandoffTo: [weatherAgent], // Can hand off to the weather agent
});

// Create the multi-agent workflow
const agents = multiAgent({
  agents: [jokeAgent, weatherAgent],
  rootAgent: jokeAgent, // Start with the joke agent
});

// Run the workflow
const result = await agents.run(
  "Give me a morning greeting with a joke and the weather in San Francisco",
);
console.log(result.data.result);
```
The workflow will coordinate between agents, allowing them to handle different aspects of the request and hand off tasks when appropriate.