---
title: LlamaIndex Workflows | Developer Documentation
description: LlamaIndex Workflows is a simple and lightweight engine for JavaScript and TypeScript apps.
---

LlamaIndex Workflows is a library for event-driven programming in JavaScript and TypeScript. It provides a simple and lightweight orchestration solution for building complex workflows with minimal boilerplate.

It combines [event-driven](/docs/workflows/api-reference/type-aliases/WorkflowEvent/index.md) programming, [async context](/docs/workflows/api-reference/type-aliases/WorkflowContext/index.md) and [streaming](/docs/workflows/api-reference/classes/WorkflowStream/index.md) to create a flexible and efficient way to handle data processing tasks.

The essential concepts of Workflows are:

- **Events** are the core building blocks of a workflow. They represent the data that flows through the system.
- **Handlers** are functions that process events and can produce new events.
- The **Context** is the environment in which events are processed. It provides access to the event stream and lets you send new events.
- A **Workflow** is the collection of events, handlers, and context that defines the processing logic.
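To make these four concepts concrete, here is a minimal, library-free sketch of how they fit together. The names below are illustrative only and are not the library's API; the real package adds typing, streaming, and async context on top of this idea.

```typescript
// Illustrative sketch only -- not the library's implementation.

// An event pairs a type tag with a typed data payload.
type Event<T> = { tag: string; data: T };

// Event factory: `myEvent.with(data)` builds a concrete event instance.
const makeEvent = <T>(tag: string) => ({
  tag,
  with: (data: T): Event<T> => ({ tag, data }),
});

// The context registers handlers and routes each sent event to them.
const createContext = () => {
  const handlers = new Map<string, Array<(e: Event<any>) => void>>();
  return {
    handle(tag: string, fn: (e: Event<any>) => void) {
      handlers.set(tag, [...(handlers.get(tag) ?? []), fn]);
    },
    sendEvent(e: Event<any>) {
      for (const fn of handlers.get(e.tag) ?? []) fn(e);
    },
  };
};

// The "workflow": two events and two handlers wired through a context.
const greet = makeEvent<string>("greet");
const done = makeEvent<string>("done");
const responses: string[] = [];

const ctx = createContext();
ctx.handle(greet.tag, (e) => ctx.sendEvent(done.with(`Hello, ${e.data}!`)));
ctx.handle(done.tag, (e) => {
  responses.push(e.data);
  console.log(e.data);
});

ctx.sendEvent(greet.with("Workflows")); // prints "Hello, Workflows!"
```

A handler never calls another handler directly; it only emits events, and the context decides who runs next. That decoupling is what makes patterns like branching and looping cheap to express.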

## Getting Started


```shell
# npm
npm i @llamaindex/workflow-core

# yarn
yarn add @llamaindex/workflow-core

# pnpm
pnpm add @llamaindex/workflow-core

# bun
bun add @llamaindex/workflow-core

# deno
deno add npm:@llamaindex/workflow-core
```

## First Example

With [workflowEvent](/docs/workflows/api-reference/type-aliases/WorkflowEvent/index.md) and [createWorkflow](/docs/workflows/api-reference/functions/createWorkflow/index.md), you can create a simple workflow that processes events.

For example, imagine you want to create a workflow that uses OpenAI to generate a response to a user’s message.

```ts
import { OpenAI } from "openai";
import { createWorkflow, workflowEvent } from "@llamaindex/workflow-core";

const main = async () => {
  const openai = new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
  });

  const startEvent = workflowEvent<string>();
  const stopEvent = workflowEvent<string>();

  const workflow = createWorkflow();

  workflow.handle([startEvent], async (event) => {
    const response = await openai.chat.completions.create({
      model: "gpt-4.1-mini",
      messages: [{ role: "user", content: event.data }],
    });

    return stopEvent.with(response.choices[0].message.content ?? "");
  });

  workflow.handle([stopEvent], (event) => {
    console.log("Response:", event.data);
  });

  const { sendEvent } = workflow.createContext();
  sendEvent(startEvent.with("Hello, Workflows!"));
};

void main().catch(console.error);
```

From here, the sky is the limit. You can implement [branching](/docs/workflows/common_patterns/branching.mdx), [looping](/docs/workflows/common_patterns/loops.mdx), [human-in-the-loop](/docs/workflows/common_patterns/human_in_the_loop.mdx), [map-reduce](/docs/workflows/common_patterns/map_reduce.mdx), and more!
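To give a taste of branching, here is a minimal, library-free sketch of the core idea (the names are illustrative, not the library's API): a routing step inspects an event's data and hands it to one of several downstream handlers.

```typescript
// Library-free sketch of branching -- illustrative names only.
type Route = "even" | "odd";

// The routing step: pick a branch based on the data.
const classify = (n: number): Route => (n % 2 === 0 ? "even" : "odd");

// One downstream handler per branch.
const branches: Record<Route, (n: number) => string> = {
  even: (n) => `${n} is even`,
  odd: (n) => `${n} is odd`,
};

const run = (n: number): string => branches[classify(n)](n);

console.log(run(3)); // "3 is odd"
console.log(run(8)); // "8 is even"
```

In a real workflow, the routing handler would return a different event type per branch, and each branch would have its own registered handler; see the branching guide linked above for the full pattern.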

## Architecture

Workflows are built around components like events, steps/handlers, context, and more. Learn more about these components [here](/docs/workflows/api-reference/index.md).

## Deployment

Workflows are designed to run anywhere. Deploy to a server, a Lambda function, an edge runtime, or a Cloudflare Worker!

- [Hono Server](https://github.com/run-llama/workflows-ts/tree/main/demo/hono)
- [Express (Client + Server)](tutorials/express_agent/)
- [Cloudflare Function](https://github.com/run-llama/workflows-ts/tree/main/demo/cloudflare)

## Community

- [Discord](https://discord.gg/dGcwcsnxhU)
- [X (formerly Twitter)](https://x.com/llama_index)
