Getting Started
LlamaAgents at a glance
LlamaAgents helps you build and deploy small, workflow‑driven agentic apps using LlamaIndex, locally and on LlamaCloud. Define LlamaIndex workflows, run them as durable APIs that can pause for input, optionally add a UI, and deploy to LlamaCloud in seconds.
LlamaAgents is for developers and teams building automation, internal tools, and app‑like agent experiences without heavy infrastructure work.
Build and ship small, focused agentic apps, fast. Start from either our templated LlamaIndex workflow apps or from workflows you’ve already prototyped, iterate locally, and deploy to LlamaCloud right from your terminal in seconds.
- Write LlamaIndex Python workflows and serve them as an API. For example, make a request to process incoming files, analyze them, and return the result or forward it on to another system.
- Workflow runs are durable and can wait indefinitely for human or other external input before proceeding (see the sketch after this list).
- Optionally add a UI for user-driven applications, such as custom chat apps or data extraction and review tools.
- Deploy your app in seconds to LlamaCloud. Call it as an API with your API key, or visit it secured with your LlamaCloud login.
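As a rough sketch of what a durable, human-in-the-loop step can look like, here is a minimal workflow that pauses until a person responds. This assumes the standalone `workflows` package and the human-in-the-loop event names from LlamaIndex's documentation; names may differ in your version:

```python
from workflows import Workflow, step, Context
from workflows.events import (
    StartEvent,
    StopEvent,
    InputRequiredEvent,
    HumanResponseEvent,
)

class ReviewWorkflow(Workflow):
    @step
    async def review(self, ctx: Context, ev: StartEvent) -> StopEvent:
        # Surface a question to the outside world (UI, API caller, etc.)
        ctx.write_event_to_stream(InputRequiredEvent(prefix="Approve this document?"))
        # The run durably pauses here until a HumanResponseEvent arrives,
        # however long that takes.
        response = await ctx.wait_for_event(HumanResponseEvent)
        return StopEvent(result=response.response)
```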
LlamaAgents is built on top of LlamaIndex’s (soon-to-be) open source LlamaDeploy v2. LlamaDeploy is a toolchain to create, develop, and deploy workflows. The `llamactl` command line interface (CLI) is the main entry point for developing LlamaDeploy applications: it can scaffold LlamaDeploy-based projects with `llamactl init`, serve them with `llamactl serve`, and deploy them with `llamactl deployments create`.
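In practice, the development loop looks like this (each command is covered in the sections below):

```sh
llamactl init                # scaffold a new LlamaDeploy project
llamactl serve               # run and iterate locally
llamactl deployments create  # deploy the app to LlamaCloud
```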
In addition to LlamaDeploy, LlamaIndex publishes additional SDKs to facilitate rapid development:

- Our `llama-cloud-services` JS and Python SDKs offer a simple way to persist ad hoc Agent Data. Read more here.
- Our `@llamaindex/ui` React library offers off-the-shelf components and hooks for developing workflow-driven UIs.
Getting Started with LlamaDeploy
LlamaDeploy centers on the `llamactl` CLI for development. `llamactl` bootstraps an application server that manages running and persisting your workflows, and a control plane for managing cloud deployments of applications. It has some system prerequisites that must be installed in order to work:

- Make sure you have `uv` installed. `uv` is a Python package manager and build tool; `llamactl` integrates with it to quickly manage your project’s build and dependencies.
- Likewise, Node.js is required for UI development. You can use your Node package manager of choice (`npm`, `pnpm`, or `yarn`).
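For example, on macOS or Linux you might set up and verify the prerequisites like this (the `uv` installer script is the one documented by Astral; use your platform’s preferred method if it differs):

```sh
# Install uv, the Python package manager llamactl builds on
curl -LsSf https://astral.sh/uv/install.sh | sh

# Verify both prerequisites are available
uv --version
node --version
```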
Install
Choose one:

- Try without installing:

```sh
uvx llamactl --help
```

- Install globally (recommended):

```sh
uv tool install -U llamactl
llamactl --help
```
Initialize a project
`llamactl` includes starter templates for both full‑stack UI apps and headless (API-only) workflows. Pick a template and customize it.

```sh
llamactl init
```

This will prompt for some details, then create a Python module that contains LlamaIndex workflows, plus an optional UI you can serve as a static frontend.
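The exact files depend on the template you pick, but based on the paths used later in this guide, a freshly initialized project looks roughly like:

```
my-package/
├── pyproject.toml          # deployment config lives under [tool.llamadeploy]
├── src/
│   └── my_package/
│       └── my_workflow.py  # LlamaIndex workflow definitions
└── ui/                     # optional frontend
```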
Application configuration is managed within your project’s `pyproject.toml`, where you can define the Python workflow instances that should be served, environment details, and configuration for how the UI should be built. See the Deployment Config Reference for details on all configurable fields.
Develop
Once you have a project, you can run the dev server for your application:

```sh
llamactl serve
```

`llamactl serve` will:

- Install all required dependencies
- Read the workflows configured in your app’s `pyproject.toml` and serve them as an API
- Start up and proxy the frontend development server, so you can seamlessly write a full-stack application
For example, with the following configuration, the app will be served at `http://localhost:4501/deployments/my-package`. Make a `POST` request to `/deployments/my-package/workflows/my-workflow/run` to trigger the workflow in `src/my_package/my_workflow.py`.
```toml
[project]
name = "my-package"
# ...

[tool.llamadeploy.workflows]
my-workflow = "my_package.my_workflow:workflow"

[tool.llamadeploy.ui]
directory = "ui"
```
```python
# from workflows import ...
# ...
workflow = MyWorkflow()
```
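The module stub above only instantiates the workflow. As a rough sketch (assuming the standalone `workflows` package; your template’s imports may differ), `src/my_package/my_workflow.py` could look like:

```python
from workflows import Workflow, step
from workflows.events import StartEvent, StopEvent

class MyWorkflow(Workflow):
    @step
    async def run_step(self, ev: StartEvent) -> StopEvent:
        # Replace with real logic: parse a file, call an LLM, etc.
        return StopEvent(result="done")

# The instance referenced by [tool.llamadeploy.workflows] in pyproject.toml
workflow = MyWorkflow()
```

With the dev server running, you can then trigger a run over HTTP. A hypothetical invocation is shown below; the exact request body depends on your workflow’s start event, so consult Workflows & App Server API for the real schema:

```sh
curl -X POST http://localhost:4501/deployments/my-package/workflows/my-workflow/run \
  -H 'Content-Type: application/json' \
  -d '{}'
```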
At this point, you can get to coding. The development server will detect changes as you save files. It will even resume in-progress workflows!
For more information about available CLI flags, see `llamactl serve`.
For a more detailed reference on how to define and expose workflows, see Workflows & App Server API.
Create a Deployment
LlamaDeploy applications can be rapidly deployed just by pointing to a source git repository. With the provided repository configuration, LlamaCloud will clone, build, and serve your app. It can even access private GitHub repositories by installing the LlamaDeploy GitHub app.
Example:
```sh
git remote add origin https://github.com/org/repo
git add -A
git commit -m 'Set up new app'
git push -u origin main
```
Then, create a deployment:
```sh
llamactl deployments create
```
The first time you run this, you’ll be prompted to log into LlamaCloud. See `llamactl auth` for more info.
This will open an interactive Terminal UI (TUI). You can tab through fields, or even point and click with your mouse if your terminal supports it. All required fields should be automatically detected from your environment, but can be customized:
- Name: Human‑readable and URL‑safe; appears in your deployment URL
- Git repository: Public HTTP or private GitHub (install the LlamaCloud GitHub app for private repos)
- Git branch: Branch to pull and build from (use `llamactl deployments update` to roll forward). This can also be a tag or a git commit.
- Secrets: Pre‑filled from your local `.env`; edit as needed. These cannot be read again after creation.
When you save, LlamaDeploy will verify that it has access to your repository (and prompt you to install the GitHub app if not).
After creation, the TUI will show deployment status and logs.
- You can later use `llamactl deployments get` to view them again.
- You can add secrets or change branches with `llamactl deployments edit`.
- If you update your source repo, run `llamactl deployments update` to roll a new version.
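Taken together, a typical deployment lifecycle looks like:

```sh
llamactl deployments create  # first deploy (interactive TUI)
llamactl deployments get     # check status and logs later
llamactl deployments edit    # add secrets or change branches
llamactl deployments update  # roll a new version after pushing changes
```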
Next: Read about defining and exposing workflows in Workflows & App Server API.