LlamaAgents
llamactl

Getting Started

llamactl is the local development and deployment CLI for LlamaAgents. It can scaffold an app, run the app server locally, and manage cloud deployments from your terminal.

Before you start:

  • Install uv. llamactl uses it to manage Python environments and project dependencies.
  • Install git. Cloud deployments are built from source repositories.
  • Install Node.js if you are using a template with a frontend. For macOS and Linux, we recommend nvm. For Windows, we recommend Chocolatey.
  • Windows support is experimental. For the best experience, use WSL2. If you run directly on Windows, see the Windows guide.
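
To confirm the prerequisites are on your PATH before continuing, you can check each tool's version (the commands below only print version information and make no changes):

Terminal window
uv --version
git --version
node --version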

Install llamactl globally with uv:

Terminal window
uv tool install -U llamactl

Or pin it to a project:

Terminal window
uv add --dev llamactl

Log in with your browser:

Terminal window
llamactl auth login

If browser login is not available, use an API key and project ID:

Terminal window
llamactl auth token --api-key "$LLAMA_CLOUD_API_KEY" --project "$LLAMA_AGENTS_PROJECT_ID"

For CI or other non-interactive environments, you can skip the stored profile and set environment variables instead:

Terminal window
export LLAMA_CLOUD_API_KEY="llx-..."
export LLAMA_AGENTS_PROJECT_ID="project-id"

See llamactl auth, llamactl environments, and llamactl projects for profile, environment, and project commands.

Create a new LlamaAgents project:

Terminal window
llamactl init

llamactl init opens a template picker and writes a project scaffold. Templates may include Python workflows only, or a Python app plus a frontend UI.

llamactl init uses symlinks. On Windows, enable Developer Mode before running the command; start ms-settings:developers opens the relevant settings page.

The scaffold may include assistant-facing files such as AGENTS.md, CLAUDE.md, and GEMINI.md. They are optional and do not affect builds, runtime, or deployments.

Application configuration lives in your project’s pyproject.toml, or in llama_agents.yaml / llama_agents.toml. See the Deployment Config Reference for the schema.

From the project directory, start the local development server:

Terminal window
llamactl serve

llamactl serve installs dependencies, reads the workflows configured for the app, serves them as an API, and proxies the frontend development server when the app has a UI.

For example, this configuration serves my-workflow under the local deployment named my-package:

pyproject.toml
[project]
name = "my-package"

[tool.llamaagents.workflows]
my-workflow = "my_package.my_workflow:workflow"

[tool.llamaagents.ui]
directory = "ui"

src/my_package/my_workflow.py
workflow = MyWorkflow()

The local API is available at http://localhost:4501/deployments/my-package. To run the workflow, make a POST request to /deployments/my-package/workflows/my-workflow/run.
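
With llamactl serve running, you can exercise this endpoint from another terminal. The sketch below uses an empty JSON body as a placeholder; the actual payload depends on the inputs your workflow expects (see the workflow API reference linked below):

Terminal window
curl -X POST http://localhost:4501/deployments/my-package/workflows/my-workflow/run \
  -H "Content-Type: application/json" \
  -d '{}'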

For flags, see llamactl serve. For workflow API details, see Workflows & App Server API.

Cloud deployments are built from a Git repository. Commit and push your project first:

Terminal window
git remote add origin https://github.com/org/repo
git add -A
git commit -m "Set up LlamaAgents app"
git push -u origin main

Then create the deployment:

Terminal window
llamactl deployments create

deployments create opens a YAML deployment spec in your $EDITOR. Review the detected repository, deployment config path, Git ref, and secrets. Save and close the file to create the deployment.

For non-interactive creation, pass a file:

Terminal window
llamactl deployments create -f deployment.yaml

Private GitHub repositories require LlamaCloud to have repository access. If access is missing, llamactl returns an error that explains the next step.

For repeatable deploys, generate a deployment spec in the shape that apply expects:

Terminal window
llamactl deployments template > deployment.yaml

Edit the file, then apply it:

Terminal window
llamactl deployments apply -f deployment.yaml

apply creates the deployment when it does not exist and updates it when it does. Secret values can reference local environment variables:

name: my-agent
spec:
  repo_url: https://github.com/org/repo
  deployment_file_path: "."
  git_ref: main
  secrets:
    OPENAI_API_KEY: ${OPENAI_API_KEY}
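
Because ${OPENAI_API_KEY} is resolved from your local shell, export it before applying (the value shown is a placeholder):

Terminal window
export OPENAI_API_KEY="replace-with-your-key"
llamactl deployments apply -f deployment.yaml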

Run a dry run before changing cloud state:

Terminal window
llamactl deployments apply -f deployment.yaml --dry-run

List deployments in the active project:

Terminal window
llamactl deployments get

Fetch one deployment:

Terminal window
llamactl deployments get NAME

Stream logs:

Terminal window
llamactl deployments logs NAME --follow

Use -o json or -o yaml with deployments get when another tool needs structured output.
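
For example, to hand a deployment's details to another tool (jq is used here only to pretty-print the JSON; any consumer of structured output works):

Terminal window
llamactl deployments get NAME -o json | jq '.'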

If the deployment tracks a branch, re-resolve that branch and start a new release from the latest commit:

Terminal window
llamactl deployments update NAME

To point at a specific branch, tag, or commit:

Terminal window
llamactl deployments update NAME --git-ref main

For push-mode deployments, update pushes local code before resolving the ref. If you have already pushed separately, use --no-push:

Terminal window
llamactl deployments update NAME --no-push

To edit the deployment spec in your editor:

Terminal window
llamactl deployments edit NAME

Or keep the spec in version control and re-apply it:

Terminal window
llamactl deployments apply -f deployment.yaml

If you prefer to build and deploy containers yourself, use llamactl pkg to generate container files for your app. See the pkg command reference.


Next: Read about defining and exposing workflows in Workflows & App Server API.