# Getting Started with llamactl

llamactl is the local development and deployment CLI for LlamaAgents. It can scaffold an app, run the app server locally, and manage cloud deployments from your terminal.
Before you start:

- Install uv. llamactl uses it to manage Python environments and project dependencies.
- Install git. Cloud deployments are built from source repositories.
- Install Node.js if you are using a template with a frontend. For macOS and Linux, we recommend nvm. For Windows, we recommend Chocolatey.
- Windows support is experimental. For the best experience, use WSL2. If you run directly on Windows, see the Windows guide.
## Install

Install llamactl globally with uv:

```sh
uv tool install -U llamactl
```

Or pin it to a project:

```sh
uv add --dev llamactl
```
## Authenticate

Log in with your browser:

```sh
llamactl auth login
```

If browser login is not available, use an API key and project ID:

```sh
llamactl auth token --api-key "$LLAMA_CLOUD_API_KEY" --project "$LLAMA_AGENTS_PROJECT_ID"
```

For CI or other non-interactive environments, you can skip the stored profile and set environment variables instead:

```sh
export LLAMA_CLOUD_API_KEY="llx-..."
export LLAMA_AGENTS_PROJECT_ID="project-id"
```

See `llamactl auth`, `llamactl environments`, and `llamactl projects` for profile, environment, and project commands.
## Initialize a Project

Create a new LlamaAgents project:

```sh
llamactl init
```

`llamactl init` opens a template picker and writes a project scaffold. Templates may include Python workflows only, or a Python app plus a frontend UI.
`llamactl init` uses symlinks. On Windows, enable Developer Mode with `start ms-settings:developers` before running the command.
The scaffold may include assistant-facing files such as `AGENTS.md`, `CLAUDE.md`, and `GEMINI.md`. They are optional and do not affect builds, runtime, or deployments.

Application configuration lives in your project’s `pyproject.toml`, or in `llama_agents.yaml` / `llama_agents.toml`. See the Deployment Config Reference for the schema.
## Run Locally

From the project directory, start the local development server:

```sh
llamactl serve
```

`llamactl serve` installs dependencies, reads the workflows configured for the app, serves them as an API, and proxies the frontend development server when the app has a UI.
For example, this configuration serves `my-workflow` under the local deployment named `my-package`:

```toml
[project]
name = "my-package"

[tool.llamaagents.workflows]
my-workflow = "my_package.my_workflow:workflow"

[tool.llamaagents.ui]
directory = "ui"
```

Here `my_package.my_workflow:workflow` points at a module attribute, for example `workflow = MyWorkflow()`. The local API is available at http://localhost:4501/deployments/my-package. To run the workflow, make a POST request to /deployments/my-package/workflows/my-workflow/run.
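The run endpoint above can be called from any HTTP client. A minimal sketch with Python's standard library, which builds the request against the local server started by `llamactl serve` (the JSON body shape `{"message": ...}` is an assumption for illustration, not a documented schema):

```python
import json
import urllib.request

def build_run_request(deployment: str, workflow: str, payload: dict) -> urllib.request.Request:
    """Build (but do not send) a POST to the local app server's run endpoint."""
    url = f"http://localhost:4501/deployments/{deployment}/workflows/{workflow}/run"
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_run_request("my-package", "my-workflow", {"message": "hello"})
print(req.get_method(), req.full_url)
# Send with urllib.request.urlopen(req) once `llamactl serve` is running.
```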
For flags, see llamactl serve. For workflow API details, see Workflows & App Server API.
## Create a Cloud Deployment

Cloud deployments are built from a Git repository. Commit and push your project first:

```sh
git remote add origin https://github.com/org/repo
git add -A
git commit -m "Set up LlamaAgents app"
git push -u origin main
```

Then create the deployment:
```sh
llamactl deployments create
```

`deployments create` opens a YAML deployment spec in your `$EDITOR`. Review the detected repository, deployment config path, Git ref, and secrets. Save and close the file to create the deployment.

For non-interactive creation, pass a file:

```sh
llamactl deployments create -f deployment.yaml
```

Private GitHub repositories require LlamaCloud to have repository access. If access is missing, llamactl will return an error with the next step.
## Declarative Deployments

For repeatable deploys, generate an apply-shaped deployment spec:

```sh
llamactl deployments template > deployment.yaml
```

Edit the file, then apply it:

```sh
llamactl deployments apply -f deployment.yaml
```

`apply` creates the deployment when it does not exist and updates it when it does. Secret values can reference local environment variables:
```yaml
name: my-agent
spec:
  repo_url: https://github.com/org/repo
  deployment_file_path: "."
  git_ref: main
  secrets:
    OPENAI_API_KEY: ${OPENAI_API_KEY}
```

Run a dry run before changing cloud state:

```sh
llamactl deployments apply -f deployment.yaml --dry-run
```
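The `${OPENAI_API_KEY}` reference in the spec resolves from your local environment. The substitution pattern can be sketched with Python's `string.Template` (a sketch of the semantics, not llamactl's implementation):

```python
import os
from string import Template

def resolve_secrets(secrets: dict, env=os.environ) -> dict:
    """Expand ${VAR} references in secret values from the environment."""
    return {key: Template(value).substitute(env) for key, value in secrets.items()}

# With env supplied explicitly for demonstration; llamactl reads your shell environment.
resolved = resolve_secrets(
    {"OPENAI_API_KEY": "${OPENAI_API_KEY}"},
    env={"OPENAI_API_KEY": "sk-example"},
)
print(resolved["OPENAI_API_KEY"])  # sk-example
```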
## Inspect and Stream Logs

List deployments in the active project:

```sh
llamactl deployments get
```

Fetch one deployment:

```sh
llamactl deployments get NAME
```

Stream logs:

```sh
llamactl deployments logs NAME --follow
```

Use `-o json` or `-o yaml` with `deployments get` when another tool needs structured output.
## Update a Deployment

If the deployment tracks a branch, re-resolve that branch and start a new release from the latest commit:

```sh
llamactl deployments update NAME
```

To point at a specific branch, tag, or commit:

```sh
llamactl deployments update NAME --git-ref main
```

For push-mode deployments, `update` pushes local code before resolving the ref. If you have already pushed separately, use `--no-push`:

```sh
llamactl deployments update NAME --no-push
```

To edit the deployment spec in your editor:

```sh
llamactl deployments edit NAME
```

Or keep the spec in version control and re-apply it:

```sh
llamactl deployments apply -f deployment.yaml
```
## Package for Self-Hosted Deployments

If you prefer to build and deploy containers yourself, use `llamactl pkg` to generate container files for your app. See the `pkg` command reference.
Next: Read about defining and exposing workflows in Workflows & App Server API.