# Deployment Config Reference
LlamaDeploy reads configuration from your repository to run your app. The configuration is defined in your project's `pyproject.toml`.
## pyproject.toml

```toml
[tool.llamadeploy]
name = "my-app"
env_files = [".env"]

[tool.llamadeploy.workflows]
workflow-one = "my_app.workflows:some_workflow"
workflow-two = "my_app.workflows:another_workflow"

[tool.llamadeploy.ui]
directory = "ui"
build_output_dir = "ui/static"
```
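Each workflow entry maps a name to a `module.path:attribute` target. As a minimal sketch of what the referenced module might contain (assuming the `llama_index` workflow API, and that each target attribute is a workflow instance):

```python
# my_app/workflows.py
from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step


class EchoWorkflow(Workflow):
    """Trivial workflow that echoes its start message back as the result."""

    @step
    async def echo(self, ev: StartEvent) -> StopEvent:
        # StartEvent exposes the keyword arguments the run was started with.
        return StopEvent(result=f"echo: {ev.message}")


# These attribute names match the targets in [tool.llamadeploy.workflows] above.
some_workflow = EchoWorkflow(timeout=60)
another_workflow = EchoWorkflow(timeout=60)
```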
## Authentication

Deployments can be configured to automatically inject authentication for LlamaCloud:
```toml
[tool.llamadeploy]
llama_cloud = true
```
When this is set:
- During development, `llamactl` prompts you to log in to LlamaCloud if you're not already. After that, it injects `LLAMA_CLOUD_API_KEY`, `LLAMA_CLOUD_PROJECT_ID`, and `LLAMA_CLOUD_BASE_URL` into your Python server process and JavaScript build.
- When deployed, LlamaCloud automatically injects a dedicated API key into the Python process. The frontend process receives a short-lived session cookie specific to each user visiting the application. Therefore, configure the project ID on the frontend API client so that LlamaCloud API requests from the frontend and backend are scoped to the same project ID.
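On the Python side, these injected variables can simply be read from the environment rather than hard-coded; a minimal sketch (which client you pass them to is up to you):

```python
import os

# Injected by llamactl during development and by LlamaCloud when deployed.
api_key = os.environ["LLAMA_CLOUD_API_KEY"]
project_id = os.environ["LLAMA_CLOUD_PROJECT_ID"]
base_url = os.environ["LLAMA_CLOUD_BASE_URL"]
```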
## .env files

Most apps need API keys (e.g., OpenAI). You can specify them via a `.env` file and reference it in your config:
```toml
[tool.llamadeploy]
env_files = [".env"]
```
Then set your secrets:
```
OPENAI_API_KEY=sk-xxxx
```
## Alternative file formats (YAML/TOML)

If you prefer to keep your `pyproject.toml` simple, you can write the same configuration in a `llama_deploy.yaml` or `llama_deploy.toml` file. All fields use the same structure and types; omit the `tool.llamadeploy` prefix.
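For example, the `pyproject.toml` configuration shown earlier could be written as a standalone `llama_deploy.yaml` (a sketch assuming a direct field-for-field translation):

```yaml
name: my-app
env_files:
  - .env

workflows:
  workflow-one: "my_app.workflows:some_workflow"
  workflow-two: "my_app.workflows:another_workflow"

ui:
  directory: ui
  build_output_dir: ui/static
```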
## Schema

### DeploymentConfig fields
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `name` | string | `"default"` | URL-safe deployment name. In `pyproject.toml`, if omitted, it falls back to `project.name`. |
| `workflows` | `map<string, string>` | — | Map of `workflowName -> "module.path:workflow"`. |
| `env_files` | `list<string>` | `[".env"]` | Paths to env files to load, relative to the config file. Duplicate entries are removed. |
| `env` | `map<string, string>` | `{}` | Environment variables injected at runtime. |
| `llama_cloud` | boolean | `false` | Indicates that a deployment connects to LlamaCloud. Set to `true` to automatically inject a LlamaCloud API key. |
| `ui` | UIConfig | null | Optional UI configuration. `directory` is required if `ui` is present. |
### UIConfig fields

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `directory` | string | — | Path to UI source, relative to the config directory. Required when `ui` is set. |
| `build_output_dir` | string | `${directory}/dist` | Built UI output directory. If set in TOML/`pyproject.toml`, the path is relative to the config file. If set via `package.json` (`llamadeploy.build_output_dir`), it is resolved as `${directory}/${build_output_dir}`. |
| `package_manager` | string | `"npm"` (or inferred) | Package manager used to build the UI. If not set, inferred from the `packageManager` field in `package.json` (e.g., `pnpm@9.0.0` → `pnpm`). |
| `build_command` | string | `"build"` | NPM script name used to build. |
| `serve_command` | string | `"dev"` | NPM script name used to serve in development. |
| `proxy_port` | integer | 4502 | Port the app server proxies to in development. |
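Putting the fields above together, a fully specified UI section in `pyproject.toml` might look like the following sketch (the values are illustrative assumptions, not defaults):

```toml
[tool.llamadeploy.ui]
directory = "ui"              # required when the ui section is present
build_output_dir = "ui/dist"  # relative to the config file when set in TOML
package_manager = "pnpm"      # otherwise inferred from package.json's packageManager
build_command = "build"       # npm script used to build the UI
serve_command = "dev"         # npm script used to serve the UI in development
proxy_port = 5173             # port the app server proxies to in development
```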
## UI Integration via package.json

Note: after setting `ui.directory` so that `package.json` can be found, you can configure the UI within it instead. For example:
```json
{
  "name": "my-ui",
  "packageManager": "pnpm@9.7.0",
  "scripts": {
    "build": "vite build",
    "dev": "vite"
  },
  "llamadeploy": {
    "build_output_dir": "dist",
    "package_manager": "pnpm",
    "build_command": "build",
    "serve_command": "dev",
    "proxy_port": 5173
  }
}
```