# Get Started

## Before You Get Started

Welcome to LlamaCloud! Before you get started, please make sure you have the following prerequisites:
- LlamaCloud License Key. To obtain a LlamaCloud License Key, please contact us at support@llamaindex.ai.
- Kubernetes cluster `>=1.28.0` and a working installation of `kubectl`.
- Helm `v3.7.0+`
  - To install Helm, please refer to the official Helm documentation.
- OpenAI API Key or Azure OpenAI Credentials. Configuring OpenAI credentials is the easiest way to get started with your deployment.
  - LlamaCloud aims to meet your organization's needs and supports more than OpenAI LLMs, including Anthropic, Bedrock, Vertex AI, and more.
  - Please refer to the docs in the Configuration section of the sidebar to learn more about configuring other LLMs.
- File Storage: LlamaCloud must leverage your cloud provider's object storage to store files.
  - Please follow the File Storage documentation to configure your deployment.
- Authentication Settings:
  - OIDC: our recommended authentication mode for production deployments.
  - Basic Auth (email/password): as of July 24th, 2025 (`v0.5.0`), we support both `oidc` and `basic` authentication methods. Basic Auth is a simpler authentication mode, more suitable for staging deployments.
  - For more information, please refer to the Authentication Modes documentation.
- (For production deployments) Credentials to external services (Postgres, MongoDB, Redis, RabbitMQ).
  - We provide containerized versions of these dependencies to get started quickly. However, we recommend connecting LlamaCloud to externally managed services for production deployments.
  - Please follow the Database and Queues documentation to configure your deployment.
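As a quick preflight, you can confirm the CLI prerequisites above are installed. This is a hedged sketch: it only checks that the tools are on your `PATH`, so verify the minimum versions separately with `kubectl version --client` (>=1.28) and `helm version --short` (v3.7.0+).

```shell
# Preflight sketch: check that each required CLI is available on PATH.
# This does not validate versions -- do that manually as noted above.
for tool in kubectl helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING - install it before continuing"
  fi
done
```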
## Hardware Requirements

- Linux instances running x86 CPUs
  - As of August 12th, 2024, we build only linux/amd64 images. arm64 is not supported at this time.
- Ubuntu >=22.04
- >=12 vCPUs
- >=80GiB memory

Warning #1: LlamaParse, LlamaIndex's proprietary document parser, can be a very resource-intensive deployment to run, especially if you want to maximize performance.

Warning #2: The base CPU/memory requirements may increase if you are running containerized deployments of LlamaCloud dependencies. (More information in the following section.)
## Configure and Install Your Deployment

This section will walk you through the steps to configure a minimal LlamaCloud deployment.

### Minimal `values.yaml` configuration

To get a minimal LlamaCloud deployment up and running, create a `values.yaml` file with the following content:
OpenAI + OIDC:

```yaml
global:
  config:
    licenseKey: "<REPLACE-WITH-LLAMACLOUD-LICENSE-KEY>"
    # existingLicenseKeySecret: "<uncomment-if-using-existing-secret>"
    authMode: "oidc"

backend:
  config:
    openAiApiKey: "<REPLACE-WITH-OPENAI-API-KEY>"
    # existingOpenAiApiKeySecret: "<uncomment-if-using-existing-secret>"
    oidc:
      discoveryUrl: "https://login.microsoftonline.com/<your-tenant-id>/v2.0/.well-known/openid-configuration"
      clientId: "your-client-id"
      clientSecret: "your-client-secret"
      # existingSecretName: "oidc-secret"

llamaParse:
  config:
    openAiApiKey: "<REPLACE-WITH-OPENAI-API-KEY>"
    # existingOpenAiApiKeySecret: "<uncomment-if-using-existing-secret>"
```
Azure OpenAI + OIDC:

```yaml
global:
  cloudProvider: "azure"
  config:
    licenseKey: "<REPLACE-WITH-LLAMACLOUD-LICENSE-KEY>"
    # existingLicenseKeySecret: "<uncomment-if-using-existing-secret>"

backend:
  config:
    # This is a legacy configuration that will be deprecated in the near future.
    # For those on legacy Helm Chart versions, please upgrade to use the latest
    # recommended configuration instead -- see the Azure OpenAI documentation:
    # https://docs.cloud.llamaindex.ai/self_hosting/configuration/azure-openai
    azureOpenAi:
      enabled: true
      existingSecret: ""
      # key: ""
      # endpoint: ""
      # deploymentName: ""
      # apiVersion: ""
    oidc:
      discoveryUrl: "https://login.microsoftonline.com/<your-tenant-id>/v2.0/.well-known/openid-configuration"
      clientId: "your-client-id"
      clientSecret: "your-client-secret"
      # existingSecretName: "oidc-secret"

llamaParse:
  config:
    # Deprecating soon -- please use the recommended azureOpenAi configuration:
    # https://docs.cloud.llamaindex.ai/self_hosting/configuration/azure-openai
    azureOpenAi:
      enabled: true
      existingSecret: ""
      # key: ""
      # endpoint: ""
      # deploymentName: ""
      # apiVersion: ""
```
OpenAI + Basic Auth:

```yaml
global:
  config:
    licenseKey: "<REPLACE-WITH-LLAMACLOUD-LICENSE-KEY>"
    # existingLicenseKeySecret: "<uncomment-if-using-existing-secret>"

backend:
  config:
    basicAuth:
      enabled: true
      # validEmailDomain: "llamaindex.ai" # optional, but recommended for production deployments
      jwtSecret: "your-jwt-secret"
      # existingSecretName: "basic-auth-secret"
    openAiApiKey: "<REPLACE-WITH-OPENAI-API-KEY>"
    # existingOpenAiApiKeySecret: "<uncomment-if-using-existing-secret>"

llamaParse:
  config:
    openAiApiKey: "<REPLACE-WITH-OPENAI-API-KEY>"
    # existingOpenAiApiKeySecret: "<uncomment-if-using-existing-secret>"
```
Azure OpenAI + Basic Auth:

```yaml
global:
  cloudProvider: "azure"
  config:
    licenseKey: "<REPLACE-WITH-LLAMACLOUD-LICENSE-KEY>"
    # existingLicenseKeySecret: "<uncomment-if-using-existing-secret>"

backend:
  config:
    # This is a legacy configuration that will be deprecated in the near future.
    # For those on legacy Helm Chart versions, please upgrade to use the latest
    # recommended configuration instead -- see the Azure OpenAI documentation:
    # https://docs.cloud.llamaindex.ai/self_hosting/configuration/azure-openai
    azureOpenAi:
      enabled: true
      existingSecret: ""
      # key: ""
      # endpoint: ""
      # deploymentName: ""
      # apiVersion: ""
    basicAuth:
      enabled: true
      # validEmailDomain: "llamaindex.ai" # optional, but recommended for production deployments
      jwtSecret: "your-jwt-secret"
      # existingSecretName: "basic-auth-secret"

llamaParse:
  config:
    # Deprecating soon -- please use the recommended azureOpenAi configuration:
    # https://docs.cloud.llamaindex.ai/self_hosting/configuration/azure-openai
    azureOpenAi:
      enabled: true
      existingSecret: ""
      # key: ""
      # endpoint: ""
      # deploymentName: ""
      # apiVersion: ""
```
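The commented `existing*Secret` fields let you reference a pre-created Kubernetes Secret instead of inlining credentials in `values.yaml`. A hedged sketch of such a Secret is below; the Secret name is yours to choose, but the `stringData` key name shown is an assumption, so confirm the key the chart actually expects against its `values.yaml`.

```yaml
# Hypothetical Secret for use with existingLicenseKeySecret.
# The data key name ("licenseKey") is an assumption -- check the chart's
# values.yaml for the exact key it expects.
apiVersion: v1
kind: Secret
metadata:
  name: llamacloud-license
type: Opaque
stringData:
  licenseKey: "<your-license-key>"
```

Apply it with `kubectl apply -f` before installing the chart, then uncomment the corresponding `existing*Secret` field.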
## Important Notes

- There are many more configuration options available for each component. To see the full `values.yaml` specification, please refer to the values.yaml file in the Helm chart repository.
- We also have in-depth guides for specific use cases and features. You can find them in the Configuration section of the sidebar.
- If you're new to Kubernetes or Helm, or would like to see how common scenarios are configured, please refer to the examples directory in the Helm chart repository.
## Install the Helm chart

```sh
# Add the Helm repository
helm repo add llamaindex https://run-llama.github.io/helm-charts

# Update your local Helm chart cache
helm repo update

# Install the Helm chart
helm install llamacloud llamaindex/llamacloud -f values.yaml
```
If you want to install a specific version of the Helm chart, you can specify it with `--version`:

```sh
helm install llamacloud llamaindex/llamacloud --version <version> -f values.yaml
```
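To see which chart versions are published before pinning one, you can query the repo. A sketch, assuming the `llamaindex` repo was added as above; the version string shown is a placeholder, not a real release number.

```shell
# List published versions of the chart (requires the repo added earlier).
helm search repo llamaindex/llamacloud --versions

# Pin a specific version; "x.y.z" is a placeholder -- pick one from the list.
CHART_VERSION="x.y.z"
helm install llamacloud llamaindex/llamacloud --version "$CHART_VERSION" -f values.yaml
```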
## Validate the installation

After installation, you will see the following output:

```
NAME: llamacloud
LAST DEPLOYED: <timestamp>
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Welcome to LlamaCloud!
```
View your deployment with the following:

```sh
kubectl --namespace default get pods
```
To view the LlamaCloud UI in your browser, run the following:

```sh
kubectl --namespace default port-forward svc/llamacloud-frontend 3000:3000
```
If you list the pods with `kubectl get pods`, you should see the following:

```
NAME                                         READY   STATUS    RESTARTS   AGE
llamacloud-backend-6f9dccd58d-4xnt5          1/1     Running   0          3m
llamacloud-frontend-5cbf7d6c66-fvr5r         1/1     Running   0          3m
llamacloud-jobs-service-79bb857444-8h9vg     1/1     Running   0          3m
llamacloud-jobs-worker-648f45ccb7-8sqst      1/1     Running   0          3m
llamacloud-llamaparse-658bdf7-6vqnz          1/1     Running   0          3m
llamacloud-llamaparse-658bdf7-vvm5q          1/1     Running   0          3m
llamacloud-llamaparse-ocr-7544cccdcc-29gvn   1/1     Running   0          3m
llamacloud-llamaparse-ocr-7544cccdcc-6nvjt   1/1     Running   0          3m
llamacloud-mongodb-784cf4bf9c-g9bcx          1/1     Running   0          3m
llamacloud-postgresql-0                      1/1     Running   0          3m
llamacloud-rabbitmq-0                        1/1     Running   0          3m
llamacloud-redis-master-0                    1/1     Running   0          3m
llamacloud-redis-replicas-0                  1/1     Running   0          3m
llamacloud-redis-replicas-1                  1/1     Running   0          3m
llamacloud-redis-replicas-2                  1/1     Running   0          3m
llamacloud-usage-5768b788c4-pxfhr            1/1     Running   0          3m
```
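Rather than polling the pod list by hand, you can block until every pod reports Ready. A sketch: the namespace and the 600-second timeout are assumptions to adjust for your cluster (first-time image pulls may need longer).

```shell
# Wait until every pod in the namespace reports the Ready condition.
# NS and the timeout are assumptions -- adjust them for your deployment.
NS="default"
kubectl --namespace "$NS" wait --for=condition=Ready pod --all --timeout=600s
```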
Port forward the frontend service to access the LlamaCloud UI:

```sh
kubectl --namespace default port-forward svc/llamacloud-frontend 3000:3000
```

Open your web browser and navigate to http://localhost:3000. You should see the LlamaCloud UI.
## Next Steps

- Configuring Authentication Modes
- Configuring File Storage
- Configuring Ingress
- Configuring Database and Queues
- Configuring LLMs
- Tuning LlamaCloud Services