Centralized Provider Configuration
Centralized LLM provider configuration lets you supply your own credentials and routing for the LLM providers LlamaCloud uses. This enables use cases such as routing requests through an API gateway (Portkey, LiteLLM) or using custom API keys and endpoints.
Use Cases
- Custom API Gateways: Route LLM requests through gateways like Portkey or LiteLLM
- Custom Endpoints: Use custom base URLs for proxies or regional endpoints
- Custom Headers: Add custom HTTP headers per provider instance (e.g., for gateway authentication)
- Multiple Credentials: Configure multiple provider instances with different API keys
Configuration Structure
Add provider configurations to your Helm values under `config.llms.providerConfigs`:
```yaml
config:
  llms:
    providerConfigs:
      - id: "my-config-name"          # User-defined identifier (can be anything)
        provider: "openai"            # Provider type: "openai", "anthropic", or "azure"
        model_id: "openai-gpt-4o"     # LlamaCloud model identifier (fixed values, see below)
        provider_model_name: "gpt-4o" # Optional: Provider-specific model name override
        enabled: true                 # Enable/disable this configuration
        credentials:                  # Provider-specific credentials
          api_key: "sk-..."
          base_url: "https://custom.api.endpoint" # Optional custom endpoint
          headers:                    # Custom HTTP headers (optional)
            X-Custom-Header: "value"
```
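Before deploying, it can help to sanity-check each entry against this structure. The sketch below is illustrative only (it is not part of LlamaCloud, which performs its own validation) and encodes just the rules documented on this page:

```python
# Sketch: pre-deploy sanity check for one providerConfigs entry.
# Field names and rules come from the structure documented above.

ALLOWED_PROVIDERS = {"openai", "anthropic", "azure"}

def validate_provider_config(entry: dict) -> list[str]:
    """Return a list of problems found in one providerConfigs entry."""
    problems = []
    for key in ("id", "provider", "model_id", "credentials"):
        if key not in entry:
            problems.append(f"missing required field: {key}")
    provider = entry.get("provider")
    if provider is not None and provider not in ALLOWED_PROVIDERS:
        problems.append(f"unknown provider: {provider!r}")
    creds = entry.get("credentials", {})
    if isinstance(creds, dict):
        if "api_key" not in creds:
            problems.append("credentials.api_key is required")
        if provider == "azure" and "endpoint" not in creds:
            problems.append("azure provider requires credentials.endpoint")
    return problems

entry = {
    "id": "my-config-name",
    "provider": "openai",
    "model_id": "openai-gpt-4o",
    "enabled": True,
    "credentials": {"api_key": "sk-..."},
}
print(validate_provider_config(entry))  # → []
```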
Supported Providers

OpenAI (Fully Supported)
```yaml
- id: "openai-primary"
  provider: "openai"
  model_id: "openai-gpt-4o"
  credentials:
    api_key: "sk-..."                     # Required
    org_id: "org-..."                     # Optional
    base_url: "https://api.openai.com/v1" # Optional
    headers:                              # Optional
      X-Custom-Header: "value"
```

Supported `model_id` values:
- `openai-gpt-4o`
- `openai-gpt-4o-mini`
- `openai-gpt-4-1`
- `openai-gpt-4-1-mini`
- `openai-gpt-4-1-nano`
- `openai-gpt-5`
- `openai-gpt-5-mini`
- `openai-gpt-5-nano`
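As an illustration of the multiple-credentials use case, two OpenAI configurations with different keys can coexist side by side. The ids and keys below are placeholders, not values LlamaCloud expects:

```yaml
config:
  llms:
    providerConfigs:
      - id: "openai-team-a"         # hypothetical instance name
        provider: "openai"
        model_id: "openai-gpt-4o"
        enabled: true
        credentials:
          api_key: "sk-team-a-..."  # placeholder key
      - id: "openai-team-b"         # hypothetical instance name
        provider: "openai"
        model_id: "openai-gpt-4o-mini"
        enabled: true
        credentials:
          api_key: "sk-team-b-..."  # placeholder key
```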
Anthropic (Fully Supported)
```yaml
- id: "anthropic-primary"
  provider: "anthropic"
  model_id: "anthropic-sonnet-4.5"
  credentials:
    api_key: "sk-ant-..."                 # Required
    base_url: "https://api.anthropic.com" # Optional
    headers:                              # Optional
      X-Custom-Header: "value"
```

Supported `model_id` values:
- `anthropic-sonnet-4.5`
- `anthropic-sonnet-4.0`
- `anthropic-haiku-4.5`
- `anthropic-haiku-3.5`
- `anthropic-opus-3`
Azure OpenAI (Fully Supported)
```yaml
- id: "azure-sweden"
  provider: "azure"
  model_id: "openai-gpt-4o"
  credentials:
    api_key: "..."                                     # Required
    endpoint: "https://your-resource.openai.azure.com" # Required
    deployment_id: "gpt-4o"                            # Optional
    api_version: "2024-08-06"                          # Optional
    headers:                                           # Optional
      X-Custom-Header: "value"
```

Supported `model_id` values:
- `openai-gpt-4o`
- `openai-gpt-4o-mini`
- `openai-gpt-4-1`
- `openai-gpt-4-1-mini`
- `openai-gpt-4-1-nano`
- `openai-gpt-5`
- `openai-gpt-5-mini`
- `openai-gpt-5-nano`
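The Azure credential fields map onto Azure OpenAI's standard per-deployment URL scheme, where endpoint, deployment, and API version combine into the request URL. A stdlib-only sketch, using the placeholder values from the example above:

```python
# Sketch: how the azure credential fields combine into a request URL,
# following Azure OpenAI's documented URL scheme. Values are the
# placeholders from the configuration example above.

def azure_chat_url(endpoint: str, deployment_id: str, api_version: str) -> str:
    """Build the chat-completions URL for one Azure OpenAI deployment."""
    return (
        f"{endpoint.rstrip('/')}/openai/deployments/{deployment_id}"
        f"/chat/completions?api-version={api_version}"
    )

url = azure_chat_url(
    endpoint="https://your-resource.openai.azure.com",
    deployment_id="gpt-4o",
    api_version="2024-08-06",
)
print(url)
```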
Common Use Cases
Custom API Gateway
Use a custom API gateway or proxy (e.g., Portkey, LiteLLM):
```yaml
config:
  llms:
    providerConfigs:
      - id: "portkey-openai"
        provider: "openai"
        model_id: "openai-gpt-4o-mini"
        provider_model_name: "@openai/gpt-4o-mini"
        enabled: true
        credentials:
          api_key: "your-portkey-api-key"
          base_url: "https://api.portkey.ai/v1"
          headers:
            x-portkey-api-key: "your-portkey-api-key"
      - id: "portkey-anthropic"
        provider: "anthropic"
        model_id: "anthropic-sonnet-4.5"
        provider_model_name: "@anthropic/claude-sonnet-4-5"
        enabled: true
        credentials:
          api_key: "your-portkey-api-key"
          base_url: "https://api.portkey.ai"
          headers:
            x-portkey-api-key: "your-portkey-api-key"
            x-portkey-strict-open-ai-compliance: "False"
```
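To see what this routing changes at the HTTP level, the sketch below builds (but never sends) an OpenAI-style chat-completions request against the gateway base URL with the extra Portkey header. Header and key values are placeholders mirroring the example above:

```python
import json
import urllib.request

# Gateway settings mirroring the Helm example above (placeholder values).
base_url = "https://api.portkey.ai/v1"
headers = {
    "Content-Type": "application/json",
    "x-portkey-api-key": "your-portkey-api-key",  # gateway auth header
}
body = {
    "model": "@openai/gpt-4o-mini",  # provider_model_name from the config
    "messages": [{"role": "user", "content": "ping"}],
}

# Build the request object only; nothing is sent over the network.
req = urllib.request.Request(
    url=f"{base_url}/chat/completions",
    data=json.dumps(body).encode("utf-8"),
    headers=headers,
    method="POST",
)
print(req.full_url, req.get_method())
```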
Verification

After configuration, verify your setup:
1. Verify in Admin UI: Check the LlamaCloud admin interface for available models.
2. Test parsing: Upload a document to LlamaParse to confirm the configured providers are working.