Heroku LLM Managed Inference
The `llama-index-llms-heroku` package contains the LlamaIndex integration for building applications with models on Heroku’s Managed Inference platform. This integration lets you connect to and use AI models deployed on Heroku’s infrastructure.
Installation
```
%pip install llama-index-llms-heroku
```
1. Create a Heroku App
First, create an app in Heroku:

```bash
heroku create $APP_NAME
```
2. Create and Attach AI Models
Create and attach a chat model to your app:

```bash
heroku ai:models:create -a $APP_NAME claude-3-5-haiku
```

Attaching a model sets the INFERENCE_KEY, INFERENCE_MODEL_ID, and INFERENCE_URL config vars on your app; you export these in the next step.
3. Export Configuration Variables
Export the required configuration variables:

```bash
export INFERENCE_KEY=$(heroku config:get INFERENCE_KEY -a $APP_NAME)
export INFERENCE_MODEL_ID=$(heroku config:get INFERENCE_MODEL_ID -a $APP_NAME)
export INFERENCE_URL=$(heroku config:get INFERENCE_URL -a $APP_NAME)
```
Basic Usage
```python
from llama_index.llms.heroku import Heroku
from llama_index.core.llms import ChatMessage, MessageRole

# Initialize the Heroku LLM
llm = Heroku()

# Create chat messages
messages = [
    ChatMessage(
        role=MessageRole.SYSTEM, content="You are a helpful assistant."
    ),
    ChatMessage(
        role=MessageRole.USER,
        content="What are the most popular house pets in North America?",
    ),
]

# Get response
response = llm.chat(messages)
print(response)
```
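The response can also be streamed token by token. This is a minimal sketch, reusing `llm` and `messages` from the example above and assuming the integration exposes LlamaIndex's standard `stream_chat` interface:

```python
# Stream the reply incrementally; `stream_chat` is the standard
# LlamaIndex streaming method (assumed available on this integration).
for chunk in llm.stream_chat(messages):
    print(chunk.delta, end="", flush=True)
```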
Using Environment Variables
The integration automatically reads from environment variables:

```python
import os

from llama_index.llms.heroku import Heroku

# Set environment variables
os.environ["INFERENCE_KEY"] = "your-inference-key"
os.environ["INFERENCE_URL"] = "https://us.inference.heroku.com"
os.environ["INFERENCE_MODEL_ID"] = "claude-3-5-haiku"

# Initialize without parameters
llm = Heroku()
```
Using Parameters
You can also pass parameters directly:

```python
import os

from llama_index.llms.heroku import Heroku

llm = Heroku(
    model=os.getenv("INFERENCE_MODEL_ID", "claude-3-5-haiku"),
    api_key=os.getenv("INFERENCE_KEY", "your-inference-key"),
    inference_url=os.getenv(
        "INFERENCE_URL", "https://us.inference.heroku.com"
    ),
    max_tokens=1024,
)
```
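Once constructed, the client works like any other LlamaIndex LLM. A short usage sketch, assuming the standard `complete` method from the base LLM interface:

```python
# Single-turn text completion via the standard LlamaIndex
# `complete` method (assumed inherited by the Heroku class).
response = llm.complete("Say hello in one sentence.")
print(response.text)
```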
Available Models
For a complete list of available models, see the Heroku Managed Inference documentation.
Error Handling
The integration includes error handling for common configuration issues (a sketch of catching these follows the list below):
- Missing API key
- Invalid inference URL
- Missing model configuration
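A minimal sketch of guarding against these at startup. The exact exception type is an assumption (check the package source); `ValueError` is what many LlamaIndex integrations raise for missing credentials:

```python
from llama_index.llms.heroku import Heroku

try:
    # Raises if INFERENCE_KEY, INFERENCE_URL, or the model ID
    # is missing or invalid (exception type is an assumption).
    llm = Heroku()
except ValueError as e:
    print(f"Heroku Managed Inference configuration error: {e}")
```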
Additional Information
For more information about Heroku Managed Inference, visit the official documentation.