LlamaExtract Examples
Extract Data from Financial Reports with Citations & Reasoning
For more detailed examples on how to use the Python SDK, visit our GitHub repo.
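As a quick orientation before the linked examples, here is a minimal sketch of the Python SDK flow they build on: define a Pydantic schema, create an extraction agent, and run it over a document. It assumes the `llama_cloud_services` package's `LlamaExtract` client with `create_agent` and `extract` as described in the getting-started guide; the schema fields and file name are placeholders, and citation/reasoning options are configured separately (see the financial-reports example above).

```python
from pydantic import BaseModel, Field
from llama_cloud_services import LlamaExtract

# Placeholder schema for illustration: the fields to pull out of each report.
class ReportSummary(BaseModel):
    company_name: str = Field(description="Name of the reporting company")
    fiscal_year: int = Field(description="Fiscal year covered by the report")
    total_revenue: float = Field(description="Total revenue in USD")

# The client reads LLAMA_CLOUD_API_KEY from the environment by default.
extractor = LlamaExtract()

# Create (or reuse) an extraction agent bound to the schema above.
agent = extractor.create_agent(name="financial-report-summary", data_schema=ReportSummary)

# Run extraction against a local file and inspect the structured result.
result = agent.extract("q4_report.pdf")  # placeholder file name
print(result.data)
```

Consult the linked repo for the canonical, up-to-date versions of these examples, including how to enable source citations and reasoning on the extraction agent.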