Batch
Create Batch Job
List Batch Jobs
Get Batch Job Status
Cancel Batch Job
Models
BatchGetStatusResponse = object { job, progress_percentage }
BatchJob Items
List Batch Job Items
Get Item Processing Results
Models
JobItemListResponse = object { item_id, item_name, status, 7 more }
Detailed information about an item in a batch job.
JobItemGetProcessingResultsResponse = object { item_id, item_name, processing_results }
Response containing all processing results for an item.
processing_results: optional array of object { item_id, job_config, job_type, 5 more }
List of all processing operations performed on this item
job_config: object { correlation_id, job_name, parameters, 6 more } or ClassifyJob { id, project_id, rules, 9 more }
Job configuration used for processing
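As a hedged sketch, the processing results above can be walked client-side. The field names follow the JobItemGetProcessingResultsResponse shape listed here; the transport/client that fetches the response is assumed and not part of this reference.

```python
# Sketch: tally the processing operations per job_type for one batch item.
# Assumes the response is a plain dict matching the documented shape.

def summarize_results(response: dict) -> dict:
    """Count processing operations per job_type for one item."""
    counts: dict[str, int] = {}
    # processing_results is optional and may be null, so default to empty.
    for result in response.get("processing_results") or []:
        job_type = result.get("job_type", "unknown")
        counts[job_type] = counts.get(job_type, 0) + 1
    return counts

example = {
    "item_id": "item-123",        # illustrative IDs, not real values
    "item_name": "report.pdf",
    "processing_results": [
        {"item_id": "item-123", "job_type": "parse", "job_config": {}},
        {"item_id": "item-123", "job_type": "classify", "job_config": {}},
    ],
}
print(summarize_results(example))  # {'parse': 1, 'classify': 1}
```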
BatchParseJobRecordCreate = object { correlation_id, job_name, parameters, 6 more }
Batch-specific parse job record for batch processing.
This model contains the metadata and configuration for a batch parse job, but excludes file-specific information. It’s used as input to the batch parent workflow and combined with DirectoryFile data to create full ParseJobRecordCreate instances for each file.
Attributes:
job_name: Must be PARSE_RAW_FILE
partitions: Partitions for job output location
parameters: Generic parse configuration (BatchParseJobConfig)
session_id: Upstream request ID for tracking
correlation_id: Correlation ID for cross-service tracking
parent_job_execution_id: Parent job execution ID if nested
user_id: User who created the job
project_id: Project this job belongs to
webhook_url: Optional webhook URL for job completion notifications
correlation_id: optional string
The correlation ID for this job. Used for tracking the job across services.
parameters: optional object { adaptive_long_table, aggressive_table_extraction, annotate_links, 122 more }
Generic parse job configuration for batch processing.
This model contains the parsing configuration that applies to all files in a batch, but excludes file-specific fields like file_name, file_id, etc. Those file-specific fields are populated from DirectoryFile data when creating individual ParseJobRecordCreate instances for each file.
The fields in this model should be generic settings that apply uniformly to all files being processed in the batch.
output_s3_path_prefix: optional string
If specified, LlamaParse will save the output to the specified path. All output files will use this prefix, which must be a valid s3:// URL.
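The expansion described above, combining the generic batch config with per-file DirectoryFile data to produce one full parse record per file, can be sketched as follows. The per-file field names (`file_name`, `file_id`) and the merge order are assumptions for illustration.

```python
# Illustrative sketch: fan a generic batch config out into per-file records.
# Also checks the documented constraint that output_s3_path_prefix is an
# s3:// URL when present.

def expand_batch(generic_parameters: dict, files: list[dict]) -> list[dict]:
    prefix = generic_parameters.get("output_s3_path_prefix")
    if prefix is not None and not prefix.startswith("s3://"):
        raise ValueError("output_s3_path_prefix must be a valid s3:// URL")
    records = []
    for f in files:
        record = dict(generic_parameters)  # shared settings for every file
        record.update(f)                   # file-specific fields are added per file
        records.append(record)
    return records

records = expand_batch(
    {"output_s3_path_prefix": "s3://bucket/batch-1/", "annotate_links": True},
    [{"file_name": "a.pdf", "file_id": "f1"},
     {"file_name": "b.pdf", "file_id": "f2"}],
)
```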
webhook_configurations: optional array of object { webhook_events, webhook_headers, webhook_output_format, webhook_url }
Outbound webhook endpoints to notify on job status changes
webhook_events: optional array of "extract.pending" or "extract.success" or "extract.error" or 14 more
Events to subscribe to (e.g. ‘parse.success’, ‘extract.error’). If null, all events are delivered.
webhook_headers: optional map[string]
Custom HTTP headers sent with each webhook request (e.g. auth tokens)
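A hedged sketch of one webhook_configurations entry using the fields listed above. The event names mirror the documented examples ('parse.success', 'extract.error'); the endpoint URL and auth header are placeholders, and the null-means-all-events rule follows the webhook_events description.

```python
# Sketch: one outbound webhook configuration entry (illustrative values).

webhook_configuration = {
    "webhook_url": "https://example.com/hooks/batch",      # placeholder endpoint
    "webhook_events": ["parse.success", "extract.error"],  # None => all events
    "webhook_headers": {"Authorization": "Bearer <token>"},  # custom headers
    "webhook_output_format": None,                         # optional
}

def is_subscribed(config: dict, event: str) -> bool:
    """Per the docs: if webhook_events is null, every event is delivered."""
    events = config.get("webhook_events")
    return events is None or event in events
```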
partitions: optional map[string]
The partitions for this execution. Used for determining where to save job output.