
Beta

Agent Data

Get Agent Data
client.beta.agentData.get(itemID: string, query?: AgentDataGetParams { organization_id, project_id }, options?: RequestOptions): AgentData { data, deployment_name, id, 4 more }
GET /api/v1/beta/agent-data/{item_id}
Update Agent Data
client.beta.agentData.update(itemID: string, params: AgentDataUpdateParams { data, organization_id, project_id }, options?: RequestOptions): AgentData { data, deployment_name, id, 4 more }
PUT /api/v1/beta/agent-data/{item_id}
Delete Agent Data
client.beta.agentData.delete(itemID: string, params?: AgentDataDeleteParams { organization_id, project_id }, options?: RequestOptions): AgentDataDeleteResponse
DELETE /api/v1/beta/agent-data/{item_id}
Create Agent Data
client.beta.agentData.create(params: AgentDataCreateParams { data, deployment_name, organization_id, 2 more }, options?: RequestOptions): AgentData { data, deployment_name, id, 4 more }
POST /api/v1/beta/agent-data
Search Agent Data
client.beta.agentData.search(params: AgentDataSearchParams { deployment_name, organization_id, project_id, 7 more }, options?: RequestOptions): PaginatedCursorPost<AgentData { data, deployment_name, id, 4 more }>
POST /api/v1/beta/agent-data/:search
Aggregate Agent Data
client.beta.agentData.aggregate(params: AgentDataAggregateParams { deployment_name, organization_id, project_id, 9 more }, options?: RequestOptions): PaginatedCursorPost<AgentDataAggregateResponse { group_key, count, first_item }>
POST /api/v1/beta/agent-data/:aggregate
Delete Agent Data By Query
client.beta.agentData.deleteByQuery(params: AgentDataDeleteByQueryParams { deployment_name, organization_id, project_id, 2 more }, options?: RequestOptions): AgentDataDeleteByQueryResponse { deleted_count }
POST /api/v1/beta/agent-data/:delete
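
A minimal usage sketch for the agent-data endpoints, assuming an already-constructed `client`; the deployment name and `data` payload are illustrative, and iterating the search result with `for await` assumes the SDK's cursor pages are async-iterable:

```ts
// Create an item, read it back, then search within the deployment.
const created = await client.beta.agentData.create({
  deployment_name: "support-agent", // illustrative deployment name
  data: { ticket_id: "T-123", status: "open" }, // application-defined payload
});

if (created.id) {
  const item = await client.beta.agentData.get(created.id);
  console.log(item.data);
}

// search returns PaginatedCursorPost<AgentData>.
for await (const item of client.beta.agentData.search({
  deployment_name: "support-agent",
})) {
  console.log(item.id, item.updated_at);
}
```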
Models
AgentData { data, deployment_name, id, 4 more }

API Result for a single agent data item

data: Record<string, unknown>
deployment_name: string
id?: string | null
collection?: string
created_at?: string | null
project_id?: string | null
updated_at?: string | null
AgentDataDeleteResponse = Record<string, string>
AgentDataAggregateResponse { group_key, count, first_item }

API Result for a single group in the aggregate response

group_key: Record<string, unknown>
count?: number | null
first_item?: Record<string, unknown> | null
AgentDataDeleteByQueryResponse { deleted_count }

API response for bulk delete operation

deleted_count: number

Sheets

Create Spreadsheet Job
client.beta.sheets.create(params: SheetCreateParams { file_id, organization_id, project_id, config }, options?: RequestOptions): SheetsJob { id, config, created_at, 10 more }
POST /api/v1/beta/sheets/jobs
List Spreadsheet Jobs
client.beta.sheets.list(query?: SheetListParams { created_at_on_or_after, created_at_on_or_before, include_results, 6 more }, options?: RequestOptions): PaginatedCursor<SheetsJob { id, config, created_at, 10 more }>
GET /api/v1/beta/sheets/jobs
Get Spreadsheet Job
client.beta.sheets.get(spreadsheetJobID: string, query?: SheetGetParams { include_results, organization_id, project_id }, options?: RequestOptions): SheetsJob { id, config, created_at, 10 more }
GET /api/v1/beta/sheets/jobs/{spreadsheet_job_id}
Get Result Region
client.beta.sheets.getResultTable(regionType: "table" | "extra" | "cell_metadata", params: SheetGetResultTableParams { spreadsheet_job_id, region_id, expires_at_seconds, 2 more }, options?: RequestOptions): PresignedURL { expires_at, url, form_fields }
GET /api/v1/beta/sheets/jobs/{spreadsheet_job_id}/regions/{region_id}/result/{region_type}
Delete Spreadsheet Job
client.beta.sheets.deleteJob(spreadsheetJobID: string, params?: SheetDeleteJobParams { organization_id, project_id }, options?: RequestOptions): SheetDeleteJobResponse
DELETE /api/v1/beta/sheets/jobs/{spreadsheet_job_id}
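
A sketch of the typical sheets flow: submit a job, poll until it leaves PENDING, then request a presigned URL for one extracted region. The file ID, polling interval, and region choice are illustrative, not prescribed by the API:

```ts
// Submit a spreadsheet parsing job for an already-uploaded file.
const job = await client.beta.sheets.create({
  file_id: "file_abc123", // hypothetical file ID
  config: { sheet_names: ["Sheet1"], table_merge_sensitivity: "weak" },
});

// Poll until the job reaches a terminal status.
let current = job;
while (current.status === "PENDING") {
  await new Promise((resolve) => setTimeout(resolve, 2000));
  current = await client.beta.sheets.get(job.id, { include_results: true });
}

// On success, fetch a presigned download URL for the first region's table.
const region = current.regions?.[0];
if (current.status === "SUCCESS" && region?.region_id) {
  const presigned = await client.beta.sheets.getResultTable("table", {
    spreadsheet_job_id: job.id,
    region_id: region.region_id,
  });
  console.log(presigned.url); // download the extracted table from this URL
}
```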
Models
SheetsJob { id, config, created_at, 10 more }

A spreadsheet parsing job

id: string

The ID of the job

config: SheetsParsingConfig { extraction_range, flatten_hierarchical_tables, generate_additional_metadata, 5 more }

Configuration for the parsing job

extraction_range?: string | null

A1 notation of the range to extract a single region from. If None, the entire sheet is used.

flatten_hierarchical_tables?: boolean

Return a flattened dataframe when a detected table is recognized as hierarchical.

generate_additional_metadata?: boolean

Whether to generate additional metadata (title, description) for each extracted region.

include_hidden_cells?: boolean

Whether to include hidden cells when extracting regions from the spreadsheet.

sheet_names?: Array<string> | null

The names of the sheets to extract regions from. If empty, all sheets will be processed.

specialization?: string | null

Optional specialization mode for domain-specific extraction. Supported values: ‘financial-standard’, ‘financial-enhanced’, ‘financial-precise’. Default None uses the general-purpose pipeline.

table_merge_sensitivity?: "strong" | "weak"

Influences how likely similar-looking regions are merged into a single table. Useful for spreadsheets that either have sparse tables (strong merging) or many distinct tables close together (weak merging).

One of the following:
"strong"
"weak"
use_experimental_processing?: boolean

Enables experimental processing. Accuracy may be impacted.

created_at: string

When the job was created

file_id: string | null

The ID of the input file

format: uuid
project_id: string

The ID of the project

format: uuid
status: StatusEnum

The status of the parsing job

One of the following:
"PENDING"
"SUCCESS"
"ERROR"
"PARTIAL_SUCCESS"
"CANCELLED"
updated_at: string

When the job was last updated

user_id: string

The ID of the user

errors?: Array<string>

Any errors encountered

Deprecated: file?: File { id, name, project_id, 11 more } | null

Schema for a file.

id: string

Unique identifier

format: uuid
name: string
project_id: string

The ID of the project that the file belongs to

format: uuid
created_at?: string | null

Creation datetime

format: date-time
data_source_id?: string | null

The ID of the data source that the file belongs to

format: uuid
expires_at?: string | null

The expiration date for the file. Files past this date can be deleted.

format: date-time
external_file_id?: string | null

The ID of the file in the external system

file_size?: number | null

Size of the file in bytes

minimum: 0
file_type?: string | null

File type (e.g. pdf, docx, etc.)

maxLength: 3000
minLength: 1
last_modified_at?: string | null

The last modified time of the file

format: date-time
permission_info?: Record<string, Record<string, unknown> | Array<unknown> | string | 2 more | null> | null

Permission information for the file

One of the following:
Record<string, unknown>
Array<unknown>
string
number
boolean
purpose?: string | null

The intended purpose of the file (e.g., ‘user_data’, ‘parse’, ‘extract’, ‘split’, ‘classify’)

resource_info?: Record<string, Record<string, unknown> | Array<unknown> | string | 2 more | null> | null

Resource information for the file

One of the following:
Record<string, unknown>
Array<unknown>
string
number
boolean
updated_at?: string | null

Update datetime

format: date-time
regions?: Array<Region>

All extracted regions (populated when job is complete)

location: string

Location of the region in the spreadsheet

region_type: string

Type of the extracted region

sheet_name: string

Worksheet name where region was found

description?: string | null

Generated description for the region

region_id?: string

Unique identifier for this region within the file

title?: string | null

Generated title for the region

success?: boolean | null

Whether the job completed successfully

worksheet_metadata?: Array<WorksheetMetadata>

Metadata for each processed worksheet (populated when job is complete)

sheet_name: string

Name of the worksheet

description?: string | null

Generated description of the worksheet

title?: string | null

Generated title for the worksheet

SheetsParsingConfig { extraction_range, flatten_hierarchical_tables, generate_additional_metadata, 5 more }

Configuration for spreadsheet parsing and region extraction

extraction_range?: string | null

A1 notation of the range to extract a single region from. If None, the entire sheet is used.

flatten_hierarchical_tables?: boolean

Return a flattened dataframe when a detected table is recognized as hierarchical.

generate_additional_metadata?: boolean

Whether to generate additional metadata (title, description) for each extracted region.

include_hidden_cells?: boolean

Whether to include hidden cells when extracting regions from the spreadsheet.

sheet_names?: Array<string> | null

The names of the sheets to extract regions from. If empty, all sheets will be processed.

specialization?: string | null

Optional specialization mode for domain-specific extraction. Supported values: ‘financial-standard’, ‘financial-enhanced’, ‘financial-precise’. Default None uses the general-purpose pipeline.

table_merge_sensitivity?: "strong" | "weak"

Influences how likely similar-looking regions are merged into a single table. Useful for spreadsheets that either have sparse tables (strong merging) or many distinct tables close together (weak merging).

One of the following:
"strong"
"weak"
use_experimental_processing?: boolean

Enables experimental processing. Accuracy may be impacted.
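
For illustration, a config that narrows extraction to a fixed range on named worksheets might look like the literal below; every value is an example input, not a documented default:

```ts
// Illustrative SheetsParsingConfig literal.
const config = {
  extraction_range: "A1:F200", // single region, in A1 notation
  sheet_names: ["Revenue", "Costs"], // only process these worksheets
  generate_additional_metadata: true, // titles/descriptions per region
  include_hidden_cells: false,
  table_merge_sensitivity: "strong" as const, // favor merging sparse tables
};
```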

SheetDeleteJobResponse = unknown

Directories

Create Directory
client.beta.directories.create(params: DirectoryCreateParams { name, organization_id, project_id, 2 more }, options?: RequestOptions): DirectoryCreateResponse { id, name, project_id, 5 more }
POST /api/v1/beta/directories
List Directories
client.beta.directories.list(query?: DirectoryListParams { data_source_id, include_deleted, name, 5 more }, options?: RequestOptions): PaginatedCursor<DirectoryListResponse { id, name, project_id, 5 more }>
GET /api/v1/beta/directories
Get Directory
client.beta.directories.get(directoryID: string, query?: DirectoryGetParams { organization_id, project_id }, options?: RequestOptions): DirectoryGetResponse { id, name, project_id, 5 more }
GET /api/v1/beta/directories/{directory_id}
Update Directory
client.beta.directories.update(directoryID: string, params: DirectoryUpdateParams { organization_id, project_id, description, name }, options?: RequestOptions): DirectoryUpdateResponse { id, name, project_id, 5 more }
PATCH /api/v1/beta/directories/{directory_id}
Delete Directory
client.beta.directories.delete(directoryID: string, params?: DirectoryDeleteParams { organization_id, project_id }, options?: RequestOptions): void
DELETE /api/v1/beta/directories/{directory_id}
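
A short sketch of directory CRUD, assuming an already-constructed `client`; the names are illustrative, `description` is assumed to be among the remaining create params, and the `for await` iteration assumes the SDK's PaginatedCursor pages are async-iterable:

```ts
// Create a directory, update its description, then page through matches.
const dir = await client.beta.directories.create({
  name: "contracts-2024", // illustrative
  description: "Signed contracts for FY2024", // assumed create param
});

await client.beta.directories.update(dir.id, {
  description: "FY2024 contracts (archived)",
});

for await (const d of client.beta.directories.list({ name: "contracts-2024" })) {
  console.log(d.id, d.name, d.created_at);
}
```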
Models
DirectoryCreateResponse { id, name, project_id, 5 more }

API response schema for a directory.

id: string

Unique identifier for the directory.

name: string

Human-readable name for the directory.

minLength: 1
project_id: string

Project the directory belongs to.

created_at?: string | null

Creation datetime

format: date-time
data_source_id?: string | null

Optional data source id the directory syncs from. Null if just manual uploads.

deleted_at?: string | null

Optional timestamp of when the directory was deleted. Null if not deleted.

format: date-time
description?: string | null

Optional description shown to users.

updated_at?: string | null

Update datetime

format: date-time
DirectoryListResponse { id, name, project_id, 5 more }

API response schema for a directory.

id: string

Unique identifier for the directory.

name: string

Human-readable name for the directory.

minLength: 1
project_id: string

Project the directory belongs to.

created_at?: string | null

Creation datetime

format: date-time
data_source_id?: string | null

Optional data source id the directory syncs from. Null if just manual uploads.

deleted_at?: string | null

Optional timestamp of when the directory was deleted. Null if not deleted.

format: date-time
description?: string | null

Optional description shown to users.

updated_at?: string | null

Update datetime

format: date-time
DirectoryGetResponse { id, name, project_id, 5 more }

API response schema for a directory.

id: string

Unique identifier for the directory.

name: string

Human-readable name for the directory.

minLength: 1
project_id: string

Project the directory belongs to.

created_at?: string | null

Creation datetime

format: date-time
data_source_id?: string | null

Optional data source id the directory syncs from. Null if just manual uploads.

deleted_at?: string | null

Optional timestamp of when the directory was deleted. Null if not deleted.

format: date-time
description?: string | null

Optional description shown to users.

updated_at?: string | null

Update datetime

format: date-time
DirectoryUpdateResponse { id, name, project_id, 5 more }

API response schema for a directory.

id: string

Unique identifier for the directory.

name: string

Human-readable name for the directory.

minLength: 1
project_id: string

Project the directory belongs to.

created_at?: string | null

Creation datetime

format: date-time
data_source_id?: string | null

Optional data source id the directory syncs from. Null if just manual uploads.

deleted_at?: string | null

Optional timestamp of when the directory was deleted. Null if not deleted.

format: date-time
description?: string | null

Optional description shown to users.

updated_at?: string | null

Update datetime

format: date-time

Directory Files

Add Directory File
client.beta.directories.files.add(directoryID: string, params: FileAddParams { file_id, organization_id, project_id, 3 more }, options?: RequestOptions): FileAddResponse { id, directory_id, display_name, 8 more }
POST /api/v1/beta/directories/{directory_id}/files
List Directory Files
client.beta.directories.files.list(directoryID: string, query?: FileListParams { display_name, display_name_contains, file_id, 6 more }, options?: RequestOptions): PaginatedCursor<FileListResponse { id, directory_id, display_name, 8 more }>
GET /api/v1/beta/directories/{directory_id}/files
Get Directory File
client.beta.directories.files.get(directoryFileID: string, params: FileGetParams { directory_id, organization_id, project_id }, options?: RequestOptions): FileGetResponse { id, directory_id, display_name, 8 more }
GET /api/v1/beta/directories/{directory_id}/files/{directory_file_id}
Update Directory File
client.beta.directories.files.update(directoryFileID: string, params: FileUpdateParams { body_directory_id, organization_id, project_id, 3 more }, options?: RequestOptions): FileUpdateResponse { id, directory_id, display_name, 8 more }
PATCH /api/v1/beta/directories/{directory_id}/files/{directory_file_id}
Delete Directory File
client.beta.directories.files.delete(directoryFileID: string, params: FileDeleteParams { directory_id, organization_id, project_id }, options?: RequestOptions): void
DELETE /api/v1/beta/directories/{directory_id}/files/{directory_file_id}
Upload File To Directory
client.beta.directories.files.upload(directoryID: string, params: FileUploadParams { upload_file, organization_id, project_id, 3 more }, options?: RequestOptions): FileUploadResponse { id, directory_id, display_name, 8 more }
POST /api/v1/beta/directories/{directory_id}/files/upload
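
A sketch of uploading a local file into a directory and listing it back. Passing a Node read stream as `upload_file` is an assumption about the accepted upload types; check the SDK's file-upload helpers for other runtimes:

```ts
import fs from "node:fs";

// Upload a local PDF into an existing directory (IDs are illustrative).
const uploaded = await client.beta.directories.files.upload("dir_123", {
  upload_file: fs.createReadStream("./report.pdf"), // assumed acceptable input
});
console.log(uploaded.id, uploaded.display_name);

// List files in the directory whose display name matches.
const page = await client.beta.directories.files.list("dir_123", {
  display_name_contains: "report",
});
```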
Models
FileAddResponse { id, directory_id, display_name, 8 more }

API response schema for a directory file.

id: string

Unique identifier for the directory file.

directory_id: string

Directory the file belongs to.

display_name: string

Display name for the file.

minLength: 1
project_id: string

Project the directory file belongs to.

unique_id: string

Unique identifier for the file in the directory

minLength: 1
created_at?: string | null

Creation datetime

format: date-time
data_source_id?: string | null

Optional data source credential associated with the file.

deleted_at?: string | null

Soft delete marker when the file is removed upstream or by user action.

format: date-time
file_id?: string | null

File ID for the storage location.

metadata?: Record<string, string | number | boolean | null>

Merged metadata from all sources. Higher-priority sources override lower.

One of the following:
string
number
boolean
updated_at?: string | null

Update datetime

format: date-time
FileListResponse { id, directory_id, display_name, 8 more }

API response schema for a directory file.

id: string

Unique identifier for the directory file.

directory_id: string

Directory the file belongs to.

display_name: string

Display name for the file.

minLength: 1
project_id: string

Project the directory file belongs to.

unique_id: string

Unique identifier for the file in the directory

minLength: 1
created_at?: string | null

Creation datetime

format: date-time
data_source_id?: string | null

Optional data source credential associated with the file.

deleted_at?: string | null

Soft delete marker when the file is removed upstream or by user action.

format: date-time
file_id?: string | null

File ID for the storage location.

metadata?: Record<string, string | number | boolean | null>

Merged metadata from all sources. Higher-priority sources override lower.

One of the following:
string
number
boolean
updated_at?: string | null

Update datetime

format: date-time
FileGetResponse { id, directory_id, display_name, 8 more }

API response schema for a directory file.

id: string

Unique identifier for the directory file.

directory_id: string

Directory the file belongs to.

display_name: string

Display name for the file.

minLength: 1
project_id: string

Project the directory file belongs to.

unique_id: string

Unique identifier for the file in the directory

minLength: 1
created_at?: string | null

Creation datetime

format: date-time
data_source_id?: string | null

Optional data source credential associated with the file.

deleted_at?: string | null

Soft delete marker when the file is removed upstream or by user action.

format: date-time
file_id?: string | null

File ID for the storage location.

metadata?: Record<string, string | number | boolean | null>

Merged metadata from all sources. Higher-priority sources override lower.

One of the following:
string
number
boolean
updated_at?: string | null

Update datetime

format: date-time
FileUpdateResponse { id, directory_id, display_name, 8 more }

API response schema for a directory file.

id: string

Unique identifier for the directory file.

directory_id: string

Directory the file belongs to.

display_name: string

Display name for the file.

minLength: 1
project_id: string

Project the directory file belongs to.

unique_id: string

Unique identifier for the file in the directory

minLength: 1
created_at?: string | null

Creation datetime

format: date-time
data_source_id?: string | null

Optional data source credential associated with the file.

deleted_at?: string | null

Soft delete marker when the file is removed upstream or by user action.

format: date-time
file_id?: string | null

File ID for the storage location.

metadata?: Record<string, string | number | boolean | null>

Merged metadata from all sources. Higher-priority sources override lower.

One of the following:
string
number
boolean
updated_at?: string | null

Update datetime

format: date-time
FileUploadResponse { id, directory_id, display_name, 8 more }

API response schema for a directory file.

id: string

Unique identifier for the directory file.

directory_id: string

Directory the file belongs to.

display_name: string

Display name for the file.

minLength: 1
project_id: string

Project the directory file belongs to.

unique_id: string

Unique identifier for the file in the directory

minLength: 1
created_at?: string | null

Creation datetime

format: date-time
data_source_id?: string | null

Optional data source credential associated with the file.

deleted_at?: string | null

Soft delete marker when the file is removed upstream or by user action.

format: date-time
file_id?: string | null

File ID for the storage location.

metadata?: Record<string, string | number | boolean | null>

Merged metadata from all sources. Higher-priority sources override lower.

One of the following:
string
number
boolean
updated_at?: string | null

Update datetime

format: date-time

Batch

Create Batch Job
client.beta.batch.create(params: BatchCreateParams { job_config, organization_id, project_id, 5 more }, options?: RequestOptions): BatchCreateResponse { id, job_type, project_id, 14 more }
POST /api/v1/beta/batch-processing
List Batch Jobs
client.beta.batch.list(query?: BatchListParams { directory_id, job_type, limit, 4 more }, options?: RequestOptions): PaginatedBatchItems<BatchListResponse { id, job_type, project_id, 14 more }>
GET /api/v1/beta/batch-processing
Get Batch Job Status
client.beta.batch.getStatus(jobID: string, query?: BatchGetStatusParams { organization_id, project_id }, options?: RequestOptions): BatchGetStatusResponse { job, progress_percentage }
GET /api/v1/beta/batch-processing/{job_id}
Cancel Batch Job
client.beta.batch.cancel(jobID: string, params: BatchCancelParams { organization_id, project_id, reason, temporalNamespace }, options?: RequestOptions): BatchCancelResponse { job_id, message, processed_items, status }
POST /api/v1/beta/batch-processing/{job_id}/cancel
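
A sketch of the batch lifecycle: create, poll, and optionally cancel. The `job_config` literal is a placeholder, since its concrete schema is not spelled out at this level (see the `job_config` union under the Job Items models), and `directory_id` is assumed to be among BatchCreateParams' remaining fields:

```ts
// Start a batch job (placeholder config; see Job Items models for schemas).
const job = await client.beta.batch.create({
  job_config: { type: "parse" }, // placeholder, not the full schema
  directory_id: "dir_123", // assumption: part of the "5 more" params
});

// Poll the detailed status, which wraps the job plus a progress figure.
const { job: current, progress_percentage } =
  await client.beta.batch.getStatus(job.id);
console.log(current.status, `${progress_percentage}%`);

// Cancel if still running; `reason` is optional context.
if (current.status === "running") {
  const cancelled = await client.beta.batch.cancel(job.id, {
    reason: "superseded by a newer job",
  });
  console.log(cancelled.message, cancelled.processed_items);
}
```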
Models
BatchCreateResponse { id, job_type, project_id, 14 more }

Response schema for a batch processing job.

id: string

Unique identifier for the batch job

job_type: "parse" | "extract" | "classify"

Type of processing operation (parse, extract, or classify)

One of the following:
"parse"
"extract"
"classify"
project_id: string

Project this job belongs to

status: "pending" | "running" | "dispatched" | 3 more

Current job status

One of the following:
"pending"
"running"
"dispatched"
"completed"
"failed"
"cancelled"
total_items: number

Total number of items in the job

completed_at?: string | null

Timestamp when job completed

format: date-time
created_at?: string | null

Creation datetime

format: date-time
directory_id?: string | null

Directory being processed

effective_at?: string
error_message?: string | null

Error message for the latest job attempt, if any.

failed_items?: number

Number of items that failed processing

job_record_id?: string | null

The job record ID associated with this status, if any.

processed_items?: number

Number of items processed so far

skipped_items?: number

Number of items skipped (already processed or size limit)

started_at?: string | null

Timestamp when job processing started

format: date-time
updated_at?: string | null

Update datetime

format: date-time
workflow_id?: string | null

Async job tracking ID

BatchListResponse { id, job_type, project_id, 14 more }

Response schema for a batch processing job.

id: string

Unique identifier for the batch job

job_type: "parse" | "extract" | "classify"

Type of processing operation (parse, extract, or classify)

One of the following:
"parse"
"extract"
"classify"
project_id: string

Project this job belongs to

status: "pending" | "running" | "dispatched" | 3 more

Current job status

One of the following:
"pending"
"running"
"dispatched"
"completed"
"failed"
"cancelled"
total_items: number

Total number of items in the job

completed_at?: string | null

Timestamp when job completed

format: date-time
created_at?: string | null

Creation datetime

format: date-time
directory_id?: string | null

Directory being processed

effective_at?: string
error_message?: string | null

Error message for the latest job attempt, if any.

failed_items?: number

Number of items that failed processing

job_record_id?: string | null

The job record ID associated with this status, if any.

processed_items?: number

Number of items processed so far

skipped_items?: number

Number of items skipped (already processed or size limit)

started_at?: string | null

Timestamp when job processing started

format: date-time
updated_at?: string | null

Update datetime

format: date-time
workflow_id?: string | null

Async job tracking ID

BatchGetStatusResponse { job, progress_percentage }

Detailed status response for a batch processing job.

job: Job { id, job_type, project_id, 14 more }

Response schema for a batch processing job.

id: string

Unique identifier for the batch job

job_type: "parse" | "extract" | "classify"

Type of processing operation (parse, extract, or classify)

One of the following:
"parse"
"extract"
"classify"
project_id: string

Project this job belongs to

status: "pending" | "running" | "dispatched" | 3 more

Current job status

One of the following:
"pending"
"running"
"dispatched"
"completed"
"failed"
"cancelled"
total_items: number

Total number of items in the job

completed_at?: string | null

Timestamp when job completed

format: date-time
created_at?: string | null

Creation datetime

format: date-time
directory_id?: string | null

Directory being processed

effective_at?: string
error_message?: string | null

Error message for the latest job attempt, if any.

failed_items?: number

Number of items that failed processing

job_record_id?: string | null

The job record ID associated with this status, if any.

processed_items?: number

Number of items processed so far

skipped_items?: number

Number of items skipped (already processed or size limit)

started_at?: string | null

Timestamp when job processing started

format: date-time
updated_at?: string | null

Update datetime

format: date-time
workflow_id?: string | null

Async job tracking ID

progress_percentage: number

Percentage of items processed (0-100)

maximum: 100
minimum: 0
BatchCancelResponse { job_id, message, processed_items, status }

Response after cancelling a batch job.

job_id: string

ID of the cancelled job

message: string

Confirmation message

processed_items: number

Number of items processed before cancellation

status: "pending" | "running" | "dispatched" | 3 more

New status (should be ‘cancelled’)

One of the following:
"pending"
"running"
"dispatched"
"completed"
"failed"
"cancelled"

Batch Job Items

List Batch Job Items
client.beta.batch.jobItems.list(jobID: string, query?: JobItemListParams { limit, offset, organization_id, 2 more }, options?: RequestOptions): PaginatedBatchItems<JobItemListResponse { item_id, item_name, status, 7 more }>
GET /api/v1/beta/batch-processing/{job_id}/items
Get Item Processing Results
client.beta.batch.jobItems.getProcessingResults(itemID: string, query?: JobItemGetProcessingResultsParams { job_type, organization_id, project_id }, options?: RequestOptions): JobItemGetProcessingResultsResponse { item_id, item_name, processing_results }
GET /api/v1/beta/batch-processing/items/{item_id}/processing-results
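
A sketch of drilling into a job's items and pulling results for completed ones. Reading the page with `for await` assumes the PaginatedBatchItems type is async-iterable; the job ID and job type are illustrative:

```ts
// Enumerate items of a batch job, 50 at a time.
for await (const item of client.beta.batch.jobItems.list("job_abc", {
  limit: 50,
})) {
  if (item.status !== "completed") continue;

  // Fetch all processing operations recorded for this item.
  const res = await client.beta.batch.jobItems.getProcessingResults(
    item.item_id,
    { job_type: "parse" }, // illustrative job type
  );
  console.log(item.item_name, res.processing_results?.length ?? 0);
}
```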
Models
JobItemListResponse { item_id, item_name, status, 7 more }

Detailed information about an item in a batch job.

item_id: string

ID of the item

item_name: string

Name of the item

status: "pending" | "processing" | "completed" | 3 more

Processing status of this item

One of the following:
"pending"
"processing"
"completed"
"failed"
"skipped"
"cancelled"
completed_at?: string | null

When processing completed for this item

format: date-time
effective_at?: string
error_message?: string | null

Error message for the latest job attempt, if any.

job_id?: string | null

Job ID for the underlying processing job (links to parse/extract job results)

job_record_id?: string | null

The job record ID associated with this status, if any.

skip_reason?: string | null

Reason item was skipped (e.g., ‘already_processed’, ‘size_limit_exceeded’)

started_at?: string | null

When processing started for this item

format: date-time
JobItemGetProcessingResultsResponse { item_id, item_name, processing_results }

Response containing all processing results for an item.

item_id: string

ID of the source item

item_name: string

Name of the source item

processing_results?: Array<ProcessingResult>

List of all processing operations performed on this item

item_id: string

Source item that was processed

job_config: BatchParseJobRecordCreate { correlation_id, job_name, parameters, 6 more } | ClassifyJob { id, project_id, rules, 9 more }

Job configuration used for processing

One of the following:
BatchParseJobRecordCreate { correlation_id, job_name, parameters, 6 more }

Batch-specific parse job record for batch processing.

This model contains the metadata and configuration for a batch parse job, but excludes file-specific information. It’s used as input to the batch parent workflow and combined with DirectoryFile data to create full ParseJobRecordCreate instances for each file.

Attributes:
job_name: Must be PARSE_RAW_FILE
partitions: Partitions for job output location
parameters: Generic parse configuration (BatchParseJobConfig)
session_id: Upstream request ID for tracking
correlation_id: Correlation ID for cross-service tracking
parent_job_execution_id: Parent job execution ID if nested
user_id: User who created the job
project_id: Project this job belongs to
webhook_url: Optional webhook URL for job completion notifications

correlation_id?: string | null

The correlation ID for this job. Used for tracking the job across services.

format: uuid
job_name?: "parse_raw_file_job"
parameters?: Parameters | null

Generic parse job configuration for batch processing.

This model contains the parsing configuration that applies to all files in a batch, but excludes file-specific fields like file_name, file_id, etc. Those file-specific fields are populated from DirectoryFile data when creating individual ParseJobRecordCreate instances for each file.

The fields in this model should be generic settings that apply uniformly to all files being processed in the batch.

adaptive_long_table?: boolean | null
aggressive_table_extraction?: boolean | null
auto_mode?: boolean | null
auto_mode_configuration_json?: string | null
auto_mode_trigger_on_image_in_page?: boolean | null
auto_mode_trigger_on_regexp_in_page?: string | null
auto_mode_trigger_on_table_in_page?: boolean | null
auto_mode_trigger_on_text_in_page?: string | null
azure_openai_api_version?: string | null
azure_openai_deployment_name?: string | null
azure_openai_endpoint?: string | null
azure_openai_key?: string | null
bbox_bottom?: number | null
bbox_left?: number | null
bbox_right?: number | null
bbox_top?: number | null
bounding_box?: string | null
compact_markdown_table?: boolean | null
complemental_formatting_instruction?: string | null
content_guideline_instruction?: string | null
continuous_mode?: boolean | null
custom_metadata?: Record<string, unknown> | null

The custom metadata to attach to the documents.

disable_image_extraction?: boolean | null
disable_ocr?: boolean | null
disable_reconstruction?: boolean | null
do_not_cache?: boolean | null
do_not_unroll_columns?: boolean | null
enable_cost_optimizer?: boolean | null
extract_charts?: boolean | null
extract_layout?: boolean | null
extract_printed_page_number?: boolean | null
fast_mode?: boolean | null
formatting_instruction?: string | null
gpt4o_api_key?: string | null
gpt4o_mode?: boolean | null
guess_xlsx_sheet_name?: boolean | null
hide_footers?: boolean | null
hide_headers?: boolean | null
high_res_ocr?: boolean | null
html_make_all_elements_visible?: boolean | null
html_remove_fixed_elements?: boolean | null
html_remove_navigation_elements?: boolean | null
http_proxy?: string | null
ignore_document_elements_for_layout_detection?: boolean | null
images_to_save?: Array<"screenshot" | "embedded" | "layout"> | null
One of the following:
"screenshot"
"embedded"
"layout"
inline_images_in_markdown?: boolean | null
input_s3_path?: string | null
input_s3_region?: string | null

The region for the input S3 bucket.

input_url?: string | null
internal_is_screenshot_job?: boolean | null
invalidate_cache?: boolean | null
is_formatting_instruction?: boolean | null
job_timeout_extra_time_per_page_in_seconds?: number | null
job_timeout_in_seconds?: number | null
keep_page_separator_when_merging_tables?: boolean | null
lang?: string

The language.

languages?: Array<ParsingLanguages>
One of the following:
"af"
"az"
"bs"
"cs"
"cy"
"da"
"de"
"en"
"es"
"et"
"fr"
"ga"
"hr"
"hu"
"id"
"is"
"it"
"ku"
"la"
"lt"
"lv"
"mi"
"ms"
"mt"
"nl"
"no"
"oc"
"pi"
"pl"
"pt"
"ro"
"rs_latin"
"sk"
"sl"
"sq"
"sv"
"sw"
"tl"
"tr"
"uz"
"vi"
"ar"
"fa"
"ug"
"ur"
"bn"
"as"
"mni"
"ru"
"rs_cyrillic"
"be"
"bg"
"uk"
"mn"
"abq"
"ady"
"kbd"
"ava"
"dar"
"inh"
"che"
"lbe"
"lez"
"tab"
"tjk"
"hi"
"mr"
"ne"
"bh"
"mai"
"ang"
"bho"
"mah"
"sck"
"new"
"gom"
"sa"
"bgc"
"th"
"ch_sim"
"ch_tra"
"ja"
"ko"
"ta"
"te"
"kn"
layout_aware?: boolean | null
line_level_bounding_box?: boolean | null
markdown_table_multiline_header_separator?: string | null
max_pages?: number | null
max_pages_enforced?: number | null
merge_tables_across_pages_in_markdown?: boolean | null
model?: string | null
outlined_table_extraction?: boolean | null
output_pdf_of_document?: boolean | null
output_s3_path_prefix?: string | null

If specified, llamaParse will save the output to the specified path. All output files will use this prefix, which should be a valid s3:// URL.

output_s3_region?: string | null

The region for the output S3 bucket.

output_tables_as_HTML?: boolean | null
outputBucket?: string | null

The output bucket.

page_error_tolerance?: number | null
page_header_prefix?: string | null
page_header_suffix?: string | null
page_prefix?: string | null
page_separator?: string | null
page_suffix?: string | null
parse_mode?: ParsingMode | null

Enum for representing the mode of parsing to be used.

One of the following:
"parse_page_without_llm"
"parse_page_with_llm"
"parse_page_with_lvm"
"parse_page_with_agent"
"parse_page_with_layout_agent"
"parse_document_with_llm"
"parse_document_with_lvm"
"parse_document_with_agent"
parsing_instruction?: string | null
pipeline_id?: string | null

The pipeline ID.

precise_bounding_box?: boolean | null
premium_mode?: boolean | null
presentation_out_of_bounds_content?: boolean | null
presentation_skip_embedded_data?: boolean | null
preserve_layout_alignment_across_pages?: boolean | null
preserve_very_small_text?: boolean | null
preset?: string | null
priority?: "low" | "medium" | "high" | "critical" | null

The priority for the request. This field may be ignored or overwritten depending on the organization tier.

One of the following:
"low"
"medium"
"high"
"critical"
project_id?: string | null
remove_hidden_text?: boolean | null
replace_failed_page_mode?: FailPageMode | null

Enum for representing the different available page error handling modes.

One of the following:
"raw_text"
"blank_page"
"error_message"
replace_failed_page_with_error_message_prefix?: string | null
replace_failed_page_with_error_message_suffix?: string | null
resource_info?: Record<string, unknown> | null

The resource info about the file

save_images?: boolean | null
skip_diagonal_text?: boolean | null
specialized_chart_parsing_agentic?: boolean | null
specialized_chart_parsing_efficient?: boolean | null
specialized_chart_parsing_plus?: boolean | null
specialized_image_parsing?: boolean | null
spreadsheet_extract_sub_tables?: boolean | null
spreadsheet_force_formula_computation?: boolean | null
spreadsheet_include_hidden_sheets?: boolean | null
strict_mode_buggy_font?: boolean | null
strict_mode_image_extraction?: boolean | null
strict_mode_image_ocr?: boolean | null
strict_mode_reconstruction?: boolean | null
structured_output?: boolean | null
structured_output_json_schema?: string | null
structured_output_json_schema_name?: string | null
system_prompt?: string | null
system_prompt_append?: string | null
take_screenshot?: boolean | null
target_pages?: string | null
tier?: string | null
type?: "parse"
use_vendor_multimodal_model?: boolean | null
user_prompt?: string | null
vendor_multimodal_api_key?: string | null
vendor_multimodal_model_name?: string | null
version?: string | null
webhook_configurations?: Array<WebhookConfiguration> | null

Outbound webhook endpoints to notify on job status changes

webhook_events?: Array<"extract.pending" | "extract.success" | "extract.error" | 14 more> | null

Events to subscribe to (e.g. ‘parse.success’, ‘extract.error’). If null, all events are delivered.

One of the following:
"extract.pending"
"extract.success"
"extract.error"
"extract.partial_success"
"extract.cancelled"
"parse.pending"
"parse.running"
"parse.success"
"parse.error"
"parse.partial_success"
"parse.cancelled"
"classify.pending"
"classify.success"
"classify.error"
"classify.partial_success"
"classify.cancelled"
"unmapped_event"
webhook_headers?: Record<string, string> | null

Custom HTTP headers sent with each webhook request (e.g. auth tokens)

webhook_output_format?: string | null

Response format sent to the webhook: ‘string’ (default) or ‘json’

webhook_url?: string | null

URL to receive webhook POST notifications

parent_job_execution_id?: string | null

The ID of the parent job execution.

format: uuid
partitions?: Record<string, string>

The partitions for this execution. Used for determining where to save job output.

project_id?: string | null

The ID of the project this job belongs to.

format: uuid
session_id?: string | null

The upstream request ID that created this job. Used for tracking the job across services.

format: uuid
user_id?: string | null

The ID of the user that created this job

webhook_url?: string | null

The URL that needs to be called at the end of the parsing job.

ClassifyJob { id, project_id, rules, 9 more }

A classify job.

id: string

Unique identifier

format: uuid
project_id: string

The ID of the project

format: uuid
rules: Array<ClassifierRule { description, type } >

The rules to classify the files

description: string

Natural language description of what to classify. Be specific about the content characteristics that identify this document type.

maxLength: 500
minLength: 10
type: string

The document type to assign when this rule matches (e.g., ‘invoice’, ‘receipt’, ‘contract’)

maxLength: 50
minLength: 1
status: StatusEnum

The status of the classify job

One of the following:
"PENDING"
"SUCCESS"
"ERROR"
"PARTIAL_SUCCESS"
"CANCELLED"
user_id: string

The ID of the user

created_at?: string | null

Creation datetime

format: date-time
effective_at?: string
error_message?: string | null

Error message for the latest job attempt, if any.

job_record_id?: string | null

The job record ID associated with this status, if any.

mode?: "FAST" | "MULTIMODAL"

The classification mode to use

One of the following:
"FAST"
"MULTIMODAL"
parsing_configuration?: ClassifyParsingConfiguration { lang, max_pages, target_pages }

The configuration for the parsing job

lang?: ParsingLanguages

The language to parse the files in

One of the following:
"af"
"az"
"bs"
"cs"
"cy"
"da"
"de"
"en"
"es"
"et"
"fr"
"ga"
"hr"
"hu"
"id"
"is"
"it"
"ku"
"la"
"lt"
"lv"
"mi"
"ms"
"mt"
"nl"
"no"
"oc"
"pi"
"pl"
"pt"
"ro"
"rs_latin"
"sk"
"sl"
"sq"
"sv"
"sw"
"tl"
"tr"
"uz"
"vi"
"ar"
"fa"
"ug"
"ur"
"bn"
"as"
"mni"
"ru"
"rs_cyrillic"
"be"
"bg"
"uk"
"mn"
"abq"
"ady"
"kbd"
"ava"
"dar"
"inh"
"che"
"lbe"
"lez"
"tab"
"tjk"
"hi"
"mr"
"ne"
"bh"
"mai"
"ang"
"bho"
"mah"
"sck"
"new"
"gom"
"sa"
"bgc"
"th"
"ch_sim"
"ch_tra"
"ja"
"ko"
"ta"
"te"
"kn"
max_pages?: number | null

The maximum number of pages to parse

target_pages?: Array<number> | null

The pages to target for parsing (0-indexed, so first page is at 0)

updated_at?: string | null

Update datetime

format: date-time
job_type: "parse" | "extract" | "classify"

Type of processing performed

One of the following:
"parse"
"extract"
"classify"
output_s3_path: string

Location of the processing output

parameters_hash: string

Content hash of the job configuration for dedup

processed_at: string

When this processing occurred

format: date-time
result_id: string

Unique identifier for this result

output_metadata?: unknown

Metadata about processing output.

Currently empty - will be populated with job-type-specific metadata fields in the future.

Split

Create Split Job
client.beta.split.create(params: SplitCreateParams { document_input, organization_id, project_id, 2 more }, options?: RequestOptions): SplitCreateResponse { id, categories, document_input, 8 more }
POST /api/v1/beta/split/jobs
List Split Jobs
client.beta.split.list(query?: SplitListParams { created_at_on_or_after, created_at_on_or_before, job_ids, 5 more }, options?: RequestOptions): PaginatedCursor<SplitListResponse { id, categories, document_input, 8 more }>
GET /api/v1/beta/split/jobs
Get Split Job
client.beta.split.get(splitJobID: string, query?: SplitGetParams { organization_id, project_id }, options?: RequestOptions): SplitGetResponse { id, categories, document_input, 8 more }
GET /api/v1/beta/split/jobs/{split_job_id}
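
A sketch of splitting a document and reading back its segments; the file ID and category names are illustrative, and `categories` is assumed to be among SplitCreateParams' remaining fields:

```ts
// Create a split job from an uploaded file.
const job = await client.beta.split.create({
  document_input: { type: "file_id", value: "file_abc123" }, // illustrative
  categories: [{ name: "invoice" }, { name: "contract" }], // assumed param
});

// Poll until the job reaches a terminal status.
let current = job;
while (current.status === "pending" || current.status === "processing") {
  await new Promise((resolve) => setTimeout(resolve, 2000));
  current = await client.beta.split.get(job.id);
}

// Print each segment's category, confidence, and 1-indexed pages.
for (const seg of current.result?.segments ?? []) {
  console.log(seg.category, seg.confidence_category, seg.pages.join(","));
}
```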
Models
SplitCategory { name, description }

Category definition for document splitting.

name: string

Name of the category.

maxLength: 200
minLength: 1
description?: string | null

Optional description of what content belongs in this category.

maxLength: 2000
minLength: 1
SplitDocumentInput { type, value }

Document input specification for beta API.

type: string

Type of document input. Valid values are: file_id

value: string

Document identifier.

SplitResultResponse { segments }

Result of a completed split job.

segments: Array<SplitSegmentResponse { category, confidence_category, pages } >

List of document segments.

category: string

Category name this split belongs to.

confidence_category: string

Categorical confidence level. Valid values are: high, medium, low.

pages: Array<number>

1-indexed page numbers in this split.

SplitSegmentResponse { category, confidence_category, pages }

A segment of the split document.

category: string

Category name this split belongs to.

confidence_category: string

Categorical confidence level. Valid values are: high, medium, low.

pages: Array<number>

1-indexed page numbers in this split.

SplitCreateResponse { id, categories, document_input, 8 more }

Beta response — uses nested document_input object.

id: string

Unique identifier for the split job.

categories: Array<SplitCategory { name, description } >

Categories used for splitting.

name: string

Name of the category.

maxLength: 200
minLength: 1
description?: string | null

Optional description of what content belongs in this category.

maxLength: 2000
minLength: 1
document_input: SplitDocumentInput { type, value }

Document that was split.

type: string

Type of document input. Valid values are: file_id

value: string

Document identifier.

project_id: string

Project ID this job belongs to.

status: string

Current status of the job. Valid values are: pending, processing, completed, failed, cancelled.

user_id: string

User ID who created this job.

configuration_id?: string | null

Split configuration ID used for this job.

created_at?: string | null

Creation datetime

format: date-time
error_message?: string | null

Error message if the job failed.

result?: SplitResultResponse { segments } | null

Result of a completed split job.

segments: Array<SplitSegmentResponse { category, confidence_category, pages } >

List of document segments.

category: string

Category name this split belongs to.

confidence_category: string

Categorical confidence level. Valid values are: high, medium, low.

pages: Array<number>

1-indexed page numbers in this split.

updated_at?: string | null

Update datetime

format: date-time
SplitListResponse { id, categories, document_input, 8 more }

Beta response — uses nested document_input object.

id: string

Unique identifier for the split job.

categories: Array<SplitCategory { name, description } >

Categories used for splitting.

name: string

Name of the category.

maxLength: 200
minLength: 1
description?: string | null

Optional description of what content belongs in this category.

maxLength: 2000
minLength: 1
document_input: SplitDocumentInput { type, value }

Document that was split.

type: string

Type of document input. Valid values are: file_id

value: string

Document identifier.

project_id: string

Project ID this job belongs to.

status: string

Current status of the job. Valid values are: pending, processing, completed, failed, cancelled.

user_id: string

User ID who created this job.

configuration_id?: string | null

Split configuration ID used for this job.

created_at?: string | null

Creation datetime

format: date-time
error_message?: string | null

Error message if the job failed.

result?: SplitResultResponse { segments } | null

Result of a completed split job.

segments: Array<SplitSegmentResponse { category, confidence_category, pages } >

List of document segments.

category: string

Category name this split belongs to.

confidence_category: string

Categorical confidence level. Valid values are: high, medium, low.

pages: Array<number>

1-indexed page numbers in this split.

updated_at?: string | null

Update datetime

format: date-time
SplitGetResponse { id, categories, document_input, 8 more }

Beta response — uses nested document_input object.

id: string

Unique identifier for the split job.

categories: Array<SplitCategory { name, description } >

Categories used for splitting.

name: string

Name of the category.

maxLength: 200
minLength: 1
description?: string | null

Optional description of what content belongs in this category.

maxLength: 2000
minLength: 1
document_input: SplitDocumentInput { type, value }

Document that was split.

type: string

Type of document input. Valid values are: file_id

value: string

Document identifier.

project_id: string

Project ID this job belongs to.

status: string

Current status of the job. Valid values are: pending, processing, completed, failed, cancelled.

user_id: string

User ID who created this job.

configuration_id?: string | null

Split configuration ID used for this job.

created_at?: string | null

Creation datetime

format: date-time
error_message?: string | null

Error message if the job failed.

result?: SplitResultResponse { segments } | null

Result of a completed split job.

segments: Array<SplitSegmentResponse { category, confidence_category, pages } >

List of document segments.

category: string

Category name this split belongs to.

confidence_category: string

Categorical confidence level. Valid values are: high, medium, low.

pages: Array<number>

1-indexed page numbers in this split.

updated_at?: string | null

Update datetime

format: date-time