Extract
List Extract Jobs
Get Extract Job
Delete Extract Job
Validate Extraction Schema
Generate Extraction Schema
Models
class ExtractConfiguration: …
Extraction configuration combining parse and extract settings.
extract_options: ExtractOptions
Extract-specific configuration options including the data schema
data_schema: Dict[str, Union[Dict[str, object], List[object], str, 3 more]]
JSON schema used for extraction
cite_sources: Optional[bool]
Include citations in results
confidence_scores: Optional[bool]
Include confidence scores in results
extract_version: Optional[str]
Extraction algorithm version to use (e.g., '2026-01-08', 'latest')
extraction_target: Optional[Literal["per_doc", "per_page", "per_table_row"]]
Extraction scope: per_doc, per_page, or per_table_row
system_prompt: Optional[str]
Custom system prompt for extraction
tier: Optional[Literal["cost_effective", "agentic"]]
Extraction tier: cost_effective (10 credits) or agentic (20 credits)
parse_config_id: Optional[str]
Parse config ID used for extraction
parse_tier: Optional[str]
Parse tier to use for extraction (e.g., 'fast', 'cost_effective', 'agentic')
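A configuration like the one above is ordinarily supplied as a JSON payload. The sketch below is illustrative only: field names come from this page, all values are hypothetical, and it assumes the extract-specific options nest under an `extract_options` key, as the `parameters` description for jobs suggests.

```python
# Illustrative ExtractConfiguration payload built as a plain dict.
# data_schema is a standard JSON Schema describing the fields to extract.
invoice_schema = {
    "type": "object",
    "properties": {
        "invoice_number": {"type": "string"},
        "total": {"type": "number"},
    },
    "required": ["invoice_number"],
}

extract_config = {
    "extract_options": {
        "data_schema": invoice_schema,    # required: JSON schema used for extraction
        "cite_sources": True,             # include citations in results
        "confidence_scores": True,        # include confidence scores in results
        "extraction_target": "per_doc",   # per_doc, per_page, or per_table_row
        "tier": "cost_effective",         # cost_effective (10 credits) or agentic (20 credits)
    },
    "parse_tier": "fast",                 # parse tier applied before extraction
}
```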
class ExtractJobMetadata: …
Extraction metadata.
field_metadata: Optional[ExtractedFieldMetadata]
Metadata for extracted fields including document, page, and row level info.
document_metadata: Optional[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]
page_metadata: Optional[List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]
row_metadata: Optional[List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]
parse_job_id: Optional[str]
Reference to the ParseJob ID used for parsing
parse_tier: Optional[str]
Parse tier used for parsing the document
usage: Optional[ExtractJobUsage]
Extraction usage metrics.
num_document_tokens: Optional[int]
Number of document tokens
num_output_tokens: Optional[int]
Number of output tokens
num_pages_extracted: Optional[int]
Number of pages extracted
class ExtractJobUsage: …
Extraction usage metrics.
num_document_tokens: Optional[int]
Number of document tokens
num_output_tokens: Optional[int]
Number of output tokens
num_pages_extracted: Optional[int]
Number of pages extracted
class ExtractOptions: …
Extract-specific configuration options.
data_schema: Dict[str, Union[Dict[str, object], List[object], str, 3 more]]
JSON schema used for extraction
cite_sources: Optional[bool]
Include citations in results
confidence_scores: Optional[bool]
Include confidence scores in results
extract_version: Optional[str]
Extraction algorithm version to use (e.g., '2026-01-08', 'latest')
extraction_target: Optional[Literal["per_doc", "per_page", "per_table_row"]]
Extraction scope: per_doc, per_page, or per_table_row
system_prompt: Optional[str]
Custom system prompt for extraction
tier: Optional[Literal["cost_effective", "agentic"]]
Extraction tier: cost_effective (10 credits) or agentic (20 credits)
class ExtractV2Job: …
An extraction job.
id: str
Unique job identifier (job_id)
created_at: datetime
Creation timestamp
parameters: Dict[str, Union[Dict[str, object], List[object], str, 3 more]]
Job configuration parameters (includes parse_config_id, extract_options)
project_id: str
Project this job belongs to
status: Literal["PENDING", "THROTTLED", "RUNNING", 3 more]
Current status of the job
type: Literal["url", "file_id", "parse_job_id"]
Type of document input.
updated_at: datetime
Last update timestamp
value: str
Document identifier (URL, file ID, or parse job ID).
configuration_id: Optional[str]
Extract configuration ID (ProductConfiguration) used for this job (if any)
error_message: Optional[str]
Error message if failed
extract_metadata: Optional[ExtractJobMetadata]
Extraction metadata.
field_metadata: Optional[ExtractedFieldMetadata]
Metadata for extracted fields including document, page, and row level info.
document_metadata: Optional[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]
page_metadata: Optional[List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]
row_metadata: Optional[List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]
parse_job_id: Optional[str]
Reference to the ParseJob ID used for parsing
parse_tier: Optional[str]
Parse tier used for parsing the document
usage: Optional[ExtractJobUsage]
Extraction usage metrics.
num_document_tokens: Optional[int]
Number of document tokens
num_output_tokens: Optional[int]
Number of output tokens
num_pages_extracted: Optional[int]
Number of pages extracted
extract_result: Optional[Union[Dict[str, Union[Dict[str, object], List[object], str, 3 more]], List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]]
Extracted data (object or array depending on extraction_target)
Dict[str, Union[Dict[str, object], List[object], str, 3 more]]
List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]
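Because extract_result is a single object for per_doc jobs and a list for per_page or per_table_row jobs (and may be absent entirely), consumers typically normalize it before iterating. A minimal sketch, with a hypothetical helper name and sample results:

```python
def rows_from_result(extract_result):
    """Normalize extract_result to a list of records.

    per_doc jobs return one object; per_page and per_table_row jobs
    return a list of objects; a missing result yields an empty list.
    """
    if extract_result is None:
        return []
    if isinstance(extract_result, list):
        return extract_result
    return [extract_result]

# Hypothetical results for the two shapes.
per_doc_result = {"invoice_number": "INV-1"}
per_row_result = [{"sku": "A"}, {"sku": "B"}]
```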
class ExtractV2JobCreate: …
Request to create an extraction job. Provide configuration_id or inline config.
type: Literal["url", "file_id", "parse_job_id"]
Type of document input.
value: str
Document identifier (URL, file ID, or parse job ID).
config: Optional[ExtractConfiguration]
Extraction configuration combining parse and extract settings.
extract_options: ExtractOptions
Extract-specific configuration options including the data schema
data_schema: Dict[str, Union[Dict[str, object], List[object], str, 3 more]]
JSON schema used for extraction
cite_sources: Optional[bool]
Include citations in results
confidence_scores: Optional[bool]
Include confidence scores in results
extract_version: Optional[str]
Extraction algorithm version to use (e.g., '2026-01-08', 'latest')
extraction_target: Optional[Literal["per_doc", "per_page", "per_table_row"]]
Extraction scope: per_doc, per_page, or per_table_row
system_prompt: Optional[str]
Custom system prompt for extraction
tier: Optional[Literal["cost_effective", "agentic"]]
Extraction tier: cost_effective (10 credits) or agentic (20 credits)
parse_config_id: Optional[str]
Parse config ID used for extraction
parse_tier: Optional[str]
Parse tier to use for extraction (e.g., 'fast', 'cost_effective', 'agentic')
configuration_id: Optional[str]
Saved extract configuration ID (mutually exclusive with config)
The outbound webhook configuration options
webhook_events: Optional[List[Literal["extract.pending", "extract.success", "extract.error", 14 more]]]
List of event names to subscribe to
webhook_headers: Optional[Dict[str, str]]
Custom HTTP headers to include with webhook requests.
webhook_output_format: Optional[str]
The output format to use for the webhook. Defaults to string if not supplied. Currently supported values: string, json
webhook_url: Optional[str]
The URL to send webhook notifications to.
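Putting the fields above together, a job-creation body might look like the following. This is a sketch, not a definitive request: field names are from this page, all values (URL, IDs, header) are hypothetical, and exactly one of config or configuration_id should be present.

```python
# Illustrative ExtractV2JobCreate body. "config" (inline) and
# "configuration_id" (saved) are mutually exclusive.
job_create = {
    "type": "url",                                   # url | file_id | parse_job_id
    "value": "https://example.com/invoice.pdf",      # hypothetical document URL
    "configuration_id": "cfg_123",                   # hypothetical saved config ID
    "webhook_url": "https://example.com/hooks/extract",
    "webhook_events": ["extract.success", "extract.error"],
    "webhook_headers": {"X-Signature": "secret"},    # hypothetical custom header
    "webhook_output_format": "json",                 # string (default) or json
}

# Sanity check: exactly one of the two configuration fields is set.
assert ("config" in job_create) != ("configuration_id" in job_create)
```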
class ExtractV2JobQueryResponse: …
Paginated list of extraction jobs.
items: List[ExtractV2Job]
The list of items.
id: str
Unique job identifier (job_id)
created_at: datetime
Creation timestamp
parameters: Dict[str, Union[Dict[str, object], List[object], str, 3 more]]
Job configuration parameters (includes parse_config_id, extract_options)
project_id: str
Project this job belongs to
status: Literal["PENDING", "THROTTLED", "RUNNING", 3 more]
Current status of the job
type: Literal["url", "file_id", "parse_job_id"]
Type of document input.
updated_at: datetime
Last update timestamp
value: str
Document identifier (URL, file ID, or parse job ID).
configuration_id: Optional[str]
Extract configuration ID (ProductConfiguration) used for this job (if any)
error_message: Optional[str]
Error message if failed
extract_metadata: Optional[ExtractJobMetadata]
Extraction metadata.
field_metadata: Optional[ExtractedFieldMetadata]
Metadata for extracted fields including document, page, and row level info.
document_metadata: Optional[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]
page_metadata: Optional[List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]
row_metadata: Optional[List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]
parse_job_id: Optional[str]
Reference to the ParseJob ID used for parsing
parse_tier: Optional[str]
Parse tier used for parsing the document
usage: Optional[ExtractJobUsage]
Extraction usage metrics.
num_document_tokens: Optional[int]
Number of document tokens
num_output_tokens: Optional[int]
Number of output tokens
num_pages_extracted: Optional[int]
Number of pages extracted
extract_result: Optional[Union[Dict[str, Union[Dict[str, object], List[object], str, 3 more]], List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]]
Extracted data (object or array depending on extraction_target)
Dict[str, Union[Dict[str, object], List[object], str, 3 more]]
List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]
next_page_token: Optional[str]
A token, which can be sent as page_token to retrieve the next page. If this field is omitted, there are no subsequent pages.
total_size: Optional[int]
The total number of items available. This is only populated when specifically requested. The value may be an estimate and can be used for display purposes only.
class ExtractV2SchemaGenerateRequest: …
Request schema for generating an extraction schema.
data_schema: Optional[Union[Dict[str, Union[Dict[str, object], List[object], str, 3 more]], str]]
Optional schema to validate, refine, or extend
Dict[str, Union[Dict[str, object], List[object], str, 3 more]]
file_id: Optional[str]
Optional file ID to analyze for schema generation
name: Optional[str]
Name for the generated configuration (auto-generated if omitted)
prompt: Optional[str]
Natural language description of the data structure to extract
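Since every field on this request is optional, it supports both generating a schema from scratch and refining an existing one. Two illustrative bodies (all values hypothetical):

```python
# Generate a schema from a natural-language prompt alone.
generate_from_prompt = {
    "prompt": "Extract the invoice number and line-item totals",
    "name": "invoice-schema",   # optional; auto-generated if omitted
}

# Refine an existing schema, optionally guided by a prompt.
refine_existing = {
    "data_schema": {
        "type": "object",
        "properties": {"total": {"type": "number"}},
    },
    "prompt": "Also capture the currency code",
}
```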
class ExtractV2SchemaValidateRequest: …
Request schema for validating an extraction schema.
data_schema: Union[Dict[str, Union[Dict[str, object], List[object], str, 3 more]], str]
Schema to validate
Dict[str, Union[Dict[str, object], List[object], str, 3 more]]
class ExtractV2SchemaValidateResponse: …
Response schema for schema validation.
data_schema: Dict[str, Union[Dict[str, object], List[object], str, 3 more]]
Validated JSON schema
class ExtractedFieldMetadata: …
Metadata for extracted fields including document, page, and row level info.
document_metadata: Optional[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]
page_metadata: Optional[List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]
row_metadata: Optional[List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]