
Create Extract Job

extract.create(**kwargs: ExtractCreateParams) -> ExtractV2Job
POST/api/v2/extract

Create a new extraction job.

Provide exactly one of configuration_id (saved configuration) or inline config.
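This exactly-one-of constraint can be checked before calling the API. The helper below is an illustrative sketch, not part of the SDK:

```python
# Illustrative validation of the create-job constraint: the request must
# carry exactly one of `configuration_id` (a saved configuration) or an
# inline `config`.
def validate_extract_params(params: dict) -> None:
    has_saved = params.get("configuration_id") is not None
    has_inline = params.get("config") is not None
    if has_saved == has_inline:  # both present, or both absent
        raise ValueError(
            "Provide exactly one of 'configuration_id' or 'config'"
        )

# Passes: a saved configuration is referenced, no inline config.
validate_extract_params(
    {"type": "url", "value": "https://example.com/doc.pdf",
     "configuration_id": "cfg_123"}  # placeholder ID
)
```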

Parameters
type: Literal["url", "file_id", "parse_job_id"]

Type of document input.

Accepts one of the following:
"url"
"file_id"
"parse_job_id"
value: str

Document identifier (URL, file ID, or parse job ID).

organization_id: Optional[str]
project_id: Optional[str]
config: Optional[ExtractConfigurationParam]

Extraction configuration combining parse and extract settings.

extract_options: ExtractOptions

Extract-specific configuration options including the data schema

data_schema: Dict[str, Union[Dict[str, object], List[object], str, 3 more]]

JSON schema used for extraction

Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
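A `data_schema` is an ordinary JSON-schema-style mapping. The schema below is purely illustrative (the field names are made up), sketching what a document-extraction schema might look like:

```python
# A hypothetical JSON schema for `data_schema`, e.g. for invoice extraction.
invoice_schema = {
    "type": "object",
    "properties": {
        "invoice_number": {"type": "string"},
        "total_amount": {"type": "number"},
        "line_items": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "description": {"type": "string"},
                    "amount": {"type": "number"},
                },
            },
        },
    },
    "required": ["invoice_number", "total_amount"],
}
```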
cite_sources: Optional[bool]

Include citations in results

confidence_scores: Optional[bool]

Include confidence scores in results

extract_version: Optional[str]

Extraction algorithm version to use (e.g., '2026-01-08', 'latest')

extraction_target: Optional[Literal["per_doc", "per_page", "per_table_row"]]

Extraction scope: per_doc, per_page, or per_table_row

Accepts one of the following:
"per_doc"
"per_page"
"per_table_row"
system_prompt: Optional[str]

Custom system prompt for extraction

tier: Optional[Literal["cost_effective", "agentic"]]

Extraction tier: cost_effective (10 credits) or agentic (20 credits)

Accepts one of the following:
"cost_effective"
"agentic"
parse_config_id: Optional[str]

Parse config ID used for extraction

parse_tier: Optional[str]

Parse tier to use for extraction (e.g. fast, cost_effective, agentic).

configuration_id: Optional[str]

Saved extract configuration ID (mutually exclusive with config)

webhook_configurations: Optional[Iterable[WebhookConfigurationParam]]

The outbound webhook configurations

webhook_events: Optional[List[Literal["extract.pending", "extract.success", "extract.error", 14 more]]]

List of event names to subscribe to

Accepts one of the following:
"extract.pending"
"extract.success"
"extract.error"
"extract.partial_success"
"extract.cancelled"
"parse.pending"
"parse.running"
"parse.success"
"parse.error"
"parse.partial_success"
"parse.cancelled"
"classify.pending"
"classify.success"
"classify.error"
"classify.partial_success"
"classify.cancelled"
"unmapped_event"
webhook_headers: Optional[Dict[str, str]]

Custom HTTP headers to include with webhook requests.

webhook_output_format: Optional[str]

The output format to use for the webhook. Defaults to string if none supplied. Currently supported values: string, json

webhook_url: Optional[str]

The URL to send webhook notifications to.
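The webhook fields above can be gathered into keyword arguments for `create`. This sketch uses only the fields documented here; the URL and header values are placeholders:

```python
# Hypothetical webhook settings for an extract job, using the documented
# webhook_* parameters. Values are placeholders.
webhook_kwargs = {
    "webhook_url": "https://example.com/hooks/extract",
    "webhook_events": ["extract.success", "extract.error"],
    "webhook_headers": {"X-Signature": "shared-secret"},
    # Output format defaults to "string" if omitted; "json" is also supported.
    "webhook_output_format": "json",
}
```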

Returns
class ExtractV2Job:

An extraction job.

id: str

Unique job identifier (job_id)

created_at: datetime

Creation timestamp

format: date-time
parameters: Dict[str, Union[Dict[str, object], List[object], str, 3 more]]

Job configuration parameters (includes parse_config_id, extract_options)

Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
project_id: str

Project this job belongs to

status: Literal["PENDING", "THROTTLED", "RUNNING", 3 more]

Current status of the job

Accepts one of the following:
"PENDING"
"THROTTLED"
"RUNNING"
"COMPLETED"
"FAILED"
"CANCELLED"
type: Literal["url", "file_id", "parse_job_id"]

Type of document input.

Accepts one of the following:
"url"
"file_id"
"parse_job_id"
updated_at: datetime

Last update timestamp

format: date-time
value: str

Document identifier (URL, file ID, or parse job ID).

configuration_id: Optional[str]

Extract configuration ID (ProductConfiguration) used for this job (if any)

error_message: Optional[str]

Error message if failed

extract_metadata: Optional[ExtractJobMetadata]

Extraction metadata.

field_metadata: Optional[ExtractedFieldMetadata]

Metadata for extracted fields including document, page, and row level info.

document_metadata: Optional[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]
Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
page_metadata: Optional[List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]
Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
row_metadata: Optional[List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]
Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
parse_job_id: Optional[str]

Reference to the ParseJob ID used for parsing

parse_tier: Optional[str]

Parse tier used for parsing the document

usage: Optional[ExtractJobUsage]

Extraction usage metrics.

num_document_tokens: Optional[int]

Number of document tokens

num_output_tokens: Optional[int]

Number of output tokens

num_pages_extracted: Optional[int]

Number of pages extracted

extract_result: Optional[Union[Dict[str, Union[Dict[str, object], List[object], str, 3 more]], List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]]]

Extracted data (object or array depending on extraction_target)

Accepts one of the following:
Dict[str, Union[Dict[str, object], List[object], str, 3 more]]
Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
List[Dict[str, Union[Dict[str, object], List[object], str, 3 more]]]
Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
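Because `extract_result` is a single object for `per_doc` jobs but a list for `per_page` and `per_table_row` jobs (and may be absent entirely), callers often want a uniform shape. A small normalizing helper, illustrative and not part of the SDK:

```python
# Normalize `extract_result` to a list of records regardless of the
# job's extraction_target: None -> [], dict -> [dict], list -> list.
def as_records(extract_result):
    if extract_result is None:
        return []
    if isinstance(extract_result, dict):
        return [extract_result]
    return list(extract_result)
```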

Create Extract Job

import os
from llama_cloud import LlamaCloud

client = LlamaCloud(
    api_key=os.environ.get("LLAMA_CLOUD_API_KEY"),  # This is the default and can be omitted
)
# Provide exactly one of `configuration_id` or an inline `config` (see Parameters).
extract_v2_job = client.extract.create(
    type="url",
    value="value",
)
print(extract_v2_job.id)
Example response:
{
  "id": "id",
  "created_at": "2019-12-27T18:11:19.117Z",
  "parameters": {
    "foo": {
      "foo": "bar"
    }
  },
  "project_id": "project_id",
  "status": "PENDING",
  "type": "url",
  "updated_at": "2019-12-27T18:11:19.117Z",
  "value": "value",
  "configuration_id": "configuration_id",
  "error_message": "error_message",
  "extract_metadata": {
    "field_metadata": {
      "document_metadata": {
        "foo": {
          "foo": "bar"
        }
      },
      "page_metadata": [
        {
          "foo": {
            "foo": "bar"
          }
        }
      ],
      "row_metadata": [
        {
          "foo": {
            "foo": "bar"
          }
        }
      ]
    },
    "parse_job_id": "parse_job_id",
    "parse_tier": "parse_tier",
    "usage": {
      "num_document_tokens": 0,
      "num_output_tokens": 0,
      "num_pages_extracted": 0
    }
  },
  "extract_result": {
    "foo": {
      "foo": "bar"
    }
  }
}