List Extract Jobs

extract.list(**kwargs: ExtractListParams) -> SyncPaginatedCursor[ExtractV2Job]
GET /api/v2/extract

List extraction jobs with optional filtering and pagination.

Parameters
configuration_id: Optional[str]

Filter by configuration ID

document_input_type: Optional[str]

Filter by document input type (file_id or parse_job_id)

document_input_value: Optional[str]

Filter by document input value

organization_id: Optional[str]
page_size: Optional[int]

Number of items per page

page_token: Optional[str]

Token for pagination

project_id: Optional[str]
status: Optional[Literal["PENDING", "THROTTLED", "RUNNING", "COMPLETED", "FAILED", "CANCELLED"]]

Filter by status

Accepts one of the following:
"PENDING"
"THROTTLED"
"RUNNING"
"COMPLETED"
"FAILED"
"CANCELLED"
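The page_size / page_token parameters above describe a standard cursor-pagination contract: pass the next_page_token from one response as the page_token of the next request until no token is returned. A minimal sketch of that loop, with fetch_page standing in for a call to GET /api/v2/extract (the PAGES data and helper names are hypothetical, not part of the SDK):

```python
from typing import Optional

# Canned responses standing in for the API; keyed by page_token.
PAGES = {
    None: {"items": ["job-1", "job-2"], "next_page_token": "t2"},
    "t2": {"items": ["job-3"], "next_page_token": None},
}

def fetch_page(page_token: Optional[str], page_size: int = 2) -> dict:
    # Stand-in for client.extract.list(page_token=..., page_size=...)
    return PAGES[page_token]

def list_all_jobs() -> list:
    jobs, token = [], None
    while True:
        page = fetch_page(token)
        jobs.extend(page["items"])
        token = page["next_page_token"]
        if token is None:  # no token means this was the last page
            return jobs

print(list_all_jobs())
```

In practice the SyncPaginatedCursor returned by extract.list handles this loop for you; the sketch only shows the token-passing shape.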
Returns
class ExtractV2Job:

An extraction job.

id: str

Unique job identifier (job_id)

created_at: datetime

Creation timestamp

Format: date-time
parameters: Dict[str, Union[Dict[str, object], List[object], str, float, bool, None]]

Job configuration parameters (includes parse_config_id, extract_options)

Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
None
project_id: str

Project this job belongs to

status: Literal["PENDING", "THROTTLED", "RUNNING", "COMPLETED", "FAILED", "CANCELLED"]

Current status of the job

Accepts one of the following:
"PENDING"
"THROTTLED"
"RUNNING"
"COMPLETED"
"FAILED"
"CANCELLED"
type: Literal["url", "file_id", "parse_job_id"]

Type of document input.

Accepts one of the following:
"url"
"file_id"
"parse_job_id"
updated_at: datetime

Last update timestamp

Format: date-time
value: str

Document identifier (URL, file ID, or parse job ID).

configuration_id: Optional[str]

Extract configuration ID (ProductConfiguration) used for this job (if any)

error_message: Optional[str]

Error message if failed

extract_metadata: Optional[ExtractJobMetadata]

Extraction metadata.

field_metadata: Optional[ExtractedFieldMetadata]

Metadata for extracted fields including document, page, and row level info.

document_metadata: Optional[Dict[str, Union[Dict[str, object], List[object], str, float, bool, None]]]
Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
None
page_metadata: Optional[List[Dict[str, Union[Dict[str, object], List[object], str, float, bool, None]]]]
Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
None
row_metadata: Optional[List[Dict[str, Union[Dict[str, object], List[object], str, float, bool, None]]]]
Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
None
parse_job_id: Optional[str]

Reference to the ParseJob ID used for parsing

parse_tier: Optional[str]

Parse tier used for parsing the document

usage: Optional[ExtractJobUsage]

Extraction usage metrics.

num_document_tokens: Optional[int]

Number of document tokens

num_output_tokens: Optional[int]

Number of output tokens

num_pages_extracted: Optional[int]

Number of pages extracted

extract_result: Optional[Union[Dict[str, Union[Dict[str, object], List[object], str, float, bool, None]], List[Dict[str, Union[Dict[str, object], List[object], str, float, bool, None]]]]]

Extracted data (object or array depending on extraction_target)

Accepts one of the following:
Dict[str, Union[Dict[str, object], List[object], str, float, bool, None]]
Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
None
List[Dict[str, Union[Dict[str, object], List[object], str, float, bool, None]]]
Accepts one of the following:
Dict[str, object]
List[object]
str
float
bool
None
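Because extract_result can be a single object, an array of objects, or absent, downstream code often wants a uniform shape. A small normalizer, sketched here as a hypothetical helper (not part of the SDK), makes every result a list of record dicts:

```python
from typing import Optional, Union

def normalize_result(result: Optional[Union[dict, list]]) -> list:
    """Return extract_result as a list of record dicts.

    - None (job not finished or failed) -> empty list
    - single object (extraction_target produced one record) -> one-item list
    - array of objects -> list as-is
    """
    if result is None:
        return []
    if isinstance(result, dict):
        return [result]
    return list(result)
```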

List Extract Jobs

import os
from llama_cloud import LlamaCloud

client = LlamaCloud(
    api_key=os.environ.get("LLAMA_CLOUD_API_KEY"),  # This is the default and can be omitted
)
page = client.extract.list()
job = page.items[0]
print(job.id)
Example response:

{
  "items": [
    {
      "id": "id",
      "created_at": "2019-12-27T18:11:19.117Z",
      "parameters": {
        "foo": {
          "foo": "bar"
        }
      },
      "project_id": "project_id",
      "status": "PENDING",
      "type": "url",
      "updated_at": "2019-12-27T18:11:19.117Z",
      "value": "value",
      "configuration_id": "configuration_id",
      "error_message": "error_message",
      "extract_metadata": {
        "field_metadata": {
          "document_metadata": {
            "foo": {
              "foo": "bar"
            }
          },
          "page_metadata": [
            {
              "foo": {
                "foo": "bar"
              }
            }
          ],
          "row_metadata": [
            {
              "foo": {
                "foo": "bar"
              }
            }
          ]
        },
        "parse_job_id": "parse_job_id",
        "parse_tier": "parse_tier",
        "usage": {
          "num_document_tokens": 0,
          "num_output_tokens": 0,
          "num_pages_extracted": 0
        }
      },
      "extract_result": {
        "foo": {
          "foo": "bar"
        }
      }
    }
  ],
  "next_page_token": "next_page_token",
  "total_size": 0
}
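The usage block in the response above (num_document_tokens, num_output_tokens, num_pages_extracted, all optional) can be aggregated across a page of jobs to track consumption. A sketch over plain response dicts, assuming the field names shown in ExtractJobUsage; total_usage is a hypothetical helper, not an SDK method:

```python
def total_usage(jobs: list) -> dict:
    """Sum ExtractJobUsage counters across job dicts, treating
    missing metadata, usage blocks, or counters as zero."""
    totals = {
        "num_document_tokens": 0,
        "num_output_tokens": 0,
        "num_pages_extracted": 0,
    }
    for job in jobs:
        usage = (job.get("extract_metadata") or {}).get("usage") or {}
        for key in totals:
            totals[key] += usage.get(key) or 0
    return totals
```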