Overview

Every Aifano endpoint has an async variant (/parse_async, /extract_async, /split_async, /edit_async, /pipeline_async). Async endpoints return a job_id immediately, and you poll for results or receive them via webhook. Use async processing when:
  • Documents are large (50+ pages)
  • You’re processing batches of documents
  • You don’t need results immediately
  • You want to avoid request timeouts

Submitting an Async Job

Append _async to any endpoint path:
curl -X POST "https://platform.aifano.com/parse_async" \
  -H "Authorization: Bearer $AIFANO_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"input": "aifano://large-report.pdf"}'

Response

{
  "job_id": "job_a1b2c3d4e5f6"
}

Polling for Results

Use GET /job/{job_id} to check the status:
curl "https://platform.aifano.com/job/job_a1b2c3d4e5f6" \
  -H "Authorization: Bearer $AIFANO_API_KEY"
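The polling call above is typically wrapped in a loop that waits between attempts and stops on a terminal status. Here is a minimal sketch; the `fetch_status` callable and the `poll_job` name are ours (not part of the Aifano API) and stand in for the HTTP GET so the loop itself is clear without network code:

```python
import time

def poll_job(fetch_status, interval=2.0, max_attempts=150):
    """Poll until the job reaches a terminal status or the attempt budget runs out.

    fetch_status: a zero-argument callable that performs GET /job/{job_id}
    and returns the parsed JSON job object (an assumption for this sketch).
    """
    for _ in range(max_attempts):
        job = fetch_status()
        if job["status"] in ("COMPLETED", "FAILED"):
            return job
        time.sleep(interval)  # back off between polls (2-5 s recommended)
    raise TimeoutError("job did not reach a terminal status in time")
```

With the default settings this gives up after roughly five minutes (150 attempts at 2 seconds each).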

Job Statuses

Status      Description
PENDING     Job is queued and waiting to be processed
RUNNING     Job is currently being processed
COMPLETED   Job finished successfully — results are available
FAILED      Job failed — check the error field for details

Completed Job Response

{
  "job_id": "job_a1b2c3d4e5f6",
  "status": "COMPLETED",
  "result": {
    "type": "full",
    "chunks": [...]
  }
}
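Once a job is terminal, the caller should branch on the status before touching the result. A small sketch of that handling, assuming the job object has the shape shown above (the `job_chunks` helper name is ours):

```python
def job_chunks(job: dict) -> list:
    """Return the parsed chunks of a terminal job, or raise if it failed."""
    if job["status"] == "FAILED":
        # Surface the error field so callers can log it or retry.
        raise RuntimeError(
            f"job {job['job_id']} failed: {job.get('error', 'unknown error')}"
        )
    return job["result"]["chunks"]
```

Raising on FAILED keeps retry logic in one place instead of scattering status checks through the caller.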

Cancelling a Job

Cancel a PENDING or RUNNING job:
curl -X POST "https://platform.aifano.com/cancel/job_a1b2c3d4e5f6" \
  -H "Authorization: Bearer $AIFANO_API_KEY"

Async Endpoints

Sync Endpoint     Async Endpoint          Description
POST /parse       POST /parse_async       Document parsing
POST /extract     POST /extract_async     Data extraction
POST /split       POST /split_async       Document splitting
POST /edit        POST /edit_async        Document editing
POST /pipeline    POST /pipeline_async    Pipeline execution

Best Practices

  • Poll every 2–5 seconds. Avoid polling more frequently than once per second.
  • Set a maximum number of polling attempts (e.g., 150 attempts × 2s = 5 minutes) to avoid infinite loops.
  • Always check for FAILED status and inspect the error field. Implement retry logic for transient errors.
  • Submit all jobs first, then poll for results. This maximizes throughput and avoids sequential bottlenecks.
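The submit-all-then-poll pattern can be sketched as below. The `submit` and `fetch_status` callables stand in for POST /{endpoint}_async and GET /job/{job_id} (they, and the `process_batch` name, are assumptions of this sketch, not part of the API):

```python
import time

def process_batch(inputs, submit, fetch_status, interval=2.0, max_attempts=150):
    """Submit every document first, then poll each job to completion."""
    # Phase 1: submit everything up front so all jobs run concurrently
    # on the server side instead of one at a time.
    job_ids = [submit(doc) for doc in inputs]

    # Phase 2: poll each job until it reaches a terminal status.
    results = {}
    for job_id in job_ids:
        for _ in range(max_attempts):
            job = fetch_status(job_id)
            if job["status"] in ("COMPLETED", "FAILED"):
                results[job_id] = job
                break
            time.sleep(interval)
        else:
            # Attempt budget exhausted; record a local marker status.
            results[job_id] = {"status": "TIMED_OUT"}
    return results
```

Because the slow step (server-side processing) overlaps across all documents, total wall-clock time approaches that of the slowest job rather than the sum of all jobs.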