Async jobs
POST /v1/jobs — queue a conversion and poll (or webhook) for the result.
POST /v1/jobs always queues a conversion regardless of file size. Use this when you need predictable async behaviour — for example, when running conversions inside a queue worker that can't block on a multi-minute HTTP request.
When to use it #
- Files larger than 100 MB
- Long-running conversions (large videos, big PDF batches)
- You want a webhook callback instead of polling
- Queue workers that need to release the request thread immediately
Request #
POST /v1/jobs HTTP/1.1
Host: changethisfile.com
Authorization: Bearer ctf_sk_...
multipart/form-data with the same fields as /v1/convert,
plus an optional `webhook_url` field.

| Field | Type | Required | Description |
|---|---|---|---|
| file | binary | yes | The input file. |
| target | string | yes | Target format extension. |
| source | string | no | Source format extension. Auto-detected from the filename and (if needed) magic bytes. |
| webhook_url | string (URL) | no | We'll POST a signed event here when the job finishes. |
Response — 202 Accepted #
{
"job_id": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
"status": "queued",
"status_url": "https://changethisfile.com/v1/jobs/f47ac10b-58cc-4372-a567-0e02b2c3d479"
}

The Location header echoes the status_url for HTTP-friendly clients.
Polling #
GET /v1/jobs/{job_id}
Authorization: Bearer ctf_sk_...

Returns the current state. While the job is in flight, status is queued or processing. When it terminates, status is completed or failed.
{
"job_id": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
"status": "completed",
"created_at": "2026-04-25T01:23:45Z",
"updated_at": "2026-04-25T01:24:12Z",
"source_format": "mov",
"target_format": "mp4",
"result": {
"file_url": "https://r2.cloudflare.com/...signed...",
"size": 47281824,
"duration_ms": 27214,
"source_format": "mov",
"target_format": "mp4"
}
}

Signed download URLs are valid for 24 hours. Job records are retained for 7 days, after which GET /v1/jobs/{id} returns 404.
Polling cadence #
- Start polling at 2-second intervals for the first 30 seconds.
- Back off exponentially after that (2s → 4s → 8s → 15s → 30s).
- Set a hard cap of 15 minutes for very large videos / ebooks.
- Or — better — pass `webhook_url` and skip polling entirely.
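The cadence above can be sketched in Python. This is an illustrative helper, not part of the API's client libraries; the key is a placeholder:

```python
import time

import requests

API = "https://changethisfile.com/v1"
HEADERS = {"Authorization": "Bearer ctf_sk_..."}


def wait_for_job(job_id, hard_cap_s=900):
    """Poll every 2s for the first 30s, then back off 4s -> 8s -> 15s -> 30s."""
    delays = [2] * 15 + [4, 8, 15] + [30] * 1000
    deadline = time.monotonic() + hard_cap_s
    for delay in delays:
        job = requests.get(f"{API}/jobs/{job_id}", headers=HEADERS).json()
        if job["status"] in ("completed", "failed"):
            return job
        if time.monotonic() + delay > deadline:
            raise TimeoutError(f"job {job_id} still {job['status']} after {hard_cap_s}s")
        time.sleep(delay)
```

The hard cap is enforced against a monotonic clock rather than a poll count, so the helper behaves the same even if individual requests are slow.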
Listing jobs #
GET /v1/jobs?limit=20&cursor=2026-04-24T00%3A00%3A00Z
Authorization: Bearer ctf_sk_...

Returns the most recent limit jobs (default 20, max 100). next_cursor is an ISO created_at string — pass it in the next call's cursor= parameter to page backwards.
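A cursor loop over this endpoint can be sketched as follows. The data, has_more, and next_cursor fields are the ones the endpoint returns; the key is a placeholder:

```python
import requests

API = "https://changethisfile.com/v1"
HEADERS = {"Authorization": "Bearer ctf_sk_..."}


def iter_jobs(limit=100):
    """Yield every retained job record, paging backwards via next_cursor."""
    cursor = None
    while True:
        params = {"limit": limit}
        if cursor:
            params["cursor"] = cursor
        page = requests.get(f"{API}/jobs", headers=HEADERS, params=params).json()
        yield from page["data"]
        if not page["has_more"]:
            break
        cursor = page["next_cursor"]
```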
{
"data": [...],
"has_more": true,
"next_cursor": "2026-04-24T13:42:00Z"
}

Webhook callbacks #
When you supply webhook_url, the server POSTs an event to that URL when the job terminates (success or failure):
POST /your-webhook HTTP/1.1
Content-Type: application/json
User-Agent: ChangeThisFile-Webhook/1.0
X-CTF-Event: job.completed
X-CTF-Signature: t=1735689600,v1=ab12cd…
{
"event": "job.completed",
"data": {
"job_id": "f47ac10b-…",
"status": "completed",
"result": { "file_url": "https://r2…", "size": 47281824, "duration_ms": 27214 }
}
}

Verify the signature on every delivery — see Webhooks.
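The exact signing scheme is specified on the Webhooks page. Assuming the common construction that the t=...,v1=... header format suggests — HMAC-SHA256 over `{timestamp}.{raw_body}` with your webhook secret — verification might look like this sketch (confirm the scheme before relying on it):

```python
import hashlib
import hmac
import time


def verify_signature(header, body, secret, tolerance_s=300):
    """Check an X-CTF-Signature header against the raw request body.

    Assumes v1 = HMAC-SHA256(secret, f"{t}.{body}") -- an assumption to
    confirm against the Webhooks documentation.
    """
    parts = dict(p.split("=", 1) for p in header.split(","))
    ts, sig = parts["t"], parts["v1"]
    if abs(time.time() - int(ts)) > tolerance_s:
        return False  # stale timestamp; possible replay
    expected = hmac.new(secret.encode(), f"{ts}.{body}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

Always compute the HMAC over the raw request bytes, not a re-serialized JSON object, and use a constant-time comparison as above.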
End-to-end examples #
cURL:

# 1. Submit (source auto-detected from .mov filename)
JOB=$(curl -s -X POST https://changethisfile.com/v1/jobs \
-H "Authorization: Bearer ctf_sk_..." \
-F "file=@big-video.mov" \
-F "target=mp4" \
-F "webhook_url=https://example.com/webhooks/ctf")
JOB_ID=$(echo "$JOB" | jq -r .job_id)
# 2. Poll
while true; do
STATE=$(curl -s "https://changethisfile.com/v1/jobs/$JOB_ID" \
-H "Authorization: Bearer ctf_sk_...")
STATUS=$(echo "$STATE" | jq -r .status)
[ "$STATUS" = "completed" ] || [ "$STATUS" = "failed" ] && break
sleep 2
done
# 3. Download
URL=$(echo "$STATE" | jq -r .result.file_url)
curl -o out.mp4 "$URL"
Python:

import requests, time
KEY = "ctf_sk_..."
H = {"Authorization": f"Bearer {KEY}"}
# 1. Submit
with open("big-video.mov", "rb") as f:
job = requests.post(
"https://changethisfile.com/v1/jobs",
headers=H,
files={"file": f},
data={"target": "mp4", "webhook_url": "https://example.com/webhooks/ctf"},
).json()
# 2. Poll
while job["status"] in ("queued", "processing"):
time.sleep(2)
job = requests.get(f"https://changethisfile.com/v1/jobs/{job['job_id']}", headers=H).json()
# 3. Download
if job["status"] == "completed":
open("out.mp4", "wb").write(requests.get(job["result"]["file_url"]).content)
else:
print("failed:", job["error"])
Node.js:

import { readFileSync, writeFileSync } from 'node:fs';
const KEY = 'ctf_sk_...';
const H = { Authorization: `Bearer ${KEY}` };
// 1. Submit
const form = new FormData();
form.append('file', new Blob([readFileSync('big-video.mov')]), 'big-video.mov');
form.append('target', 'mp4');
form.append('webhook_url', 'https://example.com/webhooks/ctf');
let job = await fetch('https://changethisfile.com/v1/jobs', {
method: 'POST', headers: H, body: form,
}).then(r => r.json());
// 2. Poll
while (job.status === 'queued' || job.status === 'processing') {
await new Promise(r => setTimeout(r, 2000));
job = await fetch(`https://changethisfile.com/v1/jobs/${job.job_id}`, { headers: H })
.then(r => r.json());
}
// 3. Download
if (job.status === 'completed') {
const buf = await fetch(job.result.file_url).then(r => r.arrayBuffer());
writeFileSync('out.mp4', new Uint8Array(buf));
} else {
console.error('failed:', job.error);
}
Failure modes #
A job that exits in status: "failed" exposes an error block with the same envelope as inline errors:
{
"job_id": "...",
"status": "failed",
"error": {
"code": "conversion_failed",
"message": "FFmpeg exited with code 1",
"details": { "stderr": "Invalid data found when processing input" }
}
}

The conversion server runs FFmpeg with a 120s timeout, LibreOffice with 120s, and Calibre with 180s. Jobs that exceed these limits surface as conversion_failed with a timeout detail.
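Callers can turn this envelope into a typed error at the point where a terminal job comes back. A minimal sketch, using only the fields shown above (ConversionError and result_or_raise are hypothetical helper names):

```python
class ConversionError(Exception):
    """Raised for a job that terminated with status "failed"."""

    def __init__(self, job):
        err = job.get("error", {})
        self.code = err.get("code", "unknown")
        self.details = err.get("details", {})
        super().__init__(f"{self.code}: {err.get('message', '')}")


def result_or_raise(job):
    """Return the result block of a terminal job, or raise ConversionError."""
    if job["status"] == "completed":
        return job["result"]
    raise ConversionError(job)
```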