Retrieve an Edit
Poll for the status and result of an edit job.
The Python SDK handles this automatically. Use process() — it submits the job, polls until completion, and returns the result. You don't need to call get_edit() directly.
```python
result = client.process("https://example.com/episode.mp3", fillers=True)
result.audio.download("cleaned.mp3")
```

Legacy: manual polling with get_edit()

Use this only if you submitted a job via create_edit() and need to check its status separately — for example in a background worker or webhook handler.

```python
edit = client.get_edit("edit_abc123")
print(edit.status)  # PENDING, PREPROCESSING, CLASSIFICATION, EDITING, POSTPROCESSING, EXPORT, SUCCESS, FAILURE, RETRY
if edit.status == "SUCCESS":
    print(edit.result.download_url)
```

The JavaScript SDK handles this automatically. Use process() — it submits the job, polls until completion, and resolves with the result. You don't need to call getEdit() directly.
```javascript
const result = await client.process('https://example.com/episode.mp3', { fillers: true });
console.log(result.audio.url);
```

Legacy: manual polling with getEdit()

Use this only if you submitted a job via createEdit() and need to check its status separately.

```javascript
const edit = await client.getEdit('edit_abc123');
console.log(edit.status); // PENDING, PREPROCESSING, CLASSIFICATION, EDITING, POSTPROCESSING, EXPORT, SUCCESS, FAILURE, RETRY
if (edit.status === 'SUCCESS' && edit.result && 'download_url' in edit.result) {
  console.log(edit.result.download_url);
}
```

getEdit() / get_edit() returns the raw API polling payload with fields like download_url, transcription, summarization, and social_content. The higher-level process() helpers reshape that into SDK-friendly fields like result.audio.url and result.transcript.
In a recent live check with social_content=true, the response contained a populated social_content object (newsletter, twitter_thread, and linkedin) and stayed in POSTPROCESSING for several polls before reaching SUCCESS.
Endpoint
GET /v2/edits/{edit_id}

Poll this endpoint after submitting a job until status is SUCCESS or FAILURE.

```bash
curl https://api.cleanvoice.ai/v2/edits/edit_abc123 \
  -H "X-API-Key: $CLEANVOICE_API_KEY"
```

Response
```json
{
  "task_id": "edit_abc123",
  "status": "SUCCESS",
  "result": {
    "download_url": "https://storage.cleanvoice.ai/cleaned/episode.mp3",
    "transcription": null,
    "summarization": null,
    "social_content": []
  }
}
```

Real successful responses are usually larger than this. In live API calls, result.transcription contains paragraph and word timing data, result.summarization contains title/summary/chapter fields, and result.social_content is an object with newsletter, twitter_thread, and linkedin when social content generation is enabled. See the REST retrieve page for trimmed, anonymized examples of those full payloads.
Status values
| Status | Description |
|---|---|
| PENDING | Job is queued |
| STARTED | Processing in progress |
| PREPROCESSING | Input analysis and setup is in progress |
| CLASSIFICATION | Cleanvoice is classifying events in the media |
| EDITING | The main edit pass is running |
| POSTPROCESSING | Final cleanup and result assembly is running |
| EXPORT | Output files are being written |
| SUCCESS | Done; result is available |
| FAILURE | Processing failed |
| RETRY | Temporary failure, retrying automatically |
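Only SUCCESS and FAILURE end a polling loop; RETRY and every intermediate stage mean the job is still in flight. A small sketch of that distinction (is_terminal is a hypothetical helper, not an SDK function):

```python
# Terminal states per the table above; anything else means "poll again".
TERMINAL_STATUSES = {"SUCCESS", "FAILURE"}


def is_terminal(status: str) -> bool:
    """True when polling can stop for this status value."""
    return status in TERMINAL_STATUSES
```

Note that RETRY is deliberately non-terminal: the API retries automatically, so clients should keep polling rather than resubmit the job.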
Polling loop
```bash
while true; do
  RESPONSE=$(curl -s https://api.cleanvoice.ai/v2/edits/edit_abc123 \
    -H "X-API-Key: $CLEANVOICE_API_KEY")
  STATUS=$(echo "$RESPONSE" | jq -r '.status')
  echo "Status: $STATUS"
  if [ "$STATUS" = "SUCCESS" ] || [ "$STATUS" = "FAILURE" ]; then
    echo "$RESPONSE"
    break
  fi
  sleep 10
done
```

Start polling after ~30 seconds, then poll every 10 seconds. Processing typically takes ~30 seconds for short clips and 5–10 minutes for hour-long files.
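The same loop can be written in Python if you are not using the SDK's process() helper. This is a minimal sketch, not an SDK function: poll_edit and its fetch callback are hypothetical names, and the default delays mirror the timing guidance above. Injecting fetch (anything that returns the decoded JSON for GET /v2/edits/{edit_id}) keeps the loop testable.

```python
import time


def poll_edit(fetch, edit_id, interval=10.0, initial_delay=30.0,
              timeout=900.0, sleep=time.sleep):
    """Poll until the edit reaches a terminal status (SUCCESS or FAILURE).

    fetch(edit_id) must return the decoded JSON polling payload.
    Raises TimeoutError if no terminal status arrives within `timeout`.
    """
    sleep(initial_delay)  # jobs rarely finish before ~30 s
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        payload = fetch(edit_id)
        if payload.get("status") in ("SUCCESS", "FAILURE"):
            return payload
        sleep(interval)
    raise TimeoutError(f"edit {edit_id} did not finish within {timeout}s")


# Example with a stubbed fetch that finishes on the third poll.
responses = iter([
    {"status": "PENDING"},
    {"status": "EDITING"},
    {"status": "SUCCESS", "result": {"download_url": "https://example.com/out.mp3"}},
])
final = poll_edit(lambda _id: next(responses), "edit_abc123", sleep=lambda s: None)
```

In real use, fetch would wrap an HTTP GET with the X-API-Key header shown in the curl example; a webhook is still preferable to long polling for hour-long files.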