Predictions
Understanding async generation tasks — submit, poll, and retrieve results
What are Predictions?
When you submit an image or video generation request to Atlas Cloud, the task doesn't complete immediately. Instead, you receive a prediction ID that you can use to track the task's progress and retrieve the result when it's ready.
This asynchronous pattern is used for all non-LLM generation tasks (images, videos, etc.) because these tasks can take anywhere from a few seconds to several minutes to complete.
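The pattern is the same for every generation endpoint: one POST to start the task, then repeated GETs until a terminal status appears. A minimal sketch of that flow, assuming the endpoint paths shown in the sections below (`submit_and_wait` and its parameter defaults are illustrative, not part of the API):

```python
import time
import requests

API_BASE = "https://api.atlascloud.ai/api/v1/model"

def submit_and_wait(endpoint, payload, api_key, interval=5, timeout=300):
    """Submit a generation task, then poll until it completes or fails.

    Illustrative helper: endpoint is e.g. "generateImage" or "generateVideo".
    """
    headers = {"Authorization": f"Bearer {api_key}"}
    submitted = requests.post(
        f"{API_BASE}/{endpoint}", headers=headers, json=payload
    ).json()
    prediction_id = submitted["predictionId"]

    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = requests.get(
            f"{API_BASE}/getResult",
            headers=headers,
            params={"predictionId": prediction_id},
        ).json()
        if result["status"] == "completed":
            return result["output"]
        if result["status"] == "failed":
            raise RuntimeError(result.get("error"))
        time.sleep(interval)
    raise TimeoutError(f"{prediction_id} did not finish in {timeout}s")
```

The sections below break this down step by step.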
Prediction Lifecycle
┌─────────┐ ┌────────────┐ ┌───────────┐
│ Submit │ ──→ │ Processing │ ──→ │ Completed │
│ Task │ │ │ │ │
└─────────┘ └────────────┘ └───────────┘
│
▼
┌───────────┐
│ Failed │
└───────────┘

Status values:
- processing — The task is being processed by the model
- completed — Generation is done, output is available
- failed — Generation failed, error details are available
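Since processing is the only non-terminal status, a poller can treat anything else as a stopping condition. A tiny helper (the names here are illustrative, not part of the API):

```python
# The two statuses at which polling can stop.
TERMINAL_STATUSES = {"completed", "failed"}

def is_terminal(status):
    """True once the task has either succeeded or failed."""
    return status in TERMINAL_STATUSES
```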
Submit a Task
Image Generation
import requests
response = requests.post(
    "https://api.atlascloud.ai/api/v1/model/generateImage",
    headers={
        "Authorization": "Bearer your-api-key",
        "Content-Type": "application/json"
    },
    json={
        "model": "seedream-3.0",
        "prompt": "A beautiful mountain landscape at golden hour"
    }
)

data = response.json()
prediction_id = data.get("predictionId")
print(f"Task submitted: {prediction_id}")

Video Generation
response = requests.post(
    "https://api.atlascloud.ai/api/v1/model/generateVideo",
    headers={
        "Authorization": "Bearer your-api-key",
        "Content-Type": "application/json"
    },
    json={
        "model": "kling-v2.0",
        "prompt": "Ocean waves crashing on a rocky shore at sunset"
    }
)

data = response.json()
prediction_id = data.get("predictionId")

Poll for Results
Use the prediction ID to check the task status and retrieve the output:
import requests
import time
def wait_for_result(prediction_id, api_key, interval=5, timeout=300):
    """Poll for a generation result, raising on failure or timeout."""
    elapsed = 0
    while elapsed < timeout:
        response = requests.get(
            f"https://api.atlascloud.ai/api/v1/model/getResult?predictionId={prediction_id}",
            headers={"Authorization": f"Bearer {api_key}"}
        )
        result = response.json()
        status = result.get("status")
        if status == "completed":
            return result.get("output")
        elif status == "failed":
            raise Exception(f"Generation failed: {result.get('error')}")
        print(f"Status: {status} ({elapsed}s elapsed)")
        time.sleep(interval)
        elapsed += interval
    raise TimeoutError(f"Task did not complete within {timeout}s")

# Usage
output = wait_for_result("your-prediction-id", "your-api-key")
print(f"Result: {output}")

Node.js Example
async function waitForResult(predictionId, apiKey, interval = 5000, timeout = 300000) {
  const startTime = Date.now();
  while (Date.now() - startTime < timeout) {
    const response = await fetch(
      `https://api.atlascloud.ai/api/v1/model/getResult?predictionId=${predictionId}`,
      { headers: { Authorization: `Bearer ${apiKey}` } }
    );
    const result = await response.json();
    if (result.status === "completed") return result.output;
    if (result.status === "failed") throw new Error(`Failed: ${result.error}`);
    console.log(`Status: ${result.status}`);
    await new Promise((r) => setTimeout(r, interval));
  }
  throw new Error("Timeout");
}

const output = await waitForResult("your-prediction-id", "your-api-key");
console.log(`Result: ${output}`);

cURL Example
curl "https://api.atlascloud.ai/api/v1/model/getResult?predictionId=your-prediction-id" \
  -H "Authorization: Bearer your-api-key"

Polling Best Practices
- Match the interval to the task: Use 5-second intervals for video generation and 2-second intervals for image generation
- Set a timeout: Always set a maximum wait time to avoid infinite polling
- Handle failures gracefully: Check for the failed status and handle errors appropriately
- Log progress: Print status updates so users know the task is still running
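One way to apply the first two practices together is geometric backoff with a cap, so early polls respond quickly while a long-running video task isn't hit with constant requests. A sketch, with illustrative constants:

```python
def backoff_intervals(initial=2.0, factor=1.5, cap=15.0, max_total=300.0):
    """Yield polling delays that grow geometrically up to a cap,
    stopping once the cumulative wait would exceed max_total
    (which doubles as the overall timeout)."""
    delay, total = initial, 0.0
    while total + delay <= max_total:
        yield delay
        total += delay
        delay = min(delay * factor, cap)
```

A poller would then `time.sleep(d)` for each yielded `d` and treat generator exhaustion as a timeout.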
Typical Generation Times
| Task Type | Typical Time |
|---|---|
| Image generation | 2–10 seconds |
| Video generation | 30 seconds – 3 minutes |
| Image-to-video | 30 seconds – 3 minutes |
| Image tools (upscale, etc.) | 5–15 seconds |
Actual times vary depending on the model, parameters (resolution, duration), and current load.
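If you want to turn the table above into per-task polling defaults, one possible mapping looks like this (the numbers are suggestions derived from the typical times, not API requirements):

```python
# (poll interval seconds, timeout seconds) per task type — illustrative.
POLLING_DEFAULTS = {
    "image": (2, 60),
    "video": (5, 300),
    "image-to-video": (5, 300),
    "image-tool": (2, 60),
}

def polling_params(task_type):
    """Fall back to the most conservative pair for unknown task types."""
    return POLLING_DEFAULTS.get(task_type, (5, 300))
```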
Error Handling
When a prediction fails, the result includes an error message:
{
    "status": "failed",
    "error": "Invalid parameter: resolution not supported by this model"
}

Common failure reasons:
- Invalid model parameters
- Input image URL is inaccessible
- Prompt contains disallowed content
- Insufficient account balance
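Of the failure reasons above, only some are worth retrying: bad parameters and disallowed prompts will fail identically on resubmit, while an inaccessible input URL or a low balance may be fixable before trying again. A hedged sketch of that triage (the substring checks are illustrative, not the API's official error taxonomy):

```python
# Substrings that suggest a permanent failure — resubmitting the same
# request unchanged would fail again. Illustrative markers only.
PERMANENT_MARKERS = ("invalid parameter", "disallowed content")

def is_permanent_failure(error_message):
    """Heuristic: True when retrying the identical request is pointless."""
    msg = (error_message or "").lower()
    return any(marker in msg for marker in PERMANENT_MARKERS)
```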
See the API Reference for full error code documentation.