Open and Advanced Large-Scale Video Generative Models.

Each run costs $0.04. With $10 you can run the model approximately 250 times.
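The figure above can be sanity-checked in a couple of lines (the $0.04 per run is the only input, taken from the pricing note):

```python
# Estimate how many runs a budget covers at the quoted per-run price.
cost_per_run = 0.04  # USD per execution, per the pricing note above
budget = 10.00       # USD

runs = round(budget / cost_per_run)
print(runs)  # 250
```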
You can continue with:
```python
import os
import time

import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]

# Step 1: Start video generation
generate_url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}",
}
data = {
    "model": "atlascloud/wan-2.2/image-to-video-lora",
    "prompt": "A beautiful sunset over the ocean with gentle waves",
    "width": 512,
    "height": 512,
    "duration": 3,
    "fps": 24,
}
generate_response = requests.post(generate_url, headers=headers, json=data)
generate_result = generate_response.json()
prediction_id = generate_result["data"]["id"]

# Step 2: Poll for the result
poll_url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"

def check_status():
    while True:
        response = requests.get(poll_url, headers={"Authorization": f"Bearer {API_KEY}"})
        result = response.json()
        status = result["data"]["status"]
        if status in ["completed", "succeeded"]:
            print("Generated video:", result["data"]["outputs"][0])
            return result["data"]["outputs"][0]
        elif status == "failed":
            raise Exception(result["data"]["error"] or "Generation failed")
        else:
            # Still processing; wait 2 seconds before the next check
            time.sleep(2)

video_url = check_status()
```

Install the required package for your programming language.
```shell
pip install requests
```

All API requests require authentication via an API key. You can obtain your API key from the Atlas Cloud dashboard.
```shell
export ATLASCLOUD_API_KEY="your-api-key-here"
```

```python
import os

API_KEY = os.environ.get("ATLASCLOUD_API_KEY")
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}
```

Never expose your API key in client-side code or public repositories. Use environment variables or a backend proxy.
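As one sketch of the backend-proxy option mentioned above (standard library only; the `/generate` route and the port are arbitrary choices, not part of the Atlas Cloud API):

```python
import os
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://api.atlascloud.ai/api/v1/model/generateVideo"

class ProxyHandler(BaseHTTPRequestHandler):
    """Relays POST requests to Atlas Cloud, attaching the API key server-side."""

    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        req = urllib.request.Request(
            UPSTREAM,
            data=body,
            headers={
                "Content-Type": "application/json",
                # The key lives only on the server, never in client code.
                "Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}",
            },
        )
        with urllib.request.urlopen(req) as upstream:
            payload = upstream.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), ProxyHandler).serve_forever()
```

Browser code then posts to your proxy instead of to Atlas Cloud directly, so the key never ships to the client.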
```python
import os

import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}
data = {
    "model": "your-model",
    "prompt": "A beautiful landscape"
}
response = requests.post(url, headers=headers, json=data)
print(response.json())
```

Send an asynchronous generation request. The API returns a prediction ID that you can use to check the status and fetch the result.
/api/v1/model/generateVideo

```python
import os

import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}
data = {
    "model": "atlascloud/wan-2.2/image-to-video-lora",
    "input": {
        "prompt": "A beautiful sunset over the ocean with gentle waves"
    }
}
response = requests.post(url, headers=headers, json=data)
result = response.json()
print(f"Prediction ID: {result['id']}")
print(f"Status: {result['status']}")
```

```json
{
  "id": "pred_abc123",
  "status": "processing",
  "model": "model-name",
  "created_at": "2025-01-01T00:00:00Z"
}
```

Poll the prediction endpoint to check the current status of your request.
/api/v1/model/prediction/{prediction_id}

```python
import os
import time

import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]

prediction_id = "pred_abc123"
url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"
headers = {"Authorization": f"Bearer {API_KEY}"}

while True:
    response = requests.get(url, headers=headers)
    result = response.json()
    status = result["data"]["status"]
    print(f"Status: {status}")
    if status in ["completed", "succeeded"]:
        output_url = result["data"]["outputs"][0]
        print(f"Output URL: {output_url}")
        break
    elif status == "failed":
        print(f"Error: {result['data'].get('error', 'Unknown')}")
        break
    time.sleep(3)
```

| Status | Description |
|---|---|
| processing | The request is still being processed. |
| completed | Generation is complete. Outputs are available. |
| succeeded | Generation succeeded. Outputs are available. |
| failed | Generation failed. Check the error field. |

```json
{
  "data": {
    "id": "pred_abc123",
    "status": "completed",
    "outputs": [
      "https://storage.atlascloud.ai/outputs/result.mp4"
    ],
    "metrics": {
      "predict_time": 45.2
    },
    "created_at": "2025-01-01T00:00:00Z",
    "completed_at": "2025-01-01T00:00:10Z"
  }
}
```

Upload files to Atlas Cloud storage and get back a URL that can be used in your API requests. Upload using multipart/form-data.
/api/v1/model/uploadMedia

```python
import os

import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]

url = "https://api.atlascloud.ai/api/v1/model/uploadMedia"
headers = {"Authorization": f"Bearer {API_KEY}"}

with open("image.png", "rb") as f:
    files = {"file": ("image.png", f, "image/png")}
    response = requests.post(url, headers=headers, files=files)

result = response.json()
download_url = result["data"]["download_url"]
print(f"File URL: {download_url}")
```

```json
{
  "data": {
    "download_url": "https://storage.atlascloud.ai/uploads/abc123/image.png",
    "file_name": "image.png",
    "content_type": "image/png",
    "size": 1024000
  }
}
```

The following parameters are accepted in the request body.
No parameters available.

```json
{
  "model": "atlascloud/wan-2.2/image-to-video-lora"
}
```

The API returns a prediction response with the generated output URLs.
```json
{
  "id": "pred_abc123",
  "status": "completed",
  "model": "model-name",
  "outputs": [
    "https://storage.atlascloud.ai/outputs/result.mp4"
  ],
  "metrics": {
    "predict_time": 45.2
  },
  "created_at": "2025-01-01T00:00:00Z",
  "completed_at": "2025-01-01T00:00:10Z"
}
```

Atlas Cloud Skills integrates more than 300 AI models directly into your AI coding assistant. One command to install, then use natural language to generate images, videos, and chat with LLMs.
```shell
npx skills add AtlasCloudAI/atlas-cloud-skills
```

Get your API key from the Atlas Cloud dashboard and set it as an environment variable.
```shell
export ATLASCLOUD_API_KEY="your-api-key-here"
```

After installation, you can use natural language in your AI assistant to access all Atlas Cloud models.
The Atlas Cloud MCP Server connects your IDE to more than 300 AI models through the Model Context Protocol. It works with any MCP-compatible client.
```shell
npx -y atlascloud-mcp
```

Add the following configuration to your IDE's MCP configuration file.
```json
{
  "mcpServers": {
    "atlascloud": {
      "command": "npx",
      "args": ["-y", "atlascloud-mcp"],
      "env": {
        "ATLASCLOUD_API_KEY": "your-api-key-here"
      }
    }
  }
}
```

| Field | Description |
|---|---|
| Model Name | Wan 2.2 Image-to-Video LoRA |
| Developed by | Alibaba Tongyi Wanxiang Lab |
| Model Type | Image-to-Video Generation with LoRA Support |
| Resolution | 480p, 720p (via VSR upscaling) |
| Frame Rate | 30 fps |
| Duration | 3–10 seconds |
| Related Links | GitHub: https://github.com/Wan-Video/Wan2.2, Hugging Face: https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B, Paper (arXiv): https://arxiv.org/abs/2503.20314 |
Wan 2.2 is a significant upgrade to the Wan series of foundational video models, designed to push the boundaries of generative AI in video creation. This image-to-video LoRA variant takes a reference image as the first frame and generates a high-quality video, with full support for custom LoRA weights to fine-tune the generation style, motion characteristics, or subject identity.
The model generates videos at 480p natively and supports 720p output via Video Super Resolution (VSR) upscaling, delivering smooth 30 fps playback at both resolutions.
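As a hedged sketch, a request sized for the 720p path could reuse the width/height fields from the quickstart example. That supplying 1280x720 is what selects the VSR-upscaled output is an assumption; the upscaling itself happens server-side.

```python
import os

import requests

API_KEY = os.environ.get("ATLASCLOUD_API_KEY")

def build_720p_request(prompt: str, duration: int = 5) -> dict:
    """Build a generation payload sized for the VSR 720p path.

    Field names mirror the quickstart; 1280x720 as the 720p trigger
    is an assumption, not confirmed API behavior.
    """
    if not 3 <= duration <= 10:
        # The model card specifies a 3-10 second duration range.
        raise ValueError("duration must be between 3 and 10 seconds")
    return {
        "model": "atlascloud/wan-2.2/image-to-video-lora",
        "prompt": prompt,
        "width": 1280,
        "height": 720,
        "duration": duration,
        "fps": 30,  # the model's native frame rate
    }

def submit(payload: dict) -> str:
    """POST the payload and return the prediction id (response wrapped
    in "data", as in the quickstart)."""
    response = requests.post(
        "https://api.atlascloud.ai/api/v1/model/generateVideo",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        json=payload,
    )
    response.raise_for_status()
    return response.json()["data"]["id"]
```

`submit(build_720p_request("A beautiful sunset over the ocean"))` would then return a prediction ID to poll as shown above.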
Effective MoE Architecture: Wan 2.2 integrates a Mixture-of-Experts (MoE) architecture into the video diffusion model. Specialized expert models handle different stages of the denoising process, increasing model capacity without raising computational costs. The model has 27B total parameters with only 14B active during any given step.
Cinematic-Level Aesthetics: Trained on a meticulously curated dataset with detailed labels for cinematic properties like lighting, composition, and color tone. This allows generation of videos with precise and controllable artistic styles, achieving a professional, cinematic look.
Complex Motion Generation: Trained on a vastly expanded dataset (+65.6% more images and +83.2% more videos compared to Wan 2.1), Wan 2.2 demonstrates superior ability to generate complex and realistic motion with enhanced generalization across motions, semantics, and aesthetics.
Custom LoRA Support: This variant supports user-provided LoRA weights for fine-grained style and motion control. Three separate LoRA input channels are available:
- `high_noise_loras` — Applied to the high-noise expert (transformer stage), influencing overall structure and layout.
- `low_noise_loras` — Applied to the low-noise expert (transformer_2 stage), influencing fine details and textures.
- `loras` — General-purpose LoRA input where the module is auto-inferred from the safetensors filename.

VSR-Enhanced Output: All output videos are delivered at 30 fps. When 720p resolution is selected, the model leverages Video Super Resolution to upscale from a 480p base generation, preserving fine details while achieving higher-resolution output.
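A hypothetical illustration of how the three LoRA channels might appear in a request body; only the three field names come from the description above, while the per-entry shape (a URL plus a strength value) is assumed:

```python
import json

# Hypothetical payload: the field names high_noise_loras / low_noise_loras /
# loras come from the model description; the entry format is an assumption.
data = {
    "model": "atlascloud/wan-2.2/image-to-video-lora",
    "prompt": "A beautiful sunset over the ocean with gentle waves",
    # Shapes overall structure and layout (high-noise expert, transformer stage)
    "high_noise_loras": [
        {"url": "https://example.com/structure.safetensors", "strength": 0.8}
    ],
    # Shapes fine details and textures (low-noise expert, transformer_2 stage)
    "low_noise_loras": [
        {"url": "https://example.com/detail.safetensors", "strength": 0.6}
    ],
    # General-purpose channel; the target module is inferred from the filename
    "loras": [],
}

body = json.dumps(data)  # ready to send as a generateVideo request body
```

Such a payload would be posted to /api/v1/model/generateVideo exactly as in the earlier examples.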
The architecture is built upon the Diffusion Transformer (DiT) paradigm with a Mixture-of-Experts (MoE) framework.
The transition between experts is dynamically determined by the signal-to-noise ratio (SNR) during generation. Custom LoRA weights can be applied to each expert independently, enabling precise control over different aspects of the generation pipeline.
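The SNR-gated hand-off can be pictured with a toy scheduler (a conceptual sketch, not the model's actual routing code; the expert names and threshold are placeholders):

```python
def pick_expert(snr: float, threshold: float = 1.0) -> str:
    """Toy SNR gate: early, noisy denoising steps (low SNR) route to the
    high-noise expert; later, cleaner steps route to the low-noise expert."""
    return "high_noise_expert" if snr < threshold else "low_noise_expert"

# Simulate a denoising trajectory in which SNR rises as noise is removed.
schedule = [0.05, 0.2, 0.6, 1.5, 4.0, 12.0]
route = [pick_expert(snr) for snr in schedule]
print(route)
```

Only one expert runs at any given step, which is how the 27B-parameter model keeps only 14B parameters active per denoising step.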