Open and Advanced Large-Scale Video Generative Models.

Your request will cost $0.032 per run. With $10, you can run this model approximately 312 times.
You can continue with:
```python
import os
import requests
import time

API_KEY = os.environ["ATLASCLOUD_API_KEY"]

# Step 1: Start video generation
generate_url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}
data = {
    "model": "alibaba/wan-2.2-spicy/video-extend",
    "prompt": "A beautiful sunset over the ocean with gentle waves",
    "width": 512,
    "height": 512,
    "duration": 3,
    "fps": 24,
}
generate_response = requests.post(generate_url, headers=headers, json=data)
generate_result = generate_response.json()
prediction_id = generate_result["data"]["id"]

# Step 2: Poll for the result
poll_url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"

def check_status():
    while True:
        response = requests.get(poll_url, headers={"Authorization": f"Bearer {API_KEY}"})
        result = response.json()
        if result["data"]["status"] in ["completed", "succeeded"]:
            print("Generated video:", result["data"]["outputs"][0])
            return result["data"]["outputs"][0]
        elif result["data"]["status"] == "failed":
            raise Exception(result["data"]["error"] or "Generation failed")
        else:
            # Still processing; wait 2 seconds before polling again
            time.sleep(2)

video_url = check_status()
```

Install the required package for your language.
```bash
pip install requests
```

All API requests require authentication via an API key. You can obtain your API key from the Atlas Cloud dashboard.
```bash
export ATLASCLOUD_API_KEY="your-api-key-here"
```

```python
import os

API_KEY = os.environ.get("ATLASCLOUD_API_KEY")
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}
```

Never expose your API key in client-side code or in public repositories. Use environment variables or a backend proxy instead.
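If you need to call the API from a browser or mobile client, one way to keep the key off the client is a small backend proxy that attaches the Authorization header server-side. The sketch below uses Flask; the route name and port are illustrative, not part of the Atlas Cloud API.

```python
# Minimal proxy sketch (assumption: Flask is installed; route and port are illustrative).
import os
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
API_KEY = os.environ["ATLASCLOUD_API_KEY"]  # the key stays on the server

@app.route("/proxy/generateVideo", methods=["POST"])
def proxy_generate():
    # Forward the client's JSON body, attaching the secret key server-side.
    upstream = requests.post(
        "https://api.atlascloud.ai/api/v1/model/generateVideo",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        json=request.get_json(),
    )
    return jsonify(upstream.json()), upstream.status_code

if __name__ == "__main__":
    app.run(port=8000)
```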
```python
import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer $ATLASCLOUD_API_KEY"
}
data = {
    "model": "your-model",
    "prompt": "A beautiful landscape"
}
response = requests.post(url, headers=headers, json=data)
print(response.json())
```

Submit an asynchronous generation request. The API returns a prediction ID that you can use to check the status and retrieve the result.
/api/v1/model/generateVideo

```python
import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer $ATLASCLOUD_API_KEY"
}
data = {
    "model": "alibaba/wan-2.2-spicy/video-extend",
    "input": {
        "prompt": "A beautiful sunset over the ocean with gentle waves"
    }
}
response = requests.post(url, headers=headers, json=data)
result = response.json()
print(f"Prediction ID: {result['id']}")
print(f"Status: {result['status']}")
```

```json
{
  "id": "pred_abc123",
  "status": "processing",
  "model": "model-name",
  "created_at": "2025-01-01T00:00:00Z"
}
```
Query the prediction endpoint to check the current status of your request.
/api/v1/model/prediction/{prediction_id}

```python
import requests
import time

prediction_id = "pred_abc123"
url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"
headers = {"Authorization": "Bearer $ATLASCLOUD_API_KEY"}

while True:
    response = requests.get(url, headers=headers)
    result = response.json()
    status = result["data"]["status"]
    print(f"Status: {status}")
    if status in ["completed", "succeeded"]:
        output_url = result["data"]["outputs"][0]
        print(f"Output URL: {output_url}")
        break
    elif status == "failed":
        print(f"Error: {result['data'].get('error', 'Unknown')}")
        break
    time.sleep(3)
```

The status field takes one of the following values:

| Status | Description |
|---|---|
| processing | The request is still being processed. |
| completed | Generation is finished. Results are available. |
| succeeded | Generation succeeded. Results are available. |
| failed | Generation failed. Check the error field. |
"data": {
"id": "pred_abc123",
"status": "completed",
"outputs": [
"https://storage.atlascloud.ai/outputs/result.mp4"
],
"metrics": {
"predict_time": 45.2
},
"created_at": "2025-01-01T00:00:00Z",
"completed_at": "2025-01-01T00:00:10Z"
}
}Téléchargez des fichiers vers le stockage Atlas Cloud et obtenez une URL utilisable dans vos requêtes API. Utilisez multipart/form-data pour le téléchargement.
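For production use, an unbounded `while True` loop can hang if a prediction stalls. A small helper like this sketch bounds the wait; the timeout and backoff values are arbitrary choices, not API requirements:

```python
import os
import time
import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]

def wait_for_prediction(prediction_id, timeout=600, initial_delay=2, max_delay=15):
    """Poll a prediction until it finishes, with a deadline and gentle backoff."""
    url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"
    headers = {"Authorization": f"Bearer {API_KEY}"}
    deadline = time.monotonic() + timeout
    delay = initial_delay
    while time.monotonic() < deadline:
        data = requests.get(url, headers=headers).json()["data"]
        if data["status"] in ("completed", "succeeded"):
            return data["outputs"][0]
        if data["status"] == "failed":
            raise RuntimeError(data.get("error") or "Generation failed")
        time.sleep(delay)
        delay = min(delay * 1.5, max_delay)  # back off, capped at max_delay seconds
    raise TimeoutError(f"Prediction {prediction_id} did not finish within {timeout}s")
```

With this in place, `video_url = wait_for_prediction(prediction_id)` replaces the bare polling loop above.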
Upload files to Atlas Cloud storage and get back a URL you can use in your API requests. Use multipart/form-data for the upload.

/api/v1/model/uploadMedia

```python
import requests

url = "https://api.atlascloud.ai/api/v1/model/uploadMedia"
headers = {"Authorization": "Bearer $ATLASCLOUD_API_KEY"}

with open("image.png", "rb") as f:
    files = {"file": ("image.png", f, "image/png")}
    response = requests.post(url, headers=headers, files=files)

result = response.json()
download_url = result["data"]["download_url"]
print(f"File URL: {download_url}")
```
"data": {
"download_url": "https://storage.atlascloud.ai/uploads/abc123/image.png",
"file_name": "image.png",
"content_type": "image/png",
"size": 1024000
}
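A typical pattern is to upload a source clip and then reference the returned URL in a generation request. The sketch below assumes the video-extend model accepts the uploaded file's URL via a `video` field inside `input`; that field name is a guess for illustration, not documented here.

```python
# Sketch: upload a source video, then pass its URL to the generation endpoint.
# ASSUMPTION: the "video" field inside "input" is a hypothetical parameter name.
import os
import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]
base = "https://api.atlascloud.ai/api/v1/model"

with open("clip.mp4", "rb") as f:
    upload = requests.post(
        f"{base}/uploadMedia",
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"file": ("clip.mp4", f, "video/mp4")},
    )
video_url = upload.json()["data"]["download_url"]

generation = requests.post(
    f"{base}/generateVideo",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "alibaba/wan-2.2-spicy/video-extend",
        "input": {
            "prompt": "Continue the scene with gentle camera motion",
            "video": video_url,  # hypothetical parameter name
        },
    },
)
print(generation.json())
```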
The following parameters are accepted in the request body.
No parameters available.
```json
{
  "model": "alibaba/wan-2.2-spicy/video-extend"
}
```

The API returns a prediction response with the URLs of the generated results.
```json
{
  "id": "pred_abc123",
  "status": "completed",
  "model": "model-name",
  "outputs": [
    "https://storage.atlascloud.ai/outputs/result.mp4"
  ],
  "metrics": {
    "predict_time": 45.2
  },
  "created_at": "2025-01-01T00:00:00Z",
  "completed_at": "2025-01-01T00:00:10Z"
}
```

Atlas Cloud Skills integrates more than 300 AI models directly into your AI coding assistant. One command to install, then use natural language to generate images and videos and chat with LLMs.
```bash
npx skills add AtlasCloudAI/atlas-cloud-skills
```

Get your API key from the Atlas Cloud dashboard and set it as an environment variable.
```bash
export ATLASCLOUD_API_KEY="your-api-key-here"
```

Once installed, you can use natural language in your AI assistant to access all Atlas Cloud models.
The Atlas Cloud MCP server connects your IDE to more than 300 AI models via the Model Context Protocol. It works with any MCP-compatible client.

```bash
npx -y atlascloud-mcp
```

Add the following configuration to your IDE's MCP settings file.
```json
{
  "mcpServers": {
    "atlascloud": {
      "command": "npx",
      "args": [
        "-y",
        "atlascloud-mcp"
      ],
      "env": {
        "ATLASCLOUD_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
| Field | Description |
|---|---|
| Model Name | Wan 2.2 |
| Developed by | Alibaba Tongyi Wanxiang Lab |
| Release Date | July 28, 2025 |
| Model Type | Video Generation |
| Related Links | GitHub: https://github.com/Wan-Video/Wan2.2, Hugging Face: https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B, Paper (arXiv): https://arxiv.org/abs/2503.20314 |
Wan 2.2 is a significant upgrade to the Wan series of foundational video models, designed to push the boundaries of generative AI in video creation. The primary goal of Wan 2.2 is to provide an open and advanced suite of tools for generating high-quality, cinematic videos from various inputs, including text, images, and audio. Its core contribution lies in making state-of-the-art video generation technology accessible to a broader community of researchers and creators through open-sourcing its models and code. The project emphasizes cinematic aesthetics, complex motion generation, and computational efficiency, introducing several key innovations to achieve these aims.
Wan 2.2 introduces several groundbreaking features that set it apart from previous models:
Effective MoE Architecture: Wan 2.2 is the first model to successfully integrate a Mixture-of-Experts (MoE) architecture into a video diffusion model. This design uses specialized expert models for different stages of the denoising process, which significantly increases the model's capacity without raising computational costs. The model has a total of 27B parameters, but only 14B are active during any given step.
Cinematic-Level Aesthetics: The model was trained on a meticulously curated dataset with detailed labels for cinematic properties like lighting, composition, and color tone. This allows users to generate videos with precise and controllable artistic styles, achieving a professional, cinematic look.
Complex Motion Generation: By training on a vastly expanded dataset (+65.6% more images and +83.2% more videos compared to Wan 2.1), Wan 2.2 demonstrates a superior ability to generate complex and realistic motion. It shows enhanced generalization across various motions, semantics, and aesthetics.
Efficient High-Definition Video: The suite includes a highly efficient 5B model (TI2V-5B) that utilizes an advanced VAE for high-compression video generation. It can produce 720p video at 24 fps and is capable of running on consumer-grade GPUs like the NVIDIA RTX 4090, making high-definition AI video generation more accessible.
The architecture of Wan 2.2 is built upon the Diffusion Transformer (DiT) paradigm and incorporates several key technical advancements.
The primary models in the Wan 2.2 suite, such as the T2V-A14B, employ a Mixture-of-Experts (MoE) architecture. This framework consists of two main expert models: a high-noise expert that handles the early denoising steps, where the overall layout of the video is established, and a low-noise expert that takes over in the later steps to refine fine detail.
The transition between these experts is dynamically determined by the signal-to-noise ratio (SNR) during generation. This MoE design allows the model to have a large parameter count (27B total) while keeping the number of active parameters (14B) and computational load comparable to smaller models.
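To make the routing idea concrete, here is a schematic sketch of how a two-expert denoiser might switch on an SNR threshold. This is illustrative PyTorch, not Wan 2.2's actual implementation; the threshold value and expert interfaces are assumptions.

```python
# Schematic sketch of SNR-thresholded two-expert denoising (not Wan 2.2's code).
import torch
import torch.nn as nn

class TwoExpertDenoiser(nn.Module):
    def __init__(self, high_noise_expert: nn.Module, low_noise_expert: nn.Module,
                 snr_threshold: float = 1.0):
        super().__init__()
        self.high_noise_expert = high_noise_expert  # early steps: global layout
        self.low_noise_expert = low_noise_expert    # late steps: fine detail
        self.snr_threshold = snr_threshold          # assumed switching point

    def forward(self, latents: torch.Tensor, timestep: torch.Tensor,
                snr: float) -> torch.Tensor:
        # Only one expert runs per denoising step, so active compute stays
        # at the single-expert size even though total capacity is larger.
        if snr < self.snr_threshold:  # noisy regime -> high-noise expert
            return self.high_noise_expert(latents, timestep)
        return self.low_noise_expert(latents, timestep)
```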
Wan 2.2 is offered in several variants, each tailored for different tasks and computational resources.
| Model Variant | Total Parameters | Key Feature | Supported Tasks |
|---|---|---|---|
| T2V-A14B | ~27B (14B active) | MoE for Text-to-Video | Text-to-Video |
| I2V-A14B | ~27B (14B active) | MoE for Image-to-Video | Image-to-Video |
| TI2V-5B | 5B | High-Compression VAE | Text-to-Video, Image-to-Video |
| S2V-14B | ~27B (14B active) | MoE for Speech-to-Video | Speech-to-Video |
| Animate-14B | ~27B (14B active) | MoE for Animation | Character Animation & Replacement |
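Of these variants, TI2V-5B is the one most likely to run on a single consumer GPU. As a rough illustration of local inference, a sketch along these lines should work with Hugging Face Diffusers' Wan support; the checkpoint id, resolution, and frame count below are assumptions based on the model's stated 720p/24 fps target, so consult the Wan 2.2 model card for the exact supported values.

```python
# Sketch: local text-to-video with the TI2V-5B checkpoint via Diffusers.
# ASSUMPTIONS: checkpoint id, resolution, and frame count are illustrative;
# check the Wan2.2 model card for the supported values.
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.2-TI2V-5B-Diffusers",  # assumed checkpoint id
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

frames = pipe(
    prompt="A beautiful sunset over the ocean with gentle waves",
    height=704,       # ~720p; assumed supported size
    width=1280,
    num_frames=121,   # ~5 s at 24 fps; assumed
).frames[0]
export_to_video(frames, "sunset.mp4", fps=24)
```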
Wan 2.2 is designed for a wide range of creative and academic applications. Its various models support a comprehensive set of downstream tasks, making it a versatile tool for digital artists, filmmakers, researchers, and developers.