Open and Advanced Large-Scale Video Generative Models.

Each run costs $0.032, so with $10 you can run it about 312 times.
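As a quick sanity check on that estimate (a minimal arithmetic sketch, nothing API-specific):

```python
# At $0.032 per run, a $10.00 balance covers about 312 full runs.
cost_per_run = 0.032
budget = 10.00
runs = int(budget // cost_per_run)
print(runs)  # 312
```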
You can continue with:
import requests
import time

# Step 1: Start video generation
generate_url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer $ATLASCLOUD_API_KEY"
}
data = {
    "model": "alibaba/wan-2.2-spicy/video-extend",
    "prompt": "A beautiful sunset over the ocean with gentle waves",
    "width": 512,
    "height": 512,
    "duration": 3,
    "fps": 24,
}
generate_response = requests.post(generate_url, headers=headers, json=data)
generate_result = generate_response.json()
prediction_id = generate_result["data"]["id"]

# Step 2: Poll for the result
poll_url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"

def check_status():
    while True:
        response = requests.get(poll_url, headers={"Authorization": "Bearer $ATLASCLOUD_API_KEY"})
        result = response.json()
        if result["data"]["status"] in ["completed", "succeeded"]:
            print("Generated video:", result["data"]["outputs"][0])
            return result["data"]["outputs"][0]
        elif result["data"]["status"] == "failed":
            raise Exception(result["data"]["error"] or "Generation failed")
        else:
            # Still processing; wait 2 seconds before polling again
            time.sleep(2)

video_url = check_status()

Install the required package for your programming language.
pip install requests

All API requests require authentication via an API key. You can get your API key from the Atlas Cloud dashboard.
export ATLASCLOUD_API_KEY="your-api-key-here"

import os

API_KEY = os.environ.get("ATLASCLOUD_API_KEY")
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}

Never expose your API key in client-side code or public repositories. Use environment variables or a backend proxy instead.
import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer $ATLASCLOUD_API_KEY"
}
data = {
    "model": "your-model",
    "prompt": "A beautiful landscape"
}
response = requests.post(url, headers=headers, json=data)
print(response.json())

Send an asynchronous generation request. The API returns a prediction ID that you can use to check the status and retrieve the result.
/api/v1/model/generateVideo

import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer $ATLASCLOUD_API_KEY"
}
data = {
    "model": "alibaba/wan-2.2-spicy/video-extend",
    "input": {
        "prompt": "A beautiful sunset over the ocean with gentle waves"
    }
}
response = requests.post(url, headers=headers, json=data)
result = response.json()
print(f"Prediction ID: {result['id']}")
print(f"Status: {result['status']}")

{
  "id": "pred_abc123",
  "status": "processing",
  "model": "model-name",
  "created_at": "2025-01-01T00:00:00Z"
}

Query the prediction endpoint to check the current status of your request.
/api/v1/model/prediction/{prediction_id}

import requests
import time

prediction_id = "pred_abc123"
url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"
headers = { "Authorization": "Bearer $ATLASCLOUD_API_KEY" }

while True:
    response = requests.get(url, headers=headers)
    result = response.json()
    status = result["data"]["status"]
    print(f"Status: {status}")
    if status in ["completed", "succeeded"]:
        output_url = result["data"]["outputs"][0]
        print(f"Output URL: {output_url}")
        break
    elif status == "failed":
        print(f"Error: {result['data'].get('error', 'Unknown')}")
        break
    time.sleep(3)

| Status | Description |
|---|---|
| processing | The request is still being processed. |
| completed | Generation is complete. Outputs are available. |
| succeeded | Generation succeeded. Outputs are available. |
| failed | Generation failed. Check the error field. |

{
  "data": {
    "id": "pred_abc123",
    "status": "completed",
    "outputs": [
      "https://storage.atlascloud.ai/outputs/result.mp4"
    ],
    "metrics": {
      "predict_time": 45.2
    },
    "created_at": "2025-01-01T00:00:00Z",
    "completed_at": "2025-01-01T00:00:10Z"
  }
}

Upload files to Atlas Cloud storage and get a URL you can use in your API requests. Uploads use multipart/form-data.
/api/v1/model/uploadMedia

import requests

url = "https://api.atlascloud.ai/api/v1/model/uploadMedia"
headers = { "Authorization": "Bearer $ATLASCLOUD_API_KEY" }

with open("image.png", "rb") as f:
    files = {"file": ("image.png", f, "image/png")}
    response = requests.post(url, headers=headers, files=files)

result = response.json()
download_url = result["data"]["download_url"]
print(f"File URL: {download_url}")

{
  "data": {
    "download_url": "https://storage.atlascloud.ai/uploads/abc123/image.png",
    "file_name": "image.png",
    "content_type": "image/png",
    "size": 1024000
  }
}

The following parameters are accepted in the request body.
No parameters are available.

{
  "model": "alibaba/wan-2.2-spicy/video-extend"
}

The API returns a prediction response containing URLs for the generated outputs.
{
  "id": "pred_abc123",
  "status": "completed",
  "model": "model-name",
  "outputs": [
    "https://storage.atlascloud.ai/outputs/result.mp4"
  ],
  "metrics": {
    "predict_time": 45.2
  },
  "created_at": "2025-01-01T00:00:00Z",
  "completed_at": "2025-01-01T00:00:10Z"
}

Atlas Cloud Skills integrates more than 300 AI models directly into your AI coding assistant. One command to install, then use natural language to generate images and videos and to chat with LLMs.
npx skills add AtlasCloudAI/atlas-cloud-skills

Get your API key from the Atlas Cloud dashboard and set it as an environment variable.

export ATLASCLOUD_API_KEY="your-api-key-here"

Once installed, you can use natural language in your AI assistant to access all Atlas Cloud models.

The Atlas Cloud MCP Server connects your development environment to more than 300 AI models via the Model Context Protocol. It works with any MCP-compatible client.

npx -y atlascloud-mcp

Add the following configuration to your IDE's MCP settings file.
{
  "mcpServers": {
    "atlascloud": {
      "command": "npx",
      "args": [
        "-y",
        "atlascloud-mcp"
      ],
      "env": {
        "ATLASCLOUD_API_KEY": "your-api-key-here"
      }
    }
  }
}

Schema not available.

| Field | Description |
|---|---|
| Model Name | Wan 2.2 |
| Developed by | Alibaba Tongyi Wanxiang Lab |
| Release Date | July 28, 2025 |
| Model Type | Video Generation |
| Related Links | GitHub: https://github.com/Wan-Video/Wan2.2, Hugging Face: https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B, Paper (arXiv): https://arxiv.org/abs/2503.20314 |
Wan 2.2 is a significant upgrade to the Wan series of foundational video models, designed to push the boundaries of generative AI in video creation. The primary goal of Wan 2.2 is to provide an open and advanced suite of tools for generating high-quality, cinematic videos from various inputs, including text, images, and audio. Its core contribution lies in making state-of-the-art video generation technology accessible to a broader community of researchers and creators through open-sourcing its models and code. The project emphasizes cinematic aesthetics, complex motion generation, and computational efficiency, introducing several key innovations to achieve these aims.
Wan 2.2 introduces several groundbreaking features that set it apart from previous models:
Effective MoE Architecture: Wan 2.2 is the first model to successfully integrate a Mixture-of-Experts (MoE) architecture into a video diffusion model. This design uses specialized expert models for different stages of the denoising process, which significantly increases the model's capacity without raising computational costs. The model has a total of 27B parameters, but only 14B are active during any given step.
Cinematic-Level Aesthetics: The model was trained on a meticulously curated dataset with detailed labels for cinematic properties like lighting, composition, and color tone. This allows users to generate videos with precise and controllable artistic styles, achieving a professional, cinematic look.
Complex Motion Generation: By training on a vastly expanded dataset (+65.6% more images and +83.2% more videos compared to Wan 2.1), Wan 2.2 demonstrates a superior ability to generate complex and realistic motion. It shows enhanced generalization across various motions, semantics, and aesthetics.
Efficient High-Definition Video: The suite includes a highly efficient 5B model (TI2V-5B) that utilizes an advanced VAE for high-compression video generation. It can produce 720p video at 24 fps and is capable of running on consumer-grade GPUs like the NVIDIA RTX 4090, making high-definition AI video generation more accessible.
The architecture of Wan 2.2 is built upon the Diffusion Transformer (DiT) paradigm and incorporates several key technical advancements.
The primary models in the Wan 2.2 suite, such as the T2V-A14B, employ a Mixture-of-Experts (MoE) architecture. This framework consists of two main expert models: a high-noise expert that handles the early denoising stages, establishing overall layout and motion, and a low-noise expert that takes over in the later stages to refine fine-grained detail.
The transition between these experts is determined dynamically by the signal-to-noise ratio (SNR) during generation: the high-noise expert is active while the SNR is low, and the low-noise expert takes over once the SNR rises. This MoE design gives the model a large parameter count (27B total) while keeping the number of active parameters (14B) and the computational load comparable to smaller models.
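The routing idea can be illustrated with a minimal sketch. The threshold value and names below are hypothetical, chosen only to show the SNR-based switching; they are not the actual Wan 2.2 implementation.

```python
# Illustrative only: route each denoising step to one of two experts
# based on the signal-to-noise ratio (SNR). The threshold of 1.0 is a
# made-up value; Wan 2.2's real routing is internal to the model.

def select_expert(snr: float, threshold: float = 1.0) -> str:
    # Early steps are noisy (low SNR) -> high-noise expert shapes the layout.
    # Later steps are cleaner (high SNR) -> low-noise expert refines detail.
    return "high_noise_expert" if snr < threshold else "low_noise_expert"

# SNR rises over a denoising trajectory as noise is removed.
for step, snr in enumerate([0.05, 0.3, 0.8, 1.6, 5.0]):
    print(f"step {step}: snr={snr:.2f} -> {select_expert(snr)}")
```

Because only one 14B expert runs at any given step, the 27B-parameter model has roughly the per-step compute cost of a 14B one.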
Wan 2.2 is offered in several variants, each tailored for different tasks and computational resources.
| Model Variant | Total Parameters | Key Feature | Supported Tasks |
|---|---|---|---|
| T2V-A14B | ~27B (14B active) | MoE for Text-to-Video | Text-to-Video |
| I2V-A14B | ~27B (14B active) | MoE for Image-to-Video | Image-to-Video |
| TI2V-5B | 5B | High-Compression VAE | Text-to-Video, Image-to-Video |
| S2V-14B | ~27B (14B active) | MoE for Speech-to-Video | Speech-to-Video |
| Animate-14B | ~27B (14B active) | MoE for Animation | Character Animation & Replacement |
Wan 2.2 is designed for a wide range of creative and academic applications. Its various models support a comprehensive set of downstream tasks, making it a versatile tool for digital artists, filmmakers, researchers, and developers.