
Wan 2.2 spicy Image-to-Video API by Alibaba
Open and Advanced Large-Scale Video Generative Models.
Each run costs $0.03; $10 covers roughly 333 runs.
Code Example
import requests
import time

# Step 1: Start video generation
generate_url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer $ATLASCLOUD_API_KEY"
}
data = {
    "model": "alibaba/wan-2.2-spicy/image-to-video",
    "prompt": "A beautiful sunset over the ocean with gentle waves",
    "width": 512,
    "height": 512,
    "duration": 3,
    "fps": 24,
}
generate_response = requests.post(generate_url, headers=headers, json=data)
generate_result = generate_response.json()
prediction_id = generate_result["data"]["id"]

# Step 2: Poll for result
poll_url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"

def check_status():
    while True:
        response = requests.get(poll_url, headers={"Authorization": "Bearer $ATLASCLOUD_API_KEY"})
        result = response.json()
        if result["data"]["status"] in ["completed", "succeeded"]:
            print("Generated video:", result["data"]["outputs"][0])
            return result["data"]["outputs"][0]
        elif result["data"]["status"] == "failed":
            raise Exception(result["data"]["error"] or "Generation failed")
        else:
            # Still processing, wait 2 seconds
            time.sleep(2)

video_url = check_status()

Installation
Install the required dependencies.
pip install requests

Authentication
All API requests must be authenticated with an API key. You can obtain an API key from the Atlas Cloud console.
export ATLASCLOUD_API_KEY="your-api-key-here"

HTTP Request Headers
import os

API_KEY = os.environ.get("ATLASCLOUD_API_KEY")
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}

Never expose your API key in client-side code or public repositories. Use environment variables or a backend proxy.
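Note that os.environ.get returns None when the variable is unset, which quietly produces an Authorization header of "Bearer None" and a confusing 401. A minimal sketch of a helper that fails fast instead (not part of any official SDK):

import os

def auth_headers() -> dict:
    # Fail fast with a clear error instead of sending "Bearer None".
    api_key = os.environ.get("ATLASCLOUD_API_KEY")
    if not api_key:
        raise RuntimeError("ATLASCLOUD_API_KEY is not set")
    return {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }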
Submit a Request
import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer $ATLASCLOUD_API_KEY"
}
data = {
    "model": "your-model",
    "prompt": "A beautiful landscape"
}
response = requests.post(url, headers=headers, json=data)
print(response.json())
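requests does not raise on HTTP error codes by itself, so a failed call above still prints whatever JSON body comes back. If you would rather fail loudly on a 4xx/5xx response (an invalid API key, for example), one option, reusing url, headers, and data from the snippet above:

response = requests.post(url, headers=headers, json=data)
response.raise_for_status()  # raises requests.HTTPError on any 4xx/5xx status
print(response.json())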

Submit a Request
Submit an asynchronous generation request. The API returns a prediction ID that you can use to check the status and retrieve the result.
/api/v1/model/generateVideo

Request Body
import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer $ATLASCLOUD_API_KEY"
}
data = {
    "model": "alibaba/wan-2.2-spicy/image-to-video",
    "input": {
        "prompt": "A beautiful sunset over the ocean with gentle waves"
    }
}
response = requests.post(url, headers=headers, json=data)
result = response.json()
print(f"Prediction ID: {result['id']}")
print(f"Status: {result['status']}")

Response
{
  "id": "pred_abc123",
  "status": "processing",
  "model": "model-name",
  "created_at": "2025-01-01T00:00:00Z"
}

Check Status
Poll the prediction endpoint to check the current status of your request.
/api/v1/model/prediction/{prediction_id}

Polling Example
import requests
import time

prediction_id = "pred_abc123"
url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"
headers = {"Authorization": "Bearer $ATLASCLOUD_API_KEY"}

while True:
    response = requests.get(url, headers=headers)
    result = response.json()
    status = result["data"]["status"]
    print(f"Status: {status}")
    if status in ["completed", "succeeded"]:
        output_url = result["data"]["outputs"][0]
        print(f"Output URL: {output_url}")
        break
    elif status == "failed":
        print(f"Error: {result['data'].get('error', 'Unknown')}")
        break
    time.sleep(3)
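The loop above polls forever at a fixed interval. For long-running generations you may prefer a bounded wait with exponential backoff; a minimal sketch (the timeout and delay values are arbitrary choices, not API requirements):

import time
import requests

def wait_for_prediction(url, headers, timeout=600, delay=2.0, max_delay=30.0):
    # Poll until a terminal status is reached or `timeout` seconds elapse.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = requests.get(url, headers=headers, timeout=30).json()
        status = result["data"]["status"]
        if status in ("completed", "succeeded"):
            return result["data"]["outputs"][0]
        if status == "failed":
            raise RuntimeError(result["data"].get("error", "Generation failed"))
        time.sleep(delay)
        delay = min(delay * 2, max_delay)  # back off, capped at max_delay seconds
    raise TimeoutError("Prediction did not finish within the timeout")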

Status Values

| Status | Meaning |
|---|---|
| processing | The request is still being processed. |
| completed | Generation finished; outputs are available. |
| succeeded | Generation succeeded; outputs are available. |
| failed | Generation failed; check the error field. |

Completed Response

{
  "data": {
    "id": "pred_abc123",
    "status": "completed",
    "outputs": [
      "https://storage.atlascloud.ai/outputs/result.mp4"
    ],
    "metrics": {
      "predict_time": 45.2
    },
    "created_at": "2025-01-01T00:00:00Z",
    "completed_at": "2025-01-01T00:00:10Z"
  }
}
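Each entry in outputs is a plain HTTPS URL, so the finished video can be saved with a streaming download. A minimal sketch (the local file name and chunk size are arbitrary):

import requests

video_url = "https://storage.atlascloud.ai/outputs/result.mp4"  # outputs[0] from the response
with requests.get(video_url, stream=True, timeout=60) as r:
    r.raise_for_status()
    with open("result.mp4", "wb") as f:
        for chunk in r.iter_content(chunk_size=8192):
            f.write(chunk)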

Upload Files
Upload a file to Atlas Cloud storage to get a URL that can be used in API requests. Uploads use multipart/form-data.
/api/v1/model/uploadMedia

Upload Example
import requests

url = "https://api.atlascloud.ai/api/v1/model/uploadMedia"
headers = {"Authorization": "Bearer $ATLASCLOUD_API_KEY"}

with open("image.png", "rb") as f:
    files = {"file": ("image.png", f, "image/png")}
    response = requests.post(url, headers=headers, files=files)

result = response.json()
download_url = result["data"]["download_url"]
print(f"File URL: {download_url}")

Response
{
  "data": {
    "download_url": "https://storage.atlascloud.ai/uploads/abc123/image.png",
    "file_name": "image.png",
    "content_type": "image/png",
    "size": 1024000
  }
}
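Since this is an image-to-video model, the download_url returned here is presumably what you pass as the source image when generating. The input schema below does not document a field name, so the "image" key in this sketch is a hypothetical placeholder, not a confirmed parameter:

import requests

download_url = "https://storage.atlascloud.ai/uploads/abc123/image.png"  # from the upload response
data = {
    "model": "alibaba/wan-2.2-spicy/image-to-video",
    "input": {
        "image": download_url,  # hypothetical field name -- verify against the model's input schema
        "prompt": "A beautiful sunset over the ocean with gentle waves"
    }
}
response = requests.post(
    "https://api.atlascloud.ai/api/v1/model/generateVideo",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer $ATLASCLOUD_API_KEY"
    },
    json=data,
)
print(response.json())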

Input Schema
The following parameters are accepted in the request body.
No parameters available.

Request Body Example
{
  "model": "alibaba/wan-2.2-spicy/image-to-video"
}

Output Schema
The API returns a prediction response containing URLs of the generated outputs.

Response Example
{
  "id": "pred_abc123",
  "status": "completed",
  "model": "model-name",
  "outputs": [
    "https://storage.atlascloud.ai/outputs/result.mp4"
  ],
  "metrics": {
    "predict_time": 45.2
  },
  "created_at": "2025-01-01T00:00:00Z",
  "completed_at": "2025-01-01T00:00:10Z"
}

Atlas Cloud Skills
Atlas Cloud Skills integrates 300+ AI models directly into your AI coding assistant. Install with a single command, then use natural language to generate images and videos and chat with LLMs.
Supported Clients
Installation
npx skills add AtlasCloudAI/atlas-cloud-skills

Set Your API Key
Get an API key from the Atlas Cloud console and set it as an environment variable.
export ATLASCLOUD_API_KEY="your-api-key-here"

Features
Once installed, you can access all Atlas Cloud models from your AI assistant using natural language.
MCP Server
The Atlas Cloud MCP Server connects your IDE to 300+ AI models through the Model Context Protocol. It works with any MCP-compatible client.
Supported Clients
Installation
npx -y atlascloud-mcp

Configuration
Add the following configuration to your IDE's MCP settings file.
{
  "mcpServers": {
    "atlascloud": {
      "command": "npx",
      "args": [
        "-y",
        "atlascloud-mcp"
      ],
      "env": {
        "ATLASCLOUD_API_KEY": "your-api-key-here"
      }
    }
  }
}

Available Tools
API Schema
Schema not available.

Wan 2.2: Open and Advanced Large-Scale Video Generative Model by Alibaba Wanxiang
Model Card Overview
| Field | Description |
|---|---|
| Model Name | Wan 2.2 |
| Developed by | Alibaba Tongyi Wanxiang Lab |
| Release Date | July 28, 2025 |
| Model Type | Video Generation |
| Related Links | GitHub: https://github.com/Wan-Video/Wan2.2, Hugging Face: https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B, Paper (arXiv): https://arxiv.org/abs/2503.20314 |
Introduction
Wan 2.2 is a significant upgrade to the Wan series of foundational video models, designed to push the boundaries of generative AI in video creation. The primary goal of Wan 2.2 is to provide an open and advanced suite of tools for generating high-quality, cinematic videos from various inputs, including text, images, and audio. Its core contribution lies in making state-of-the-art video generation technology accessible to a broader community of researchers and creators through open-sourcing its models and code. The project emphasizes cinematic aesthetics, complex motion generation, and computational efficiency, introducing several key innovations to achieve these aims.
Key Features & Innovations
Wan 2.2 introduces several groundbreaking features that set it apart from previous models:
- Effective MoE Architecture: Wan 2.2 is the first model to successfully integrate a Mixture-of-Experts (MoE) architecture into a video diffusion model. This design uses specialized expert models for different stages of the denoising process, which significantly increases the model's capacity without raising computational costs. The model has a total of 27B parameters, but only 14B are active during any given step.
- Cinematic-Level Aesthetics: The model was trained on a meticulously curated dataset with detailed labels for cinematic properties like lighting, composition, and color tone. This allows users to generate videos with precise and controllable artistic styles, achieving a professional, cinematic look.
- Complex Motion Generation: By training on a vastly expanded dataset (+65.6% more images and +83.2% more videos compared to Wan 2.1), Wan 2.2 demonstrates a superior ability to generate complex and realistic motion. It shows enhanced generalization across various motions, semantics, and aesthetics.
- Efficient High-Definition Video: The suite includes a highly efficient 5B model (TI2V-5B) that utilizes an advanced VAE for high-compression video generation. It can produce 720p video at 24 fps and is capable of running on consumer-grade GPUs like the NVIDIA RTX 4090, making high-definition AI video generation more accessible.
Model Architecture & Technical Details
The architecture of Wan 2.2 is built upon the Diffusion Transformer (DiT) paradigm and incorporates several key technical advancements.
Core Architecture
The primary models in the Wan 2.2 suite, such as the T2V-A14B, employ a Mixture-of-Experts (MoE) architecture. This framework consists of two main expert models:
- High-Noise Expert: Activated during the initial stages of the denoising process, this expert focuses on establishing the overall structure and layout of the video.
- Low-Noise Expert: Activated in the later stages, this expert is responsible for refining the details, textures, and fine-grained motion of the video.
The transition between these experts is dynamically determined by the signal-to-noise ratio (SNR) during generation. This MoE design allows the model to have a large parameter count (27B total) while keeping the number of active parameters (14B) and computational load comparable to smaller models.
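This hand-off can be pictured as a simple threshold on the per-step SNR. The sketch below is purely illustrative; the threshold, expert stand-ins, and tensor shapes are placeholders, not Wan 2.2's actual implementation:

import torch

def pick_expert(snr: float, threshold: float = 1.0) -> str:
    # Early, noisy steps (low SNR) -> high-noise expert lays out global structure;
    # later, cleaner steps (high SNR) -> low-noise expert refines detail.
    return "high_noise" if snr < threshold else "low_noise"

# Stand-ins for the two ~14B expert networks; only one runs per step,
# which is why active parameters stay at 14B even though the total is 27B.
experts = {
    "high_noise": lambda x, t: x - 0.1 * torch.randn_like(x),
    "low_noise": lambda x, t: 0.99 * x,
}

x = torch.randn(1, 4, 8, 8)           # toy latent tensor
snr_schedule = [0.2, 0.5, 1.5, 4.0]   # SNR rises as denoising proceeds
for t, snr in enumerate(snr_schedule):
    x = experts[pick_expert(snr)](x, t)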
Key Parameters & Variants
Wan 2.2 is offered in several variants, each tailored for different tasks and computational resources.
| Model Variant | Total Parameters | Key Feature | Supported Tasks |
|---|---|---|---|
| T2V-A14B | ~27B (14B active) | MoE for Text-to-Video | Text-to-Video |
| I2V-A14B | ~27B (14B active) | MoE for Image-to-Video | Image-to-Video |
| TI2V-5B | 5B | High-Compression VAE | Text-to-Video, Image-to-Video |
| S2V-14B | ~27B (14B active) | MoE for Speech-to-Video | Speech-to-Video |
| Animate-14B | ~27B (14B active) | MoE for Animation | Character Animation & Replacement |
Intended Use & Applications
Wan 2.2 is designed for a wide range of creative and academic applications. Its various models support a comprehensive set of downstream tasks, making it a versatile tool for digital artists, filmmakers, researchers, and developers.
- Cinematic Video Production: Generating high-fidelity video clips with specific artistic styles for short films, advertisements, or social media content.
- Storyboarding and Pre-visualization: Quickly creating video mockups from text descriptions or still images to visualize scenes.
- Character Animation: Animating static character images or replacing characters in existing videos with new ones while preserving motion and expression.
- Audio-Driven Content: Producing videos that are synchronized with speech or other audio tracks, suitable for creating animated avatars or visualizing audio content.
- Academic Research: Serving as a powerful, open-source foundation model for researchers exploring advancements in video generation, AI ethics, and multimodal AI.
- Creative Content Generation: Enabling artists and creators to explore new forms of digital art and storytelling by combining text, images, and audio to produce unique video content.






