alibaba/wan-2.2-spicy/video-extend

Open and Advanced Large-Scale Video Generative Models.

VIDEO-TO-VIDEO
Wan-2.2-spicy Video Extend

Each run costs $0.032. $10 covers about 312 runs.

Code example

import os
import time

import requests

# Read the API key from the environment (see "HTTP headers" below)
API_KEY = os.environ["ATLASCLOUD_API_KEY"]

# Step 1: Start video generation
generate_url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}",
}
data = {
    "model": "alibaba/wan-2.2-spicy/video-extend",
    "prompt": "A beautiful sunset over the ocean with gentle waves",
    "width": 512,
    "height": 512,
    "duration": 3,
    "fps": 24,
}

generate_response = requests.post(generate_url, headers=headers, json=data)
generate_result = generate_response.json()
prediction_id = generate_result["data"]["id"]

# Step 2: Poll the prediction endpoint until the job finishes
poll_url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"

def check_status():
    while True:
        response = requests.get(poll_url, headers={"Authorization": f"Bearer {API_KEY}"})
        result = response.json()

        if result["data"]["status"] in ["completed", "succeeded"]:
            print("Generated video:", result["data"]["outputs"][0])
            return result["data"]["outputs"][0]
        elif result["data"]["status"] == "failed":
            raise Exception(result["data"]["error"] or "Generation failed")
        else:
            # Still processing; wait 2 seconds before polling again
            time.sleep(2)

video_url = check_status()

Installation

Install the required package for your programming language.

bash
pip install requests

Authentication

All API requests must be authenticated with an API key. You can get one from the Atlas Cloud dashboard.

bash
export ATLASCLOUD_API_KEY="your-api-key-here"

HTTP headers

python
import os

API_KEY = os.environ.get("ATLASCLOUD_API_KEY")
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}
Keep your API key secure

Never expose your API key in client-side code or public repositories. Use environment variables or a backend proxy instead.
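
As a minimal sketch of the backend-proxy approach (assuming Flask; the /api/generate route is our own choice, not part of the Atlas Cloud API), the key stays on the server and clients never see it:

python
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
API_KEY = os.environ["ATLASCLOUD_API_KEY"]  # read server-side; never shipped to clients

@app.route("/api/generate", methods=["POST"])
def generate():
    # Forward the client's payload to Atlas Cloud, attaching the key server-side
    upstream = requests.post(
        "https://api.atlascloud.ai/api/v1/model/generateVideo",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        json=request.get_json(),
    )
    return jsonify(upstream.json()), upstream.status_code

if __name__ == "__main__":
    app.run(port=8000)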

Submit a request

Submit an asynchronous generation request. The API returns a prediction ID that you can use to check the status and retrieve the result.

POST /api/v1/model/generateVideo

Request body

import os

import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}",
}

data = {
    "model": "alibaba/wan-2.2-spicy/video-extend",
    "input": {
        "prompt": "A beautiful sunset over the ocean with gentle waves"
    }
}

response = requests.post(url, headers=headers, json=data)
result = response.json()

print(f"Prediction ID: {result['id']}")
print(f"Status: {result['status']}")

Response

{
  "id": "pred_abc123",
  "status": "processing",
  "model": "model-name",
  "created_at": "2025-01-01T00:00:00Z"
}

Check status

Poll the prediction endpoint to check the current status of your request.

GET /api/v1/model/prediction/{prediction_id}

Polling example

import os
import time

import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]

prediction_id = "pred_abc123"
url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"
headers = {"Authorization": f"Bearer {API_KEY}"}

while True:
    response = requests.get(url, headers=headers)
    result = response.json()
    status = result["data"]["status"]
    print(f"Status: {status}")

    if status in ["completed", "succeeded"]:
        output_url = result["data"]["outputs"][0]
        print(f"Output URL: {output_url}")
        break
    elif status == "failed":
        print(f"Error: {result['data'].get('error', 'Unknown')}")
        break

    time.sleep(3)

Status values

processing: The request is still being processed.
completed: Generation finished; outputs are available.
succeeded: Generation succeeded; outputs are available.
failed: Generation failed; check the error field.
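
Because a request can sit in processing for a long time, it is worth bounding the polling loop. Here is a minimal sketch with a timeout and gradual backoff (the 600 s timeout and delay values are arbitrary choices, not API requirements):

python
import os
import time

import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]

def wait_for_prediction(prediction_id, timeout=600, initial_delay=2, max_delay=15):
    """Poll until the prediction reaches a terminal status or the timeout expires."""
    url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"
    headers = {"Authorization": f"Bearer {API_KEY}"}
    deadline = time.monotonic() + timeout
    delay = initial_delay

    while time.monotonic() < deadline:
        data = requests.get(url, headers=headers).json()["data"]
        status = data["status"]
        if status in ("completed", "succeeded"):
            return data["outputs"][0]
        if status == "failed":
            raise RuntimeError(data.get("error") or "Generation failed")
        time.sleep(delay)
        delay = min(delay * 1.5, max_delay)  # back off gradually between polls

    raise TimeoutError(f"Prediction {prediction_id} did not finish within {timeout}s")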

Completed response

{
  "data": {
    "id": "pred_abc123",
    "status": "completed",
    "outputs": [
      "https://storage.atlascloud.ai/outputs/result.mp4"
    ],
    "metrics": {
      "predict_time": 45.2
    },
    "created_at": "2025-01-01T00:00:00Z",
    "completed_at": "2025-01-01T00:00:10Z"
  }
}
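
The entries in outputs are plain HTTPS URLs, so the finished video can be fetched with an ordinary GET request. A short sketch (the local filename is arbitrary):

python
import requests

output_url = "https://storage.atlascloud.ai/outputs/result.mp4"

# Stream the video to disk instead of buffering the whole file in memory
with requests.get(output_url, stream=True) as response:
    response.raise_for_status()
    with open("result.mp4", "wb") as f:
        for chunk in response.iter_content(chunk_size=1 << 20):
            f.write(chunk)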

Upload files

Upload a file to Atlas Cloud storage and get back a URL you can use in your API requests. Uploads use multipart/form-data.

POST /api/v1/model/uploadMedia

Upload example

import os

import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]

url = "https://api.atlascloud.ai/api/v1/model/uploadMedia"
headers = {"Authorization": f"Bearer {API_KEY}"}

with open("image.png", "rb") as f:
    files = {"file": ("image.png", f, "image/png")}
    response = requests.post(url, headers=headers, files=files)

result = response.json()
download_url = result["data"]["download_url"]
print(f"File URL: {download_url}")

Response

{
  "data": {
    "download_url": "https://storage.atlascloud.ai/uploads/abc123/image.png",
    "file_name": "image.png",
    "content_type": "image/png",
    "size": 1024000
  }
}
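
For a video-extend model, the natural workflow is to upload the source clip and then reference its URL in the generation request. The input schema below lists no documented parameters, so the "video" field in this sketch is a hypothetical placeholder for illustration only; check the model's actual schema before relying on it:

python
import os

import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]
headers = {"Authorization": f"Bearer {API_KEY}"}

# 1. Upload the source clip to Atlas Cloud storage
with open("source.mp4", "rb") as f:
    upload = requests.post(
        "https://api.atlascloud.ai/api/v1/model/uploadMedia",
        headers=headers,
        files={"file": ("source.mp4", f, "video/mp4")},
    )
video_url = upload.json()["data"]["download_url"]

# 2. Reference the uploaded clip in the generation request.
#    NOTE: "video" is a hypothetical field name used for illustration;
#    the documented input schema for this model lists no parameters.
response = requests.post(
    "https://api.atlascloud.ai/api/v1/model/generateVideo",
    headers={**headers, "Content-Type": "application/json"},
    json={
        "model": "alibaba/wan-2.2-spicy/video-extend",
        "input": {
            "video": video_url,
            "prompt": "Continue the scene with gentle waves at sunset",
        },
    },
)
print(response.json())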

Input schema

The following parameters can be used in the request body.

Total: 0 · Required: 0 · Optional: 0

No parameters available.

Example request body

json
{
  "model": "alibaba/wan-2.2-spicy/video-extend"
}

Output schema

The API returns a prediction response containing the URLs of the generated outputs.

id (string, required)
Unique identifier for the prediction.

status (string, required)
Current status of the prediction: processing, completed, succeeded, or failed.

model (string, required)
The model used for generation.

outputs (array[string])
Array of output URLs. Available when status is "completed".

error (string)
Error message if status is "failed".

metrics (object)
Performance metrics.

metrics.predict_time (number)
Time taken for video generation in seconds.

created_at (string, required)
ISO 8601 timestamp when the prediction was created. Format: date-time.

completed_at (string)
ISO 8601 timestamp when the prediction was completed. Format: date-time.

Example response

json
{
  "id": "pred_abc123",
  "status": "completed",
  "model": "model-name",
  "outputs": [
    "https://storage.atlascloud.ai/outputs/result.mp4"
  ],
  "metrics": {
    "predict_time": 45.2
  },
  "created_at": "2025-01-01T00:00:00Z",
  "completed_at": "2025-01-01T00:00:10Z"
}
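
For typed handling of this response in Python, the output schema above maps naturally onto a TypedDict. A sketch (the type names and helper are ours, not part of any Atlas Cloud SDK):

python
from typing import TypedDict

class Metrics(TypedDict, total=False):
    predict_time: float  # seconds spent generating the video

class Prediction(TypedDict, total=False):
    id: str              # required
    status: str          # required: processing | completed | succeeded | failed
    model: str           # required
    outputs: list[str]   # present when status is "completed"
    error: str           # present when status is "failed"
    metrics: Metrics
    created_at: str      # required, ISO 8601
    completed_at: str    # ISO 8601

def is_done(prediction: Prediction) -> bool:
    """True once the prediction has reached a terminal state."""
    return prediction.get("status") in ("completed", "succeeded", "failed")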

Atlas Cloud Skills

Atlas Cloud Skills integrates 300+ AI models directly into your AI coding assistant. Install with one command, then use natural language to generate images and videos and to chat with LLMs.

Supported clients

Claude Code
OpenAI Codex
Gemini CLI
Cursor
Windsurf
VS Code
Trae
GitHub Copilot
Cline
Roo Code
Amp
Goose
Replit
40+ supported clients

Installation

bash
npx skills add AtlasCloudAI/atlas-cloud-skills

Set your API key

Get an API key from the Atlas Cloud dashboard and set it as an environment variable.

bash
export ATLASCLOUD_API_KEY="your-api-key-here"

Features

After installation, you can access all Atlas Cloud models from your AI assistant using natural language.

Image generation: Generate images with models such as Nano Banana 2 and Z-Image.
Video creation: Create videos from text or images with Kling, Vidu, Veo, and more.
LLM chat: Chat with Qwen, DeepSeek, and other large language models.
Media upload: Upload local files for image-editing and image-to-video workflows.

MCP Server

The Atlas Cloud MCP Server connects your IDE to 300+ AI models via the Model Context Protocol. It works with any MCP-compatible client.

Supported clients

Cursor
VS Code
Windsurf
Claude Code
OpenAI Codex
Gemini CLI
Cline
Roo Code
100+ supported clients

Installation

bash
npx -y atlascloud-mcp

Configuration

Add the following configuration to your IDE's MCP settings file.

json
{
  "mcpServers": {
    "atlascloud": {
      "command": "npx",
      "args": [
        "-y",
        "atlascloud-mcp"
      ],
      "env": {
        "ATLASCLOUD_API_KEY": "your-api-key-here"
      }
    }
  }
}

Available tools

atlas_generate_image: Generate images from text prompts.
atlas_generate_video: Create videos from text or images.
atlas_chat: Chat with large language models.
atlas_list_models: Browse 300+ available AI models.
atlas_quick_generate: One-step content creation with automatic model selection.
atlas_upload_media: Upload local files for API workflows.

Wan 2.2: Open and Advanced Large-Scale Video Generative Model by Alibaba Wanxiang

Model Card Overview

Model Name: Wan 2.2
Developed by: Alibaba Tongyi Wanxiang Lab
Release Date: July 28, 2025
Model Type: Video Generation
Related Links: GitHub: https://github.com/Wan-Video/Wan2.2 | Hugging Face: https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B | Paper (arXiv): https://arxiv.org/abs/2503.20314

Introduction

Wan 2.2 is a significant upgrade to the Wan series of foundational video models, designed to push the boundaries of generative AI in video creation. The primary goal of Wan 2.2 is to provide an open and advanced suite of tools for generating high-quality, cinematic videos from various inputs, including text, images, and audio. Its core contribution lies in making state-of-the-art video generation technology accessible to a broader community of researchers and creators through open-sourcing its models and code. The project emphasizes cinematic aesthetics, complex motion generation, and computational efficiency, introducing several key innovations to achieve these aims.

Key Features & Innovations

Wan 2.2 introduces several groundbreaking features that set it apart from previous models:

  • Effective MoE Architecture: Wan 2.2 is the first model to successfully integrate a Mixture-of-Experts (MoE) architecture into a video diffusion model. This design uses specialized expert models for different stages of the denoising process, which significantly increases the model's capacity without raising computational costs. The model has a total of 27B parameters, but only 14B are active during any given step.

  • Cinematic-Level Aesthetics: The model was trained on a meticulously curated dataset with detailed labels for cinematic properties like lighting, composition, and color tone. This allows users to generate videos with precise and controllable artistic styles, achieving a professional, cinematic look.

  • Complex Motion Generation: By training on a vastly expanded dataset (+65.6% more images and +83.2% more videos compared to Wan 2.1), Wan 2.2 demonstrates a superior ability to generate complex and realistic motion. It shows enhanced generalization across various motions, semantics, and aesthetics.

  • Efficient High-Definition Video: The suite includes a highly efficient 5B model (TI2V-5B) that utilizes an advanced VAE for high-compression video generation. It can produce 720p video at 24 fps and is capable of running on consumer-grade GPUs like the NVIDIA RTX 4090, making high-definition AI video generation more accessible.

Model Architecture & Technical Details

The architecture of Wan 2.2 is built upon the Diffusion Transformer (DiT) paradigm and incorporates several key technical advancements.

Core Architecture

The primary models in the Wan 2.2 suite, such as the T2V-A14B, employ a Mixture-of-Experts (MoE) architecture. This framework consists of two main expert models:

  1. High-Noise Expert: Activated during the initial stages of the denoising process, this expert focuses on establishing the overall structure and layout of the video.
  2. Low-Noise Expert: Activated in the later stages, this expert is responsible for refining the details, textures, and fine-grained motion of the video.

The transition between these experts is dynamically determined by the signal-to-noise ratio (SNR) during generation. This MoE design allows the model to have a large parameter count (27B total) while keeping the number of active parameters (14B) and computational load comparable to smaller models.
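
To make the expert-routing idea concrete, here is an illustrative sketch (not Wan 2.2's actual code; the SNR threshold and module shapes are invented for illustration) of a denoiser that dispatches to one of two experts based on noise level:

python
import torch
import torch.nn as nn

class TwoExpertDenoiser(nn.Module):
    """Illustrative two-expert MoE routing by noise level (not Wan 2.2's real code)."""

    def __init__(self, dim=64, snr_threshold=1.0):
        super().__init__()
        # Each expert is a full denoiser; only one runs per step,
        # so active parameters stay at half of the total count.
        self.high_noise_expert = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.low_noise_expert = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.snr_threshold = snr_threshold  # invented value; the real boundary is tuned

    def forward(self, x, snr):
        # Early steps (low SNR, mostly noise): the high-noise expert lays out
        # global structure. Later steps (high SNR): the low-noise expert
        # refines details, textures, and fine-grained motion.
        if snr < self.snr_threshold:
            return self.high_noise_expert(x)
        return self.low_noise_expert(x)

model = TwoExpertDenoiser()
latent = torch.randn(1, 64)
out_early = model(latent, snr=0.2)  # routed to the high-noise expert
out_late = model(latent, snr=4.0)   # routed to the low-noise expert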

Key Parameters & Variants

Wan 2.2 is offered in several variants, each tailored for different tasks and computational resources.

Model Variant | Total Parameters  | Key Feature             | Supported Tasks
T2V-A14B      | ~27B (14B active) | MoE for Text-to-Video   | Text-to-Video
I2V-A14B      | ~27B (14B active) | MoE for Image-to-Video  | Image-to-Video
TI2V-5B       | 5B                | High-Compression VAE    | Text-to-Video, Image-to-Video
S2V-14B       | ~27B (14B active) | MoE for Speech-to-Video | Speech-to-Video
Animate-14B   | ~27B (14B active) | MoE for Animation       | Character Animation & Replacement

Intended Use & Applications

Wan 2.2 is designed for a wide range of creative and academic applications. Its various models support a comprehensive set of downstream tasks, making it a versatile tool for digital artists, filmmakers, researchers, and developers.

  • Cinematic Video Production: Generating high-fidelity video clips with specific artistic styles for short films, advertisements, or social media content.
  • Storyboarding and Pre-visualization: Quickly creating video mockups from text descriptions or still images to visualize scenes.
  • Character Animation: Animating static character images or replacing characters in existing videos with new ones while preserving motion and expression.
  • Audio-Driven Content: Producing videos that are synchronized with speech or other audio tracks, suitable for creating animated avatars or visualizing audio content.
  • Academic Research: Serving as a powerful, open-source foundation model for researchers exploring advancements in video generation, AI ethics, and multimodal AI.
  • Creative Content Generation: Enabling artists and creators to explore new forms of digital art and storytelling by combining text, images, and audio to produce unique video content.

探索全部模型