atlascloud/wan-2.2/image-to-video-lora

Open and Advanced Large-Scale Video Generative Models.

Wan-2.2 Image-to-Video LoRA
Image-to-Video


Each run costs $0.04. With $10 you can run approximately 250 generations.



Code Example

import requests
import time

# Step 1: Start video generation
generate_url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer $ATLASCLOUD_API_KEY"
}
data = {
    "model": "atlascloud/wan-2.2/image-to-video-lora",
    "prompt": "A beautiful sunset over the ocean with gentle waves",
    "width": 512,
    "height": 512,
    "duration": 3,
    "fps": 24,
}

generate_response = requests.post(generate_url, headers=headers, json=data)
generate_result = generate_response.json()
prediction_id = generate_result["data"]["id"]

# Step 2: Poll for result
poll_url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"

def check_status():
    while True:
        response = requests.get(poll_url, headers={"Authorization": "Bearer $ATLASCLOUD_API_KEY"})
        result = response.json()

        if result["data"]["status"] in ["completed", "succeeded"]:
            print("Generated video:", result["data"]["outputs"][0])
            return result["data"]["outputs"][0]
        elif result["data"]["status"] == "failed":
            raise Exception(result["data"]["error"] or "Generation failed")
        else:
            # Still processing, wait 2 seconds
            time.sleep(2)

video_url = check_status()

Installation

Install the required package for your programming language.

bash
pip install requests

Authentication

All API requests require authentication with an API key. You can obtain an API key from the Atlas Cloud dashboard.

bash
export ATLASCLOUD_API_KEY="your-api-key-here"

HTTP Headers

python
import os

API_KEY = os.environ.get("ATLASCLOUD_API_KEY")
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}
Keep your API key secure

Never expose your API key in client-side code or public repositories. Use environment variables or a backend proxy instead.

Making a Request

import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer $ATLASCLOUD_API_KEY"
}
data = {
    "model": "your-model",
    "prompt": "A beautiful landscape"
}

response = requests.post(url, headers=headers, json=data)
print(response.json())

Submit a Request

Submit an asynchronous generation request. The API returns a prediction ID that you can use to check the status and retrieve the result.

POST /api/v1/model/generateVideo

Request Body

import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer $ATLASCLOUD_API_KEY"
}

data = {
    "model": "atlascloud/wan-2.2/image-to-video-lora",
    "input": {
        "prompt": "A beautiful sunset over the ocean with gentle waves"
    }
}

response = requests.post(url, headers=headers, json=data)
result = response.json()

print(f"Prediction ID: {result['id']}")
print(f"Status: {result['status']}")

Response

{
  "id": "pred_abc123",
  "status": "processing",
  "model": "model-name",
  "created_at": "2025-01-01T00:00:00Z"
}

Check Status

Poll the prediction endpoint to check the current status of your request.

GET /api/v1/model/prediction/{prediction_id}

Polling Example

import requests
import time

prediction_id = "pred_abc123"
url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"
headers = { "Authorization": "Bearer $ATLASCLOUD_API_KEY" }

while True:
    response = requests.get(url, headers=headers)
    result = response.json()
    status = result["data"]["status"]
    print(f"Status: {status}")

    if status in ["completed", "succeeded"]:
        output_url = result["data"]["outputs"][0]
        print(f"Output URL: {output_url}")
        break
    elif status == "failed":
        print(f"Error: {result['data'].get('error', 'Unknown')}")
        break

    time.sleep(3)

Status Values

processing: The request is still being processed.
completed: Generation is complete; outputs are available.
succeeded: Generation succeeded; outputs are available.
failed: Generation failed; check the error field.
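
Since both "completed" and "succeeded" indicate a successful result, client code should treat them interchangeably. Below is a minimal sketch built only from the status strings listed above; the helper names are illustrative, not part of the API.

python
# Status strings documented above; both success values mean outputs are ready.
SUCCESS_STATES = {"completed", "succeeded"}
TERMINAL_STATES = SUCCESS_STATES | {"failed"}

def is_finished(status: str) -> bool:
    """Return True once polling can stop (success or failure)."""
    return status in TERMINAL_STATES

def is_successful(status: str) -> bool:
    """Return True if outputs should be available."""
    return status in SUCCESS_STATES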

Completed Response

{
  "data": {
    "id": "pred_abc123",
    "status": "completed",
    "outputs": [
      "https://storage.atlascloud.ai/outputs/result.mp4"
    ],
    "metrics": {
      "predict_time": 45.2
    },
    "created_at": "2025-01-01T00:00:00Z",
    "completed_at": "2025-01-01T00:00:10Z"
  }
}

Upload Files

Upload a file to Atlas Cloud storage and get back a URL you can use in your API requests. Uploads use multipart/form-data.

POST /api/v1/model/uploadMedia

Upload Example

import requests

url = "https://api.atlascloud.ai/api/v1/model/uploadMedia"
headers = { "Authorization": "Bearer $ATLASCLOUD_API_KEY" }

with open("image.png", "rb") as f:
    files = {"file": ("image.png", f, "image/png")}
    response = requests.post(url, headers=headers, files=files)

result = response.json()
download_url = result["data"]["download_url"]
print(f"File URL: {download_url}")

Response

{
  "data": {
    "download_url": "https://storage.atlascloud.ai/uploads/abc123/image.png",
    "file_name": "image.png",
    "content_type": "image/png",
    "size": 1024000
  }
}

Input Schema

The following parameters can be used in the request body.

Total: 0 | Required: 0 | Optional: 0

No parameters available.

Example Request Body

json
{
  "model": "atlascloud/wan-2.2/image-to-video-lora"
}

Output Schema

The API returns a prediction response containing the URLs of the generated outputs.

id (string, required): Unique identifier for the prediction.
status (string, required): Current status of the prediction. One of: processing, completed, succeeded, failed.
model (string, required): The model used for generation.
outputs (array[string]): Array of output URLs. Available when status is "completed".
error (string): Error message if status is "failed".
metrics (object): Performance metrics.
metrics.predict_time (number): Time taken for video generation, in seconds.
created_at (string, required, date-time): ISO 8601 timestamp when the prediction was created.
completed_at (string, date-time): ISO 8601 timestamp when the prediction was completed.

Example Response

json
{
  "id": "pred_abc123",
  "status": "completed",
  "model": "model-name",
  "outputs": [
    "https://storage.atlascloud.ai/outputs/result.mp4"
  ],
  "metrics": {
    "predict_time": 45.2
  },
  "created_at": "2025-01-01T00:00:00Z",
  "completed_at": "2025-01-01T00:00:10Z"
}

Atlas Cloud Skills

Atlas Cloud Skills integrates more than 300 AI models directly into your AI coding assistant. Install it in one step, then use natural language to generate images and videos and chat with LLMs.

Supported Clients

Claude Code
OpenAI Codex
Gemini CLI
Cursor
Windsurf
VS Code
Trae
GitHub Copilot
Cline
Roo Code
Amp
Goose
Replit
40+ supported clients

Installation

bash
npx skills add AtlasCloudAI/atlas-cloud-skills

Set the API Key

Get an API key from the Atlas Cloud dashboard and set it as an environment variable.

bash
export ATLASCLOUD_API_KEY="your-api-key-here"

Features

Once installed, you can access all Atlas Cloud models from your AI assistant using natural language.

Image generation: Generate images with models such as Nano Banana 2 and Z-Image.
Video creation: Create videos from text or images with Kling, Vidu, Veo, and more.
LLM chat: Chat with Qwen, DeepSeek, and other large language models.
Media upload: Upload local files for image-editing and image-to-video workflows.

MCP Server

The Atlas Cloud MCP Server connects your IDE to more than 300 AI models via the Model Context Protocol. Any MCP-compatible client is supported.

Supported Clients

Cursor
VS Code
Windsurf
Claude Code
OpenAI Codex
Gemini CLI
Cline
Roo Code
100+ supported clients

Installation

bash
npx -y atlascloud-mcp

Configuration

Add the following configuration to your IDE's MCP settings file.

json
{
  "mcpServers": {
    "atlascloud": {
      "command": "npx",
      "args": [
        "-y",
        "atlascloud-mcp"
      ],
      "env": {
        "ATLASCLOUD_API_KEY": "your-api-key-here"
      }
    }
  }
}

Available Tools

atlas_generate_image: Generate images from text prompts.
atlas_generate_video: Create videos from text or images.
atlas_chat: Chat with large language models.
atlas_list_models: Browse more than 300 available AI models.
atlas_quick_generate: One-step content creation with automatic model selection.
atlas_upload_media: Upload local files for use in API workflows.


Wan 2.2: Open and Advanced Large-Scale Video Generative Model by Alibaba Wanxiang

Model Card Overview

Model Name: Wan 2.2 Image-to-Video LoRA
Developed by: Alibaba Tongyi Wanxiang Lab
Model Type: Image-to-Video Generation with LoRA Support
Resolution: 480p, 720p (via VSR upscaling)
Frame Rate: 30 fps
Duration: 3–10 seconds
Related Links: GitHub: https://github.com/Wan-Video/Wan2.2, Hugging Face: https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B, Paper (arXiv): https://arxiv.org/abs/2503.20314

Introduction

Wan 2.2 is a significant upgrade to the Wan series of foundational video models, designed to push the boundaries of generative AI in video creation. This image-to-video LoRA variant takes a reference image as the first frame and generates a high-quality video, with full support for custom LoRA weights to fine-tune the generation style, motion characteristics, or subject identity.

The model generates videos at 480p natively and supports 720p output via Video Super Resolution (VSR) upscaling, delivering smooth 30 fps playback at both resolutions.
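
For reference, here is a minimal end-to-end sketch of an image-to-video call that uploads a first-frame image via the uploadMedia endpoint documented above and then starts generation. The "image", "duration", and "fps" field names are assumptions for illustration only; this model's published input schema does not list its parameters, so check the schema or the playground for the exact names.

python
import requests

API_KEY = "your-api-key-here"  # or read from the ATLASCLOUD_API_KEY environment variable
headers = {"Authorization": f"Bearer {API_KEY}"}

# 1) Upload the reference image that will serve as the first frame of the video.
with open("first_frame.png", "rb") as f:
    upload = requests.post(
        "https://api.atlascloud.ai/api/v1/model/uploadMedia",
        headers=headers,
        files={"file": ("first_frame.png", f, "image/png")},
    )
image_url = upload.json()["data"]["download_url"]

# 2) Start generation from that image.
# NOTE: "image", "duration", and "fps" are assumed parameter names.
data = {
    "model": "atlascloud/wan-2.2/image-to-video-lora",
    "prompt": "Gentle camera push-in as waves roll onto the shore at sunset",
    "image": image_url,
    "duration": 5,
    "fps": 30,
}
response = requests.post(
    "https://api.atlascloud.ai/api/v1/model/generateVideo",
    headers={**headers, "Content-Type": "application/json"},
    json=data,
)
print(response.json())  # then poll the prediction ID as shown in "Check Status"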

Key Features & Innovations

  • Effective MoE Architecture: Wan 2.2 integrates a Mixture-of-Experts (MoE) architecture into the video diffusion model. Specialized expert models handle different stages of the denoising process, increasing model capacity without raising computational costs. The model has 27B total parameters with only 14B active during any given step.

  • Cinematic-Level Aesthetics: Trained on a meticulously curated dataset with detailed labels for cinematic properties like lighting, composition, and color tone. This allows generation of videos with precise and controllable artistic styles, achieving a professional, cinematic look.

  • Complex Motion Generation: Trained on a vastly expanded dataset (+65.6% more images and +83.2% more videos compared to Wan 2.1), Wan 2.2 demonstrates superior ability to generate complex and realistic motion with enhanced generalization across motions, semantics, and aesthetics.

  • Custom LoRA Support: This variant supports user-provided LoRA weights for fine-grained style and motion control. Three separate LoRA input channels are available (a hypothetical request sketch follows this feature list):

    • high_noise_loras — Applied to the high-noise expert (transformer stage), influencing overall structure and layout.
    • low_noise_loras — Applied to the low-noise expert (transformer_2 stage), influencing fine details and textures.
    • loras — General-purpose LoRA input where the module is auto-inferred from the safetensors filename.
  • VSR-Enhanced Output: All output videos are delivered at 30 fps. When 720p resolution is selected, the model leverages Video Super Resolution to upscale from a 480p base generation, preserving fine details while achieving higher resolution output.
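
To make the three channels concrete, the hypothetical request body below shows where each one might be supplied. The channel names high_noise_loras, low_noise_loras, and loras come from the list above, but the payload shape (a list of URL/weight objects) and the image field are assumptions; consult the model's input schema before relying on them.

python
# Hypothetical request body illustrating the three LoRA input channels.
# The list-of-{"url", "weight"} shape is an assumption, not a documented schema.
data = {
    "model": "atlascloud/wan-2.2/image-to-video-lora",
    "prompt": "A hand-painted watercolor seascape slowly coming to life",
    "image": "https://storage.atlascloud.ai/uploads/abc123/image.png",
    # Applied to the high-noise expert (transformer): overall structure and layout
    "high_noise_loras": [{"url": "https://example.com/structure_lora.safetensors", "weight": 0.8}],
    # Applied to the low-noise expert (transformer_2): fine details and textures
    "low_noise_loras": [{"url": "https://example.com/texture_lora.safetensors", "weight": 0.6}],
    # General-purpose channel; the target module is inferred from the safetensors filename
    "loras": [{"url": "https://example.com/style_lora.safetensors", "weight": 1.0}],
}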

Model Architecture

The architecture is built upon the Diffusion Transformer (DiT) paradigm with a Mixture-of-Experts (MoE) framework:

  1. High-Noise Expert: Activated during initial denoising stages, establishing overall structure and layout.
  2. Low-Noise Expert: Activated in later stages, refining details, textures, and fine-grained motion.

The transition between experts is dynamically determined by the signal-to-noise ratio (SNR) during generation. Custom LoRA weights can be applied to each expert independently, enabling precise control over different aspects of the generation pipeline.

Intended Use & Applications

  • Stylized Video Production: Generating videos with custom visual styles by applying LoRA weights trained on specific aesthetic data.
  • Character & Subject Consistency: Using identity-preserving LoRAs to maintain consistent characters across multiple video generations.
  • Cinematic Video Production: Generating high-fidelity video clips from reference images for short films, advertisements, or social media content.
  • Creative Experimentation: Combining multiple LoRAs to explore novel visual effects and motion styles.
  • Academic Research: Serving as a powerful foundation model for researchers exploring LoRA-based fine-tuning techniques in video generation.
