Wan2.2 Media Models

Wan 2.2 is Alibaba's advanced multimodal video generation family, producing high-fidelity video from text or images. It delivers realistic motion, natural lighting, and strong prompt adherence across 480p and 720p outputs, making it well suited to creative and professional production workflows.

Explore Leading Models

Atlas Cloud brings you the latest industry-leading creative models.

Wan-2.2 Animate
Video-to-Video
Open and Advanced Large-Scale Video Generative Models.
$0.04/sec

Wan-2.2 t2v 5b 720p Lora (NEW, HOT)
Text-to-Video
Open and Advanced Large-Scale Video Generative Models.
$0.10/sec

Wan-2.2 i2v 5b 720p Lora (NEW, HOT)
Image-to-Video
Open and Advanced Large-Scale Video Generative Models.
$0.10/sec

Wan-2.2 t2v 480p Lora Ultra Fast
Text-to-Video
Open and Advanced Large-Scale Video Generative Models.
$0.10/sec

Wan-2.2 i2v 720p Ultra Fast (NEW, HOT)
Image-to-Video
Open and Advanced Large-Scale Video Generative Models.
$0.10/sec

Wan-2.2 i2v 720p Lora Ultra Fast (NEW, HOT)
Image-to-Video
Open and Advanced Large-Scale Video Generative Models.
$0.15/sec

Wan-2.2 i2v 480p Ultra Fast (NEW, HOT)
Image-to-Video
Open and Advanced Large-Scale Video Generative Models.
$0.05/sec

Wan-2.2 i2v 480p Lora Ultra Fast (NEW, HOT)
Image-to-Video
Open and Advanced Large-Scale Video Generative Models.
$0.10/sec

Wan-2.2-spicy Image-to-video
Image-to-Video
Open and Advanced Large-Scale Video Generative Models.
$0.15/sec

Wan-2.2-spicy Image-to-video Lora
Image-to-Video
Open and Advanced Large-Scale Video Generative Models.
$0.20/sec

Wan-2.2-spicy Video Extend Lora
Video-to-Video
Open and Advanced Large-Scale Video Generative Models.
$0.20/sec

Wan-2.2-spicy Video Extend
Video-to-Video
Open and Advanced Large-Scale Video Generative Models.
$0.15/sec

Wan-2.2 t2v 480p Ultra Fast (NEW)
Text-to-Video
Open and Advanced Large-Scale Video Generative Models.
$0.05/sec

Wan-2.2 t2v 5b 720p (NEW)
Text-to-Video
Open and Advanced Large-Scale Video Generative Models.
$0.05/sec

Wan-2.2 i2v 5b 720p (NEW)
Image-to-Video
Open and Advanced Large-Scale Video Generative Models.
$0.05/sec

Wan-2.2 i2v 480p (NEW)
Image-to-Video
Open and Advanced Large-Scale Video Generative Models.
$0.15/sec

Wan-2.2 i2v 720p (NEW)
Image-to-Video
Open and Advanced Large-Scale Video Generative Models.
$0.30/sec

Wan-2.2 t2v 480p (NEW)
Text-to-Video
Open and Advanced Large-Scale Video Generative Models.
$0.15/sec

Wan-2.2 t2v 720p (NEW)
Text-to-Video
Open and Advanced Large-Scale Video Generative Models.
$0.30/sec

Wan-2.2 Video Character Swap (NEW)
Image-to-Video
The Wan video character swap model replaces the main character in a video with a character from an image. It preserves the scene, lighting, and tone of the original video to ensure a seamless result.
$0.126/sec

Wan-2.2 Image To Animation (NEW)
Image-to-Video
The Wan image-to-animation model generates a video of a moving person from a character image and a reference video.
$0.084/sec
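All of the models above bill per second of generated output, so the cost of a clip is simply the per-second rate multiplied by its length. A minimal sketch (the model keys are informal shorthand for a few of the listings above; prices are copied from this page and may change):

```python
# Rough cost estimator for the per-second prices listed above.
# Keys are informal shorthand for catalog entries on this page;
# prices are copied from the listings and may change over time.
PRICE_PER_SECOND_USD = {
    "wan-2.2-animate": 0.04,
    "wan-2.2-t2v-480p-ultra-fast": 0.05,
    "wan-2.2-i2v-480p-ultra-fast": 0.05,
    "wan-2.2-t2v-720p": 0.30,
    "wan-2.2-i2v-720p": 0.30,
}

def estimate_cost(model: str, duration_seconds: float) -> float:
    """Estimated USD cost of one clip: per-second rate x clip length."""
    return PRICE_PER_SECOND_USD[model] * duration_seconds

# Example: a 5-second 720p text-to-video clip costs 5 x $0.30 = $1.50.
print(f"${estimate_cost('wan-2.2-t2v-720p', 5):.2f}")  # -> $1.50
```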

Key Highlights of Wan2.2 Media Models

Atlas Cloud brings you the latest industry-leading creative models.

End-to-End Visual Generation

Create and transform images and videos from text, images, or existing clips in one unified model suite.

High-Fidelity Output

Maintain photorealistic detail across edits and animation.

Animate Images Naturally

Turn a single photo into smooth, coherent video with realistic motion and timing.

Creative Control

Edit at the object level with prompts, sketches, or styles.

Multilingual Prompts

Prompt in English, Chinese, and more with equally strong results.

Production Ready

Fast, cost-efficient, and API-ready for scale.

What You Can Do with Wan2.2 Media Models

Atlas Cloud brings you the latest industry-leading creative models.

Generate realistic and cinematic videos directly from text or image prompts using Wan 2.2’s Mixture-of-Experts diffusion architecture.

Animate a still image into smooth, coherent video motion with strong temporal stability and expressive detail.

Transform input scenes or frames by changing style, lighting, or composition while preserving structure and motion consistency.

Produce high-quality videos efficiently with optimized inference and improved motion control compared to Wan 2.1.

Integrate Wan 2.2 models into creative, research, or production pipelines for controllable t2v and i2v generation.
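As a concrete starting point for that kind of integration, a generation request usually reduces to one authenticated POST. The sketch below is illustrative only: the URL, model identifier, request fields, and response shape are assumptions rather than the documented Atlas Cloud API, so check the official API reference for the actual contract.

```python
# Hypothetical text-to-video request sketch. The URL, model id, JSON
# fields, and response shape are illustrative assumptions, not the
# documented Atlas Cloud API.
import os
import requests

API_KEY = os.environ["ATLAS_CLOUD_API_KEY"]  # assumed env var name

resp = requests.post(
    "https://api.example.com/v1/video/generations",  # placeholder URL
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "wan-2.2-t2v-720p",  # illustrative identifier
        "prompt": "A slow dolly shot down a rain-soaked neon street at night",
        "duration_seconds": 5,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json())  # assumed to return a job id or a video URL
```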

Why Use Wan2.2 Media Models on Atlas Cloud

Combining the advanced Wan2.2 Media Models with Atlas Cloud's GPU-accelerated platform delivers unmatched performance, scalability, and developer experience.

Performance and Flexibility

Low latency:
GPU-optimized inference for real-time responses.

Unified API:
Integrate once to use Wan2.2 Media Models, GPT, Gemini, and DeepSeek (see the sketch after this section).

Transparent pricing:
Per-token billing with serverless support.

Enterprise and Scale

Developer experience:
SDKs, analytics, fine-tuning tools, and templates, all in one place.

Reliability:
99.99% availability, RBAC access control, and compliance logging.

Security and compliance:
SOC 2 Type II certification, HIPAA compliance, and US data sovereignty.
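The unified API above means one integration can reach several model families. A minimal sketch, assuming the endpoint follows the common OpenAI-compatible pattern (an assumption to verify against the Atlas Cloud docs; the base URL and model identifiers below are placeholders):

```python
# Assumes an OpenAI-compatible endpoint, so the official openai client
# can target it via base_url. The base URL and model ids are
# placeholders; check the Atlas Cloud docs for the real values.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # placeholder
    api_key="YOUR_ATLAS_CLOUD_KEY",
)

# One integration, many model families: only the model name changes.
for model in ["deepseek-chat", "glm-4", "kimi-latest"]:  # illustrative ids
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Summarize MoE in one line."}],
    )
    print(model, "->", reply.choices[0].message.content)
```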

Explore More Series

Seedance 2.0 Video Models

Seedance 2.0 (by ByteDance) is a multimodal video generation model that redefines "controllable creation," moving beyond the limitations of text or start/end frames. It supports quad-modal inputs (text, image, video, and audio) and introduces an industry-leading "Universal Reference" system. By precisely replicating the composition, camera movement, and character actions of reference assets, Seedance 2.0 solves critical issues with character consistency and physical coherence, empowering creators to act as true "directors" with deep control over their output.

View Series

Vidu Video Models

Vidu (by ShengShu Technology) is a foundational video model built on the proprietary U-ViT architecture, combining the strengths of Diffusion and Transformer models. It features superior semantic understanding and generation capabilities, producing coherent, fluid visuals that adhere to physical laws without the need for interpolation. With exceptional spatiotemporal consistency and a deep understanding of diverse cultural elements, Vidu empowers professional filmmakers and creators with a stable, efficient, and imaginative tool for video production.

View Series

MiniMax LLM Models

MiniMax is a large language model developed by MiniMax AI, focused on efficient reasoning, long-context understanding, and scalable text generation. It is designed for complex tasks such as dialogue systems, document analysis, content creation, and AI agents. With an emphasis on high performance at lower computational cost, MiniMax is well suited for enterprise applications and developer use cases where stability, efficiency, and cost control are important.

View Series

GLM LLM Models

GLM (General Language Model) is a large language model developed by ZAI (Zhipu AI) for text understanding, generation, and reasoning. It supports both Chinese and English and performs well in dialogue, content creation, translation, and code assistance. GLM is widely used in chatbots, enterprise AI systems, and developer applications due to its stable performance and versatility.

View Series

Moonshot LLM Models

Kimi is a large language model developed by Moonshot AI, designed for reasoning, coding, and long-context understanding. It performs well in complex tasks such as code generation, analysis, and intelligent assistants. With strong performance and efficient architecture, Kimi is suitable for enterprise AI applications and developer use cases. Its balance of capability and cost makes it an increasingly popular choice in the LLM ecosystem.

View Series

OpenAI Model Families

Explore OpenAI’s language and video models on Atlas Cloud: ChatGPT for advanced reasoning and interaction, and Sora-2 for physics-aware video generation.

View Series

Van Video Models

Van is a flagship video model family that retains the cinematic visuals and complex dynamics enabled by 3D VAE and Flow Matching. By leveraging proprietary compute distillation, it breaks the "quality equals cost" barrier to deliver extreme inference speeds at ultra-low cost. This makes Van the premier engine for enterprises and developers seeking high-frequency, scalable video production on a budget.

View Series

Kling 3.0 Video Models

Kling AI Video 3.0 (by Kuaishou) is a groundbreaking model designed to bridge the worlds of sound and visuals through its unique Single-pass architecture. By simultaneously generating visuals, natural voiceovers, sound effects, and ambient atmosphere, it eliminates the disjointed workflows of traditional tools. This true audio-visual integration simplifies complex post-production, providing creators with an immersive storytelling solution that significantly boosts both creative depth and output efficiency.

View Series

Veo3.1 Video Models

Veo 3.1 (by Google) is a flagship generative video model that sets a new standard for cinematic AI, delivering cinematic visuals, synchronized audio, and complex storytelling in a single workflow. Distinguished by superior adherence to cinematic terminology and physics-based consistency, it offers professional filmmakers an unparalleled tool for transforming scripts into coherent, high-fidelity productions with precise directorial control.

View Series

Sora-2 Video Models

The Sora-2 family from OpenAI is a next-generation video-and-audio generation model, enabling both text-to-video and image-to-video outputs with synchronized dialogue, sound effects, improved physical realism, and fine-grained control.

View Series

Nano Banana Image Models

Nano Banana is a fast, lightweight image generation model for playful, vibrant visuals. Optimized for speed and accessibility, it creates high-quality images with smooth shapes, bold colors, and clear compositions—perfect for mascots, stickers, icons, social posts, and fun branding.

View Series

Wan2.6 Video Models

Wan 2.6 is a next-generation AI video generation model from Alibaba's Tongyi Lab, designed for professional-quality, multimodal video creation. It combines advanced narrative understanding, multi-shot storytelling, and native audio-visual synchronization to produce smooth 1080p videos up to 15 seconds long from text and reference inputs. Wan 2.6 also supports character consistency and role-guided generation, enabling creators to turn scripts into cohesive scenes with seamless motion and lip syncing. Its efficiency and rich creative control make it ideal for short films, advertising, social media content, and automated video workflows.

View Series
