alibaba/wan-2.6/image-to-video

A speed-optimized image-to-video option that prioritizes lower latency while retaining strong visual fidelity. Ideal for iteration, batch generation, and prompt testing.


Alibaba WAN 2.6 Image-to-Video Model

Alibaba WAN 2.6 is an advanced image-to-video model on Alibaba Cloud’s DashScope. It generates high-quality videos from images and supports output resolutions of 720p and 1080p.

What makes it stand out?

  • More affordable: Wan 2.6 is streamlined and cost-effective, reducing creator expenses while offering more options.

  • One-pass A/V sync: Wan 2.6 creates a fully synchronized video (audio/voiceover + lip-sync) from a single well-structured prompt, with no separate recording or manual alignment required.

  • Multilingual friendly: Wan 2.6 reliably handles prompts in languages such as Chinese when generating A/V-synced videos.

  • Longer duration & more video size options: Wan 2.6 delivers clips of up to 10 seconds with six aspect/size options, giving more storytelling room and publishing flexibility.

  • Multi-shot storytelling: Generates cohesive multi-shot narratives, keeping key details consistent across shots and offering auto shot-split for simple prompts.

  • Video reference generation: Uses a reference video's appearance and voice to guide new videos; supports human or arbitrary subjects, single or dual performers.

  • 15s long videos: Produces videos up to 15 seconds, expanding temporal capacity for richer storytelling.

Designed For

  • Marketing teams: Fast, polished demos/tutorials—low cost, consistent style.

  • Global enterprises: Multilingual, lip-synced videos with subtitles for efficient localization.

  • Storytellers & YouTubers: Immersive narratives while maintaining cadence and quality—driving growth.

  • Corporate training teams: HD video instead of static documents—clearer key points, better communication.

Pricing

The table below lists prices for easy comparison.

Output Resolution | Duration (5s) | Duration (10s)
720p              | $0.50         | $1.00
1080p             | $0.75         | $1.50

Billing Rules

  • Minimum charge: 5 seconds

  • Per-second rate = (price per 5 seconds) ÷ 5

  • Billed duration = video length in seconds (rounded up), with a 5-second minimum

  • Total cost = billed duration × per-second rate (by output resolution)
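The rules above can be sketched as a small cost calculator (the helper name and rate table are illustrative, not part of any SDK):

```python
import math

# Price per 5 seconds, by output resolution (from the pricing table above).
RATE_PER_5S = {"720p": 0.50, "1080p": 0.75}

def billed_cost(video_seconds: float, resolution: str) -> float:
    """Round the duration up, apply the 5-second minimum, then multiply
    by the per-second rate for the chosen resolution."""
    per_second = RATE_PER_5S[resolution] / 5           # per-second rate
    billed = max(5, math.ceil(video_seconds))          # round up, 5 s minimum
    return round(billed * per_second, 4)

print(billed_cost(10, "720p"))   # a 10-second 720p video costs $1.00
```

For example, a 7.2-second 1080p video bills as 8 seconds at $0.15/second.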

How to Use

  1. Write your prompt.

  2. Upload an audio file (optional) for voice/music.

  3. Choose the video size (resolution/aspect).

  4. Select the video duration (e.g., 5s / 10s).

  5. Submit and wait for processing.

  6. Preview and download the result.
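When driving the same workflow programmatically, steps 1-5 become a submit-then-poll loop. A minimal sketch, where the field names and status strings are illustrative assumptions rather than the documented schema:

```python
import time

def build_job(prompt, size="1280*720", duration=5, audio_url=None):
    """Assemble a generation request from the choices in steps 1-4."""
    job = {"prompt": prompt, "size": size, "duration": duration}
    if audio_url:                      # step 2 is optional
        job["audio_url"] = audio_url
    return job

def poll_until_done(get_status, interval=2.0, timeout=300.0):
    """Call get_status() until it reports a terminal state (step 5)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("SUCCEEDED", "FAILED"):
            return status
        time.sleep(interval)
    raise TimeoutError("video generation did not finish in time")
```

Injecting `get_status` as a callable keeps the polling logic independent of any particular HTTP client.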

Specifications in Depth

Overview:

Model Provider: QWEN
Model Type: image-to-video
Deployment: Inferencing API; Playground
Pricing: $0.0750/second

Key Specs:

Size Cap: up to width × height (user-configurable)
LoRA Support: No
Seed Options: N/A

Create Your Next Masterpiece

🎬MULTI-SHOT VIDEO GENERATION

Wan 2.6: Professional Multi-Shot AI Video Creation

Alibaba's latest breakthrough in AI video generation. Create up to 15-second 1080p videos with multi-shot storytelling, reference-driven character consistency, and native audio-visual synchronization. The first model to truly understand storyboard logic for cinematic narratives.

Revolutionary Breakthroughs

What makes Wan 2.6 the game-changer in AI video generation

Multi-Shot Storytelling

First model to understand storyboard logic. Automatically generates sequential shots with coherent transitions, maintaining character appearance and environment consistency across scene changes—enabling complete story arcs in a single 15-second generation.

Reference-to-Video (R2V)

Upload a 2-30 second reference video to extract and preserve character appearance, movement patterns, and voice characteristics. Create consistent character performances across multiple videos with unprecedented accuracy.

Accurate Text Rendering

Industry-leading text rendering capabilities for product packaging, signage, and branded content. Generate clear, readable text within video frames—essential for marketing and commercial applications.

Core Capabilities

Extended 15-Second Duration

Generate up to 15 seconds per video with complete "Three Act" structure (Setup → Action → Resolution)

Professional 1080p Quality

Native 1080p output at 24fps with cinematic quality and enhanced visual stability

Native Audio Sync

Dialogue matches lip movements, background music aligns with pacing, sound effects trigger perfectly

Character Consistency

Maintain character appearance, costumes, and identity across shots and multiple videos

Cinematic Camera Control

Professional camera movements including pans, zooms, tracking shots, and dolly movements

Flexible Aspect Ratios

16:9 (YouTube), 9:16 (Reels), 1:1 (Square) - platform-optimized without post-production cropping

Wan 2.6 vs Wan 2.5: Major Improvements

See what's new in the latest release

Feature                 | Wan 2.6                          | Wan 2.5
Video Duration          | Up to 15 seconds                 | 10 seconds max
Multi-Shot Capability   | Understands storyboard logic     | Single shot or messy morphing
Reference Video Support | R2V mode with full preservation  | Image reference only
Character Consistency   | Excellent across shots           | Character drift issues
Motion Stability        | Reduced jitter and artifacts     | Occasional frame drift
Prompt Understanding    | Complex multi-character scenes   | Basic scene generation

Three Specialized Generation Modes

Choose the right mode for your creative workflow

Text-to-Video (T2V)

Most Popular

Generate complete videos from text prompts with enhanced multi-shot segmentation and improved prompt handling. Perfect for storytelling and creative exploration.

  • Automatic shot segmentation from single prompt
  • Multi-character interaction understanding
  • Camera movement and emotional cues
  • Environmental detail preservation

Image-to-Video (I2V)

Enhanced

Transform still images into motion videos with improved motion coherence. Ideal for product showcases, photo animation, and visual storytelling.

  • Precise text rendering for products
  • Style consistency across frames
  • Natural motion from static images
  • Narrative-driven visual optimization

Reference-to-Video (R2V)

NEW

Upload a reference video (2-30s) to preserve character appearance, movement patterns, and voice. Strongest consistency guarantee for character-driven content.

  • Full character identity preservation
  • Voice characteristics extraction
  • Movement pattern replication
  • Multi-character co-acting scenes

Perfect For

Marketing & Advertising

Product demos with text rendering, brand campaigns with character consistency, and promotional videos

Content Creation

YouTube videos, social media reels, multi-shot storytelling, and video editing workflows

E-commerce

Product showcases with accurate text, tutorial videos, and customer testimonial recreation

Education & Training

Instructional content, course materials, and multi-scene educational narratives

Entertainment

Short films, character-driven stories, cinematic sequences, and creative experiments

Pre-visualization

Film concept development, storyboard creation, and scene planning for productions

Wan 2.6 T2V, I2V, and R2V API Integration

Complete API suite for Text-to-Video, Image-to-Video, and Reference-to-Video generation

Text-to-Video API (T2V API)

Our Wan 2.6 T2V API transforms text prompts into multi-shot cinematic videos with automatic scene segmentation. Generate professional 1080p videos up to 15 seconds with native audio sync.

Multi-shot storytelling from single prompt
15-second duration with Three Act structure
Enhanced prompt understanding for complex scenes
Flexible aspect ratios: 16:9, 9:16, 1:1
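A hedged sketch of what a T2V request body could look like. The field names, model identifier, and validation limits are illustrative assumptions drawn from the capabilities listed above, not the documented schema:

```python
# Aspect ratios and duration range listed for Wan 2.6 T2V above.
ASPECT_RATIOS = {"16:9", "9:16", "1:1"}

def t2v_payload(prompt: str, aspect_ratio: str = "16:9", duration: int = 15) -> dict:
    """Build a hypothetical text-to-video request body."""
    if aspect_ratio not in ASPECT_RATIOS:
        raise ValueError(f"unsupported aspect ratio: {aspect_ratio}")
    if not 5 <= duration <= 15:
        raise ValueError("duration must be 5-15 seconds")
    return {"model": "wan2.6-t2v", "prompt": prompt,
            "aspect_ratio": aspect_ratio, "duration": duration}
```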

Image-to-Video API (I2V API)

Our Wan 2.6 I2V API brings still images to life with precise motion control and text rendering. Perfect for product videos, photo animation, and branded content creation.

Accurate text rendering for products and signage
Style consistency across animation frames
Natural motion with improved coherence
Narrative-optimized visual output
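For I2V, the request additionally carries the source image. Again a sketch with assumed field names, not the real schema:

```python
def i2v_payload(image_url: str, prompt: str = "", duration: int = 10) -> dict:
    """Build a hypothetical image-to-video request body; the source
    image is the one required input for this mode."""
    if not image_url:
        raise ValueError("I2V requires a source image")
    return {"model": "wan2.6-i2v", "image_url": image_url,
            "prompt": prompt, "duration": duration}
```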

Reference-to-Video API (R2V API)

Our Wan 2.6 R2V API preserves character identity from reference videos. Upload 2-30 second clips to extract appearance, voice, and movement patterns for consistent character generation.

Character appearance and identity preservation
Voice characteristics extraction and replication
Movement pattern analysis and reproduction
Multi-character scene support
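The R2V mode's 2-30 second reference-clip window can be enforced client-side before upload. A sketch under the same assumptions (hypothetical field names):

```python
def r2v_payload(reference_url: str, ref_seconds: float, prompt: str) -> dict:
    """Build a hypothetical reference-to-video request body, validating
    the 2-30 second reference-clip window stated above."""
    if not 2 <= ref_seconds <= 30:
        raise ValueError("reference video must be 2-30 seconds long")
    return {"model": "wan2.6-r2v", "reference_url": reference_url,
            "prompt": prompt}
```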
💡 Complete API Suite

All three Wan 2.6 API modes (T2V API, I2V API, R2V API) support RESTful architecture with comprehensive documentation. Get started with SDKs for Python, Node.js, and more. Each endpoint includes native audio-visual synchronization and full commercial usage rights.

How to Get Started with Wan 2.6

Start creating professional videos in minutes with two simple paths

API Integration

For developers building applications

  1. Sign Up & Login: Create your Atlas Cloud account or log in to access the console.

  2. Add Payment Method: Bind your credit card in the Billing section to fund your account.

  3. Generate API Key: Navigate to Console → API Keys and create your authentication key.

  4. Start Building: Use the T2V, I2V, or R2V API endpoints to integrate Wan 2.6 into your application.
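The API key from step 3 then authenticates every request. A minimal sketch, assuming the common Bearer-token convention (the actual header scheme may differ):

```python
def auth_headers(api_key: str) -> dict:
    """Build request headers carrying the Atlas Cloud API key.
    The Bearer scheme here is an assumption, not a documented detail."""
    if not api_key:
        raise ValueError("missing API key")
    return {"Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json"}
```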

Playground Experience

For quick testing and experimentation

  1. Sign Up & Login: Create your Atlas Cloud account or log in to access the platform.

  2. Add Payment Method: Bind your credit card in the Billing section to get started.

  3. Use Playground: Go to the Wan 2.6 playground, choose T2V/I2V/R2V mode, and generate videos instantly.

💡 Pro Tip: Test different generation modes in the Playground first to understand which works best for your use case, then integrate the corresponding API for production scale.

Frequently Asked Questions

What makes Wan 2.6's multi-shot capability unique?

Wan 2.6 is the first model to truly understand storyboard logic. Unlike Wan 2.5 which created messy "morphing" effects, Wan 2.6 can automatically segment a single prompt into multiple distinct shots with coherent transitions, maintaining character consistency across scene changes.

How does Reference-to-Video (R2V) work?

Upload a 2-30 second reference video, and Wan 2.6 extracts the character's appearance, movement patterns, and voice characteristics. You can then generate new videos featuring the same character with consistent identity—ideal for creating character-driven content series.

What video formats and durations are supported?

Wan 2.6 generates 1080p videos at 24fps with durations from 5 to 15 seconds. Supported aspect ratios include 16:9 (YouTube), 9:16 (Instagram Reels/TikTok), and 1:1 (square format), optimized for each platform without requiring post-production cropping.

Can Wan 2.6 render text in videos?

Yes! Wan 2.6 features industry-leading text rendering for product packaging, signage, and branded content. The model can generate clear, readable text within video frames—a critical feature that Seedance and most competitors lack.

What's the difference between T2V, I2V, and R2V modes?

T2V (Text-to-Video) generates from text prompts with multi-shot capability. I2V (Image-to-Video) animates still images with precise text rendering. R2V (Reference-to-Video) uses video references to preserve character identity across generations. Choose based on your input type and consistency needs.

Do I have commercial rights to generated videos?

Yes! Every Wan 2.6 creation comes with full commercial usage rights. Videos are production-ready for marketing campaigns, client deliverables, branded content, and commercial applications without additional licensing requirements.

Why Use Wan 2.6 on Atlas Cloud?

Leverage enterprise-grade infrastructure for your professional video generation workflows

Purpose-Built Infrastructure

Deploy Wan 2.6's multi-shot generation and R2V capabilities on infrastructure specifically optimized for demanding AI video workloads. Maximum performance for 1080p 15-second generation.

Unified API for All Models

Access Wan 2.6 (T2V, I2V, R2V) alongside 300+ AI models (LLMs, image, video, audio) through one unified API. Single integration for all your generative AI needs with consistent auth.

Competitive Pricing

Save up to 70% compared to AWS with transparent, pay-as-you-go pricing. No hidden fees, no commitments—scale from prototype to production without breaking the bank.

SOC 1 & SOC 2 Certified Security

Your reference videos and generated content are protected with SOC 1 and SOC 2 certifications and HIPAA compliance. Enterprise-grade security with encrypted transmission and storage.

99.9% Uptime SLA

Enterprise-grade reliability with guaranteed 99.9% uptime. Your Wan 2.6 multi-shot video generation is always available for production campaigns and critical content workflows.

Easy Integration

Complete integration in minutes with REST API and multi-language SDKs (Python, Node.js, Go). Switch between T2V, I2V, and R2V modes seamlessly with unified endpoint structure.

  • 99.9% uptime
  • 70% lower cost vs AWS
  • 300+ gen AI models
  • 24/7 pro support

Technical Specifications

Architecture: Advanced Transformer with Multi-Modal Understanding
Resolution: 1080p (Full HD)
Frame Rate: 24 FPS
Duration: 5-15 seconds (mode dependent)
Aspect Ratios: 16:9, 9:16, 1:1
Generation Modes: T2V, I2V, R2V
Audio: Native synchronization with lip-sync
Commercial Rights: Full commercial usage included

Experience Professional Multi-Shot Video Generation

Join content creators, marketers, and filmmakers worldwide who are revolutionizing video production with Wan 2.6's groundbreaking multi-shot storytelling and character consistency capabilities.

Start From 300+ Models,

Only at Atlas Cloud.