
Seedance v1.5 Pro Image-to-Video API by ByteDance
Native audio-visual joint generation model by ByteDance. Supports unified multimodal generation with precise audio-visual sync, cinematic camera control, and enhanced narrative coherence.
Your request will cost $0.047 per run. For $10 you can run this model approximately 212 times.
Code Example

import os
import time

import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]

# Step 1: Start video generation
generate_url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}",
}
data = {
    "model": "bytedance/seedance-v1.5-pro/image-to-video",
    "prompt": "A beautiful sunset over the ocean with gentle waves",
    "width": 512,
    "height": 512,
    "duration": 3,
    "fps": 24,
}
generate_response = requests.post(generate_url, headers=headers, json=data)
generate_result = generate_response.json()
prediction_id = generate_result["data"]["id"]

# Step 2: Poll for result
poll_url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"

def check_status():
    while True:
        response = requests.get(poll_url, headers={"Authorization": f"Bearer {API_KEY}"})
        result = response.json()
        if result["data"]["status"] in ["completed", "succeeded"]:
            print("Generated video:", result["data"]["outputs"][0])
            return result["data"]["outputs"][0]
        elif result["data"]["status"] == "failed":
            raise Exception(result["data"]["error"] or "Generation failed")
        else:
            # Still processing, wait 2 seconds
            time.sleep(2)

video_url = check_status()

Install
Install the required package for your language.
pip install requests

Authentication
All API requests require authentication via an API key. You can get your API key from the Atlas Cloud dashboard.
export ATLASCLOUD_API_KEY="your-api-key-here"

HTTP Headers
import os

API_KEY = os.environ.get("ATLASCLOUD_API_KEY")

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}

Never expose your API key in client-side code or public repositories. Use environment variables or a backend proxy instead.
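One way to follow the backend-proxy advice above is a tiny relay service that keeps the API key on the server. The sketch below uses only the Python standard library; the endpoint URL comes from the examples on this page, while the handler structure and `run` helper are illustrative.

```python
import os
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Endpoint taken from the examples on this page.
UPSTREAM = "https://api.atlascloud.ai/api/v1/model/generateVideo"

class ProxyHandler(BaseHTTPRequestHandler):
    """Forwards a client's JSON body upstream, attaching the key server-side."""

    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        req = urllib.request.Request(
            UPSTREAM,
            data=body,
            headers={
                "Content-Type": "application/json",
                # The key lives only in the server's environment,
                # never in client-side code.
                "Authorization": f"Bearer {os.environ.get('ATLASCLOUD_API_KEY', '')}",
            },
        )
        with urllib.request.urlopen(req) as resp:
            payload = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

def run(port=8000):
    # Start the relay; browsers POST to this server instead of Atlas Cloud.
    HTTPServer(("127.0.0.1", port), ProxyHandler).serve_forever()
```

Call `run()` on your backend and point client-side code at it; a production proxy would also add authentication and rate limiting of its own.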
Submit a request
import os

import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"
}
data = {
    "model": "your-model",
    "prompt": "A beautiful landscape"
}
response = requests.post(url, headers=headers, json=data)
print(response.json())

Submit a Request
Submit an asynchronous generation request. The API returns a prediction ID that you can use to check the status and retrieve the result.
/api/v1/model/generateVideo

Request Body
import os

import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"
}
data = {
    "model": "bytedance/seedance-v1.5-pro/image-to-video",
    "input": {
        "prompt": "A beautiful sunset over the ocean with gentle waves"
    }
}
response = requests.post(url, headers=headers, json=data)
result = response.json()
print(f"Prediction ID: {result['id']}")
print(f"Status: {result['status']}")

Response
{
    "id": "pred_abc123",
    "status": "processing",
    "model": "model-name",
    "created_at": "2025-01-01T00:00:00Z"
}

Check Status
Poll the prediction endpoint to check the current status of your request.
/api/v1/model/prediction/{prediction_id}

Polling Example
import os
import time

import requests

prediction_id = "pred_abc123"
url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"
headers = {"Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"}

while True:
    response = requests.get(url, headers=headers)
    result = response.json()
    status = result["data"]["status"]
    print(f"Status: {status}")
    if status in ["completed", "succeeded"]:
        output_url = result["data"]["outputs"][0]
        print(f"Output URL: {output_url}")
        break
    elif status == "failed":
        print(f"Error: {result['data'].get('error', 'Unknown')}")
        break
    time.sleep(3)

Status Values
- processing: The request is still being processed.
- completed: Generation is complete. Outputs are available.
- succeeded: Generation succeeded. Outputs are available.
- failed: Generation failed. Check the error field.

Completed Response
{
    "data": {
        "id": "pred_abc123",
        "status": "completed",
        "outputs": [
            "https://storage.atlascloud.ai/outputs/result.mp4"
        ],
        "metrics": {
            "predict_time": 45.2
        },
        "created_at": "2025-01-01T00:00:00Z",
        "completed_at": "2025-01-01T00:00:10Z"
    }
}

Upload Files
Upload files to Atlas Cloud storage and get a URL you can use in your API requests. Use multipart/form-data to upload.
/api/v1/model/uploadMedia

Upload Example
import os

import requests

url = "https://api.atlascloud.ai/api/v1/model/uploadMedia"
headers = {"Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"}

with open("image.png", "rb") as f:
    files = {"file": ("image.png", f, "image/png")}
    response = requests.post(url, headers=headers, files=files)

result = response.json()
download_url = result["data"]["download_url"]
print(f"File URL: {download_url}")

Response
{
    "data": {
        "download_url": "https://storage.atlascloud.ai/uploads/abc123/image.png",
        "file_name": "image.png",
        "content_type": "image/png",
        "size": 1024000
    }
}

Input Schema
The following parameters are accepted in the request body.
Example Request Body
{
    "model": "bytedance/seedance-v1.5-pro/image-to-video"
}

Output Schema
The API returns a prediction response with the generated output URLs.
Example Response
{
    "id": "pred_abc123",
    "status": "completed",
    "model": "model-name",
    "outputs": [
        "https://storage.atlascloud.ai/outputs/result.mp4"
    ],
    "metrics": {
        "predict_time": 45.2
    },
    "created_at": "2025-01-01T00:00:00Z",
    "completed_at": "2025-01-01T00:00:10Z"
}

Atlas Cloud Skills
Atlas Cloud Skills integrates 300+ AI models directly into your AI coding assistant. One command to install, then use natural language to generate images, videos, and chat with LLMs.
Install
npx skills add AtlasCloudAI/atlas-cloud-skills

Setup API Key
Get your API key from the Atlas Cloud dashboard and set it as an environment variable.
export ATLASCLOUD_API_KEY="your-api-key-here"

Capabilities
Once installed, you can use natural language in your AI assistant to access all Atlas Cloud models.
MCP Server
Atlas Cloud MCP Server connects your IDE with 300+ AI models via the Model Context Protocol. Works with any MCP-compatible client.
Install
npx -y atlascloud-mcp

Configuration
Add the following configuration to your IDE's MCP settings file.
{
    "mcpServers": {
        "atlascloud": {
            "command": "npx",
            "args": [
                "-y",
                "atlascloud-mcp"
            ],
            "env": {
                "ATLASCLOUD_API_KEY": "your-api-key-here"
            }
        }
    }
}
Seedance 1.5 Pro
Sound and Vision, All in One Take
ByteDance's revolutionary AI model that generates perfectly synchronized audio and video simultaneously from a single unified process. Experience true native audio-visual generation with millisecond-precision lip-sync across 8+ languages.
Revolutionary Innovation
What makes Seedance 1.5 Pro fundamentally different
Dual-Branch Architecture
Uses a 4.5 billion parameter Dual-Branch Diffusion Transformer (DB-DiT) that generates audio and video simultaneously—not sequentially—ensuring perfect synchronization from the start.
Phoneme-Level Lip Sync
Understands individual phonemes and maps them correctly to lip shapes across different languages, achieving millisecond-precision audio-visual synchronization.
Narrative Auto-Completion
Intelligently fills narrative gaps based on prompt intent, maintaining coherent storytelling across characters' emotions, expressions, and actions.
Core Capabilities
Native 1080p Quality
Professional HD video output with cinematic quality at 24fps, supporting 4-12 second durations
8+ Language Support
English, Mandarin, Japanese, Korean, Spanish, Portuguese, Indonesian, plus Chinese dialects
Cinematic Camera Control
Complex camera movements including dolly zooms, tracking shots, and professional film techniques
Multi-Speaker Dialogue
Natural conversations with multiple characters, distinct vocal identities, and realistic turn-taking
Physics-Accurate Motion
Realistic hair dynamics, fluid behaviors, and material interactions for lifelike visuals
Character Consistency
Maintains clothing, faces, and style across scenes for complete story continuity
Perfect For
Short Drama Production
Create emotion-forward narrative clips with realistic character dialogue and cinematic lighting
Advertising Creatives
Performance-heavy ad content with natural acting, perfect lip-sync, and professional production value
Multilingual Content
Reach global audiences with native-quality audio-visual content in 8+ languages
Educational Videos
Engaging instructional content with clear narration and synchronized visual demonstrations
Social Media
Viral-ready short-form content with professional audio-visual quality for maximum engagement
Film Production
Pre-visualization and concept development with realistic character performances and dialogue
Seedance 1.5 Pro T2V and I2V API Integration
Powerful Text-to-Video (T2V) API and Image-to-Video (I2V) API endpoints for seamless integration
Text-to-Video API (T2V API)
Our Seedance 1.5 Pro T2V API transforms text prompts into complete cinematic videos with native audio-visual synchronization. Generate scenes, camera movements, character actions, and dialogue in a single Text-to-Video API call.
Perfect for:
- Automated video content creation at scale
- Dynamic storytelling and narrative videos
- Marketing campaign automation
- Educational content generation
Image-to-Video API (I2V API)
Our Seedance 1.5 Pro I2V API brings still images to life with motion, camera movement, and synchronized audio. The Image-to-Video API features advanced frame control to define precise start and end points for your animations.
Perfect for:
- Photo animation and enhancement
- Character consistency in video sequences
- Product showcase with motion effects
- Architectural visualization and walkthroughs
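None of the code samples above shows how the source image is attached to an image-to-video request. The sketch below passes the URL returned by the Upload Files endpoint in a field named `image`; that field name is an assumption, so confirm the exact key against the model's input schema before relying on it.

```python
# Build an image-to-video request body; the still frame is the URL
# returned by a prior uploadMedia call (see "Upload Files" above).
image_url = "https://storage.atlascloud.ai/uploads/abc123/image.png"

data = {
    "model": "bytedance/seedance-v1.5-pro/image-to-video",
    "prompt": "The camera slowly pulls back as gentle waves roll onto the shore",
    # NOTE: "image" is an assumed field name for the source frame;
    # verify the exact key in the model's input schema.
    "image": image_url,
    "duration": 5,
}
```

POST `data` to /api/v1/model/generateVideo with the same headers as the earlier examples, then poll the returned prediction ID as usual.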
Simple T2V and I2V API Integration
Both T2V API and I2V API modes support RESTful architecture with comprehensive documentation. Get started in minutes with SDKs for Python, Node.js, and more. All Seedance 1.5 Pro API endpoints include automatic audio generation with phoneme-level lip synchronization for seamless video creation.
How to Get Started
Start generating videos in minutes with two simple paths
API Integration
For developers building applications
Sign Up & Login
Create your Atlas Cloud account or login to access the console
Add Payment Method
Bind your credit card in the Billing section to fund your account
Generate API Key
Navigate to Console → API Keys and create your authentication key
Start Building
Use the API key to make requests and integrate Seedance into your application
Playground Experience
For quick testing and experimentation
Sign Up & Login
Create your Atlas Cloud account or login to access the platform
Add Payment Method
Bind your credit card in the Billing section to get started
Use Playground
Go to the model playground, enter your prompt, and generate videos instantly with an intuitive interface
Frequently Asked Questions
What makes Seedance 1.5 Pro's audio-visual sync unique?
Unlike other models that generate video first and add audio later, Seedance 1.5 Pro uses a dual-branch architecture to generate both simultaneously. This ensures perfect synchronization from the start, with phoneme-level lip-sync accuracy across all supported languages.
How does it compare to Wan 2.5 or Wan 2.6?
While Wan 2.6 supports longer durations (up to 15s) and text rendering, Seedance 1.5 Pro excels in cinematic camera control, multi-language/dialect support with spatial audio, and physics-accurate motion. Choose based on your needs: Seedance for storytelling and multilingual content, Wan for product demos with text.
What video formats and resolutions are supported?
Seedance 1.5 Pro generates native 1080p videos at 24fps. Supported aspect ratios include 16:9, 9:16, 4:3, 3:4, 1:1, and 21:9. Duration ranges from 4-12 seconds, with Smart Duration allowing the model to select the optimal length automatically.
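Tying this answer back to the parameters used in the code example at the top of the page (width, height, duration, fps), a native-resolution 16:9 request might look like the sketch below; treat the field names as assumptions if your schema differs.

```python
# 1080p in 16:9 at the model's native 24 fps; the duration must fall
# inside the supported 4-12 second range.
data = {
    "model": "bytedance/seedance-v1.5-pro/image-to-video",
    "prompt": "A beautiful sunset over the ocean with gentle waves",
    "width": 1920,    # 16:9 at native 1080p
    "height": 1080,
    "duration": 8,    # seconds, within the 4-12 s window
    "fps": 24,
}

# Sanity-check the duration against the documented range.
assert 4 <= data["duration"] <= 12
```

Omitting a fixed duration and relying on Smart Duration, where available, lets the model pick the optimal clip length itself.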
Which languages are supported for audio generation?
Seedance 1.5 Pro supports 8+ languages including English, Mandarin Chinese, Japanese, Korean, Spanish, Portuguese, Indonesian, and Chinese dialects like Cantonese and Sichuanese. Each language features accurate lip-sync and natural pronunciation.
Can I control specific camera movements?
Yes! Seedance understands technical film grammar. You can specify camera techniques like "Dolly Zoom on the subject" (Hitchcock effect), tracking shots, close-ups, or wide shots. The model interprets these to create professional cinematic results.
What's the difference between Text-to-Video and Image-to-Video?
Text-to-Video generates complete videos from text prompts. Image-to-Video uses a "First Frame" to lock character identity and lighting, with optional "Last Frame" control for precise beginning and end-point transitions. Both modes support full audio generation.
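The first/last frame control described above might be expressed in a request body like this sketch; "first_frame" and "last_frame" are assumed field names (and the URLs are illustrative), so check the model's input schema for the actual keys.

```python
# Sketch of a frame-controlled I2V request: the first frame locks character
# identity and lighting, the optional last frame pins the end of the clip.
# NOTE: "first_frame" and "last_frame" are assumed field names; verify
# them against the model's input schema.
data = {
    "model": "bytedance/seedance-v1.5-pro/image-to-video",
    "prompt": "The subject turns toward the camera and smiles",
    "first_frame": "https://storage.atlascloud.ai/uploads/abc123/start.png",
    "last_frame": "https://storage.atlascloud.ai/uploads/abc123/end.png",
}
```

Dropping "last_frame" leaves the ending unconstrained, which is the usual choice when only character identity needs to be locked.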
Why Use Seedance 1.5 Pro on Atlas Cloud?
Experience unmatched performance, reliability, and support for your AI video generation needs
Purpose-Built Infrastructure
Our system is specifically optimized for AI model deployment. Run Seedance 1.5 Pro with maximum performance on infrastructure tailored for demanding AI workloads and video generation.
Unified API for All Models
Access Seedance 1.5 Pro alongside 300+ AI models (LLMs, image, video, audio) through one unified API. Manage all your AI needs from a single platform with consistent authentication.
Competitive Pricing
Save up to 70% compared to AWS with transparent, pay-as-you-go pricing. No hidden fees, no minimum commitments—only pay for what you use with volume discounts available.
SOC I & II Certified Security
Your data and generated videos are protected with SOC I & II certifications and HIPAA compliance. Enterprise-grade security with encrypted data transmission and storage.
99.9% Uptime SLA
Enterprise-grade reliability with guaranteed 99.9% uptime. Your Seedance 1.5 Pro video generation is always available for production applications and critical workflows.
Easy Integration
Complete integration in minutes through our simple REST API and multi-language SDKs (Python, Node.js, Go). Comprehensive documentation and code examples get you started fast.
Experience Native Audio-Visual Generation
Join filmmakers, advertisers, and creators worldwide who are revolutionizing video content creation with Seedance 1.5 Pro's groundbreaking technology.
Seedance 1.5 PRO: A Native Audio-Visual Joint Generation Foundation Model
Seedance 1.5 PRO is a foundational model engineered specifically for native joint audio-visual generation, developed by the ByteDance Seed team. It represents a significant leap forward in transforming video generation into a practical, utility-driven tool. By integrating a dual-branch Diffusion Transformer architecture, the model achieves exceptional audio-visual synchronization and superior generation quality, establishing it as a robust engine for professional-grade content creation.
Key Features
Seedance 1.5 PRO introduces several key technical advancements that set a new standard for audio-visual content generation.
- Unified Multimodal Generation: Leverages a unified framework based on the MMDiT architecture to facilitate deep cross-modal interaction, ensuring precise temporal synchronization and semantic consistency between visual and auditory streams.
- Precise Audio-Visual Sync: Achieves high-fidelity alignment of lip movements, intonation, and performance rhythm. It natively supports multiple languages and regional dialects, accurately capturing unique vocal prosody and emotional tonalities.
- Cinematic Camera Control: Possesses autonomous camera scheduling capabilities, enabling the execution of complex movements such as continuous long takes and dolly zooms ("Hitchcock zoom"), significantly enhancing the dynamic tension of the video.
- Enhanced Narrative Coherence: Through strengthened semantic understanding, the model significantly improves the overall narrative coordination of audio-visual segments, providing strong support for professional-grade content creation.
- Efficient Inference Acceleration: An optimized multi-stage distillation framework, combined with quantization and parallelization, boosts the end-to-end inference speed by over 10x while preserving high performance.
Performance Highlights
The model's capabilities were rigorously evaluated against other state-of-the-art video generation models using the comprehensive SeedVideoBench 1.5 framework. Seedance 1.5 PRO demonstrates significant improvements across both video and audio dimensions.
In Text-to-Video (T2V) and Image-to-Video (I2V) tasks, it achieves a leading position in motion quality and instruction following (alignment). The model also shows strong competitiveness in visual aesthetics and motion dynamics. For audio generation, particularly in Chinese-language contexts, Seedance 1.5 PRO consistently outperforms competitors like Veo 3.1, delivering superior audio quality and audio-visual synchronization.
Use Cases
Seedance 1.5 PRO is well-suited for a wide range of professional applications, including:
- Film and Short Drama Production: Creating high-quality, emotionally resonant scenes with precise character performances.
- Advertising and Social Media: Generating engaging and dynamic video content for marketing campaigns.
- Cultural and Artistic Expression: Faithfully rendering traditional performing arts, such as Chinese opera, by capturing distinctive cadences and stylized gestures.
- Multi-Lingual Content: Producing content in various languages and dialects with accurate lip-sync and intonation.






