
Seedance v1.5 Pro Image-to-Video Spicy API by ByteDance
Seedance V1.5 Pro Spicy transforms images into high-quality cinematic video with smooth motion and expressive animations, optimized for creative content at scale.
Your request will cost $0.049 per run. For $10 you can run this model approximately 204 times.
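As a quick sanity check, the run count quoted above follows directly from the per-run price:

```python
import math

cost_per_run = 0.049  # USD per generation, from the pricing note above
budget = 10.00

# Whole runs that fit in the budget: 10 / 0.049 ~= 204.08, floored to 204
runs = math.floor(budget / cost_per_run)
print(runs)  # 204
```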
Code Example
import os
import time

import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]

# Step 1: Start video generation
generate_url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}
data = {
    "model": "bytedance/seedance-v1.5-pro/image-to-video-spicy",
    "prompt": "A beautiful sunset over the ocean with gentle waves",
    "width": 512,
    "height": 512,
    "duration": 3,
    "fps": 24,
}
generate_response = requests.post(generate_url, headers=headers, json=data)
generate_result = generate_response.json()
prediction_id = generate_result["data"]["id"]

# Step 2: Poll for result
poll_url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"

def check_status():
    while True:
        response = requests.get(poll_url, headers={"Authorization": f"Bearer {API_KEY}"})
        result = response.json()
        if result["data"]["status"] in ["completed", "succeeded"]:
            print("Generated video:", result["data"]["outputs"][0])
            return result["data"]["outputs"][0]
        elif result["data"]["status"] == "failed":
            raise Exception(result["data"]["error"] or "Generation failed")
        else:
            # Still processing, wait 2 seconds
            time.sleep(2)

video_url = check_status()

Install
Install the required package for your language.
pip install requests

Authentication
All API requests require authentication via an API key. You can get your API key from the Atlas Cloud dashboard.
export ATLASCLOUD_API_KEY="your-api-key-here"

HTTP Headers
import os

API_KEY = os.environ.get("ATLASCLOUD_API_KEY")

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}

Never expose your API key in client-side code or public repositories. Use environment variables or a backend proxy instead.
Submit a request
import os

import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"
}
data = {
    "model": "your-model",
    "prompt": "A beautiful landscape"
}
response = requests.post(url, headers=headers, json=data)
print(response.json())

Submit a Request
Submit an asynchronous generation request. The API returns a prediction ID that you can use to check the status and retrieve the result.
/api/v1/model/generateVideo

Request Body
import os

import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"
}
data = {
    "model": "bytedance/seedance-v1.5-pro/image-to-video-spicy",
    "input": {
        "prompt": "A beautiful sunset over the ocean with gentle waves"
    }
}
response = requests.post(url, headers=headers, json=data)
result = response.json()
print(f"Prediction ID: {result['id']}")
print(f"Status: {result['status']}")

Response
{
  "id": "pred_abc123",
  "status": "processing",
  "model": "model-name",
  "created_at": "2025-01-01T00:00:00Z"
}

Check Status
Poll the prediction endpoint to check the current status of your request.
/api/v1/model/prediction/{prediction_id}

Polling Example
import os
import time

import requests

prediction_id = "pred_abc123"
url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"
headers = {"Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"}

while True:
    response = requests.get(url, headers=headers)
    result = response.json()
    status = result["data"]["status"]
    print(f"Status: {status}")
    if status in ["completed", "succeeded"]:
        output_url = result["data"]["outputs"][0]
        print(f"Output URL: {output_url}")
        break
    elif status == "failed":
        print(f"Error: {result['data'].get('error', 'Unknown')}")
        break
    time.sleep(3)

Status Values
- processing: The request is still being processed.
- completed: Generation is complete. Outputs are available.
- succeeded: Generation succeeded. Outputs are available.
- failed: Generation failed. Check the error field.

Completed Response
{
  "data": {
    "id": "pred_abc123",
    "status": "completed",
    "outputs": [
      "https://storage.atlascloud.ai/outputs/result.mp4"
    ],
    "metrics": {
      "predict_time": 45.2
    },
    "created_at": "2025-01-01T00:00:00Z",
    "completed_at": "2025-01-01T00:00:10Z"
  }
}

Upload Files
Upload files to Atlas Cloud storage and get a URL you can use in your API requests. Use multipart/form-data to upload.
/api/v1/model/uploadMedia

Upload Example
import os

import requests

url = "https://api.atlascloud.ai/api/v1/model/uploadMedia"
headers = {"Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"}

with open("image.png", "rb") as f:
    files = {"file": ("image.png", f, "image/png")}
    response = requests.post(url, headers=headers, files=files)

result = response.json()
download_url = result["data"]["download_url"]
print(f"File URL: {download_url}")

Response
{
  "data": {
    "download_url": "https://storage.atlascloud.ai/uploads/abc123/image.png",
    "file_name": "image.png",
    "content_type": "image/png",
    "size": 1024000
  }
}

Input Schema
The following parameters are accepted in the request body.
Example Request Body
{
  "model": "bytedance/seedance-v1.5-pro/image-to-video-spicy"
}

Output Schema
The API returns a prediction response with the generated output URLs.
Example Response
{
  "id": "pred_abc123",
  "status": "completed",
  "model": "model-name",
  "outputs": [
    "https://storage.atlascloud.ai/outputs/result.mp4"
  ],
  "metrics": {
    "predict_time": 45.2
  },
  "created_at": "2025-01-01T00:00:00Z",
  "completed_at": "2025-01-01T00:00:10Z"
}

Atlas Cloud Skills
Atlas Cloud Skills integrates 300+ AI models directly into your AI coding assistant. One command to install, then use natural language to generate images, videos, and chat with LLMs.
Supported Clients
Install
npx skills add AtlasCloudAI/atlas-cloud-skills

Setup API Key
Get your API key from the Atlas Cloud dashboard and set it as an environment variable.
export ATLASCLOUD_API_KEY="your-api-key-here"

Capabilities
Once installed, you can use natural language in your AI assistant to access all Atlas Cloud models.
MCP Server
Atlas Cloud MCP Server connects your IDE with 300+ AI models via the Model Context Protocol. Works with any MCP-compatible client.
Supported Clients
Install
npx -y atlascloud-mcp

Configuration
Add the following configuration to your IDE's MCP settings file.
{
  "mcpServers": {
    "atlascloud": {
      "command": "npx",
      "args": ["-y", "atlascloud-mcp"],
      "env": {
        "ATLASCLOUD_API_KEY": "your-api-key-here"
      }
    }
  }
}

Available Tools
Seedance 1.5 Pro
Sound and Vision, All in One Take
ByteDance's revolutionary AI model that generates perfectly synchronized audio and video simultaneously from a single unified process. Experience true native audio-visual generation with millisecond-precision lip-sync across 8+ languages.
Revolutionary Innovation
What makes Seedance 1.5 Pro fundamentally different
Dual-Branch Architecture
Uses a 4.5 billion parameter Dual-Branch Diffusion Transformer (DB-DiT) that generates audio and video simultaneously—not sequentially—ensuring perfect synchronization from the start.
Phoneme-Level Lip Sync
Understands individual phonemes and maps them correctly to lip shapes across different languages, achieving millisecond-precision audio-visual synchronization.
Narrative Auto-Completion
Intelligently fills narrative gaps based on prompt intent, maintaining coherent storytelling across characters' emotions, expressions, and actions.
Core Capabilities
Native 1080p Quality
Professional HD video output with cinematic quality at 24fps, supporting 4-12 second durations
8+ Language Support
English, Mandarin, Japanese, Korean, Spanish, Portuguese, Indonesian, plus Chinese dialects
Cinematic Camera Control
Complex camera movements including dolly zooms, tracking shots, and professional film techniques
Multi-Speaker Dialogue
Natural conversations with multiple characters, distinct vocal identities, and realistic turn-taking
Physics-Accurate Motion
Realistic hair dynamics, fluid behaviors, and material interactions for lifelike visuals
Character Consistency
Maintains clothing, faces, and style across scenes for complete story continuity
Seedance 1.5 Pro vs Competition
See how Seedance stands out from other video generation models
Perfect For
Short Drama Production
Create emotion-forward narrative clips with realistic character dialogue and cinematic lighting
Advertising Creatives
Performance-heavy ad content with natural acting, perfect lip-sync, and professional production value
Multilingual Content
Reach global audiences with native-quality audio-visual content in 8+ languages
Educational Videos
Engaging instructional content with clear narration and synchronized visual demonstrations
Social Media
Viral-ready short-form content with professional audio-visual quality for maximum engagement
Film Production
Pre-visualization and concept development with realistic character performances and dialogue
Seedance 1.5 Pro T2V and I2V API Integration
Powerful Text-to-Video (T2V) API and Image-to-Video (I2V) API endpoints for seamless integration
Text-to-Video API (T2V API)
Our Seedance 1.5 Pro T2V API transforms text prompts into complete cinematic videos with native audio-visual synchronization. Generate scenes, camera movements, character actions, and dialogue in a single Text-to-Video API call.
Perfect for:
- Automated video content creation at scale
- Dynamic storytelling and narrative videos
- Marketing campaign automation
- Educational content generation
Image-to-Video API (I2V API)
Our Seedance 1.5 Pro I2V API brings still images to life with motion, camera movement, and synchronized audio. The Image-to-Video API features advanced frame control to define precise start and end points for your animations.
Perfect for:
- Photo animation and enhancement
- Character consistency in video sequences
- Product showcase with motion effects
- Architectural visualization and walkthroughs
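The frame-control idea above can be sketched as a request body. Note that the `image` and `last_frame_image` field names are assumptions chosen for illustration, not confirmed parameter names; check the model's input schema before using them.

```python
# Hypothetical I2V request body. The "image" (start frame) and
# "last_frame_image" (optional end frame) keys are assumed names --
# consult the model's input schema for the exact parameters.
i2v_payload = {
    "model": "bytedance/seedance-v1.5-pro/image-to-video-spicy",
    "prompt": "The camera slowly pushes in as the waves begin to move",
    "image": "https://storage.atlascloud.ai/uploads/abc123/first.png",
    "last_frame_image": "https://storage.atlascloud.ai/uploads/abc123/last.png",
    "duration": 5,
}
```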
Simple T2V and I2V API Integration
Both T2V API and I2V API modes support RESTful architecture with comprehensive documentation. Get started in minutes with SDKs for Python, Node.js, and more. All Seedance 1.5 Pro API endpoints include automatic audio generation with phoneme-level lip synchronization for seamless video creation.
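As a sketch of what an end-to-end call might look like in Python, the submit-then-poll flow from the earlier examples can be wrapped in one helper. The endpoint paths and response fields are taken from those examples; the timeout and polling interval are arbitrary choices.

```python
import os
import time

import requests

BASE = "https://api.atlascloud.ai/api/v1/model"
HEADERS = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ.get('ATLASCLOUD_API_KEY', '')}",
}

def generate_video(payload, poll_interval=3, timeout=600):
    """Submit a T2V or I2V payload and block until an output URL is ready."""
    resp = requests.post(f"{BASE}/generateVideo", headers=HEADERS, json=payload)
    resp.raise_for_status()
    prediction_id = resp.json()["data"]["id"]

    deadline = time.time() + timeout
    while time.time() < deadline:
        poll = requests.get(f"{BASE}/prediction/{prediction_id}", headers=HEADERS)
        poll.raise_for_status()
        data = poll.json()["data"]
        if data["status"] in ("completed", "succeeded"):
            return data["outputs"][0]
        if data["status"] == "failed":
            raise RuntimeError(data.get("error") or "Generation failed")
        time.sleep(poll_interval)
    raise TimeoutError(f"Prediction {prediction_id} did not finish in {timeout}s")
```

The same helper works for both modes because only the payload differs between T2V and I2V calls.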
How to Get Started
Start generating videos in minutes with two simple paths
API Integration
For developers building applications
Sign Up & Login
Create your Atlas Cloud account or login to access the console
Add Payment Method
Bind your credit card in the Billing section to fund your account
Generate API Key
Navigate to Console → API Keys and create your authentication key
Start Building
Use the API key to make requests and integrate Seedance into your application
Playground Experience
For quick testing and experimentation
Sign Up & Login
Create your Atlas Cloud account or login to access the platform
Add Payment Method
Bind your credit card in the Billing section to get started
Use Playground
Go to the model playground, enter your prompt, and generate videos instantly with an intuitive interface
Frequently Asked Questions
What makes Seedance 1.5 Pro's audio-visual sync unique?
Unlike other models that generate video first and add audio later, Seedance 1.5 Pro uses a dual-branch architecture to generate both simultaneously. This ensures perfect synchronization from the start, with phoneme-level lip-sync accuracy across all supported languages.
How does it compare to Wan 2.5 or Wan 2.6?
While Wan 2.6 supports longer durations (up to 15s) and text rendering, Seedance 1.5 Pro excels in cinematic camera control, multi-language/dialect support with spatial audio, and physics-accurate motion. Choose based on your needs: Seedance for storytelling and multilingual content, Wan for product demos with text.
What video formats and resolutions are supported?
Seedance 1.5 Pro generates native 1080p videos at 24fps. Supported aspect ratios include 16:9, 9:16, 4:3, 3:4, 1:1, and 21:9. Duration ranges from 4-12 seconds, with Smart Duration allowing the model to select the optimal length automatically.
Which languages are supported for audio generation?
Seedance 1.5 Pro supports 8+ languages including English, Mandarin Chinese, Japanese, Korean, Spanish, Portuguese, Indonesian, and Chinese dialects like Cantonese and Sichuanese. Each language features accurate lip-sync and natural pronunciation.
Can I control specific camera movements?
Yes! Seedance understands technical film grammar. You can specify camera techniques like "Dolly Zoom on the subject" (Hitchcock effect), tracking shots, close-ups, or wide shots. The model interprets these to create professional cinematic results.
What's the difference between Text-to-Video and Image-to-Video?
Text-to-Video generates complete videos from text prompts. Image-to-Video uses a "First Frame" to lock character identity and lighting, with optional "Last Frame" control for precise beginning and end-point transitions. Both modes support full audio generation.
Why Use Seedance 1.5 Pro on Atlas Cloud?
Experience unmatched performance, reliability, and support for your AI video generation needs
Purpose-Built Infrastructure
Our system is specifically optimized for AI model deployment. Run Seedance 1.5 Pro with maximum performance on infrastructure tailored for demanding AI workloads and video generation.
Unified API for All Models
Access Seedance 1.5 Pro alongside 300+ AI models (LLMs, image, video, audio) through one unified API. Manage all your AI needs from a single platform with consistent authentication.
Competitive Pricing
Save up to 70% compared to AWS with transparent, pay-as-you-go pricing. No hidden fees, no minimum commitments—only pay for what you use with volume discounts available.
SOC I & II Certified Security
Your data and generated videos are protected with SOC I & II certifications and HIPAA compliance. Enterprise-grade security with encrypted data transmission and storage.
99.9% Uptime SLA
Enterprise-grade reliability with guaranteed 99.9% uptime. Your Seedance 1.5 Pro video generation is always available for production applications and critical workflows.
Easy Integration
Complete integration in minutes through our simple REST API and multi-language SDKs (Python, Node.js, Go). Comprehensive documentation and code examples get you started fast.
Technical Specifications
Experience Native Audio-Visual Generation
Join filmmakers, advertisers, and creators worldwide who are revolutionizing video content creation with Seedance 1.5 Pro's groundbreaking technology.
1. Introduction
seedance-v1.5-pro-image-to-video-spicy is an advanced image-to-video generation model developed by ByteDance and offered via third-party platforms such as AtlasCloud.ai and WaveSpeed.ai. It specializes in producing high-quality cinematic video clips from static images, integrating smooth and expressive motion alongside optional synchronized audio output. Positioned as a scalable, unlimited-generation tier, it targets creative storytelling and content production at volume.
This model leverages a dual-branch diffusion transformer architecture to generate temporally coherent video frames and audio waveforms simultaneously. Its capability for bold, vivid motion with stable tonal contrast and multi-aspect ratio support makes it a practical tool for content creators seeking dynamic video renditions of still images. The "Spicy" variant is a platform-specific optimization tier for throughput-focused applications rather than an official ByteDance release.
2. Key Features & Innovations
- Dual-Branch Diffusion Transformer Architecture: Employs a 4.5 billion parameter model that simultaneously generates video frames and synchronized audio waveforms through a cross-modal joint module, ensuring millisecond-level audiovisual alignment.
- Unlimited-Generation Scalability: Optimized for high-volume production, this tier supports continuous video clip generation without preset usage caps, enabling batch processing at resolutions up to 1080p with durations ranging from 4 to 12 seconds.
- Expressive Motion Rendering: Produces cinematic-quality animations with physics-accurate motion, including complex camera movements and natural transitions, enhancing storytelling and visual impact.
- Flexible Output Specifications: Supports multiple resolutions (480p, 720p, 1080p), a variety of aspect ratios (21:9, 16:9, 4:3, 1:1, 3:4, 9:16), and duration control between 4 and 12 seconds, allowing customization per platform or project requirements.
- Optional Synchronized Audio Generation: Generates multi-language audio with spatial sound effects aligned precisely with video frames, improving the completeness and immersion of audiovisual content.
- Platform-Specific Pricing Integration: Available through third-party API aggregators with competitive pricing tiers based on resolution, duration, and audio inclusion, offering cost-effective alternatives to official BytePlus API services.
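The batch-oriented tier described above lends itself to a simple fan-out pattern: build one request body per source image, submit them all, then poll the resulting prediction IDs. A minimal payload builder might look like this; the `image` field name is an assumption, so verify it against the model's input schema.

```python
MODEL = "bytedance/seedance-v1.5-pro/image-to-video-spicy"

def build_batch_payloads(image_urls, prompt, duration=5):
    """Build one request body per source image.

    The "image" key is an assumed parameter name -- verify against the
    model's actual input schema before submitting.
    """
    return [
        {"model": MODEL, "prompt": prompt, "image": url, "duration": duration}
        for url in image_urls
    ]

payloads = build_batch_payloads(
    ["https://example.com/a.png", "https://example.com/b.png"],
    "Gentle camera push-in with soft morning light",
)
print(len(payloads))  # 2
```

Each payload can then be POSTed to /api/v1/model/generateVideo as shown in the earlier examples.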
3. Model Architecture & Technical Details
The core of seedance-v1.5-pro-image-to-video-spicy is a dual-branch diffusion transformer architecture with approximately 4.5 billion parameters. It consists of two interconnected generative pathways: one for video frame sequences and another for audio waveform synthesis. These branches are linked by a cross-modal joint module responsible for millisecond-precise audio-visual synchronization.
The model was trained on a large-scale, diverse dataset containing roughly 100 million minutes of paired audio-video clips, spanning various cinematographic styles and languages. Training incorporates progressive multi-resolution inputs to enhance detail and temporal coherence. Post-training employed advanced fine-tuning approaches to stabilize video quality and support optional audio generation without latency or lip-sync issues.
Supported output formats include varying aspect ratios from ultra-widescreen (21:9) to vertical video (9:16), suited for different display contexts. Moreover, the architecture allows optional fixed-camera settings to simulate locked tripod shots, enhancing usability for specific creative workflows.
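Pulling the output specifications above into one request body gives a sketch like the following. The `resolution`, `aspect_ratio`, and `camera_fixed` key names are assumptions for illustration; confirm the exact parameter names against the endpoint's input schema.

```python
# Sketch of output-control fields drawn from the specs above. Key names
# ("resolution", "aspect_ratio", "camera_fixed") are assumed, not confirmed.
payload = {
    "model": "bytedance/seedance-v1.5-pro/image-to-video-spicy",
    "prompt": "A chef plating a dish under warm kitchen light",
    "resolution": "1080p",    # 480p | 720p | 1080p
    "aspect_ratio": "9:16",   # 21:9, 16:9, 4:3, 1:1, 3:4, 9:16
    "duration": 8,            # 4 to 12 seconds
    "camera_fixed": True,     # simulate a locked tripod shot
}
```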
4. Performance Highlights
Seedance-v1.5-pro-image-to-video-spicy demonstrates a competitive balance of quality and efficiency in the 2026 AI video generation landscape. While direct benchmark scores are limited due to proprietary evaluations, qualitative assessments place it among leading models for synchronized audiovisual output and scalable batch generation.
| Rank | Model | Developer | Pricing per Second (Approx.) | Release Date |
|---|---|---|---|---|
| 1 | Google Veo 3.1 | Google | $0.75/s | Early 2026 |
| 2 | Grok Imagine | Grok AI | $0.05/s | 2025 |
| 3 | Kling 3.0 | Kling Labs | $0.15/s | Mid 2025 |
| 4 | Seedance V1.5 Pro Spicy | ByteDance / 3rd Party | $0.104/s | Dec 2025 |
| 5 | Runway Gen-4 | Runway | Proprietary pricing | 2026 |
Its strength lies in generating smooth cinematic clips with expressive, physics-informed motion and integrated audio, outperforming several models constrained to sequential or video-only synthesis. However, text rendering quality and longer clip durations beyond 15 seconds remain challenging.
Evaluation is typically conducted using proprietary audiovisual coherence metrics and user feedback from commercial deployments in e-commerce and social media content creation.
5. Intended Use & Applications
- E-commerce Product Videos: Enables retailers and brands to produce dynamic product demonstrations and promotional clips from static images, enhancing engagement and conversion.
- Marketing and Social Media Content: Facilitates the creation of vibrant short-form videos ideal for platforms such as Instagram Reels, TikTok, and YouTube Shorts, supporting scalable campaign generation.
- Cinematic Content and Filmmaking: Provides filmmakers and creatives with tools to animate concept art or storyboard images into lifelike scenes with complex motion and audio.
- Education and Training: Generates compelling audiovisual materials for instructional and educational purposes, enriching learning experiences with dynamic visual aids.
- Content Creator Workflows: Assists creators in rapidly iterating visual concepts and animations with fine control over motion, resolution, and audio synchronization, improving productivity.
Sources: Based on ByteDance Seedance documentation and third-party platform data from AtlasCloud.ai, technical literature, and market analysis as of early 2026.






