Fast multimodal video generation from reference images, videos, and audio. Supports video editing and extension.

Each run costs $0.081. With $10, you can run it approximately 123 times.
You can continue with:
```python
import requests
import time

# Step 1: Start video generation
generate_url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer $ATLASCLOUD_API_KEY"
}
data = {
    "model": "bytedance/seedance-2.0-fast/reference-to-video",
    "prompt": "A beautiful sunset over the ocean with gentle waves",
    "width": 512,
    "height": 512,
    "duration": 3,
    "fps": 24,
}
generate_response = requests.post(generate_url, headers=headers, json=data)
generate_result = generate_response.json()
prediction_id = generate_result["data"]["id"]

# Step 2: Poll for result
poll_url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"

def check_status():
    while True:
        response = requests.get(poll_url, headers={"Authorization": "Bearer $ATLASCLOUD_API_KEY"})
        result = response.json()
        if result["data"]["status"] in ["completed", "succeeded"]:
            print("Generated video:", result["data"]["outputs"][0])
            return result["data"]["outputs"][0]
        elif result["data"]["status"] == "failed":
            raise Exception(result["data"]["error"] or "Generation failed")
        else:
            # Still processing, wait 2 seconds
            time.sleep(2)

video_url = check_status()
```

Install the required package for your programming language.
```bash
pip install requests
```

All API requests require authentication via an API key. You can get an API key from the Atlas Cloud dashboard.
```bash
export ATLASCLOUD_API_KEY="your-api-key-here"
```

```python
import os

API_KEY = os.environ.get("ATLASCLOUD_API_KEY")
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}
```

Never expose your API key in client-side code or public repositories. Use environment variables or a server-side proxy instead.
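As an illustration of the proxy approach, here is a minimal sketch assuming Flask; the `/generate` route and its forwarding behavior are this guide's own example, not part of any Atlas Cloud SDK:

```python
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
API_KEY = os.environ["ATLASCLOUD_API_KEY"]  # the key stays on the server

@app.route("/generate", methods=["POST"])
def generate():
    # Forward the client's payload to Atlas Cloud, attaching the secret
    # key server-side so it never reaches the browser.
    upstream = requests.post(
        "https://api.atlascloud.ai/api/v1/model/generateVideo",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        json=request.get_json(),
    )
    return jsonify(upstream.json()), upstream.status_code

if __name__ == "__main__":
    app.run(port=8000)
```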
```python
import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer $ATLASCLOUD_API_KEY"
}
data = {
    "model": "your-model",
    "prompt": "A beautiful landscape"
}
response = requests.post(url, headers=headers, json=data)
print(response.json())
```

Send an asynchronous generation request. The API returns a prediction ID that you can use to check status and retrieve the result.
/api/v1/model/generateVideo

```python
import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer $ATLASCLOUD_API_KEY"
}
data = {
    "model": "bytedance/seedance-2.0-fast/reference-to-video",
    "input": {
        "prompt": "A beautiful sunset over the ocean with gentle waves"
    }
}
response = requests.post(url, headers=headers, json=data)
result = response.json()
print(f"Prediction ID: {result['id']}")
print(f"Status: {result['status']}")
```

```json
{
  "id": "pred_abc123",
  "status": "processing",
  "model": "model-name",
  "created_at": "2025-01-01T00:00:00Z"
}
```

Query the prediction endpoint to check the current status of your request.
/api/v1/model/prediction/{prediction_id}

```python
import requests
import time

prediction_id = "pred_abc123"
url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"
headers = {"Authorization": "Bearer $ATLASCLOUD_API_KEY"}

while True:
    response = requests.get(url, headers=headers)
    result = response.json()
    status = result["data"]["status"]
    print(f"Status: {status}")
    if status in ["completed", "succeeded"]:
        output_url = result["data"]["outputs"][0]
        print(f"Output URL: {output_url}")
        break
    elif status == "failed":
        print(f"Error: {result['data'].get('error', 'Unknown')}")
        break
    time.sleep(3)
```

| Status | Meaning |
|---|---|
| processing | The request is still being processed. |
| completed | Generation is complete. Outputs are ready. |
| succeeded | Generation succeeded. Outputs are ready. |
| failed | Generation failed. Check the error field. |

```json
{
  "data": {
    "id": "pred_abc123",
    "status": "completed",
    "outputs": [
      "https://storage.atlascloud.ai/outputs/result.mp4"
    ],
    "metrics": {
      "predict_time": 45.2
    },
    "created_at": "2025-01-01T00:00:00Z",
    "completed_at": "2025-01-01T00:00:10Z"
  }
}
```

Upload files to Atlas Cloud storage and receive a URL you can use in your API requests. Uploads use multipart/form-data.
/api/v1/model/uploadMedia

```python
import requests

url = "https://api.atlascloud.ai/api/v1/model/uploadMedia"
headers = {"Authorization": "Bearer $ATLASCLOUD_API_KEY"}

with open("image.png", "rb") as f:
    files = {"file": ("image.png", f, "image/png")}
    response = requests.post(url, headers=headers, files=files)

result = response.json()
download_url = result["data"]["download_url"]
print(f"File URL: {download_url}")
```

```json
{
  "data": {
    "download_url": "https://storage.atlascloud.ai/uploads/abc123/image.png",
    "file_name": "image.png",
    "content_type": "image/png",
    "size": 1024000
  }
}
```
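The returned `download_url` can then be referenced in a generation request. A minimal sketch, assuming reference media is passed inside the `input` object; the `reference_images` field name is hypothetical, since the model's parameter schema is not published here:

```python
import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer $ATLASCLOUD_API_KEY"
}
data = {
    "model": "bytedance/seedance-2.0-fast/reference-to-video",
    "input": {
        "prompt": "The product from the reference image rotating on a pedestal",
        # Hypothetical field: pass the URL returned by uploadMedia here.
        "reference_images": [
            "https://storage.atlascloud.ai/uploads/abc123/image.png"
        ]
    }
}
response = requests.post(url, headers=headers, json=data)
print(response.json())
```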
The following parameters are accepted in the request body.

No parameters.

```json
{
  "model": "bytedance/seedance-2.0-fast/reference-to-video"
}
```

The API returns a prediction response with the generated output URLs.
```json
{
  "id": "pred_abc123",
  "status": "completed",
  "model": "model-name",
  "outputs": [
    "https://storage.atlascloud.ai/outputs/result.mp4"
  ],
  "metrics": {
    "predict_time": 45.2
  },
  "created_at": "2025-01-01T00:00:00Z",
  "completed_at": "2025-01-01T00:00:10Z"
}
```

Atlas Cloud Skills integrates more than 300 AI models directly into your AI coding assistant. One command to install; then use natural language to generate images and videos and chat with LLMs.
```bash
npx skills add AtlasCloudAI/atlas-cloud-skills
```

Get an API key from the Atlas Cloud dashboard and set it as an environment variable.

```bash
export ATLASCLOUD_API_KEY="your-api-key-here"
```

Once installed, you can use natural language in your AI assistant to access all Atlas Cloud models.

The Atlas Cloud MCP Server connects your IDE to more than 300 AI models through the Model Context Protocol. It works with any MCP-compatible application.

```bash
npx -y atlascloud-mcp
```

Add the following configuration to your IDE's MCP settings file.
```json
{
  "mcpServers": {
    "atlascloud": {
      "command": "npx",
      "args": [
        "-y",
        "atlascloud-mcp"
      ],
      "env": {
        "ATLASCLOUD_API_KEY": "your-api-key-here"
      }
    }
  }
}
```

Schema unavailable.

Seedance 2.0 is a state-of-the-art multimodal generative AI model designed for synchronized video and audio content creation. Developed by ByteDance and integrated into the CapCut/Dreamina platform as of March 2026, this model family advances the field of generative multimedia by combining sophisticated diffusion transformer architectures with physics-informed world modeling for realistic motion and spatial consistency.
Seedance 2.0’s significance lies in its Dual-Branch Diffusion Transformer (DB-DiT) architecture that jointly processes video and audio streams, enabling phoneme-level lip synchronization across multiple languages. Compared to previous iterations, it achieves substantially higher output usability rates and faster generation speeds. The two variants target different workloads: Seedance 2.0 delivers high-fidelity, cinematic-quality renders with enhanced lighting and texture detail, while Seedance 2.0 Fast provides a cost-effective, accelerated pipeline optimized for high throughput and rapid prototyping.
Dual-Branch Diffusion Transformer Architecture: Seedance 2.0 integrates separate yet synchronized diffusion branches for video and audio, enabling tight coupling between visual motion and sound generation. This architecture improves motion realism and audio-visual coherence beyond previous generative models.
World Model with Physics Simulation: The model incorporates a physics-based world modeling approach that simulates realistic object motion and spatial consistency over time. This leads to naturalistic dynamics and stable scene composition across generated video sequences.
Rich Multimodal Input Support: Seedance 2.0 accepts diverse input formats including text prompts, up to 9 images, and up to 3 video or audio clips of 15 seconds each. This flexibility allows nuanced content creation workflows combining static, dynamic, and auditory cues.
Phoneme-Level Lip Synchronization: The native audio generation pipeline supports lip-sync at the phoneme granularity in 8+ languages, ensuring high fidelity mouth movements closely match generated speech or singing.
High Usability and Efficiency: The model achieves an estimated 90% usable output rate compared to an industry average of approximately 20%, reducing post-processing overhead. Additionally, it delivers a 30% inference speed advantage over predecessor systems.
API Variants for Different Use Cases: The Seedance 2.0 endpoint is geared toward high fidelity and cinematic visual effects suitable for final production, while the Seedance 2.0 Fast variant offers roughly 3 times faster generation and approximately 91% cost savings at $0.022 per second of output, ideal for rapid iteration and volume workflows.
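As a rough worked example of the pricing claim above: the Fast variant's stated rate is $0.022 per second of output, and the quoted ~91% savings implies a high-fidelity rate of roughly $0.022 / (1 − 0.91) ≈ $0.24 per second. The figures computed below are derived from those stated numbers, not published prices:

```python
FAST_PRICE_PER_SEC = 0.022   # stated price for Seedance 2.0 Fast
SAVINGS = 0.91               # "approximately 91% cost savings"

# Implied rate for the high-fidelity variant, inferred from the stated
# savings figure (a derivation, not a published price).
standard_price_per_sec = FAST_PRICE_PER_SEC / (1 - SAVINGS)

for seconds in (4, 15, 60):  # supported output lengths span 4-60 s
    fast_cost = seconds * FAST_PRICE_PER_SEC
    standard_cost = seconds * standard_price_per_sec
    print(f"{seconds:>2}s clip: Fast ≈ ${fast_cost:.2f}, high-fidelity ≈ ${standard_cost:.2f}")
```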
Seedance 2.0 is built around the Dual-Branch Diffusion Transformer (DB-DiT), which separately processes video and audio streams via transformer-based denoising diffusion models while synchronizing generation steps to enforce audio-visual alignment. The system leverages a World Model that integrates physics simulation modules, enabling consistent spatial and temporal object behaviors within video sequences.
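To make the synchronization idea concrete, here is a deliberately toy sketch of two coupled denoising branches; it is a conceptual illustration of synchronized, cross-conditioned diffusion steps, not ByteDance's actual architecture or update rule:

```python
import random

STEPS = 4  # toy denoising schedule; a real model uses many more steps

def denoise(latent, context, strength=0.5):
    # Toy "denoising": nudge each latent value toward the paired
    # cross-modal context, standing in for a transformer branch.
    return [l + strength * (c - l) for l, c in zip(latent, context)]

# Random initial "noise" for each modality's latent sequence.
video_latent = [random.gauss(0, 1) for _ in range(8)]
audio_latent = [random.gauss(0, 1) for _ in range(8)]

for step in range(STEPS):
    # Synchronized step: each branch conditions on the other branch's
    # current state, coupling visual motion to sound (and vice versa).
    video_latent, audio_latent = (
        denoise(video_latent, audio_latent),
        denoise(audio_latent, video_latent),
    )
    print(f"step {step}: video[0]={video_latent[0]:+.3f}, audio[0]={audio_latent[0]:+.3f}")
```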
Training was conducted in multiple stages on large-scale, diverse datasets spanning images, videos, text captions, and audio recordings across multiple languages. Initial large-scale pre-training utilized resolutions spanning from 720p to 1080p, followed by supervised fine-tuning (SFT) to improve text and visual prompt conditioning fidelity. Reinforcement Learning with Human Feedback (RLHF) optimized multi-dimensional reward models that simultaneously assess aesthetics, motion coherence, and audio-visual synchronization quality.
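The multi-dimensional reward idea can be sketched as follows; the three reward dimensions come from the paragraph above, while the scalar weights and the weighted-sum aggregation are this guide's simplifying assumptions about how such scores might be combined:

```python
def multi_dimensional_reward(sample, weights=(0.4, 0.3, 0.3)):
    # Toy reward head: each dimension would come from a learned scorer
    # in a real RLHF pipeline; here they are given scalars in [0, 1].
    aesthetics, motion_coherence, av_sync = sample
    w_a, w_m, w_s = weights
    return w_a * aesthetics + w_m * motion_coherence + w_s * av_sync

# Rank two hypothetical generations for a preference-based update.
candidate_a = (0.9, 0.6, 0.8)   # (aesthetics, motion coherence, A/V sync)
candidate_b = (0.7, 0.9, 0.9)
preferred = max([candidate_a, candidate_b], key=multi_dimensional_reward)
print("preferred candidate:", preferred)
```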
The training pipeline supports multiple aspect ratios including 9:16, 16:9, 1:1, and 4:3, and target output lengths from 4 to 60 seconds. Specialized modules enable the @ reference system for fine-grained control of creative elements based on provided input assets.
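A request using the @ reference system might look like the sketch below; the `@image1`-style placeholders follow the description above, while the `reference_images`, `reference_clips`, `aspect_ratio`, and `duration` field names are illustrative assumptions rather than a documented schema:

```python
import requests

url = "https://api.atlascloud.ai/api/v1/model/generateVideo"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer $ATLASCLOUD_API_KEY"
}
data = {
    "model": "bytedance/seedance-2.0-fast/reference-to-video",
    "input": {
        # @ references tie creative elements to specific input assets.
        "prompt": "The dancer from @image1 performs to the track in @audio1, "
                  "lit like the scene in @video1",
        # Hypothetical fields: up to 9 images and up to 3 clips of 15 s each.
        "reference_images": ["https://storage.atlascloud.ai/uploads/abc123/image1.png"],
        "reference_clips": [
            "https://storage.atlascloud.ai/uploads/abc123/video1.mp4",
            "https://storage.atlascloud.ai/uploads/abc123/audio1.mp3"
        ],
        "aspect_ratio": "16:9",   # supported: 9:16, 16:9, 1:1, 4:3
        "duration": 12            # supported output lengths: 4-60 seconds
    }
}
response = requests.post(url, headers=headers, json=data)
print(response.json())
```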
Seedance 2.0 was benchmarked on the comprehensive SeedVideoBench-2.0 suite, which evaluates generative video models across over 50 image-based and 24 video-based benchmarks covering diverse content domains and multi-modal tasks.
| Rank | Model | Developer | Score/Metric | Release Date |
|---|---|---|---|---|
| 1 | Kling 3.0 | External | Competitive | 2025 |
| 2 | Sora 2 | External | Competitive | 2025 |
| 3 | Seedance 2.0 | ByteDance | High audiovisual sync, motion realism | 2026 |
| 4 | Veo 3.1 | External | Strong baseline | 2025 |
Seedance 2.0 matches or exceeds these contemporary models in synchronized video-audio generation, demonstrating especially strong performance in phoneme-level lip synchronization and motion naturalism thanks to the World Model component. Its 30% speed improvement and 90% output usability rate reflect notable efficiency advancements.
Social Media Content Creation: Efficiently generate engaging short videos with synchronized audio and visually rich effects, tailored for platforms like TikTok and Instagram.
E-commerce Product Videos: Automatically produce dynamic product showcases combining text, image, and video inputs with realistic motion and sound to enhance online shopping experiences.
Marketing Campaigns: Craft high-quality cinematic promotional content that integrates brand assets via the @ reference system for tailored storytelling and audience engagement.
Music Videos: Generate synchronized visuals with phoneme-accurate lip-syncing for multilingual vocal tracks to support artist and record label promotional needs.
Short Narrative Films: Create compelling narrative-driven video clips with coherent motion and spatial consistency, supporting indie filmmakers and content creators.
Fashion and Luxury Showcases: Produce visually detailed and aesthetic presentations incorporating texture and lighting refinements for high-end brand communications.