
Seedream v4 Edit API by ByteDance
Open and Advanced Large-Scale Image Generative Models.
Your request will cost $0.027 per run. For $10 you can run this model approximately 370 times.
Code Example
import os
import time

import requests

API_KEY = os.environ["ATLASCLOUD_API_KEY"]

# Step 1: Start image generation
generate_url = "https://api.atlascloud.ai/api/v1/model/generateImage"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}
data = {
    "model": "bytedance/seedream-v4/edit",
    "prompt": "A beautiful landscape with mountains and lake",
    "width": 512,
    "height": 512,
    "steps": 20,
    "guidance_scale": 7.5,
}
generate_response = requests.post(generate_url, headers=headers, json=data)
generate_result = generate_response.json()
prediction_id = generate_result["data"]["id"]

# Step 2: Poll for the result
poll_url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"

def check_status():
    while True:
        response = requests.get(poll_url, headers={"Authorization": f"Bearer {API_KEY}"})
        result = response.json()
        if result["data"]["status"] == "completed":
            print("Generated image:", result["data"]["outputs"][0])
            return result["data"]["outputs"][0]
        elif result["data"]["status"] == "failed":
            raise Exception(result["data"]["error"] or "Generation failed")
        else:
            # Still processing; wait 2 seconds before polling again
            time.sleep(2)

image_url = check_status()

Install
Install the required package for your language.
pip install requests

Authentication
All API requests require authentication via an API key. You can get your API key from the Atlas Cloud dashboard.
export ATLASCLOUD_API_KEY="your-api-key-here"

HTTP Headers
import os

API_KEY = os.environ.get("ATLASCLOUD_API_KEY")
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}

Never expose your API key in client-side code or public repositories. Use environment variables or a backend proxy instead.
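The snippets on this page rebuild the same headers for every call. As a small convenience sketch (not an API requirement), a `requests.Session` configured once can carry the auth headers across the generate, poll, and upload requests:

```python
import os

import requests


def make_session() -> requests.Session:
    """Build a Session whose default headers include the Atlas Cloud auth token."""
    session = requests.Session()
    session.headers.update({
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('ATLASCLOUD_API_KEY', '')}",
    })
    return session


session = make_session()
# session.post(...) and session.get(...) now send the auth header automatically.
```

The Session also reuses the underlying TCP connection, which matters when you poll the prediction endpoint every few seconds.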
Submit a request
import os

import requests

url = "https://api.atlascloud.ai/api/v1/model/generateImage"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"
}
data = {
    "model": "your-model",
    "prompt": "A beautiful landscape"
}
response = requests.post(url, headers=headers, json=data)
print(response.json())

Submit a Request
Submit an asynchronous generation request. The API returns a prediction ID that you can use to check the status and retrieve the result.
/api/v1/model/generateImage

Request Body
import os

import requests

url = "https://api.atlascloud.ai/api/v1/model/generateImage"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"
}
data = {
    "model": "bytedance/seedream-v4/edit",
    "input": {
        "prompt": "A beautiful landscape with mountains and lake"
    }
}
response = requests.post(url, headers=headers, json=data)
result = response.json()
print(f"Prediction ID: {result['id']}")
print(f"Status: {result['status']}")

Response
{
  "id": "pred_abc123",
  "status": "processing",
  "model": "model-name",
  "created_at": "2025-01-01T00:00:00Z"
}

Check Status
Poll the prediction endpoint to check the current status of your request.
/api/v1/model/prediction/{prediction_id}

Polling Example
import os
import time

import requests

prediction_id = "pred_abc123"
url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"
headers = {"Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"}

while True:
    response = requests.get(url, headers=headers)
    result = response.json()
    status = result["data"]["status"]
    print(f"Status: {status}")
    if status in ["completed", "succeeded"]:
        output_url = result["data"]["outputs"][0]
        print(f"Output URL: {output_url}")
        break
    elif status == "failed":
        print(f"Error: {result['data'].get('error', 'Unknown')}")
        break
    time.sleep(3)

Status Values
| Status | Meaning |
|---|---|
| processing | The request is still being processed. |
| completed | Generation is complete. Outputs are available. |
| succeeded | Generation succeeded. Outputs are available. |
| failed | Generation failed. Check the error field. |

Completed Response
{
  "data": {
    "id": "pred_abc123",
    "status": "completed",
    "outputs": [
      "https://storage.atlascloud.ai/outputs/result.png"
    ],
    "metrics": {
      "predict_time": 8.3
    },
    "created_at": "2025-01-01T00:00:00Z",
    "completed_at": "2025-01-01T00:00:10Z"
  }
}

Upload Files
Upload files to Atlas Cloud storage and get a URL you can use in your API requests. Use multipart/form-data to upload.
/api/v1/model/uploadMedia

Upload Example
import os

import requests

url = "https://api.atlascloud.ai/api/v1/model/uploadMedia"
headers = {"Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"}

with open("image.png", "rb") as f:
    files = {"file": ("image.png", f, "image/png")}
    response = requests.post(url, headers=headers, files=files)

result = response.json()
download_url = result["data"]["download_url"]
print(f"File URL: {download_url}")

Response
{
  "data": {
    "download_url": "https://storage.atlascloud.ai/uploads/abc123/image.png",
    "file_name": "image.png",
    "content_type": "image/png",
    "size": 1024000
  }
}

Input Schema
The following parameters are accepted in the request body.
No parameters available.
Example Request Body
{
  "model": "bytedance/seedream-v4/edit"
}

Output Schema
The API returns a prediction response with the generated output URLs.
Example Response
{
  "id": "pred_abc123",
  "status": "completed",
  "model": "model-name",
  "outputs": [
    "https://storage.atlascloud.ai/outputs/result.png"
  ],
  "metrics": {
    "predict_time": 8.3
  },
  "created_at": "2025-01-01T00:00:00Z",
  "completed_at": "2025-01-01T00:00:10Z"
}

Atlas Cloud Skills
Atlas Cloud Skills integrates 300+ AI models directly into your AI coding assistant. One command to install, then use natural language to generate images, videos, and chat with LLMs.
Supported Clients
Install
npx skills add AtlasCloudAI/atlas-cloud-skills

Setup API Key
Get your API key from the Atlas Cloud dashboard and set it as an environment variable.
export ATLASCLOUD_API_KEY="your-api-key-here"

Capabilities
Once installed, you can use natural language in your AI assistant to access all Atlas Cloud models.
MCP Server
Atlas Cloud MCP Server connects your IDE with 300+ AI models via the Model Context Protocol. Works with any MCP-compatible client.
Supported Clients
Install
npx -y atlascloud-mcp

Configuration
Add the following configuration to your IDE's MCP settings file.
{
  "mcpServers": {
    "atlascloud": {
      "command": "npx",
      "args": [
        "-y",
        "atlascloud-mcp"
      ],
      "env": {
        "ATLASCLOUD_API_KEY": "your-api-key-here"
      }
    }
  }
}

Available Tools
API Schema
Schema not available

Seedream 4.0 - ByteDance's All-in-One Visual Creation Model
The Latest Generation of Doubao's Image Creation Engine
Seedream 4.0 is ByteDance's latest generation image creation model, positioned as an "integrated generation and editing" professional tool. The same model can handle text-to-image, image editing, and multi-image generation tasks, making your creative journey from inspiration to implementation more efficient and controllable.
Model Highlights
Featuring five core capabilities: Precision Instruction Editing, High Feature Preservation, Deep Intent Understanding, Multi-Image I/O, and Ultra HD Resolution. Covering diverse creative scenarios, bringing every inspiration to life instantly with high quality.
Precision Instruction Editing
Simply describe your needs in plain language to accurately perform add, delete, modify, and replace operations. Enable applications across commercial design, artistic creation, and entertainment.
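As an illustration of instruction-based editing, the request below follows the Submit a Request example on this page; note that the `image` field name for the source image URL is an assumption here, so check the model's input schema for the actual parameter:

```python
import json


def build_edit_payload(instruction: str, image_url: str) -> dict:
    """Build a request body for an instruction-based edit.

    The "model" and "input.prompt" fields mirror the documented examples;
    "image" is a hypothetical field name for the source image URL.
    """
    return {
        "model": "bytedance/seedream-v4/edit",
        "input": {
            "prompt": instruction,
            "image": image_url,  # assumed field name, not from the input schema
        },
    }


payload = build_edit_payload(
    "Replace the background with a sunset sky; keep the subject unchanged",
    "https://storage.atlascloud.ai/uploads/abc123/image.png",
)
print(json.dumps(payload, indent=2))
# POST this payload to /api/v1/model/generateImage with your Bearer token,
# then poll /api/v1/model/prediction/{id} as shown in the polling example.
```

A source image hosted elsewhere can first be uploaded via the uploadMedia endpoint, and its `download_url` passed in place of the URL above.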
High Feature Preservation
Deep Intent Understanding
Multi-Image Input/Output
Input multiple images at once, supporting complex editing operations like combination, migration, replacement, and derivation, achieving high-difficulty synthesis.
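A multi-image request might be sketched as follows; treat this as a hypothetical shape, since an `images` list is an assumed input field rather than a documented parameter:

```python
import json


def build_multi_image_payload(instruction: str, image_urls: list) -> dict:
    """Build a request body that references several source images.

    "images" is an assumed, list-valued field name; consult the model's
    input schema for the real multi-image parameter.
    """
    return {
        "model": "bytedance/seedream-v4/edit",
        "input": {
            "prompt": instruction,
            "images": list(image_urls),  # assumed field name
        },
    }


payload = build_multi_image_payload(
    "Place the jacket from the second image onto the person in the first image",
    [
        "https://storage.atlascloud.ai/uploads/abc123/person.png",
        "https://storage.atlascloud.ai/uploads/abc123/jacket.png",
    ],
)
print(json.dumps(payload, indent=2))
```

Ordering the URLs to match how the prompt refers to them ("the first image", "the second image") keeps the instruction unambiguous.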
Ultra HD Resolution
Resolution upgraded again, supporting ultra-high-definition output for professional-grade image quality.
Application Scenarios
Prompt Examples & Creative Templates
Discover the power of Seedream 4.0 with these carefully crafted prompt examples. Each template showcases specific capabilities and helps you achieve professional results.

Perspective & Composition Control
Transform camera angles, adjust scene distance, and modify aspect ratios with precision.

Prompt: "Change the camera angle from eye-level to bird's-eye view, adjust the scene from close-up to medium shot, and convert the image aspect ratio to 16:9. Maintain all original elements and lighting while adapting the composition for the new perspective and format."
Mathematical Whiteboard Creation
Generate a clean whiteboard with precise mathematical formulas and equations.

Prompt: "Create a clean white whiteboard with the following mathematical equations written in clear, professional handwriting: E=mc², √(9)=3, and the quadratic formula (-b±√(b²-4ac))/2a. Use black or dark blue marker style, with proper spacing and mathematical notation."
Sketch to Reality Transformation
Transform rough sketches into detailed realistic objects, bringing wild imagination to life.

Prompt: "Based on this rough sketch, generate a vintage television set from the 1950s-60s era. Transform the abstract lines and shapes into a realistic, detailed old-style TV with wooden cabinet, rounded screen, control knobs, and period-appropriate design elements. Make the vague concept concrete and lifelike."
Lossless Detail Enhancement
Maximize original image detail retention, avoiding AI-generated artifacts for truly lossless editing.

Prompt: "Enhance this image while maximizing the preservation of original details. Avoid any AI-generated 'plastic' or 'oily' artifacts. Maintain authentic textures, natural lighting, and original image characteristics. Focus on clean, lossless enhancement that respects the source material's integrity."
Creative Font Styling
Transform plain text into artistic, creative typography while maintaining readability.

Prompt: "Transform all the text in this image into creative, artistic fonts. Replace the standard typography with stylized lettering that matches the image's aesthetic - use decorative fonts, calligraphy styles, or artistic text treatments. Maintain the same text content and layout while making the typography more visually appealing and creative."
Core Capabilities
Advanced text understanding and image generation capabilities, supporting various artistic styles and professional requirements, from concept to final artwork in one step.
Natural language-based editing commands, supporting object addition/removal, style transfer, background replacement, and more complex editing operations.
Revolutionary multi-image input capability, enabling complex image synthesis, style migration, and creative combinations with unprecedented control.
Why Choose Seedream 4.0?
All-in-One Solution
A single model handles generation, editing, and composition - no need to switch between different tools.

Professional Quality
Commercial-grade output quality with precise control over every detail.

Consistent Style
Maintains character and style consistency across multiple generations and edits.

Technical Specifications
Experience the Power of Seedream 4.0
Join creators worldwide in revolutionizing visual content creation with ByteDance's most advanced integrated image AI model.
Seedream 4: A next-generation multimodal image generation system developed by ByteDance Seed
Model Card Overview
| Field | Description |
|---|---|
| Model Name | Seedream 4 |
| Developed by | ByteDance Seed Team |
| Release Date | September 9, 2025 |
| Model Type | Multimodal Image Generation |
| Related Links | Official Website, Technical Report (arXiv), GitHub Organization (ByteDance-Seed) |
Introduction
Seedream 4 is a powerful, efficient, and high-performance multimodal image generation system that unifies text-to-image (T2I) synthesis, image editing, and multi-image composition within a single, integrated framework. Engineered for scalability and efficiency, the model introduces a novel diffusion transformer (DiT) architecture combined with a powerful Variational Autoencoder (VAE). This design enables the fast generation of native high-resolution images up to 4K, while significantly reducing computational requirements compared to its predecessors.
The primary goal of Seedream 4 is to extend traditional T2I systems into a more interactive and multidimensional creative tool. It is designed to handle complex tasks involving precise image editing, in-context reasoning, and multi-image referencing, pushing the boundaries of generative AI for both creative and professional applications.
Key Features & Innovations
Seedream 4 introduces several key advancements in image generation technology:
- Unified Multimodal Architecture: It integrates T2I generation, image editing, and multi-image composition into a single model, allowing for seamless transitions between different creative workflows.
- Efficient and Scalable Design: The model features a highly efficient DiT backbone and a high-compression VAE, achieving over 10x inference acceleration compared to Seedream 3.0 while delivering superior performance. This architecture is hardware-friendly and easily scalable.
- Ultra-Fast, High-Resolution Output: Seedream 4 can generate native high-resolution images (from 1K to 4K) in as little as 1.4 to 1.8 seconds for a 2K image, greatly enhancing user interaction and production efficiency.
- Advanced Multimodal Capabilities: The model excels at complex tasks such as precise, instruction-based image editing, in-context reasoning, and generating new images by blending elements from multiple reference images.
- Professional and Knowledge-Based Content Generation: Beyond artistic imagery, Seedream 4 can generate structured and knowledge-based content, including charts, mathematical formulas, and professional design materials, bridging the gap between creative expression and practical application.
- Advanced Training and Acceleration: The model is pre-trained on billions of text-image pairs and utilizes a multi-stage post-training process (CT, SFT, RLHF) to enhance its capabilities. Inference is accelerated through a combination of adversarial distillation, quantization, and speculative decoding.
Model Architecture & Technical Details
Seedream 4's architecture is a significant leap forward, focusing on efficiency and power. The core components are a diffusion transformer (DiT) and a Variational Autoencoder (VAE).
- Pre-training Data: Billions of text-image pairs, including a specialized pipeline for knowledge-related data like instructional images and formulas.
- Training Strategy: A multi-stage approach, starting at a 512x512 resolution and fine-tuning at higher resolutions up to 4K.
- Post-training: A joint multi-task process involving Continuing Training (CT), Supervised Fine-Tuning (SFT), and Reinforcement Learning from Human Feedback (RLHF) to enhance instruction following and alignment.
- Inference Acceleration: A holistic system combining an adversarial learning framework, hardware-aware quantization (adaptive 4/8-bit), and speculative decoding.
Intended Use & Applications
Seedream 4 is designed for a wide range of creative and professional applications, moving beyond simple image generation to become a comprehensive visual content creation tool.
- Creative Content Generation: Creating high-quality, artistic images, illustrations, and concept art from text prompts.
- Advanced Image Editing: Performing complex edits on existing images using natural language instructions, such as adding or removing objects, changing styles, and modifying backgrounds.
- Design and Marketing: Generating professional design materials, product mockups, and marketing visuals with precise control over text and branding elements.
- Educational and Technical Content: Creating structured, knowledge-based visuals like diagrams, charts, and mathematical formulas for educational or technical documentation.
- Multi-Image Composition: Blending elements from multiple source images to create new compositions, such as virtual try-ons for fashion or combining characters with new scenes.
Performance
Seedream 4 has demonstrated state-of-the-art performance on both internal and public benchmarks as of September 18, 2025, often outperforming other leading models in text-to-image and image editing tasks.
MagicBench (Internal Benchmark)
| Task | Performance Summary |
|---|---|
| Text-to-Image | Achieved high scores in prompt following, aesthetics, and text-rendering. |
| Single-Image Editing | Showed a good balance between prompt following and alignment with the source image. |






