Seedream v4.5 Sequential
text-to-image

Seedream v4.5 Sequential API by ByteDance

bytedance/seedream-v4.5/sequential
Sequential

ByteDance's latest image generation model with batch generation support. Generate up to 15 images in a single request.


Your request will cost $0.036 per run. For $10 you can run this model approximately 277 times.
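The run estimate follows directly from the listed price: at $0.036 per run, a $10 budget covers roughly 277 runs.

```python
price_per_run = 0.036  # USD per run, from the pricing note above
budget = 10.00         # USD

# Integer division of budget by unit price gives the run count
runs = int(budget / price_per_run)
print(runs)  # 277
```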


Parameters

Code example

import os
import time

import requests

# Read the API key from the environment rather than hard-coding it
API_KEY = os.environ.get("ATLASCLOUD_API_KEY")
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}

# Step 1: Start image generation
generate_url = "https://api.atlascloud.ai/api/v1/model/generateImage"
data = {
    "model": "bytedance/seedream-v4.5/sequential",
    "prompt": "A beautiful landscape with mountains and lake",
    "width": 512,
    "height": 512,
    "steps": 20,
    "guidance_scale": 7.5,
}

generate_response = requests.post(generate_url, headers=headers, json=data)
generate_result = generate_response.json()
prediction_id = generate_result["data"]["id"]

# Step 2: Poll for the result until it reaches a terminal state
poll_url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"

def check_status():
    while True:
        response = requests.get(poll_url, headers=headers)
        result = response.json()

        if result["data"]["status"] == "completed":
            print("Generated image:", result["data"]["outputs"][0])
            return result["data"]["outputs"][0]
        elif result["data"]["status"] == "failed":
            raise Exception(result["data"]["error"] or "Generation failed")
        else:
            # Still processing; wait 2 seconds before polling again
            time.sleep(2)

image_url = check_status()

Install

Install the required package for your language.

bash
pip install requests

Authentication

All API requests require authentication via an API key. You can obtain your API key from the Atlas Cloud dashboard.

bash
export ATLASCLOUD_API_KEY="your-api-key-here"

HTTP headers

python
import os

API_KEY = os.environ.get("ATLASCLOUD_API_KEY")
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}
Protect your API key

Never expose your API key in client-side code or public repositories. Use environment variables or a backend proxy instead.

Send a request

import os

import requests

url = "https://api.atlascloud.ai/api/v1/model/generateImage"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ.get('ATLASCLOUD_API_KEY')}"
}
data = {
    "model": "your-model",
    "prompt": "A beautiful landscape"
}

response = requests.post(url, headers=headers, json=data)
print(response.json())

Send a request

Submit an asynchronous generation request. The API returns a prediction ID that you can use to check the status and retrieve the result.

POST /api/v1/model/generateImage

Request body

import os

import requests

url = "https://api.atlascloud.ai/api/v1/model/generateImage"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ.get('ATLASCLOUD_API_KEY')}"
}

data = {
    "model": "bytedance/seedream-v4.5/sequential",
    "input": {
        "prompt": "A beautiful landscape with mountains and lake"
    }
}

response = requests.post(url, headers=headers, json=data)
result = response.json()

print(f"Prediction ID: {result['id']}")
print(f"Status: {result['status']}")

Response

{
  "id": "pred_abc123",
  "status": "processing",
  "model": "model-name",
  "created_at": "2025-01-01T00:00:00Z"
}

Check the status

Query the prediction endpoint to check the current status of your request.

GET /api/v1/model/prediction/{prediction_id}

Polling example

import os
import time

import requests

prediction_id = "pred_abc123"
url = f"https://api.atlascloud.ai/api/v1/model/prediction/{prediction_id}"
headers = {"Authorization": f"Bearer {os.environ.get('ATLASCLOUD_API_KEY')}"}

while True:
    response = requests.get(url, headers=headers)
    result = response.json()
    status = result["data"]["status"]
    print(f"Status: {status}")

    if status in ["completed", "succeeded"]:
        output_url = result["data"]["outputs"][0]
        print(f"Output URL: {output_url}")
        break
    elif status == "failed":
        print(f"Error: {result['data'].get('error', 'Unknown')}")
        break

    time.sleep(3)

Status values

processing: The request is still being processed.
completed: Generation has finished. Results are available.
succeeded: Generation succeeded. Results are available.
failed: Generation failed. Check the error field.
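Because both "completed" and "succeeded" signal success, clients should treat them interchangeably when deciding whether to stop polling. A minimal sketch of that status handling:

```python
# Terminal states per the status table above: two success values, one failure
TERMINAL_SUCCESS = {"completed", "succeeded"}
TERMINAL_FAILURE = {"failed"}

def is_done(status: str) -> bool:
    """Return True when polling can stop (success or failure)."""
    return status in TERMINAL_SUCCESS or status in TERMINAL_FAILURE

def is_success(status: str) -> bool:
    """Return True only for the two success states."""
    return status in TERMINAL_SUCCESS
```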

Completed response

{
  "data": {
    "id": "pred_abc123",
    "status": "completed",
    "outputs": [
      "https://storage.atlascloud.ai/outputs/result.png"
    ],
    "metrics": {
      "predict_time": 8.3
    },
    "created_at": "2025-01-01T00:00:00Z",
    "completed_at": "2025-01-01T00:00:10Z"
  }
}
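Once a prediction reports a success status, the URLs in `outputs` can be fetched like any other HTTP resource. A sketch that streams the first output to disk (the URL in the commented call is the placeholder from the example above):

```python
import requests

def download_output(output_url: str, path: str) -> str:
    """Stream a generated image from its output URL to a local file."""
    response = requests.get(output_url, stream=True, timeout=60)
    response.raise_for_status()
    with open(path, "wb") as f:
        # Write in chunks so large images are not held in memory at once
        for chunk in response.iter_content(chunk_size=8192):
            f.write(chunk)
    return path

# Example (uses the placeholder URL from the completed response above):
# download_output("https://storage.atlascloud.ai/outputs/result.png", "result.png")
```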

Upload files

Upload files to Atlas Cloud storage and get a URL you can use in your API requests. Use multipart/form-data for the upload.

POST /api/v1/model/uploadMedia

Upload example

import os

import requests

url = "https://api.atlascloud.ai/api/v1/model/uploadMedia"
headers = {"Authorization": f"Bearer {os.environ.get('ATLASCLOUD_API_KEY')}"}

with open("image.png", "rb") as f:
    files = {"file": ("image.png", f, "image/png")}
    response = requests.post(url, headers=headers, files=files)

result = response.json()
download_url = result["data"]["download_url"]
print(f"File URL: {download_url}")

Response

{
  "data": {
    "download_url": "https://storage.atlascloud.ai/uploads/abc123/image.png",
    "file_name": "image.png",
    "content_type": "image/png",
    "size": 1024000
  }
}
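The `download_url` returned by the upload endpoint can then be referenced in a later generation request, e.g. for image editing. This page's input schema does not name the field that accepts a reference-image URL, so the `"image"` key below is a hypothetical placeholder:

```python
import os

# Hypothetical request body: "image" is an assumed field name -- check the
# model's actual input schema for the parameter that takes a reference URL.
payload = {
    "model": "bytedance/seedream-v4.5/sequential",
    "input": {
        "prompt": "Restyle this photo as a watercolor painting",
        "image": "https://storage.atlascloud.ai/uploads/abc123/image.png",
    },
}

# To send it (requires the requests package):
# import requests
# response = requests.post(
#     "https://api.atlascloud.ai/api/v1/model/generateImage",
#     headers={
#         "Content-Type": "application/json",
#         "Authorization": f"Bearer {os.environ.get('ATLASCLOUD_API_KEY')}",
#     },
#     json=payload,
# )
```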

Input schema

The following parameters are accepted in the request body.

Total: 0 · Required: 0 · Optional: 0

No parameters available.

Example request body

json
{
  "model": "bytedance/seedream-v4.5/sequential"
}
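The parameter list above did not load, so the batch-size field below is a hypothetical name; the model description only states that up to 15 images can be generated per request. A sketch of what a batch request body might look like:

```python
import json

# Hypothetical body for batch generation: "max_images" is an assumed
# parameter name -- the actual schema is not shown on this page.
body = {
    "model": "bytedance/seedream-v4.5/sequential",
    "input": {
        "prompt": "A four-panel storyboard of a sunrise over mountains",
        "max_images": 4,  # the model supports up to 15 images per request
    },
}
print(json.dumps(body, indent=2))
```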

Output schema

The API returns a prediction response containing the URLs of the generated outputs.

id (string, required)
Unique identifier for the prediction.

status (string, required)
Current status of the prediction.
One of: processing, completed, succeeded, failed

model (string, required)
The model used for generation.

outputs (array[string])
Array of output URLs. Available when status is "completed".

error (string)
Error message if status is "failed".

metrics (object)
Performance metrics.

predict_time (number)
Time taken for image generation in seconds.

created_at (string, required)
ISO 8601 timestamp when the prediction was created. Format: date-time

completed_at (string)
ISO 8601 timestamp when the prediction was completed. Format: date-time
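The fields above map naturally onto a small typed model. A sketch using Python dataclasses (field names follow the schema; optional fields default to None):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Prediction:
    """Typed view of the prediction response described above."""
    id: str
    status: str            # processing | completed | succeeded | failed
    model: str
    created_at: str        # ISO 8601 timestamp
    outputs: list[str] = field(default_factory=list)
    error: Optional[str] = None
    predict_time: Optional[float] = None
    completed_at: Optional[str] = None

def parse_prediction(data: dict) -> Prediction:
    """Build a Prediction from a decoded JSON response body."""
    metrics = data.get("metrics") or {}
    return Prediction(
        id=data["id"],
        status=data["status"],
        model=data["model"],
        created_at=data["created_at"],
        outputs=data.get("outputs", []),
        error=data.get("error"),
        predict_time=metrics.get("predict_time"),
        completed_at=data.get("completed_at"),
    )
```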

Example response

json
{
  "id": "pred_abc123",
  "status": "completed",
  "model": "model-name",
  "outputs": [
    "https://storage.atlascloud.ai/outputs/result.png"
  ],
  "metrics": {
    "predict_time": 8.3
  },
  "created_at": "2025-01-01T00:00:00Z",
  "completed_at": "2025-01-01T00:00:10Z"
}

Atlas Cloud Skills

Atlas Cloud Skills integrates over 300 AI models directly into your AI coding assistant. One command to install, then use natural language to generate images and videos and to chat with LLMs.

Supported clients

Claude Code
OpenAI Codex
Gemini CLI
Cursor
Windsurf
VS Code
Trae
GitHub Copilot
Cline
Roo Code
Amp
Goose
Replit
40+ supported clients

Install

bash
npx skills add AtlasCloudAI/atlas-cloud-skills

Configure the API key

Get your API key from the Atlas Cloud dashboard and set it as an environment variable.

bash
export ATLASCLOUD_API_KEY="your-api-key-here"

Features

Once installed, you can use natural language in your AI assistant to access all Atlas Cloud models.

Image generation: Generate images with models like Nano Banana 2, Z-Image, and more.
Video creation: Create videos from text or images with Kling, Vidu, Veo, and others.
LLM chat: Chat with Qwen, DeepSeek, and other large language models.
Media upload: Upload local files for image editing and image-to-video workflows.

MCP Server

The Atlas Cloud MCP server connects your IDE to over 300 AI models via the Model Context Protocol. It works with any MCP-compatible client.

Supported clients

Cursor
VS Code
Windsurf
Claude Code
OpenAI Codex
Gemini CLI
Cline
Roo Code
100+ supported clients

Install

bash
npx -y atlascloud-mcp

Configuration

Add the following configuration to your IDE's MCP settings file.

json
{
  "mcpServers": {
    "atlascloud": {
      "command": "npx",
      "args": [
        "-y",
        "atlascloud-mcp"
      ],
      "env": {
        "ATLASCLOUD_API_KEY": "your-api-key-here"
      }
    }
  }
}

Available tools

atlas_generate_image: Generate images from text prompts.
atlas_generate_video: Create videos from text or images.
atlas_chat: Chat with large language models.
atlas_list_models: Browse over 300 available AI models.
atlas_quick_generate: One-step content creation with automatic model selection.
atlas_upload_media: Upload local files for API workflows.

Seedream 4.5 (NEW RELEASE)

Sound and Vision, All in One Take

ByteDance's groundbreaking AI model that generates perfectly synchronized audio and video simultaneously from a single unified process. Experience true native audio-visual generation with millimeter-precise lip sync across more than 8 languages.

Key Updates

Experience the next level of AI-powered visual creation

Superior Aesthetics

Produces cinematic visuals with refined lighting and rendering for professional-grade output.

Higher Consistency

Maintains stable subjects, clear details, and coherent scenes across multiple images.

Smarter Instruction Following

Accurately responds to complex prompts with precise visual control and interactive editing.

Stronger Spatial Understanding

Generates realistic proportions, object placement, and scene layout with accuracy.

Richer World Knowledge

Creates knowledge-based visuals with accurate scientific and technical reasoning.

Deeper Industry Application

Supports professional workflows for e-commerce, film, advertising, gaming, and more.

Industry Applications

🛒 E-commerce: Product photography & marketing
🎬 Film & TV: Concept art & storyboarding
📺 Advertising: Campaign visuals & creatives
🎮 Gaming: Character & environment design
📚 Education: Instructional illustrations
🏠 Interior Design: Space visualization
🏗️ Architecture: Architectural rendering
👗 Fashion: Virtual try-on & styling

Improvements from 4.0

See how Seedream 4.5 outperforms the previous version

1. Face Quality

Significant improvement when the face occupies only a small portion of the frame.

Before (4.0): Distorted facial features in distant shots
After (4.5): Clear, natural facial details preserved

2. Text Rendering

Enhanced rendering of small characters.

Before (4.0): Blurry or incorrect text generation
After (4.5): Sharp, accurate text placement

3. ID Preservation

Stronger identity retention.

Before (4.0): Character features drift across generations
After (4.5): Consistent identity across all outputs

Experience Native Audio-Visual Generation

Join filmmakers, advertisers, and creators around the world who are revolutionizing video content creation with the groundbreaking technology of Seedance 1.5 Pro.

Cinematic Quality
Fast Generation
🎯 Precise Control

Seedream 4.5: A professional, high-fidelity multimodal image generation model by ByteDance Seed

Model Card Overview

Model Name: Seedream 4.5
Developed By: ByteDance Seed
Release Date: December 2025
Model Type: Multimodal Image Generation
Related Links: Official Website, Technical Paper (arXiv), GitHub Repository

Introduction

Seedream 4.5 is a state-of-the-art, multimodal generative model engineered for scalability, efficiency, and professional-grade output. As an advanced version of Seedream 4.0, it is built upon a unified framework that seamlessly integrates text-to-image synthesis, sophisticated image editing, and complex multi-image composition. The model's primary design goal is to deliver professional visual creatives with exceptional consistency and fidelity. This is achieved through a significant scaling of the model architecture and training data, which enhances its ability to preserve reference details, render dense text and typography accurately, and understand nuanced user instructions.

Key Features & Innovations

  • Unified Multimodal Framework: Integrates text-to-image (T2I), single-image editing, and multi-image composition into a single, cohesive model, allowing for diverse and flexible creative workflows.
  • High-Fidelity & High-Resolution Generation: Capable of generating native high-resolution images (up to 4K), capturing fine details, realistic textures, and accurate lighting for professional use cases.
  • Advanced Image Editing: Excels at preserving the core structure, lighting, and color tone of reference images while applying precise edits based on natural language instructions.
  • Enhanced Multi-Image Composition: Accurately identifies and blends main subjects from multiple reference images, enabling complex creative compositions and style fusions.
  • Superior Typography and Text Rendering: Features significantly improved capabilities for rendering clear, legible, and contextually integrated text within images.
  • Efficient and Scalable Architecture: Built on a highly efficient Diffusion Transformer (DiT) and a powerful Variational Autoencoder (VAE), enabling fast inference and effective scalability.
  • Optimized for Professional Use: Demonstrates strong performance in generating structured, knowledge-based content such as design materials, posters, and product visualizations, bridging the gap between creative generation and practical industry applications.

Model Architecture & Technical Details

Seedream 4.5's architecture is an extension of the foundation laid by Seedream 4.0. The core of the model is a highly efficient and scalable Diffusion Transformer (DiT), which significantly increases model capacity while reducing computational requirements for training and inference. This is paired with a powerful Variational Autoencoder (VAE) with a high compression ratio, which minimizes the number of image tokens processed in the latent space, further boosting efficiency.

Training and Data: The model was pre-trained on billions of text-image pairs, covering a vast range of taxonomies and knowledge-centric concepts. Training was conducted in multiple stages, starting at a 512x512 resolution and fine-tuning at progressively higher resolutions up to 4K. The post-training phase is extensive, incorporating Continuing Training (CT) for foundational knowledge, Supervised Fine-Tuning (SFT) for artistic quality, and Reinforcement Learning from Human Feedback (RLHF) to align outputs with human preferences. A sophisticated Prompt Engineering (PE) module, built upon the Seed1.5-VL vision-language model, is used to process user inputs and enhance instruction following.

Intended Use & Applications

Seedream 4.5 is designed for professional creators and applications demanding high-quality, consistent, and controllable image generation. Its intended uses include:

  • Professional Content Creation: Generating cinematic-quality visuals for digital advertising, social media, and print.
  • Advanced Photo Editing: Performing complex edits, such as changing clothing materials, modifying backgrounds, or adjusting lighting, while maintaining subject integrity.
  • E-commerce and Product Visualization: Creating high-quality product showcases and marketing materials.
  • Graphic Design: Designing posters, key visuals, and other materials that require the integration of stylized text and typography.
  • Creative Storytelling: Producing sequential, thematically related images for storyboards or visual narratives.

Performance

Seedream 4.5 and its predecessor, Seedream 4.0, have demonstrated top-tier performance on public benchmarks. The models are evaluated on the Artificial Analysis Arena, a real-time competitive leaderboard that ranks models based on blind user votes.

Text-to-Image Leaderboard (December 2025)

Rank | Model | Developer | ELO Score | Release Date
1 | GPT Image 1.5 (high) | OpenAI | 1,252 | Dec 2025
2 | Nano Banana Pro | Google | 1,223 | Nov 2025
5 | Seedream 4.0 | ByteDance Seed | 1,193 | Sept 2025
7 | Seedream 4.5 | ByteDance Seed | 1,169 | Dec 2025
