




The Flux.2 Series is a comprehensive family of AI image generation models. Across the lineup, Flux supports text-to-image, image-to-image, reconstruction, contextual reasoning, and high-speed creative workflows.
Atlas Cloud provides you with the latest industry-leading creative models.

Open and Advanced Large-Scale Image Generative Models.

FLUX.1 [dev] text-to-image model: a 12-billion-parameter rectified flow transformer.

Ultra-fast, high-quality image generation with FLUX.1 [dev] and LoRA support for personalized styles and brand-specific outputs.

Rapid, high-quality image generation with FLUX.1 [dev] and LoRA support for personalized styles and brand-specific outputs.

FLUX.1 [dev] text-to-image model: a 12-billion-parameter rectified flow transformer on an ultra-fast endpoint.

FLUX.1 Kontext [dev] is a development version of the state-of-the-art image editing model that lets you edit images using text prompts. It makes editing intuitive by understanding the relationship between visuals and language.

FLUX.1 Kontext Ultra Fast [dev] is a development version of the state-of-the-art image editing model that lets you edit images using text prompts. It makes editing intuitive by understanding the relationship between visuals and language, with ultra-fast processing speed.

Fast FLUX.1 Kontext [dev] endpoint with LoRA support for rapid image editing using pre-trained adapters for brand and style. Ready-to-use REST inference API with strong performance, no cold starts, and affordable pricing; see the sketch below.
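As a rough illustration of how a LoRA-backed editing endpoint like this might be called, here is a minimal sketch. The route, model ID, and field names ("image_url", "loras", "scale") are assumptions for illustration only; the actual request schema may differ, so consult the API reference before use.

```python
# Hedged sketch of an image-editing request with a LoRA adapter attached.
# The route, model ID, and payload field names are illustrative assumptions,
# not documented values.
import requests

payload = {
    "model": "flux.1-kontext-dev-lora",  # hypothetical model ID
    "prompt": "recolor the jacket to match the brand palette",
    "image_url": "https://example.com/input.png",
    "loras": [
        # A pre-trained style adapter; "scale" blends its influence.
        {"path": "https://example.com/brand-style-lora.safetensors", "scale": 0.8}
    ],
}

resp = requests.post(
    "https://api.atlascloud.ai/v1/images/edits",  # hypothetical route
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json=payload,
    timeout=120,
)
resp.raise_for_status()
print(resp.json())  # response typically carries a URL or base64 image payload
```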

FLUX.1 Kontext LoRA Ultra Fast [dev] is an optimized, high-performance version of the state-of-the-art image editing model that lets you edit images using text prompts and LoRA models. It maintains the same high quality while delivering significantly faster processing times.

FLUX.1 Kontext Multi [dev] is a development version of the state-of-the-art image editing model that lets you edit multiple images using text prompts. It makes batch editing intuitive by understanding the relationship between visuals and language.

FLUX.1 Kontext Multi Ultra Fast [dev] is an optimized, high-performance version of the state-of-the-art image editing model that lets you edit multiple images using text prompts. It maintains the same high quality while delivering significantly faster processing times.

FLUX.1 Kontext [pro] text-to-image delivers state-of-the-art image generation with unprecedented prompt following, photorealistic rendering, and flawless typography.

Experimental version of FLUX.1 Kontext [pro] with multi image handling capabilities.

Flux Kontext is a state-of-the-art image editing model that aims to match the performance of closed-source models such as GPT-4o and Gemini 2 Flash.

FLUX.1 Kontext [max] text-to-image is a new premium model that brings maximum performance across all aspects, with greatly improved prompt adherence.

Experimental version of FLUX.1 Kontext [max] with multi image handling capabilities.

Flux Kontext is a state-of-the-art image editing model that aims to match the performance of closed-source models such as GPT-4o and Gemini 2 Flash.

FLUX.1 [schnell] is a 12 billion parameter flow transformer that generates high-quality images from text in 1 to 4 steps, suitable for personal and commercial use.

FLUX.1 [schnell] is the fastest image generation model in the family: a 12-billion-parameter rectified flow transformer tailored for local development and personal use.

FLUX.2 [flex] Text-to-Image delivers versatile, style-rich image generation with broader aesthetics at high speed, perfect for creative exploration and diverse visual styles.

FLUX.2 [pro] Text-to-Image is the flagship model built for maximum fidelity, cinematic visual quality, and strong prompt adherence—ideal for hero shots, key art, and high-stakes campaigns.

FLUX.2 [dev] Text-to-Image is a lightweight base model optimised for speed and LoRA training, perfect for high-volume generation pipelines and domain-specific fine-tuning.

FLUX.2 [flex] Edit offers powerful, precise, and colour-accurate image editing with a broader stylistic range, ideal for creative exploration and controllable transformations.

FLUX.2 [pro] Edit is a premium image editing model for detailed, high-fidelity transformations on critical assets, delivering flagship-quality results for demanding production work.

FLUX.2 [dev] Edit is a lightweight, LoRA-friendly image editing model for fast, style-consistent edits on existing images with professional visual quality and efficient resource usage.

Generates crisp, high-resolution images with accurate lighting, textures, and detail for production use.

Optimized architecture delivers rapid image generation on modest GPUs and edge hardware.

Supports styles, presets, and prompt controls so designers can quickly dial in the exact look they want.

Simple APIs and plugins connect Nano Banana to design tools, apps, and pipelines with minimal setup.

Efficient diffusion kernels and smart caching keep generation costs low, so teams can experiment freely at scale.

Flexible Deployment Options: Run in the cloud, on-prem, or in VPC environments.
Get started in minutes — follow these simple steps to integrate and deploy models through Atlas Cloud's platform.
Sign up at atlascloud.ai and complete verification. New users receive free credits to explore the platform and test models.
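Once you have an API key, a typical first call looks something like the sketch below. This is a minimal, hedged example: the base URL, route, model ID, and payload fields are illustrative assumptions rather than documented values, so check Atlas Cloud's API reference for the real schema.

```python
# Minimal sketch of calling an Atlas Cloud model endpoint with an API key.
# The route, model ID, and payload fields below are illustrative assumptions.
import os
import requests

API_KEY = os.environ["ATLAS_CLOUD_API_KEY"]  # issued after sign-up

resp = requests.post(
    "https://api.atlascloud.ai/v1/images/generations",  # hypothetical route
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "flux.2-pro",  # hypothetical model ID
        "prompt": "a product hero shot of a ceramic mug, studio lighting",
        "size": "1024x1024",
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json())  # typically contains a URL or base64 payload for the image
```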
Combining the advanced Flux.2 image models with Atlas Cloud's GPU-accelerated platform provides unmatched performance, scalability, and developer experience.
Low Latency: GPU-optimized inference for real-time reasoning.
Unified API: Run Flux.2 image models, GPT, Gemini, and DeepSeek with one integration (see the sketch after this list).
Transparent Pricing: Predictable per-token billing with serverless options.
Developer Experience: SDKs, analytics, fine-tuning tools, and templates.
Reliability: 99.99% uptime, RBAC, and compliance-ready logging.
Security & Compliance: SOC 2 Type II, HIPAA alignment, data sovereignty in the US.
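To make the Unified API point concrete, the sketch below reuses a single request shape across several backends by swapping only the model ID. The OpenAI-compatible route and the model names are assumptions for illustration, not confirmed identifiers.

```python
# Sketch of "one integration, many models": the same request shape reused
# across providers by changing the model ID. Route and IDs are hypothetical.
import requests

def chat(model: str, prompt: str) -> str:
    resp = requests.post(
        "https://api.atlascloud.ai/v1/chat/completions",  # hypothetical route
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Same call, different backends (hypothetical model IDs):
for model in ("gpt-4o", "gemini-2.0-flash", "deepseek-chat"):
    print(model, "->", chat(model, "Summarize rectified flow in one sentence."))
```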
Nano Banana 2 (by Google) is a generative image model that balances lightning-fast rendering with exceptional visual quality. With an improved price-performance ratio, it achieves breakthrough micro-detail depiction, accurate native text rendering, and complex physical structure reconstruction. It serves as a highly efficient, commercial-grade visual production tool for developers, marketing teams, and content creators.
Seedream 5.0 (by ByteDance) is a next-generation multimodal visual synthesis engine. By groundbreakingly fusing real-time web retrieval with intelligent logical reasoning, it precisely comprehends physical laws and complex instructions. It serves as an end-to-end visual productivity powerhouse for professional designers and creators—empowering the entire workflow from the spark of inspiration to instant generation and precision editing.
Seedance 2.0 (by ByteDance) is a multimodal video generation model that redefines "controllable creation," moving beyond the limitations of text or start/end frames. It supports quad-modal inputs (text, image, video, and audio) and introduces an industry-leading "Universal Reference" system. By precisely replicating the composition, camera movement, and character actions from reference assets, Seedance 2.0 solves critical issues with character consistency and physical coherence, empowering creators to act as true "directors" with deep control over their output.
Kling 3.0 Series Video Models (by Kuaishou) is a native multi-modal video generation system that integrates high-fidelity video generation, intelligent storyboard storytelling, and precise video editing. Powered by its exclusive "AI Director" mode and Omni-level subject consistency technology, it breaks through bottlenecks in long-take storytelling and character stability, offering global creators and enterprises a cinematic, commercially ready, and efficient video solution.
GLM (General Language Model) is a large language model developed by ZAI (Zhipu AI) for text understanding, generation, and reasoning. It supports both Chinese and English and performs well in dialogue, content creation, translation, and code assistance. GLM is widely used in chatbots, enterprise AI systems, and developer applications due to its stable performance and versatility.
Explore OpenAI’s language and video models on Atlas Cloud: ChatGPT for advanced reasoning and interaction, and Sora-2 for physics-aware video generation.
Vidu (by ShengShu Technology) is a foundational video model built on the proprietary U-ViT architecture, combining the strengths of Diffusion and Transformer models. It features superior semantic understanding and generation capabilities, producing coherent, fluid visuals that adhere to physical laws without the need for interpolation. With exceptional spatiotemporal consistency and a deep understanding of diverse cultural elements, Vidu empowers professional filmmakers and creators with a stable, efficient, and imaginative tool for video production.
Van is a flagship video model family built on a 3D VAE and Flow Matching, retaining cinematic visuals and complex dynamics. By leveraging proprietary compute distillation, it breaks the "quality equals cost" barrier to deliver extreme inference speeds and ultra-low costs. This makes Van the premier engine for enterprises and developers seeking high-frequency, scalable video production on a budget.
MiniMax is a large language model developed by MiniMax AI, focused on efficient reasoning, long-context understanding, and scalable text generation. It is designed for complex tasks such as dialogue systems, document analysis, content creation, and AI agents. With an emphasis on high performance at lower computational cost, MiniMax is well suited for enterprise applications and developer use cases where stability, efficiency, and cost control are important.
Kimi is a large language model developed by Moonshot AI, designed for reasoning, coding, and long-context understanding. It performs well in complex tasks such as code generation, analysis, and intelligent assistants. With strong performance and efficient architecture, Kimi is suitable for enterprise AI applications and developer use cases. Its balance of capability and cost makes it an increasingly popular choice in the LLM ecosystem.
Veo 3.1 (by Google) is a flagship generative video model that sets a new standard for cinematic AI by deeply integrating semantic capabilities to deliver cinematic visuals, synchronized audio, and complex storytelling in a single workflow. Distinguishing itself through superior adherence to cinematic terminology and physics-based consistency, it offers professional filmmakers an unparalleled tool for transforming scripts into coherent, high-fidelity productions with precise directorial control.
The Sora-2 family from OpenAI is the next-generation video and audio generation model, enabling both text-to-video and image-to-video outputs with synchronized dialogue and sound effects, improved physical realism, and fine-grained control.