NEW
HOT
Kimi K2.5
LLM

Kimi K2.5 is an advanced large language model with strong reasoning and upgraded native multimodality. It natively understands and processes text and images, delivering more accurate analysis, better instruction following, and stable performance across complex tasks. Designed for production use, Kimi K2.5 is ideal for AI assistants, enterprise applications, and multimodal workflows that require reliable and high-quality outputs.

Context: 262.14K
Input: $0.56/M tokens
Output: $2.8/M tokens
Max Output: 65.54K

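The per-million-token prices listed for each model make request-level cost estimation simple arithmetic: cost = (input tokens / 1M) × input price + (output tokens / 1M) × output price. Below is a minimal sketch using the Kimi K2.5 rates above; the token counts are hypothetical example values, not measurements.

```python
# Estimate per-request cost from the per-million-token prices listed above.
# Prices: Kimi K2.5 ($0.56/M input, $2.8/M output). The token counts below
# are hypothetical examples, not measured values.

INPUT_PRICE_PER_M = 0.56   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 2.80  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a 12,000-token prompt with a 2,000-token completion.
print(f"${request_cost(12_000, 2_000):.4f}")  # ≈ $0.0123
```

The same formula applies to every entry below; only the two prices change.
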
NEW
Qwen3 Max 20260123
LLM

Qwen3-Max is a flagship large language model designed for ultra-long context understanding, powerful reasoning, and high-performance text and code generation, making it well suited for complex, large-scale, and production-grade AI applications.

Context: 252.00K
Input: $1.2/M tokens
Output: $6/M tokens
Max Output: 32.00K

HOT
MiniMax M2.1
LLM

MiniMax-M2.1 is a lightweight, state-of-the-art large language model optimized for coding, agentic workflows, and modern application development. With only 10 billion activated parameters, it delivers a major jump in real-world capability while maintaining exceptional latency, scalability, and cost efficiency.

Context: 196.61K
Input: $0.29/M tokens
Output: $0.95/M tokens
Max Output: 65.54K

NEW
HOT
GLM 4.7
LLM

GLM-4.7 is Z.AI’s latest flagship model, featuring upgrades in two key areas: enhanced programming capabilities and more stable multi-step reasoning/execution. It demonstrates significant improvements in executing complex agent tasks while delivering more natural conversational experiences and superior front-end aesthetics.

Context: 202.75K
Input: $0.52/M tokens
Output: $1.75/M tokens
Max Output: 131.07K

NEW
HOT
DeepSeek V3.2 Speciale
LLM

Fastest, most cost-effective model from DeepSeek AI.

Context: 163.84K
Input: $0.4/M tokens
Output: $1.2/M tokens
Max Output: 65.54K

NEW
HOT
DeepSeek V3.2
LLM

DeepSeek V3.2 is a state-of-the-art large language model combining efficient sparse attention, strong reasoning, and integrated agent capabilities for robust long-context understanding and versatile AI applications.

Context: 163.84K
Input: $0.26/M tokens
Output: $0.38/M tokens
Max Output: 65.54K

NEW
HOT
KAT Coder Pro V1
LLM

KAT Coder Pro is KwaiKAT's most advanced agentic coding model in the KAT-Coder series. Designed specifically for agentic coding tasks, it excels in real-world software engineering scenarios, achieving a 73.4% solve rate on the SWE-Bench Verified benchmark.

Context: 256.00K
Input: $0.3/M tokens
Output: $1.2/M tokens
Max Output: 128.00K

NEW
HOT
KAT Coder Exp-72b-101
LLM

KAT Coder is KwaiKAT's most advanced agentic coding model in the KAT-Coder series.

Context: 128.00K
Input: Free
Output: Free
Max Output: 32.77K

HOT
MiniMax-M2
LLM

MiniMax-M2 is a lightweight, state-of-the-art large language model optimized for coding, agentic workflows, and modern application development. With only 10 billion activated parameters, it delivers a major jump in real-world capability while maintaining exceptional latency, scalability, and cost efficiency.

Context: 196.61K
Input: $0.255/M tokens
Output: $1/M tokens
Max Output: 65.54K

NEW
HOT
DeepSeek V3.2 Exp
LLM

Fastest, most cost-effective model from DeepSeek AI.

Context: 163.84K
Input: $0.27/M tokens
Output: $0.41/M tokens
Max Output: 65.54K

NEW
HOT
GLM 4.6
LLM

357B-parameter efficient MoE model from Zhipu AI.

Context: 202.75K
Input: $0.44/M tokens
Output: $1.74/M tokens
Max Output: 131.07K

NEW
HOT
Kimi-K2-Thinking
LLM

Kimi's latest and most powerful open-source model.

Context: 262.14K
Input: $0.6/M tokens
Output: $2.5/M tokens
Max Output: 65.54K

NEW
HOT
MiMo V2 Flash
LLM

No description available.

Context: 262.14K
Input: $0.1/M tokens
Output: $0.3/M tokens
Max Output: 65.54K

NEW
DeepSeek-V3-0324
LLM

DeepSeek's updated V3 model released on 03/24/2025.

Context: 131.07K
Input: $0.216/M tokens
Output: $0.88/M tokens
Max Output: 32.77K

NEW
HOT
DeepSeek-R1-0528
LLM

DeepSeek's updated R1 reasoning model released on 05/28/2025.

Context: 131.07K
Input: $0.55/M tokens
Output: $2.15/M tokens
Max Output: 32.77K

HOT
Qwen3-VL-235B-A22B-Instruct
LLM

No description available.

Context: 131.07K
Input: $0.3/M tokens
Output: $1.5/M tokens
Max Output: 32.77K

NEW
Tongyi DeepResearch 30B A3B
LLM

Tongyi's open agentic model for deep-research tasks, with 30B total and about 3B activated parameters.

Context: 131.07K
Input: $0.09/M tokens
Output: $0.45/M tokens
Max Output: 131.07K

NEW
Qwen3 30B A3B Instruct 2507
LLM

Qwen's latest and most powerful open-source model.

Context: 131.07K
Input: $0.1/M tokens
Output: $0.3/M tokens
Max Output: 131.07K

NEW
HOT
Qwen3 Next 80B A3B Thinking
LLM

Qwen's latest and most powerful open-source model.

Context: 262.14K
Input: $0.15/M tokens
Output: $1.5/M tokens
Max Output: 262.14K

NEW
HOT
Qwen3 Next 80B A3B Instruct
LLM

Qwen's latest and most powerful open-source model.

Context: 262.14K
Input: $0.15/M tokens
Output: $1.5/M tokens
Max Output: 32.77K

NEW
HOT
DeepSeek-V3.1
LLM

DeepSeek's latest and most powerful open-source model.

Context: 131.07K
Input: $0.3/M tokens
Output: $0.95/M tokens
Max Output: 32.77K

NEW
HOT
Kimi-K2-Instruct-0905
LLM

No description available.

Context: 65.54K
Input: $0.6/M tokens
Output: $2.5/M tokens
Max Output: 65.54K

NEW
HOT
Kimi-K2-Instruct
LLM

Kimi's latest and most powerful open-source model.

Context: 131.07K
Input: $0.7/M tokens
Output: $2.5/M tokens
Max Output: 65.54K

Qwen3 8B
LLM

The latest Qwen reasoning model.

Context: 32.00K
Input: $0.05/M tokens
Output: $0.4/M tokens
Max Output: 8.19K

Qwen3 235B A22B Thinking 2507
LLM

The latest Qwen reasoning model.

Context: 128.00K
Input: $0.28/M tokens
Output: $2.3/M tokens
Max Output: 32.77K

Qwen3 VL 235B A22B Thinking
LLM

The latest Qwen reasoning model.

Context: 128.00K
Input: $0.5/M tokens
Output: $2.5/M tokens
Max Output: 32.77K

Qwen3 30B A3B
LLM

The latest Qwen reasoning model.

Context: 40.96K
Input: $0.08/M tokens
Output: $1.25/M tokens
Max Output: 32.77K

Qwen3 30B A3B Thinking 2507
LLM

The latest Qwen reasoning model.

Context: 128.00K
Input: $0.08/M tokens
Output: $0.4/M tokens
Max Output: 32.77K

Qwen2.5 7B Instruct
LLM

A compact 7B-parameter instruct model from the Qwen2.5 series.

Context: 128.00K
Input: $0.04/M tokens
Output: $0.1/M tokens
Max Output: 8.19K

DeepSeek OCR
LLM

DeepSeek's document OCR model.

Context: 8.19K
Input: $0.04/M tokens
Output: $0.08/M tokens
Max Output: 8.19K

NEW
Qwen3 32B
LLM

The latest Qwen reasoning model.

Context: 40.96K
Input: $0.1/M tokens
Output: $1.2/M tokens
Max Output: 40.96K

NEW
HOT
Qwen3 Coder
LLM

Qwen3-Coder is the coding-focused variant of Qwen3.

Context: 262.14K
Input: $0.78/M tokens
Output: $3.8/M tokens
Max Output: 65.54K

NEW
HOT
Qwen3-235B-A22B-Instruct-2507
LLM

235B-parameter MoE instruct model in the Qwen3 series.

Context: 131.07K
Input: $0.2/M tokens
Output: $0.88/M tokens
Max Output: 32.77K

NEW
GPT OSS 120b
LLM

117B-parameter MoE language model from OpenAI.

Context: 131.07K
Input: $0.1/M tokens
Output: $0.2/M tokens
Max Output: 128.00K

NEW
HOT
DeepSeek V3.1 Terminus
LLM

DeepSeek's latest and most powerful open-source model.

Context: 131.07K
Input: $0.3/M tokens
Output: $0.95/M tokens
Max Output: 32.77K

NEW
HOT
LongCat Flash Chat
LLM

Meituan's latest and most powerful open-source model.

Context: 131.07K
Input: $0.2/M tokens
Output: $0.8/M tokens
Max Output: 32.77K

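The catalog lists pricing and context limits only; how a given model is called depends on the provider's API. The sketch below assumes an OpenAI-compatible chat completions endpoint, a common serving pattern but not confirmed here; the base URL, environment variable, and model identifier are placeholders.

```python
# Hypothetical usage sketch: calling a listed model through an
# OpenAI-compatible chat completions endpoint. The base URL, API key
# variable, and model name are placeholders, not confirmed values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # placeholder endpoint
    api_key=os.environ["PROVIDER_API_KEY"],          # placeholder credential
)

response = client.chat.completions.create(
    model="kimi-k2.5",  # placeholder identifier; use the exact name from the catalog
    messages=[{"role": "user", "content": "Summarize the trade-offs of MoE models."}],
    max_tokens=512,     # stays well under the listed 65.54K max-output limit
)
print(response.choices[0].message.content)
```
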
Start from 300+ models, only at Atlas Cloud.

Explore all models