36 models in total, 36 currently shown.
NEW
HOT
Kimi K2.5
LLM

Kimi K2.5

Kimi K2.5 is an advanced large language model with strong reasoning and upgraded native multimodality. It natively understands and processes text and images, delivering more accurate analysis, better instruction following, and stable performance across complex tasks. Designed for production use, Kimi K2.5 is ideal for AI assistants, enterprise applications, and multimodal workflows that require reliable and high-quality outputs.

Context: 262.14K
Input: $0.56 / million tokens
Output: $2.8 / million tokens
Max output: 65.54K
Cache-Based
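
All prices in this catalog are quoted in USD per million tokens. As a quick illustration (not part of the listing itself), the Python sketch below estimates the cost of one request using the Kimi K2.5 rates shown above; the estimate_cost helper and the token counts are hypothetical, and any cache-based discount is ignored.

# Minimal cost-estimation sketch using the per-million-token prices listed above.
# Prices are taken from the Kimi K2.5 card; the token counts and this helper
# function are illustrative assumptions, not part of the catalog.

KIMI_K25_INPUT_USD_PER_M = 0.56   # Input: $0.56 / million tokens
KIMI_K25_OUTPUT_USD_PER_M = 2.8   # Output: $2.8 / million tokens


def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price: float = KIMI_K25_INPUT_USD_PER_M,
                  output_price: float = KIMI_K25_OUTPUT_USD_PER_M) -> float:
    """Estimate the USD cost of one request at the listed per-million-token rates."""
    return (input_tokens / 1_000_000) * input_price + \
           (output_tokens / 1_000_000) * output_price


if __name__ == "__main__":
    # Example: a 12,000-token prompt with a 2,000-token completion.
    print(f"${estimate_cost(12_000, 2_000):.4f}")  # prints $0.0123

The same arithmetic applies to every card below; only the two per-million rates change.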
NEW
Qwen3 Max 20260123
LLM

Qwen3 Max 20260123

Qwen3-Max is a flagship large language model designed for ultra-long context understanding, powerful reasoning, and high-performance text and code generation, making it well suited for complex, large-scale, and production-grade AI applications.

Context: 252.00K
Input: $1.2 / million tokens
Output: $6 / million tokens
Max output: 32.00K
Gradient-Based
HOT
MiniMax M2.1
LLM

MiniMax M2.1

MiniMax-M2.1 is a lightweight, state-of-the-art large language model optimized for coding, agentic workflows, and modern application development. With only 10 billion activated parameters, it delivers a major jump in real-world capability while maintaining exceptional latency, scalability, and cost efficiency.

Context: 196.61K
Input: $0.29 / million tokens
Output: $0.95 / million tokens
Max output: 65.54K
Cache-Based
NEW
HOT
GLM 4.7
LLM

GLM 4.7

GLM-4.7 is Z.AI’s latest flagship model, featuring upgrades in two key areas: enhanced programming capabilities and more stable multi-step reasoning/execution. It demonstrates significant improvements in executing complex agent tasks while delivering more natural conversational experiences and superior front-end aesthetics.

Context: 202.75K
Input: $0.52 / million tokens
Output: $1.75 / million tokens
Max output: 131.07K
Cache-Based
NEW
HOT
DeepSeek V3.2 Speciale
LLM

DeepSeek V3.2 Speciale

The fastest, most cost-effective model from DeepSeek AI.

Context: 163.84K
Input: $0.4 / million tokens
Output: $1.2 / million tokens
Max output: 65.54K
Cache-Based
NEW
HOT
DeepSeek V3.2
LLM

DeepSeek V3.2

DeepSeek V3.2 is a state-of-the-art large language model combining efficient sparse attention, strong reasoning, and integrated agent capabilities for robust long-context understanding and versatile AI applications.

Context: 163.84K
Input: $0.26 / million tokens
Output: $0.38 / million tokens
Max output: 65.54K
Cache-Based
NEW
HOT
KwaiKAT
LLM

KAT Coder Pro V1

KAT Coder Pro is KwaiKAT's most advanced agentic coding model in the KAT-Coder series. Purpose-built for agentic coding, it excels in real-world software engineering scenarios, achieving a 73.4% solve rate on the SWE-Bench Verified benchmark.

Context: 256.00K
Input: $0.3 / million tokens
Output: $1.2 / million tokens
Max output: 128.00K
Cache-Based
NEW
HOT
KwaiKAT
LLM

KAT Coder Exp-72b-101

KAT Coder is KwaiKAT's most advanced agentic coding model in the KAT-Coder series.

Context: 128.00K
Input: Free
Output: Free
Max output: 32.77K
HOT
MiniMax-M2
LLM

MiniMax-M2

MiniMax-M2 is a lightweight, state-of-the-art large language model optimized for coding, agentic workflows, and modern application development. With only 10 billion activated parameters, it delivers a major jump in real-world capability while maintaining exceptional latency, scalability, and cost efficiency.

Context: 196.61K
Input: $0.255 / million tokens
Output: $1 / million tokens
Max output: 65.54K
Cache-Based
NEW
HOT
DeepSeek V3.2 Exp
LLM

DeepSeek V3.2 Exp

The fastest, most cost-effective model from DeepSeek AI.

Context: 163.84K
Input: $0.27 / million tokens
Output: $0.41 / million tokens
Max output: 65.54K
NEW
HOT
GLM 4.6
LLM

GLM 4.6

357B-parameter efficient MoE model from Zhipu AI.

Context: 202.75K
Input: $0.44 / million tokens
Output: $1.74 / million tokens
Max output: 131.07K
NEW
HOT
Kimi-K2-Thinking
LLM

Kimi-K2-Thinking

Kimi's latest and most powerful open-source model.

Context: 262.14K
Input: $0.6 / million tokens
Output: $2.5 / million tokens
Max output: 65.54K
NEW
HOT
MiMo V2 Flash
LLM

MiMo V2 Flash

No description available.

Context: 262.14K
Input: $0.1 / million tokens
Output: $0.3 / million tokens
Max output: 65.54K
NEW
DeepSeek-V3-0324
LLM

DeepSeek-V3-0324

DeepSeek's updated V3 model released on 03/24/2025.

Context: 131.07K
Input: $0.216 / million tokens
Output: $0.88 / million tokens
Max output: 32.77K
Cache-Based
NEW
HOT
DeepSeek-R1-0528
LLM

DeepSeek-R1-0528

DeepSeek's updated R1 reasoning model, released on 05/28/2025.

Context: 131.07K
Input: $0.55 / million tokens
Output: $2.15 / million tokens
Max output: 32.77K
Cache-Based
HOT
Qwen3-VL-235B-A22B-Instruct
LLM

Qwen3-VL-235B-A22B-Instruct

No description available.

Context: 131.07K
Input: $0.3 / million tokens
Output: $1.5 / million tokens
Max output: 32.77K
NEW
Tongyi DeepResearch 30B A3B
LLM

Tongyi DeepResearch 30B A3B

Tongyi's open agentic model for deep research.

Context: 131.07K
Input: $0.09 / million tokens
Output: $0.45 / million tokens
Max output: 131.07K
NEW
Qwen3 30B A3B Instruct 2507
LLM

Qwen3 30B A3B Instruct 2507

Qwen's latest and most powerful open-source model.

Context: 131.07K
Input: $0.1 / million tokens
Output: $0.3 / million tokens
Max output: 131.07K
Cache-Based
NEW
HOT
Qwen3 Next 80B A3B Thinking
LLM

Qwen3 Next 80B A3B Thinking

Qwen's latest and most powerful open-source model.

Context: 262.14K
Input: $0.15 / million tokens
Output: $1.5 / million tokens
Max output: 262.14K
NEW
HOT
Qwen3 Next 80B A3B Instruct
LLM

Qwen3 Next 80B A3B Instruct

Qwen's latest and most powerful open-source model.

Context: 262.14K
Input: $0.15 / million tokens
Output: $1.5 / million tokens
Max output: 32.77K
NEW
HOT
DeepSeek-V3.1
LLM

DeepSeek-V3.1

DeepSeek's latest and most powerful open-source model.

Context: 131.07K
Input: $0.3 / million tokens
Output: $0.95 / million tokens
Max output: 32.77K
Cache-Based
NEW
HOT
Kimi-K2-Instruct-0905
LLM

Kimi-K2-Instruct-0905

No description available.

Context: 65.54K
Input: $0.6 / million tokens
Output: $2.5 / million tokens
Max output: 65.54K
NEW
HOT
Kimi-K2-Instruct
LLM

Kimi-K2-Instruct

Kimi's latest and most powerful open-source model.

Context: 131.07K
Input: $0.7 / million tokens
Output: $2.5 / million tokens
Max output: 65.54K
Qwen3 8B
LLM

Qwen3 8B

The latest Qwen reasoning model.

Context: 32.00K
Input: $0.05 / million tokens
Output: $0.4 / million tokens
Max output: 8.19K
Qwen3 235B A22B Thinking 2507
LLM

Qwen3 235B A22B Thinking 2507

The latest Qwen reasoning model.

Context: 128.00K
Input: $0.28 / million tokens
Output: $2.3 / million tokens
Max output: 32.77K
Qwen3 VL 235B A22B Thinking
LLM

Qwen3 VL 235B A22B Thinking

The latest Qwen reasoning model.

Context: 128.00K
Input: $0.5 / million tokens
Output: $2.5 / million tokens
Max output: 32.77K
Qwen3 30B A3B
LLM

Qwen3 30B A3B

The latest Qwen reasoning model.

Context: 40.96K
Input: $0.08 / million tokens
Output: $1.25 / million tokens
Max output: 32.77K
Qwen3 30B A3B Thinking 2507
LLM

Qwen3 30B A3B Thinking 2507

The latest Qwen reasoning model.

Context: 128.00K
Input: $0.08 / million tokens
Output: $0.4 / million tokens
Max output: 32.77K
Qwen2.5 7B Instruct
LLM

Qwen2.5 7B Instruct

The latest Qwen model.

Context: 128.00K
Input: $0.04 / million tokens
Output: $0.1 / million tokens
Max output: 8.19K
DeepSeek OCR
LLM

DeepSeek OCR

DeepSeek's latest OCR model.

Context: 8.19K
Input: $0.04 / million tokens
Output: $0.08 / million tokens
Max output: 8.19K
NEW
Qwen3 32B
LLM

Qwen3 32B

The latest Qwen reasoning model.

Context: 40.96K
Input: $0.1 / million tokens
Output: $1.2 / million tokens
Max output: 40.96K
NEW
HOT
Qwen3 Coder
LLM

Qwen3 Coder

Qwen3-Coder is the code-specialized version of Qwen3.

Context: 262.14K
Input: $0.78 / million tokens
Output: $3.8 / million tokens
Max output: 65.54K
Cache-Based
Gradient-Based
NEW
HOT
Qwen3-235B-A22B-Instruct-2507
LLM

Qwen3-235B-A22B-Instruct-2507

235B-parameter MoE instruct model in the Qwen3 series.

Context: 131.07K
Input: $0.2 / million tokens
Output: $0.88 / million tokens
Max output: 32.77K
NEW
GPT OSS 120b
LLM

GPT OSS 120b

117B-parameter MoE language model from OpenAI.

Context: 131.07K
Input: $0.1 / million tokens
Output: $0.2 / million tokens
Max output: 128.00K
NEW
HOT
DeepSeek V3.1 Terminus
LLM

DeepSeek V3.1 Terminus

DeepSeek's latest and most powerful open-source model.

Context: 131.07K
Input: $0.3 / million tokens
Output: $0.95 / million tokens
Max output: 32.77K
Cache-Based
NEW
HOT
LongCat Flash Chat
LLM

LongCat Flash Chat

Meituan's latest and most powerful open-source model.

Context: 131.07K
Input: $0.2 / million tokens
Output: $0.8 / million tokens
Max output: 32.77K
300+ models, ready to use instantly,

all on Atlas Cloud.

Explore all models