MiniMax M2.7 (LLM)

MiniMax-M2.7 is a lightweight, state-of-the-art large language model optimized for coding, agentic workflows, and modern application development. With only 10 billion activated parameters, it delivers a major jump in real-world capability while maintaining exceptional latency, scalability, and cost efficiency.

11 models total, currently showing 11.
NEW
HOT
DeepSeek V4 Pro (LLM, PRO)

DeepSeek V4 Pro is a state-of-the-art large language model combining efficient sparse attention, strong reasoning, and integrated agent capabilities for robust long-context understanding and versatile AI applications.

Context: 1048.58K
Input: $1.7/M tokens
Output: $3.4/M tokens
Max output: 65.54K
Cache-Based
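The per-token prices above are quoted per million tokens, split between input and output. As an illustration only, here is a minimal sketch of how a request cost works out under that scheme (the function name and the token counts are hypothetical; prices are taken from the DeepSeek V4 Pro listing above, and this ignores any cache-based discount):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Estimate the USD cost of one call from per-million-token prices."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Example: 50K input tokens and 8K output tokens at V4 Pro's listed rates.
cost = request_cost(50_000, 8_000, 1.7, 3.4)
print(f"${cost:.4f}")
```

The same formula applies to every model below; only the two per-million rates change.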
NEW
HOT
DeepSeek V4 Flash (LLM)

DeepSeek V4 Flash is a state-of-the-art large language model combining efficient sparse attention, strong reasoning, and integrated agent capabilities for robust long-context understanding and versatile AI applications.

Context: 1048.58K
Input: $0.14/M tokens
Output: $0.28/M tokens
Max output: 393.22K
Cache-Based
NEW
HOT
DeepSeek V3.2 Speciale (LLM)

Fastest, most cost-effective model from DeepSeek AI.

Context: 163.84K
Input: $0.4/M tokens
Output: $1.2/M tokens
Max output: 163.84K
Cache-Based
NEW
HOT
DeepSeek V3.2 Fast (LLM)

DeepSeek V3.2 is a state-of-the-art large language model combining efficient sparse attention, strong reasoning, and integrated agent capabilities for robust long-context understanding and versatile AI applications.

Context: 163.84K
Input: $0.5/M tokens
Output: $1.5/M tokens
Max output: 131.07K
NEW
HOT
DeepSeek V3.2 (LLM)

DeepSeek V3.2 is a state-of-the-art large language model combining efficient sparse attention, strong reasoning, and integrated agent capabilities for robust long-context understanding and versatile AI applications.

Context: 163.84K
Input: $0.26/M tokens
Output: $0.38/M tokens
Max output: 163.84K
Cache-Based
NEW
HOT
DeepSeek V3.2 Exp (LLM)

Fastest, most cost-effective model from DeepSeek AI.

Context: 163.84K
Input: $0.27/M tokens
Output: $0.41/M tokens
Max output: 163.84K
NEW
DeepSeek-V3-0324 (LLM)

DeepSeek's updated V3 model, released on March 24, 2025.

Context: 131.07K
Input: $0.216/M tokens
Output: $0.88/M tokens
Max output: 16.38K
Cache-Based
NEW
HOT
DeepSeek-R1-0528 (LLM)

DeepSeek's advanced reasoning model.

Context: 131.07K
Input: $0.55/M tokens
Output: $2.15/M tokens
Max output: 131.07K
Cache-Based
NEW
HOT
DeepSeek-V3.1 (LLM)

DeepSeek's latest and most powerful open-source model.

Context: 131.07K
Input: $0.3/M tokens
Output: $0.95/M tokens
Max output: 65.54K
Cache-Based
DeepSeek OCR (LLM)

The latest DeepSeek model.

Context: 8.19K
Input: $0.04/M tokens
Output: $0.08/M tokens
Max output: 8.19K
NEW
HOT
DeepSeek V3.1 Terminus (LLM)

DeepSeek's latest and most powerful open-source model.

Context: 131.07K
Input: $0.3/M tokens
Output: $0.95/M tokens
Max output: 65.54K
Cache-Based

Join our Discord community

Join the Discord community for the latest model updates, prompts, and support.