Kimi K2.5
LLM

Kimi K2.5 is an advanced large language model with strong reasoning and upgraded native multimodality. It natively understands and processes text and images, delivering more accurate analysis, better instruction following, and stable performance across complex tasks. Designed for production use, Kimi K2.5 is ideal for AI assistants, enterprise applications, and multimodal workflows that require reliable and high-quality outputs.

GLM 5 Turbo
LLM

GLM-5 Turbo is Z.AI’s latest flagship model, featuring upgrades in two key areas: enhanced programming capabilities and more stable multi-step reasoning/execution. It demonstrates significant improvements in executing complex agent tasks while delivering more natural conversational experiences and superior front-end aesthetics.

Context: 262.14K tokens
Input: $1.2 / M tokens
Output: $4 / M tokens
Max output: 131.07K tokens
Pricing: cache-based
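The per-million-token rates on the card above translate directly into a per-request cost. A minimal sketch of that arithmetic (the function name and example token counts are illustrative, not part of any official SDK; default rates are the GLM 5 Turbo figures above):

```python
def estimate_cost(input_tokens, output_tokens,
                  input_rate=1.2, output_rate=4.0):
    """Estimate request cost in USD from per-million-token rates.

    Defaults use the GLM 5 Turbo card above:
    $1.2/M input tokens, $4/M output tokens.
    """
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# e.g. a 100K-token prompt producing a 10K-token completion:
# 0.1M * $1.2 + 0.01M * $4 = $0.16
cost = estimate_cost(100_000, 10_000)
print(f"${cost:.2f}")  # $0.16
```

Cache-based pricing (cached input tokens billed at a discounted rate) would lower the input term for repeated prompt prefixes; the sketch above assumes all tokens are billed at the full rate.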
GLM 5
LLM

GLM-5 is Z.AI’s latest flagship model, featuring upgrades in two key areas: enhanced programming capabilities and more stable multi-step reasoning/execution. It demonstrates significant improvements in executing complex agent tasks while delivering more natural conversational experiences and superior front-end aesthetics.

Context: 202.75K tokens
Input: $0.95 / M tokens
Output: $3.15 / M tokens
Max output: 202.75K tokens
Pricing: cache-based
GLM 4.7
LLM

GLM-4.7 is Z.AI’s latest flagship model, featuring upgrades in two key areas: enhanced programming capabilities and more stable multi-step reasoning/execution. It demonstrates significant improvements in executing complex agent tasks while delivering more natural conversational experiences and superior front-end aesthetics.

Context: 202.75K tokens
Input: $0.52 / M tokens
Output: $1.75 / M tokens
Max output: 202.75K tokens
Pricing: cache-based
GLM 4.6
LLM

357B-parameter efficient MoE model from Zhipu AI.

Context: 202.75K tokens
Input: $0.44 / M tokens
Output: $1.74 / M tokens
Max output: 202.75K tokens
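The four GLM cards above can be compared directly on price for a fixed workload. A minimal sketch (the rate table is copied from the cards above; the workload size is illustrative):

```python
# Per-million-token USD rates from the GLM cards above: (input, output)
RATES = {
    "GLM 5 Turbo": (1.2, 4.0),
    "GLM 5": (0.95, 3.15),
    "GLM 4.7": (0.52, 1.75),
    "GLM 4.6": (0.44, 1.74),
}

def workload_cost(input_tokens, output_tokens, rates):
    """Cost in USD of one request at the given (input, output) $/M rates."""
    in_rate, out_rate = rates
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# Example: 50K input tokens and 5K output tokens per request
for model, rates in RATES.items():
    print(f"{model}: ${workload_cost(50_000, 5_000, rates):.4f}")
```

At this example workload, GLM 5 Turbo costs $0.0800 per request versus $0.0307 for GLM 4.6, so the older model is roughly 2.6x cheaper; whether that trade-off is worth it depends on the quality difference, which the cards above do not quantify.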