Qwen/Qwen3-Coder

Qwen3-Coder is the code version of Qwen3.


Parameters

OpenAI SDK

Integration

Input Schema

The following parameters are accepted in the request body.

Total: 0 · Required: 0 · Optional: 0

No parameters are available yet.

Request body example

```json
{
  "model": "Qwen/Qwen3-Coder",
  "messages": [
    {
      "role": "user",
      "content": "Hello"
    }
  ],
  "max_tokens": 1024,
  "temperature": 0.7,
  "stream": false
}
```
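The request body above can also be assembled programmatically before it is sent to an OpenAI-compatible endpoint. A minimal sketch using only the standard library (the helper name `build_chat_request` is ours, not part of any SDK):

```python
import json


def build_chat_request(model: str, user_content: str,
                       max_tokens: int = 1024,
                       temperature: float = 0.7,
                       stream: bool = False) -> str:
    """Serialize a chat-completions request body like the JSON example above."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
        "max_tokens": max_tokens,
        "temperature": temperature,
        "stream": stream,
    }
    return json.dumps(body, ensure_ascii=False)


payload = build_chat_request("Qwen/Qwen3-Coder", "Hello")
print(payload)
```

The resulting string can be posted as-is with any HTTP client, or the equivalent keyword arguments can be passed to the OpenAI SDK's `chat.completions.create` method.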

Qwen3-Coder-480B-A35B-Instruct

Highlights

Qwen3-Coder-480B-A35B-Instruct features the following key enhancements:

  • Significant Performance among open models on Agentic Coding, Agentic Browser-Use, and other foundational coding tasks, achieving results comparable to Claude Sonnet.
  • Long-context Capabilities with native support for 256K tokens, extendable up to 1M tokens using Yarn, optimized for repository-scale understanding.
  • Agentic Coding support for most platforms such as Qwen Code and CLINE, featuring a specially designed function call format.
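The 1M-token extension mentioned above is typically enabled through a YaRN rope-scaling entry in the model's `config.json`. A sketch of such a fragment, assuming a 4x scaling factor over the native 262,144-token context (verify the exact values against the official model card before use):

```json
{
  "rope_scaling": {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 262144
  }
}
```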

Model Overview

Qwen3-Coder-480B-A35B-Instruct has the following features:

  • Type: Causal Language Models
  • Training Stage: Pretraining & Post-training
  • Number of Parameters: 480B in total and 35B activated
  • Number of Layers: 62
  • Number of Attention Heads (GQA): 96 for Q and 8 for KV
  • Number of Experts: 160
  • Number of Activated Experts: 8
  • Context Length: 262,144 natively.
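The attention and MoE figures above imply two simple ratios; this sketch derives them purely from the numbers listed (the helper names are ours):

```python
def gqa_group_size(num_q_heads: int, num_kv_heads: int) -> int:
    """Query heads sharing each KV head under grouped-query attention (GQA)."""
    assert num_q_heads % num_kv_heads == 0
    return num_q_heads // num_kv_heads


def moe_active_fraction(active_experts: int, total_experts: int) -> float:
    """Fraction of experts routed per token in the MoE layers."""
    return active_experts / total_experts


# 96 Q heads over 8 KV heads: 12 query heads share each KV head.
print(gqa_group_size(96, 8))
# 8 of 160 experts active per token: 5% of experts fire per forward pass.
print(moe_active_fraction(8, 160))
```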

NOTE: This model supports only non-thinking mode and does not generate <think></think> blocks in its output. Additionally, specifying enable_thinking=False is no longer required.
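Because this model never emits <think></think> blocks, no special output handling is needed. If your client code is shared with thinking-enabled Qwen3 variants, however, a small sanitizer keeps the handling uniform; a minimal sketch (the helper name is ours):

```python
import re

# Matches a <think>...</think> block plus any trailing whitespace.
_THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)


def strip_think(text: str) -> str:
    """Remove <think> blocks if present; a no-op for Qwen3-Coder output."""
    return _THINK_RE.sub("", text)


print(strip_think("<think>plan the reply</think>Hello"))  # Hello
print(strip_think("Hello"))                               # Hello (unchanged)
```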
