The latest Qwen model.

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("ATLASCLOUD_API_KEY"),
    base_url="https://api.atlascloud.ai/v1"
)

response = client.chat.completions.create(
    model="qwen/qwen2.5-7b-instruct",
    messages=[
        {
            "role": "user",
            "content": "hello"
        }
    ],
    max_tokens=1024,
    temperature=0.7
)

print(response.choices[0].message.content)

Install the required dependencies.
pip install requests

All API requests must be authenticated with an API Key. You can obtain your API Key from the Atlas Cloud console.
export ATLASCLOUD_API_KEY="your-api-key-here"

import os

API_KEY = os.environ.get("ATLASCLOUD_API_KEY")

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}

Never expose your API Key in client-side code or public repositories. Use environment variables or a backend proxy instead.
import os
import requests

url = "https://api.atlascloud.ai/v1/chat/completions"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"
}
data = {
    "model": "your-model",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 1024
}

response = requests.post(url, headers=headers, json=data)
print(response.json())

The following parameters are accepted in the request body.
{
    "model": "qwen/qwen2.5-7b-instruct",
    "messages": [
        {
            "role": "user",
            "content": "Hello"
        }
    ],
    "max_tokens": 1024,
    "temperature": 0.7,
    "stream": false
}

The API returns a ChatCompletion-compatible response format.
{
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "created": 1700000000,
    "model": "model-name",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Hello! How can I assist you today?"
            },
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 10,
        "completion_tokens": 20,
        "total_tokens": 30
    }
}
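The request body also accepts "stream": true. With the OpenAI-compatible SDK, streaming output can be consumed roughly as follows (a minimal sketch reusing the client from the quick-start example; the chunk structure is assumed to follow the standard OpenAI streaming format):

# Sketch: stream tokens as they are generated instead of waiting for the full response.
stream = client.chat.completions.create(
    model="qwen/qwen2.5-7b-instruct",
    messages=[{"role": "user", "content": "hello"}],
    max_tokens=1024,
    stream=True
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)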
Atlas Cloud Skills integrates 300+ AI models directly into your AI coding assistant. Install it with a single command, then use natural language to generate images and videos and chat with LLMs.

npx skills add AtlasCloudAI/atlas-cloud-skills

Get an API Key from the Atlas Cloud console and set it as an environment variable.
export ATLASCLOUD_API_KEY="your-api-key-here"

Once installed, you can access all Atlas Cloud models from your AI assistant using natural language.
Atlas Cloud MCP Server connects your IDE to 300+ AI models through the Model Context Protocol. Any MCP-compatible client is supported.
npx -y atlascloud-mcp

Add the following configuration to your IDE's MCP settings file.
{
    "mcpServers": {
        "atlascloud": {
            "command": "npx",
            "args": [
                "-y",
                "atlascloud-mcp"
            ],
            "env": {
                "ATLASCLOUD_API_KEY": "your-api-key-here"
            }
        }
    }
}

QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ, which is capable of thinking and reasoning, can achieve significantly enhanced performance on downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, capable of competitive performance against state-of-the-art reasoning models such as DeepSeek-R1 and o1-mini.

This repo contains the QwQ-32B model.
Note: For the best experience, please review the usage guidelines before deploying QwQ models.
You can try our demo or access QwQ models via QwenChat.
For more details, please refer to our blog, GitHub, and Documentation.
QwQ is based on Qwen2.5, whose code has been in the latest Hugging Face transformers. We advise you to use the latest version of transformers.
With transformers<4.37.0, you will encounter the following error:
KeyError: 'qwen2'
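A quick way to verify the installed version before loading the model (a minimal sketch using the packaging library):

import transformers
from packaging import version

# The qwen2 architecture that QwQ relies on is only registered in transformers >= 4.37.0.
assert version.parse(transformers.__version__) >= version.parse("4.37.0"), \
    "Please upgrade: pip install -U transformers"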
Here is a code snippet with apply_chat_template that shows how to load the tokenizer and model and how to generate content.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/QwQ-32B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many r's are in the word \"strawberry\""
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
To achieve optimal performance, we recommend the following settings:
Enforce Thoughtful Output: Ensure the model starts with "<think>\n" to prevent generating empty thinking content, which can degrade output quality. If you use apply_chat_template and set add_generation_prompt=True, this is already automatically implemented, but it may cause the response to lack the <think> tag at the beginning. This is normal behavior.
Sampling Parameters:
* Use Temperature=0.6, TopP=0.95, MinP=0 instead of Greedy decoding to avoid endless repetitions.
* Use TopK between 20 and 40 to filter out rare token occurrences while maintaining the diversity of the generated output.
* For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may result in occasional language mixing and a slight decrease in performance.
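As an illustration, these settings map onto Hugging Face generate roughly as follows (a sketch assuming a transformers version recent enough to support min_p; TopK=30 is just one value in the recommended 20-40 range):

# Sketch: sampling with the recommended parameters instead of greedy decoding.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=30,
    min_p=0.0
)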
No Thinking Content in History: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. This feature is already implemented in apply_chat_template.
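If you manage the conversation history yourself rather than through apply_chat_template, a minimal sketch of stripping the thinking content (assuming the reasoning is wrapped in <think>...</think>) is:

def strip_thinking(assistant_text: str) -> str:
    # Keep only the final answer; drop everything up to and including </think>.
    marker = "</think>"
    if marker in assistant_text:
        return assistant_text.split(marker, 1)[1].strip()
    return assistant_text.strip()

# Append only the cleaned final answer to the multi-turn history.
messages.append({"role": "assistant", "content": strip_thinking(response)})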
Standardize Output Format: We recommend using prompts to standardize model outputs when benchmarking.
* **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
* **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
For supported frameworks, you could add the following to config.json to enable YaRN:
{
    ...,
    "rope_scaling": {
        "factor": 4.0,
        "original_max_position_embeddings": 32768,
        "type": "yarn"
    }
}
For deployment, we recommend using vLLM. Please refer to our Documentation for usage if you are not familiar with vLLM. Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the rope_scaling configuration only when processing long contexts is required.
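For reference, a minimal offline run with vLLM's Python API might look like the following (a sketch assuming a recent vLLM that provides LLM.chat; exact arguments can vary between versions):

# Sketch: load QwQ-32B with vLLM and generate with the recommended sampling settings.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/QwQ-32B")
params = SamplingParams(temperature=0.6, top_p=0.95, top_k=30, max_tokens=32768)
outputs = llm.chat(
    [{"role": "user", "content": "How many r's are in the word \"strawberry\"?"}],
    params
)
print(outputs[0].outputs[0].text)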
Detailed evaluation results are reported in this 📑 blog.
For requirements on GPU memory and the respective throughput, see results here.
If you find our work helpful, feel free to give us a cite.
@misc{qwq32b,
    title = {QwQ-32B: Embracing the Power of Reinforcement Learning},
    url = {https://qwenlm.github.io/blog/qwq-32b/},
    author = {Qwen Team},
    month = {March},
    year = {2025}
}

@article{qwen2.5,
    title={Qwen2.5 Technical Report},
    author={An Yang and Baosong Yang and Beichen Zhang and Binyuan Hui and Bo Zheng and Bowen Yu and Chengyuan Li and Dayiheng Liu and Fei Huang and Haoran Wei and Huan Lin and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Yang and Jiaxi Yang and Jingren Zhou and Junyang Lin and Kai Dang and Keming Lu and Keqin Bao and Kexin Yang and Le Yu and Mei Li and Mingfeng Xue and Pei Zhang and Qin Zhu and Rui Men and Runji Lin and Tianhao Li and Tianyi Tang and Tingyu Xia and Xingzhang Ren and Xuancheng Ren and Yang Fan and Yang Su and Yichang Zhang and Yu Wan and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zihan Qiu},
    journal={arXiv preprint arXiv:2412.15115},
    year={2024}
}