The advanced LLM

```python
import os

from openai import OpenAI

# The API is OpenAI-compatible, so the official SDK works with a custom base_url.
client = OpenAI(
    api_key=os.getenv("ATLASCLOUD_API_KEY"),
    base_url="https://api.atlascloud.ai/v1",
)

response = client.chat.completions.create(
    model="deepseek-ai/deepseek-r1-0528",
    messages=[{"role": "user", "content": "hello"}],
    max_tokens=1024,
    temperature=0.7,
)

print(response.choices[0].message.content)
```

Install the required dependencies.
```bash
pip install requests
```

All API requests must be authenticated with an API Key. You can obtain one from the Atlas Cloud console.
```bash
export ATLASCLOUD_API_KEY="your-api-key-here"
```

```python
import os

# Read the key from the environment and attach it as a Bearer token.
API_KEY = os.environ.get("ATLASCLOUD_API_KEY")

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}",
}
```

Never expose your API Key in client-side code or public repositories. Use environment variables or a backend proxy instead.
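As one hedged illustration of the backend-proxy pattern (Flask is used purely as an example framework, and the `/chat` route is hypothetical): the key stays on the server, and clients call the proxy instead of the API directly.

```python
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
UPSTREAM = "https://api.atlascloud.ai/v1/chat/completions"

@app.post("/chat")
def chat():
    # The API key lives only in the server's environment;
    # browsers and client apps never see it.
    upstream = requests.post(
        UPSTREAM,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}",
        },
        json=request.get_json(),
        timeout=60,
    )
    return jsonify(upstream.json()), upstream.status_code
```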
```python
import os

import requests

url = "https://api.atlascloud.ai/v1/chat/completions"
headers = {
    "Content-Type": "application/json",
    # A shell-style "$ATLASCLOUD_API_KEY" placeholder is not expanded inside a
    # Python string, so read the key from the environment instead.
    "Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}",
}
data = {
    "model": "your-model",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 1024,
}

response = requests.post(url, headers=headers, json=data)
print(response.json())
```

The following parameters are accepted in the request body.
```json
{
  "model": "deepseek-ai/deepseek-r1-0528",
  "messages": [
    {
      "role": "user",
      "content": "Hello"
    }
  ],
  "max_tokens": 1024,
  "temperature": 0.7,
  "stream": false
}
```
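If `stream` is set to `true`, the completion is delivered incrementally instead of in one final payload. A minimal sketch of consuming the stream with the OpenAI SDK, reusing the quickstart configuration:

```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("ATLASCLOUD_API_KEY"),
    base_url="https://api.atlascloud.ai/v1",
)

# With stream=True the SDK yields chunks as they arrive.
stream = client.chat.completions.create(
    model="deepseek-ai/deepseek-r1-0528",
    messages=[{"role": "user", "content": "hello"}],
    max_tokens=1024,
    stream=True,
)

for chunk in stream:
    # Each chunk carries a delta; content may be absent on role or final chunks.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```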
{
"id": "chatcmpl-abc123",
"object": "chat.completion",
"created": 1700000000,
"model": "model-name",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! How can I assist you today?"
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 10,
"completion_tokens": 20,
"total_tokens": 30
}
}Atlas Cloud Skills 将 300+ AI 模型直接集成到您的 AI 编程助手中。一条命令安装,即可用自然语言生成图像、视频和与 LLM 对话。
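Because the response follows the ChatCompletion schema shown above, the reply text and token usage can be read straight out of the JSON. A minimal sketch building on the `requests` example:

```python
import os

import requests

resp = requests.post(
    "https://api.atlascloud.ai/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}",
    },
    json={
        "model": "deepseek-ai/deepseek-r1-0528",
        "messages": [{"role": "user", "content": "Hello"}],
        "max_tokens": 1024,
    },
    timeout=60,
)
resp.raise_for_status()  # surface HTTP errors instead of parsing an error body

body = resp.json()
choice = body["choices"][0]
print(choice["message"]["content"])   # assistant reply
print(choice["finish_reason"])        # "stop", "length", ...
print(body["usage"]["total_tokens"])  # prompt + completion tokens
```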
Atlas Cloud Skills integrates 300+ AI models directly into your AI coding assistant. Install it with a single command, then use natural language to generate images and videos and to chat with LLMs.

```bash
npx skills add AtlasCloudAI/atlas-cloud-skills
```

Obtain an API Key from the Atlas Cloud console and set it as an environment variable.

```bash
export ATLASCLOUD_API_KEY="your-api-key-here"
```

Once installed, you can access all Atlas Cloud models from your AI assistant using natural language.
The Atlas Cloud MCP Server connects your IDE to 300+ AI models via the Model Context Protocol. Any MCP-compatible client is supported.

```bash
npx -y atlascloud-mcp
```

Add the following configuration to your IDE's MCP settings file.
```json
{
  "mcpServers": {
    "atlascloud": {
      "command": "npx",
      "args": ["-y", "atlascloud-mcp"],
      "env": {
        "ATLASCLOUD_API_KEY": "your-api-key-here"
      }
    }
  }
}
```

The DeepSeek R1 model has undergone a minor version upgrade, with the current version being DeepSeek-R1-0528. In the latest update, DeepSeek R1 has significantly improved its depth of reasoning and inference capabilities by leveraging increased computational resources and introducing algorithmic optimization mechanisms during post-training. The model has demonstrated outstanding performance across various benchmark evaluations, including mathematics, programming, and general logic. Its overall performance is now approaching that of leading models such as O3 and Gemini 2.5 Pro.

Compared to the previous version, the upgraded model shows significant improvements in handling complex reasoning tasks. For instance, in the AIME 2025 test, the model’s accuracy has increased from 70% in the previous version to 87.5% in the current version. This advancement stems from enhanced thinking depth during the reasoning process: in the AIME test set, the previous model used an average of 12K tokens per question, whereas the new version averages 23K tokens per question.
Beyond its improved reasoning capabilities, this version also offers a reduced hallucination rate, enhanced support for function calling, and a better experience for vibe coding.
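Function calling goes through the standard OpenAI-compatible `tools` parameter. A hedged sketch follows; the `get_weather` tool is hypothetical, and exact tool-call behavior depends on the model and endpoint.

```python
import json
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("ATLASCLOUD_API_KEY"),
    base_url="https://api.atlascloud.ai/v1",
)

# Hypothetical tool definition, using the OpenAI function-calling schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="deepseek-ai/deepseek-r1-0528",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    # The model asks us to run the tool; arguments arrive as a JSON string.
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))
else:
    print(message.content)
```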
For all our models, the maximum generation length is set to 64K tokens. For benchmarks requiring sampling, we use a temperature of 0.6, a top-p value of 0.95, and generate 16 responses per query to estimate pass@1.
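Concretely, pass@1 under this setup is the fraction of the 16 sampled responses that are correct, averaged over queries; a minimal sketch of that computation (the helper below is illustrative, not part of any released evaluation code):

```python
def estimate_pass_at_1(results: list[list[bool]]) -> float:
    """results[i] holds per-sample correctness for query i (16 samples each)."""
    per_query = [sum(samples) / len(samples) for samples in results]
    return sum(per_query) / len(per_query)

# Two queries, 4 samples each for brevity (16 in the actual evaluation).
print(estimate_pass_at_1([[True, True, False, True],
                          [False, False, True, False]]))  # 0.5
```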
| Category | Benchmark (Metric) | DeepSeek R1 | DeepSeek R1 0528 |
|---|---|---|---|
| General | MMLU-Redux (EM) | 92.9 | 93.4 |
| General | MMLU-Pro (EM) | 84.0 | 85.0 |
| General | GPQA-Diamond (Pass@1) | 71.5 | 81.0 |
| General | SimpleQA (Correct) | 30.1 | 27.8 |
| General | FRAMES (Acc.) | 82.5 | 83.0 |
| General | Humanity's Last Exam (Pass@1) | 8.5 | 17.7 |
| Code | LiveCodeBench (2408-2505) (Pass@1) | 63.5 | 73.3 |
| Code | Codeforces-Div1 (Rating) | 1530 | 1930 |
| Code | SWE Verified (Resolved) | 49.2 | 57.6 |
| Code | Aider-Polyglot (Acc.) | 53.3 | 71.6 |
| Math | AIME 2024 (Pass@1) | 79.8 | 91.4 |
| Math | AIME 2025 (Pass@1) | 70.0 | 87.5 |
| Math | HMMT 2025 (Pass@1) | 41.7 | 79.4 |
| Math | CNMO 2024 (Pass@1) | 78.8 | 86.9 |
| Tools | BFCL_v3_MultiTurn (Acc) | - | 37.0 |
| Tools | Tau-Bench (Pass@1) | - | 53.5 (Airline) / 63.9 (Retail) |
Note: We use the Agentless framework to evaluate model performance on SWE Verified. We evaluate only text-only prompts from the HLE test set. GPT-4.1 is employed to play the user role in the Tau-Bench evaluation.