deepseek-ai/deepseek-r1-0528

The advanced LLM


Code Example

python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("ATLASCLOUD_API_KEY"),
    base_url="https://api.atlascloud.ai/v1"
)

response = client.chat.completions.create(
    model="deepseek-ai/deepseek-r1-0528",
    messages=[
        {"role": "user", "content": "hello"}
    ],
    max_tokens=1024,
    temperature=0.7
)

print(response.choices[0].message.content)

Installation

Install the required packages for your language. The examples on this page use both the openai and requests libraries.

bash
pip install openai requests

Authentication

All API requests require authentication with an API key. You can obtain one from the Atlas Cloud dashboard.

bash
export ATLASCLOUD_API_KEY="your-api-key-here"

HTTP Headers

python
import os

API_KEY = os.environ.get("ATLASCLOUD_API_KEY")
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}

Keep your API key secure

Never expose your API key in client-side code or public repositories. Use environment variables or a backend proxy instead.
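As a minimal sketch of the environment-variable approach, the header construction above can be wrapped in a helper that fails fast when the key is missing (the helper name is illustrative, not part of the API):

```python
import os

def build_headers(env=os.environ):
    """Build request headers, raising early if the API key is not set."""
    key = env.get("ATLASCLOUD_API_KEY")
    if not key:
        raise RuntimeError("ATLASCLOUD_API_KEY is not set")
    return {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {key}",
    }
```

Failing at startup is usually preferable to sending unauthenticated requests and debugging 401 responses later.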

Submitting a Request

python
import os
import requests

url = "https://api.atlascloud.ai/v1/chat/completions"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"
}
data = {
    "model": "your-model",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 1024
}

response = requests.post(url, headers=headers, json=data)
print(response.json())

Input Schema

The following parameters can be used in the request body.

Total: 9 (2 required, 7 optional)

model (string, required)
    The model ID to use for the completion.
    Example: "deepseek-ai/deepseek-r1-0528"

messages (array[object], required)
    A list of messages comprising the conversation so far.

    role (string, required)
        The role of the message author. One of "system", "user", or "assistant".

    content (string, required)
        The content of the message.

max_tokens (integer, optional)
    The maximum number of tokens to generate in the completion.
    Default: 1024. Min: 1.

temperature (number, optional)
    Sampling temperature between 0 and 2. Higher values make output more random; lower values make it more focused and deterministic.
    Default: 0.7. Min: 0. Max: 2.

top_p (number, optional)
    Nucleus sampling parameter. The model considers the tokens within the top_p probability mass.
    Default: 1. Min: 0. Max: 1.

stream (boolean, optional)
    If set to true, partial message deltas are sent as server-sent events.
    Default: false.

stop (array[string], optional)
    Up to 4 sequences where the API will stop generating further tokens.

frequency_penalty (number, optional)
    Penalizes new tokens based on their existing frequency in the text so far. Between -2.0 and 2.0.
    Default: 0. Min: -2. Max: 2.

presence_penalty (number, optional)
    Penalizes new tokens based on whether they already appear in the text so far. Between -2.0 and 2.0.
    Default: 0. Min: -2. Max: 2.
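To make the required/optional split above concrete, here is a small client-side sanity check for a request body before it is sent (a sketch only; the server performs its own validation):

```python
REQUIRED = {"model", "messages"}
VALID_ROLES = {"system", "user", "assistant"}

def check_request_body(body: dict) -> None:
    """Raise ValueError if a required field is missing or malformed."""
    missing = REQUIRED - body.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    for msg in body["messages"]:
        if msg.get("role") not in VALID_ROLES:
            raise ValueError(f"invalid role: {msg.get('role')!r}")
        if "content" not in msg:
            raise ValueError("message missing 'content'")
```

Catching a malformed body locally gives a clearer error than a generic 400 response from the API.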

Example Request Body

json
{
  "model": "deepseek-ai/deepseek-r1-0528",
  "messages": [
    {
      "role": "user",
      "content": "Hello"
    }
  ],
  "max_tokens": 1024,
  "temperature": 0.7,
  "stream": false
}
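When stream is set to true, the response arrives as server-sent events rather than one JSON object. The following sketch parses a single event line, assuming the OpenAI-compatible delta format (`data: {...}` lines, terminated by `data: [DONE]`); verify the exact chunk shape against the API you are calling:

```python
import json

def parse_sse_line(line: bytes):
    """Extract the text delta from one SSE line, or return None for
    keep-alives, non-data lines, and the final [DONE] sentinel."""
    if not line.startswith(b"data: "):
        return None
    payload = line[len(b"data: "):].strip()
    if payload == b"[DONE]":
        return None
    chunk = json.loads(payload)
    return chunk["choices"][0].get("delta", {}).get("content")

# Usage with requests (network code, shown for shape only):
# resp = requests.post(url, headers=headers,
#                      json={**data, "stream": True}, stream=True)
# for line in resp.iter_lines():
#     delta = parse_sse_line(line)
#     if delta:
#         print(delta, end="")
```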

Output Schema

The API returns a ChatCompletion-compatible response.

id (string, required)
    Unique identifier for the completion.

object (string, required)
    Object type, always "chat.completion".

created (integer, required)
    Unix timestamp of when the completion was created.

model (string, required)
    The model used for the completion.

choices (array[object], required)
    List of completion choices.

    index (integer, required)
        Index of the choice.

    message (object, required)
        The generated message.

    finish_reason (string, required)
        The reason generation stopped. One of "stop", "length", or "content_filter".

usage (object, required)
    Token usage statistics.

    prompt_tokens (integer, required)
        Number of tokens in the prompt.

    completion_tokens (integer, required)
        Number of tokens in the completion.

    total_tokens (integer, required)
        Total tokens used.

Example Response

json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "model-name",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 20,
    "total_tokens": 30
  }
}
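A small helper for pulling the reply and token usage out of a response shaped like the example above (the function name is illustrative, not part of the API):

```python
def extract_reply(resp: dict):
    """Return (content, finish_reason, total_tokens) from a
    ChatCompletion-style response dict."""
    choice = resp["choices"][0]
    return (
        choice["message"]["content"],
        choice["finish_reason"],  # "length" means the reply was truncated
        resp["usage"]["total_tokens"],
    )
```

Checking finish_reason is worthwhile in practice: a "length" value means the reply hit max_tokens and may be cut off mid-sentence.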

Atlas Cloud Skills

Atlas Cloud Skills integrates 300+ AI models directly into your AI coding assistant. Install with one click to generate images and videos and chat with LLMs using natural language.

Supported Clients

Claude Code
OpenAI Codex
Gemini CLI
Cursor
Windsurf
VS Code
Trae
GitHub Copilot
Cline
Roo Code
Amp
Goose
Replit
40+ supported clients

Installation

bash
npx skills add AtlasCloudAI/atlas-cloud-skills

Set the API Key

Get an API key from the Atlas Cloud dashboard and set it as an environment variable.

bash
export ATLASCLOUD_API_KEY="your-api-key-here"

Features

Once installed, you can access all Atlas Cloud models from your AI assistant using natural language.

Image generation: Generate images with models such as Nano Banana 2 and Z-Image.
Video creation: Create videos from text or images with Kling, Vidu, Veo, and more.
LLM chat: Chat with Qwen, DeepSeek, and other large language models.
Media upload: Upload local files for image-editing and image-to-video workflows.

MCP Server

The Atlas Cloud MCP Server connects your IDE to 300+ AI models via the Model Context Protocol. Any MCP-compatible client is supported.

Supported Clients

Cursor
VS Code
Windsurf
Claude Code
OpenAI Codex
Gemini CLI
Cline
Roo Code
100+ supported clients

Installation

bash
npx -y atlascloud-mcp

Configuration

Add the following configuration to your IDE's MCP settings file.

json
{
  "mcpServers": {
    "atlascloud": {
      "command": "npx",
      "args": [
        "-y",
        "atlascloud-mcp"
      ],
      "env": {
        "ATLASCLOUD_API_KEY": "your-api-key-here"
      }
    }
  }
}

Available Tools

atlas_generate_image: Generate images from text prompts.
atlas_generate_video: Create videos from text or images.
atlas_chat: Chat with large language models.
atlas_list_models: Browse 300+ available AI models.
atlas_quick_generate: One-step content creation with automatic model selection.
atlas_upload_media: Upload local files for use in API workflows.

DeepSeek-R1-0528

1. Introduction

The DeepSeek R1 model has undergone a minor version upgrade, with the current version being DeepSeek-R1-0528. In the latest update, DeepSeek R1 has significantly improved its depth of reasoning and inference capabilities by leveraging increased computational resources and introducing algorithmic optimization mechanisms during post-training. The model has demonstrated outstanding performance across various benchmark evaluations, including mathematics, programming, and general logic. Its overall performance is now approaching that of leading models, such as O3 and Gemini 2.5 Pro.


Compared to the previous version, the upgraded model shows significant improvements in handling complex reasoning tasks. For instance, in the AIME 2025 test, the model’s accuracy has increased from 70% in the previous version to 87.5% in the current version. This advancement stems from enhanced thinking depth during the reasoning process: in the AIME test set, the previous model used an average of 12K tokens per question, whereas the new version averages 23K tokens per question.

Beyond its improved reasoning capabilities, this version also offers a reduced hallucination rate, enhanced support for function calling, and better experience for vibe coding.

2. Evaluation Results


For all our models, the maximum generation length is set to 64K tokens. For benchmarks requiring sampling, we use a temperature of 0.6, a top-p value of 0.95, and generate 16 responses per query to estimate pass@1.
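The pass@1 estimate described above is simply the mean correctness over the 16 sampled responses; as a sketch:

```python
def pass_at_1(correct_flags):
    """Estimate pass@1 as the fraction of sampled responses that are correct.

    With 16 samples per query (as above), 14 correct answers give 14/16 = 0.875.
    """
    return sum(correct_flags) / len(correct_flags)
```

Averaging over many samples reduces the variance that a single greedy or sampled generation would introduce.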

Category  Benchmark (Metric)                   DeepSeek R1  DeepSeek R1 0528
General   MMLU-Redux (EM)                      92.9         93.4
          MMLU-Pro (EM)                        84.0         85.0
          GPQA-Diamond (Pass@1)                71.5         81.0
          SimpleQA (Correct)                   30.1         27.8
          FRAMES (Acc.)                        82.5         83.0
          Humanity's Last Exam (Pass@1)        8.5          17.7
Code      LiveCodeBench (2408-2505) (Pass@1)   63.5         73.3
          Codeforces-Div1 (Rating)             1530         1930
          SWE Verified (Resolved)              49.2         57.6
          Aider-Polyglot (Acc.)                53.3         71.6
Math      AIME 2024 (Pass@1)                   79.8         91.4
          AIME 2025 (Pass@1)                   70.0         87.5
          HMMT 2025 (Pass@1)                   41.7         79.4
          CNMO 2024 (Pass@1)                   78.8         86.9
Tools     BFCL_v3_MultiTurn (Acc)              -            37.0
          Tau-Bench (Pass@1)                   -            53.5 (Airline) / 63.9 (Retail)

Note: We use the Agentless framework to evaluate model performance on SWE-Verified. We evaluate only text-only prompts in the HLE test set. GPT-4.1 is employed to play the user role in the Tau-Bench evaluation.

300+ models, ready to use.

Explore all models