deepseek-ai/deepseek-ocr

DeepSeek OCR

The latest DeepSeek model.
Parameters

Code example

python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("ATLASCLOUD_API_KEY"),
    base_url="https://api.atlascloud.ai/v1"
)

response = client.chat.completions.create(
    model="deepseek-ai/deepseek-ocr",
    messages=[
        {
            "role": "user",
            "content": "hello"
        }
    ],
    max_tokens=1024,
    temperature=0.7
)

print(response.choices[0].message.content)

Install

Install the required package for your language.

bash
pip install requests

Authentication

All API requests require authentication with an API key. You can obtain your API key from the Atlas Cloud dashboard.

bash
export ATLASCLOUD_API_KEY="your-api-key-here"

HTTP headers

python
import os

API_KEY = os.environ.get("ATLASCLOUD_API_KEY")
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {API_KEY}"
}
Protect your API key

Never expose your API key in client-side code or public repositories. Use environment variables or a backend proxy instead.

Send a request

python
import os
import requests

url = "https://api.atlascloud.ai/v1/chat/completions"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {os.environ['ATLASCLOUD_API_KEY']}"
}
data = {
    "model": "your-model",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 1024
}

response = requests.post(url, headers=headers, json=data)
print(response.json())
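
If the call fails, the body typically carries an error payload rather than choices. A minimal optional guard, reusing the same response object as above, is to surface HTTP errors before reading the result:

python
# Abort with an informative exception on 4xx/5xx responses before reading the body
response.raise_for_status()

result = response.json()
print(result["choices"][0]["message"]["content"])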

Input schema

The following parameters are accepted in the request body.

Total: 9 · Required: 2 · Optional: 7
model (string, required)
The model ID to use for the completion.
Example: "deepseek-ai/deepseek-ocr"

messages (array[object], required)
A list of messages comprising the conversation so far.

messages[].role (string, required)
The role of the message author. One of "system", "user", or "assistant".

messages[].content (string, required)
The content of the message.

max_tokens (integer)
The maximum number of tokens to generate in the completion.
Default: 1024 · Min: 1

temperature (number)
Sampling temperature between 0 and 2. Higher values make output more random, lower values more focused and deterministic.
Default: 0.7 · Min: 0 · Max: 2

top_p (number)
Nucleus sampling parameter. The model considers the tokens with top_p probability mass.
Default: 1 · Min: 0 · Max: 1

stream (boolean)
If set to true, partial message deltas will be sent as server-sent events.
Default: false

stop (array[string])
Up to 4 sequences where the API will stop generating further tokens.

frequency_penalty (number)
Penalizes new tokens based on their existing frequency in the text so far. Between -2.0 and 2.0.
Default: 0 · Min: -2 · Max: 2

presence_penalty (number)
Penalizes new tokens based on whether they appear in the text so far. Between -2.0 and 2.0.
Default: 0 · Min: -2 · Max: 2

Example request body

json
{
  "model": "deepseek-ai/deepseek-ocr",
  "messages": [
    {
      "role": "user",
      "content": "Hello"
    }
  ],
  "max_tokens": 1024,
  "temperature": 0.7,
  "stream": false
}
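
When stream is set to true, the completion arrives incrementally as server-sent events. A minimal sketch of consuming the stream with the OpenAI-compatible client used above, assuming the endpoint follows the standard ChatCompletion streaming chunk format:

python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("ATLASCLOUD_API_KEY"),
    base_url="https://api.atlascloud.ai/v1"
)

# Request a streamed completion; each chunk carries a partial message delta
stream = client.chat.completions.create(
    model="deepseek-ai/deepseek-ocr",
    messages=[{"role": "user", "content": "hello"}],
    max_tokens=1024,
    stream=True
)

for chunk in stream:
    # Some chunks may carry no content (e.g. the final chunk), so guard the access
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()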

Output schema

The API returns a ChatCompletion-compatible response.

id (string, required)
Unique identifier for the completion.

object (string, required)
Object type, always "chat.completion".
Default: "chat.completion"

created (integer, required)
Unix timestamp of when the completion was created.

model (string, required)
The model used for the completion.

choices (array[object], required)
List of completion choices.

choices[].index (integer, required)
Index of the choice.

choices[].message (object, required)
The generated message.

choices[].finish_reason (string, required)
The reason generation stopped. One of "stop", "length", or "content_filter".

usage (object, required)
Token usage statistics.

usage.prompt_tokens (integer, required)
Number of tokens in the prompt.

usage.completion_tokens (integer, required)
Number of tokens in the completion.

usage.total_tokens (integer, required)
Total tokens used.

Example response

json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "model-name",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 20,
    "total_tokens": 30
  }
}
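
As a short usage sketch, here is one way to read these fields from the parsed response of the "Send a request" example above; the field names come from the schema, while the surrounding variable names are illustrative:

python
# `response` is the requests.Response from the "Send a request" example above
result = response.json()

choice = result["choices"][0]
print(choice["message"]["content"])

# "length" means generation stopped because max_tokens was reached
if choice["finish_reason"] == "length":
    print("Warning: completion was truncated at max_tokens")

usage = result["usage"]
print(f"{usage['prompt_tokens']} prompt + {usage['completion_tokens']} completion "
      f"= {usage['total_tokens']} total tokens")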

Atlas Cloud Skills

Atlas Cloud Skills integrates 300+ AI models directly into your AI coding assistant. One command to install, then use natural language to generate images, create videos, and chat with LLMs.

Supported clients

Claude Code
OpenAI Codex
Gemini CLI
Cursor
Windsurf
VS Code
Trae
GitHub Copilot
Cline
Roo Code
Amp
Goose
Replit
40+ supported clients

Install

bash
npx skills add AtlasCloudAI/atlas-cloud-skills

Configure API key

Get your API key from the Atlas Cloud dashboard and set it as an environment variable.

bash
export ATLASCLOUD_API_KEY="your-api-key-here"

Features

Once installed, you can use natural language in your AI assistant to access all Atlas Cloud models.

Image generation: generate images with models such as Nano Banana 2, Z-Image, and more.
Video creation: create videos from text or images with Kling, Vidu, Veo, and others.
LLM chat: chat with Qwen, DeepSeek, and other large language models.
Media upload: upload local files for image editing and image-to-video workflows.

MCP server

The Atlas Cloud MCP server connects your IDE to 300+ AI models through the Model Context Protocol. It works with any MCP-compatible client.

Supported clients

Cursor
VS Code
Windsurf
Claude Code
OpenAI Codex
Gemini CLI
Cline
Roo Code
100+ supported clients

Install

bash
npx -y atlascloud-mcp

Configuration

Add the following configuration to your IDE's MCP settings file.

json
{
  "mcpServers": {
    "atlascloud": {
      "command": "npx",
      "args": [
        "-y",
        "atlascloud-mcp"
      ],
      "env": {
        "ATLASCLOUD_API_KEY": "your-api-key-here"
      }
    }
  }
}

Available tools

atlas_generate_image: generate images from text prompts.
atlas_generate_video: create videos from text or images.
atlas_chat: chat with large language models.
atlas_list_models: browse the 300+ available AI models.
atlas_quick_generate: one-step content creation with automatic model selection.
atlas_upload_media: upload local files for API workflows.

DeepSeek-V3.1

Introduction

DeepSeek-V3.1 is a hybrid model that supports both thinking mode and non-thinking mode. Compared to the previous version, this upgrade brings improvements in multiple aspects:

  • Hybrid thinking mode: One model supports both thinking mode and non-thinking mode by changing the chat template.

  • Smarter tool calling: Through post-training optimization, the model's performance in tool usage and agent tasks has significantly improved.

  • Higher thinking efficiency: DeepSeek-V3.1-Think achieves comparable answer quality to DeepSeek-R1-0528, while responding more quickly.

DeepSeek-V3.1 is post-trained on top of DeepSeek-V3.1-Base, which is built upon the original V3 base checkpoint through a two-phase long context extension approach, following the methodology outlined in the original DeepSeek-V3 report. We have expanded our dataset by collecting additional long documents and substantially extending both training phases. The 32K extension phase has been increased 10-fold to 630B tokens, while the 128K extension phase has been extended by 3.3x to 209B tokens. Additionally, DeepSeek-V3.1 is trained using the UE8M0 FP8 scale data format to ensure compatibility with microscaling data formats.

Model Downloads

| Model | #Total Params | #Activated Params | Context Length | Download |
|---|---|---|---|---|
| DeepSeek-V3.1-Base | 671B | 37B | 128K | HuggingFace / ModelScope |
| DeepSeek-V3.1 | 671B | 37B | 128K | HuggingFace / ModelScope |
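
For illustration, one common way to fetch the weights locally is via the huggingface_hub package; the repo id matches the one used in the Usage Example below, but the download path and tooling choice are assumptions, not something this page prescribes:

from huggingface_hub import snapshot_download

# Download the full DeepSeek-V3.1 checkpoint (several hundred GB) into a local directory
snapshot_download(
    repo_id="deepseek-ai/DeepSeek-V3.1",
    local_dir="./DeepSeek-V3.1",
)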

Chat Template

The details of our chat template are described in tokenizer_config.json and assets/chat_template.jinja. Here is a brief description.

Thinking

DeepSeek-V3.1 supports both thinking and non-thinking modes. Thinking can be toggled either in Qwen style:

"chat_template_kwargs": {"enable_thinking": true}

or in ZAI style:

"thinking": {"type": "enabled"}

ToolCall

Tool calls are supported in non-thinking mode. The format is:

<|begin▁of▁sentence|>{system prompt}{tool_description}<|User|>{query}<|Assistant|></think>

where the tool_description is:

## Tools
You have access to the following tools:

### {tool_name1}
Description: {description}

Parameters: {json.dumps(parameters)}

IMPORTANT: ALWAYS adhere to this exact format for tool use:
<|tool▁calls▁begin|><|tool▁call▁begin|>tool_call_name<|tool▁sep|>tool_call_arguments<|tool▁call▁end|>{{additional_tool_calls}}<|tool▁calls▁end|>

Where:
- `tool_call_name` must be an exact match to one of the available tools
- `tool_call_arguments` must be valid JSON that strictly follows the tool's Parameters Schema
- For multiple tool calls, chain them directly without separators or spaces
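For illustration, a small helper that renders this tool_description block from a list of tool specs; the helper name and the "name"/"description"/"parameters" spec layout are assumptions made for the sketch, and only the rendered format above comes from the template:

import json

def build_tool_description(tools):
    """Render a tool_description block in the format shown above.

    `tools` is assumed to be a list of dicts with "name", "description",
    and "parameters" (a JSON-schema-like dict).
    """
    lines = ["## Tools", "You have access to the following tools:", ""]
    for tool in tools:
        lines += [
            f"### {tool['name']}",
            f"Description: {tool['description']}",
            "",
            f"Parameters: {json.dumps(tool['parameters'])}",
            "",
        ]
    lines += [
        "IMPORTANT: ALWAYS adhere to this exact format for tool use:",
        "<|tool▁calls▁begin|><|tool▁call▁begin|>tool_call_name<|tool▁sep|>"
        "tool_call_arguments<|tool▁call▁end|>{{additional_tool_calls}}<|tool▁calls▁end|>",
    ]
    return "\n".join(lines)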

Code-Agent

We support various code agent frameworks. Please refer to the above toolcall format to create your own code agents. An example is shown in assets/code_agent_trajectory.html.

Search-Agent

We designed a specific format for search tool calls in thinking mode to support search agents.

For complex questions that require accessing external or up-to-date information, DeepSeek-V3.1 can leverage a user-provided search tool through a multi-turn tool-calling process.

Please refer to the assets/search_tool_trajectory.html and assets/search_python_tool_trajectory.html for the detailed template.

Evaluation

| Category | Benchmark (Metric) | DeepSeek V3.1-NonThinking | DeepSeek V3 0324 | DeepSeek V3.1-Thinking | DeepSeek R1 0528 |
|---|---|---|---|---|---|
| General | MMLU-Redux (EM) | 91.8 | 90.5 | 93.7 | 93.4 |
| | MMLU-Pro (EM) | 83.7 | 81.2 | 84.8 | 85.0 |
| | GPQA-Diamond (Pass@1) | 74.9 | 68.4 | 80.1 | 81.0 |
| | Humanity's Last Exam (Pass@1) | - | - | 15.9 | 17.7 |
| Search Agent | BrowseComp | - | - | 30.0 | 8.9 |
| | BrowseComp_zh | - | - | 49.2 | 35.7 |
| | Humanity's Last Exam (Python + Search) | - | - | 29.8 | 24.8 |
| | SimpleQA | - | - | 93.4 | 92.3 |
| Code | LiveCodeBench (2408-2505) (Pass@1) | 56.4 | 43.0 | 74.8 | 73.3 |
| | Codeforces-Div1 (Rating) | - | - | 2091 | 1930 |
| | Aider-Polyglot (Acc.) | 68.4 | 55.1 | 76.3 | 71.6 |
| Code Agent | SWE Verified (Agent mode) | 66.0 | 45.4 | - | 44.6 |
| | SWE-bench Multilingual (Agent mode) | 54.5 | 29.3 | - | 30.5 |
| | Terminal-bench (Terminus 1 framework) | 31.3 | 13.3 | - | 5.7 |
| Math | AIME 2024 (Pass@1) | 66.3 | 59.4 | 93.1 | 91.4 |
| | AIME 2025 (Pass@1) | 49.8 | 51.3 | 88.4 | 87.5 |
| | HMMT 2025 (Pass@1) | 33.5 | 29.2 | 84.2 | 79.4 |

Note:

  • Search agents are evaluated with our internal search framework, which uses a commercial search API + webpage filter + 128K context window. Search agent results of R1-0528 are evaluated with a pre-defined workflow.

  • SWE-bench is evaluated with our internal code agent framework.

  • HLE is evaluated with the text-only subset.

Usage Example

import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-V3.1")

messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "Who are you?"},
    {"role": "assistant", "content": "<think>Hmm</think>I am DeepSeek"},
    {"role": "user", "content": "1+1=?"}
]

tokenizer.apply_chat_template(messages, tokenize=False, thinking=True, add_generation_prompt=True)
# '<|begin▁of▁sentence|>You are a helpful assistant<|User|>Who are you?<|Assistant|></think>I am DeepSeek<|end▁of▁sentence|><|User|>1+1=?<|Assistant|><think>'

tokenizer.apply_chat_template(messages, tokenize=False, thinking=False, add_generation_prompt=True)
# '<|begin▁of▁sentence|>You are a helpful assistant<|User|>Who are you?<|Assistant|></think>I am DeepSeek<|end▁of▁sentence|><|User|>1+1=?<|Assistant|></think>'

How to Run Locally

The model structure of DeepSeek-V3.1 is the same as DeepSeek-V3. Please visit DeepSeek-V3 repo for more information about running this model locally.

License

This repository and the model weights are licensed under the MIT License.

Citation

@misc{deepseekai2024deepseekv3technicalreport,
  title={DeepSeek-V3 Technical Report},
  author={DeepSeek-AI},
  year={2024},
  eprint={2412.19437},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2412.19437},
}

Contact

If you have any questions, please raise an issue or contact us at [email protected].
