DeepSeek's Recommended Settings for Deploying Its R1 Model

DeepSeek, a startup based in China, recently surprised the AI community with its R1 model and an AI chatbot that rivals the capabilities of ChatGPT. Now the company is sharing key recommendations for those looking to deploy the model.

Optimal Temperature Settings

Adjusting the temperature setting is the most direct way to balance creativity and coherence in the model's output. DeepSeek recommends a temperature of 0.6, which lets the model generate varied responses while still maintaining logical consistency.
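As a concrete illustration, here is a minimal sketch of passing that temperature to a self-hosted R1 endpoint. It assumes an OpenAI-compatible server; the base URL, API key, and model id are placeholders for whatever your deployment uses.

```python
# Minimal sketch: calling a self-hosted DeepSeek-R1 endpoint with the
# recommended temperature of 0.6. Assumes an OpenAI-compatible server
# (the base URL, API key, and model id below are placeholders).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="deepseek-r1",  # placeholder; use your deployment's model id
    messages=[{"role": "user", "content": "Explain quicksort in two sentences."}],
    temperature=0.6,      # DeepSeek's recommended setting
)
print(response.choices[0].message.content)
```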

Avoiding System Prompts

DeepSeek recommends avoiding a system prompt entirely and placing any instructions directly in the user prompt instead. The model is designed to adapt flexibly to different inputs without being constrained by a fixed system prompt.
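For example, the sketch below omits the system role entirely and folds the instructions into the user message; the endpoint and model id are again placeholders.

```python
# Minimal sketch: no system prompt. Instructions are folded into the user
# message, per DeepSeek's recommendation. Endpoint and model id are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

instructions = "Answer concisely and show your reasoning step by step."
question = "What is the derivative of x**3?"

response = client.chat.completions.create(
    model="deepseek-r1",  # placeholder
    messages=[
        # No {"role": "system", ...} entry: everything goes in the user turn.
        {"role": "user", "content": f"{instructions}\n\n{question}"},
    ],
    temperature=0.6,
)
print(response.choices[0].message.content)
```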

Official Prompts and Guidelines

DeepSeek provides official prompts for tasks such as web search and file uploads, accessible via a provided link. The company has also published guidelines to help ensure proper model behavior and prevent unwanted outputs.
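As an illustration only, the snippet below shows how such a task-specific prompt might be applied for a file-upload task before a request is sent. The template text and field names are placeholders rather than DeepSeek's published wording; substitute the official prompt from the linked documentation.

```python
# Illustrative sketch: wrapping an uploaded file and a question in a
# task-specific prompt template before sending it as the user message.
# The template below is a placeholder; replace it with DeepSeek's
# official file-upload prompt.
FILE_UPLOAD_TEMPLATE = (
    "[file name]: {file_name}\n"
    "[file content begin]\n"
    "{file_content}\n"
    "[file content end]\n"
    "{question}"
)

def build_file_prompt(file_name: str, file_content: str, question: str) -> str:
    """Fill the template with the uploaded file and the user's question."""
    return FILE_UPLOAD_TEMPLATE.format(
        file_name=file_name,
        file_content=file_content,
        question=question,
    )

prompt = build_file_prompt(
    file_name="notes.txt",
    file_content="Quarterly revenue grew 12% year over year.",
    question="Summarize the file in one sentence.",
)
```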

The official DeepSeek-R1 deployment uses the same model as the open-source release, ensuring consistent behavior across applications.

There is some speculation regarding the reasons behind DeepSeek’s success. Analyst Charles Archer suggests possibilities such as reverse-engineering existing open-source AI, substantial funding, or innovative scaling with older hardware.


While various factors contribute to DeepSeek’s achievements, understanding and applying these recommended settings will help ensure optimal performance when deploying the R1 model.