DeepSeek AI: How to Get Started

DeepSeek is making waves, especially with its DeepSeek V3 model: an open-source powerhouse known for its speed and reasoning capabilities. It has even been reported to outperform big names like GPT-4o, Qwen 2.5 Coder, and Claude 3.5.

Recently, DeepSeek launched DeepSeek R1 and DeepSeek R1 Zero, which aim to match OpenAI’s o1 model at a lower cost. Pretty cool, right?

You can actually try DeepSeek on your computer without any subscription, though you’ll need one for the extra features. You can also find their models over on HuggingFace.

Method 1: Accessing DeepSeek R1 through a web browser

Here’s how you can jump right in using your web browser:

  1. First, head to www.deepseekv3.com.
  2. Once the site has loaded, click the button labeled ‘Try DeepSeek R1 Chat’.
  3. You should now see the DeepSeek R1 chat interface on your screen. Type your question into the chat box.
  4. After typing your question, press Enter or click ‘Send’, and the AI will get to work and respond.

Method 2: Accessing DeepSeek V3 Coder via API

If you’re looking to use the API, here’s how to set that up:

  1. First, go to chat.deepseek.com.
  2. Look for the ‘Sign Up’ option and create your account.
  3. After creating your account, you will receive an API key.
  4. If you don’t have Python installed, download it from python.org. Make sure to add python.exe to PATH during installation, or you’ll have to navigate to the Python folder manually to run commands.
  5. Install the SDK. DeepSeek’s API uses an OpenAI-compatible format, so you can use the OpenAI SDK. Open Command Prompt and run pip install openai
  6. To configure your API access, set the base URL to https://api.deepseek.com.
  7. You can now access DeepSeek AI by calling its API.
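Putting the steps above together, a minimal call might look like the sketch below. It assumes your API key is stored in a DEEPSEEK_API_KEY environment variable, and the model name deepseek-chat is an assumption here — check your account’s API documentation for the exact model identifiers available to you.

```python
import os


def ask_deepseek(prompt: str) -> str:
    """Send one chat message to DeepSeek's OpenAI-compatible API."""
    from openai import OpenAI  # installed via: pip install openai

    client = OpenAI(
        api_key=os.environ["DEEPSEEK_API_KEY"],  # key from your account
        base_url="https://api.deepseek.com",     # base URL from step 6
    )
    response = client.chat.completions.create(
        model="deepseek-chat",  # assumed identifier for the V3 chat model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Example usage (requires a valid key):
# print(ask_deepseek("Explain recursion in one sentence."))
```

Because the endpoint is OpenAI-compatible, any tooling built on the OpenAI SDK should work once the base URL and key are swapped in.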

Method 3: Deploying DeepSeek V3 locally

If you’d like to run DeepSeek locally, it’s a bit more involved:

Local deployment works best on Linux distributions. If you’re using Windows, you will need to create a Linux-like environment. Here’s what you’ll need: a CUDA-capable GPU, Python 3.8+, at least 16 GB of RAM, and CUDA with cuDNN installed.
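Before starting, a quick stdlib-only sanity check can confirm part of this list. This sketch only checks the Python version and whether the NVIDIA driver tools are on PATH; it does not verify RAM, CUDA, or cuDNN versions.

```python
import shutil
import sys

# Check two of the prerequisites listed above before attempting deployment.
assert sys.version_info >= (3, 8), "Python 3.8+ is required"

# nvidia-smi ships with the NVIDIA driver; if it's missing, CUDA is
# likely not set up on this machine.
has_gpu_tools = shutil.which("nvidia-smi") is not None
print("Python version OK; nvidia-smi on PATH:", has_gpu_tools)
```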

  1. To start, install the Windows Subsystem for Linux on your system.
  2. Next, clone the DeepSeek repo with: git clone https://github.com/deepseek-ai/DeepSeek-V3.git
  3. Then, go to the inference directory and install the dependencies: cd DeepSeek-V3/inference and then pip install -r requirements.txt
  4. Next, convert the model weights into the format the inference scripts expect: python convert.py --hf-ckpt-path /path/to/DeepSeek-V3 --save-path /path/to/DeepSeek-V3-Demo --n-experts 256 --model-parallel 16
  5. That’s it! Now you can chat with DeepSeek locally.

Important points to note

  • Some DeepSeek models are free to use, while others are priced based on usage. For DeepSeek-V3, it’s around $0.14 per million input tokens and $0.28 per million output tokens. If you want to customize settings and save chat history, you will need to create a DeepSeek account.

  • The DeepSeek R1 model shines with complex math and reasoning problems. This makes it perfect for tutoring, debugging, code generation, etc.

  • DeepSeek also offers smaller, more efficient versions called DeepSeek R1 distilled models, which can run on consumer devices.

  • The R1 model can also browse the web in real time, combining online data with its existing knowledge, which can make it a powerful tool similar to Perplexity AI.
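To put the pay-as-you-go pricing above in perspective, here is a small cost estimator using the DeepSeek-V3 rates quoted earlier (those rates are taken from this article and may change):

```python
# DeepSeek-V3 rates quoted above (USD per million tokens); may change.
INPUT_PER_MILLION = 0.14
OUTPUT_PER_MILLION = 0.28


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the quoted rates."""
    return (input_tokens * INPUT_PER_MILLION
            + output_tokens * OUTPUT_PER_MILLION) / 1_000_000


# A request with 500k input tokens and a 100k-token response:
print(f"${estimate_cost(500_000, 100_000):.4f}")  # → $0.0980
```

Even a very large request comes out to a fraction of a cent per thousand tokens, which is a big part of DeepSeek’s appeal against pricier competitors.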