
How to Use OpenClaw with DeepSeek

By Darsh Jariwala | Updated: Apr 8, 2026

OpenClaw supports OpenAI, Anthropic, and Google Gemini out of the box. DeepSeek isn’t on that list. That’s the problem. DeepSeek v3 handles most agent tasks — file management, scheduling, messaging, web searches — at roughly 95% lower cost than comparable frontier models. If your OpenClaw setup is running on GPT-4o or Claude and the API bill is climbing, this is worth configuring. It takes about 10 minutes and doesn’t require touching any code.

The scenario: You’ve been running OpenClaw for a couple of months. It manages your reminders, drafts replies, and handles some file sorting. Then you check your API usage and realize you’ve been spending $30–$40/month on model calls for tasks that don’t actually need GPT-4o-level reasoning. DeepSeek v3 handles everything on your list. This guide shows you how to switch.

:::note[TL;DR]

  • OpenClaw uses ~/.openclaw/openclaw.json for all configuration
  • You add DeepSeek as a custom provider in the models.providers block
  • The agents.defaults.model.primary field sets your default model
  • Your API key stays out of the config file via environment variable
  • Run openclaw models list to confirm DeepSeek is registered before starting a session

:::

Prerequisites

Before starting, make sure you have:

  • Node.js with npm installed (OpenClaw installs via npm)
  • A DeepSeek API key from platform.deepseek.com

That’s it. OpenClaw handles the rest at install time.


Step 1: Install OpenClaw

Open a terminal and run:

npm install -g openclaw

This installs the OpenClaw CLI globally. Depending on your connection speed, it may take a few minutes — the package pulls in several dependencies.


Step 2: Run the onboarding process

Once installed, run:

openclaw onboard --install-daemon

This installs the OpenClaw background daemon and walks you through initial setup. You’ll be asked about models, providers, and channels.

:::tip
During onboarding you’ll be asked to select a model provider. We’re not setting up DeepSeek yet — that comes in the config step. For now, pick Minimax when prompted (it only has two models, which makes it easy to clean up later). You’ll replace this with DeepSeek in the next step.
:::

Here’s how to answer each prompt:

| Prompt | Answer |
| --- | --- |
| I understand this is powerful and inherently risky. Continue? | Yes |
| Onboarding mode | Quickstart |
| Model/auth provider | Skip for now |
| Filter models by provider | Minimax |
| Default model | minimax/MiniMax-M2 |
| Select channel | Skip for now |
| Configure skills now? | No |
| Enable hooks? | Skip for now (Space to select) |
| How do you want to hatch your bot? | Do this later |

When you see Onboarding complete, hit Ctrl + C to exit. Don’t launch a session yet — DeepSeek isn’t configured.


Step 3: Configure DeepSeek models

The main config file lives at ~/.openclaw/openclaw.json. Open it in your editor.

You’re going to add DeepSeek as a custom provider inside the models.providers block. This tells OpenClaw where DeepSeek’s API lives and what models are available.

The apiKey field uses an environment variable (${DEEPSEEK_API_KEY}) instead of a hardcoded value. That keeps your key out of the config file. We’ll set the variable in Step 5.
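The effect is similar to shell-style variable expansion, which you can simulate with Python's os.path.expandvars (an illustration of the pattern only, not OpenClaw's actual substitution code):

```python
import os

# Simulate how a ${VAR} placeholder in a config value resolves from the
# environment at runtime. OpenClaw does its own substitution internally;
# this just shows why the raw key never needs to live in the file.
os.environ["DEEPSEEK_API_KEY"] = "sk-example"

resolved = os.path.expandvars("${DEEPSEEK_API_KEY}")
print(resolved)
```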

Add the following to your openclaw.json:

{
  "models": {
    "mode": "merge",
    "providers": {
      "deepseek": {
        "baseUrl": "https://api.deepseek.com/v1",
        "apiKey": "${DEEPSEEK_API_KEY}",
        "api": "openai-completions",
        "models": [
          {
            "id": "deepseek-chat",
            "name": "DeepSeek Chat (v3.2)",
            "reasoning": false,
            "input": ["text"],
            "cost": {
              "input": 2.8e-7,
              "output": 4.2e-7,
              "cacheRead": 2.8e-8,
              "cacheWrite": 2.8e-7
            },
            "contextWindow": 128000,
            "maxTokens": 8192
          },
          {
            "id": "deepseek-reasoner",
            "name": "DeepSeek Reasoner (v3.2)",
            "reasoning": true,
            "input": ["text"],
            "cost": {
              "input": 2.8e-7,
              "output": 4.2e-7,
              "cacheRead": 2.8e-8,
              "cacheWrite": 2.8e-7
            },
            "contextWindow": 128000,
            "maxTokens": 65536
          }
        ]
      }
    }
  }
}

Two models are defined here. deepseek-chat is the standard model — fast, cheap, handles most tasks. deepseek-reasoner is the thinking model with a 65,536 token output limit, useful for complex multi-step reasoning. Start with deepseek-chat for everyday agent work.
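To see what those per-token prices mean in practice, here is a rough cost sketch using the deepseek-chat input/output rates from the config above (USD per token). The token volumes are illustrative, not measured usage:

```python
# Per-token USD prices from the deepseek-chat config block above.
INPUT_PRICE = 2.8e-7
OUTPUT_PRICE = 4.2e-7

def monthly_cost(input_tokens: int, output_tokens: int) -> float:
    """Rough monthly spend for a given token volume."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# Example: 10M input + 2M output tokens in a month (illustrative numbers)
cost = monthly_cost(10_000_000, 2_000_000)
print(f"${cost:.2f}")  # → $3.64
```

Even heavy agent usage stays in single-digit dollars per month at these rates, which is where the savings over frontier-model pricing come from.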

The "api": "openai-completions" line tells OpenClaw to use OpenAI’s completion format for requests, which DeepSeek’s API is fully compatible with.
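Concretely, "OpenAI's completion format" means the request body follows the OpenAI chat-completions shape. A simplified sketch of such a payload (OpenClaw builds the real request internally; the message contents here are invented):

```python
import json

# The request body sent to an OpenAI-compatible endpoint such as
# https://api.deepseek.com/v1/chat/completions (simplified sketch;
# field names follow the OpenAI chat-completions format).
payload = {
    "model": "deepseek-chat",
    "messages": [
        {"role": "system", "content": "You are a helpful agent."},
        {"role": "user", "content": "List the files in my workspace."},
    ],
    "max_tokens": 8192,
}

print(json.dumps(payload, indent=2))
```

Because DeepSeek accepts this format directly, no adapter code is needed; only the baseUrl changes.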


Step 4: Configure the agents section

Now update the agents block. This sets the default model your agent actually uses and lists which DeepSeek models are available for selection.

{
  "agents": {
    "defaults": {
      "model": {
        "primary": "deepseek/deepseek-chat"
      },
      "models": {
        "deepseek/deepseek-chat": {},
        "deepseek/deepseek-reasoner": {}
      },
      "workspace": "~/.openclaw/workspace",
      "compaction": {
        "mode": "safeguard"
      },
      "maxConcurrent": 4,
      "subagents": {
        "maxConcurrent": 8
      }
    }
  }
}

The model.primary field is the one that matters most. Set it to deepseek/deepseek-chat and your agent will use DeepSeek by default for every session. The models block just lists what’s available for switching mid-session.

compaction.mode: "safeguard" is the default context management setting — it prevents context overflow by trimming older messages. Leave it as-is unless you know you need something else.


Step 5: Set the environment variable

Your DeepSeek API key needs to be available as an environment variable before OpenClaw starts. Never hardcode it in the config file.

On Linux or macOS, run this in your terminal:

export DEEPSEEK_API_KEY="your_api_key_here"

To make it permanent across sessions, add that line to your shell profile:

# For bash users
echo 'export DEEPSEEK_API_KEY="your_api_key_here"' >> ~/.bashrc

# For zsh users
echo 'export DEEPSEEK_API_KEY="your_api_key_here"' >> ~/.zshrc

Then restart your terminal (or run source ~/.bashrc / source ~/.zshrc) for it to take effect.

On Windows, set it for the current Command Prompt session:

set DEEPSEEK_API_KEY=your_api_key_here

Or use setx to persist it across sessions:

setx DEEPSEEK_API_KEY "your_api_key_here"

Replace your_api_key_here with your actual key from platform.deepseek.com.
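Before moving on, it's worth confirming the variable is actually visible to new processes. A quick check (sketch; run it from a fresh terminal so it sees the same environment OpenClaw will):

```python
import os

# Fail fast if the key is missing, so OpenClaw doesn't start with an
# unresolvable ${DEEPSEEK_API_KEY} placeholder in its config.
key = os.environ.get("DEEPSEEK_API_KEY")
if key:
    print(f"DEEPSEEK_API_KEY is set ({len(key)} characters)")
else:
    print("DEEPSEEK_API_KEY is NOT set; export it before launching OpenClaw")
```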


Step 6: Verify DeepSeek is registered

Before starting a session, confirm OpenClaw can see the DeepSeek models:

openclaw models list

You should see deepseek/deepseek-chat and deepseek/deepseek-reasoner in the output. If they don’t appear, double-check the models.providers.deepseek block in your config for typos — a missing comma or misquoted key name is usually the cause.
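If the models don't show up, Python's json module can pinpoint the exact line and column of a bad comma or quote. A small diagnostic sketch, assuming the default config path from this guide:

```python
import json
from pathlib import Path

def check_config(path: Path) -> str:
    """Return a short diagnostic for an openclaw.json file."""
    try:
        config = json.loads(path.read_text())
    except json.JSONDecodeError as err:
        # json reports the bad spot, e.g. "Expecting ',' delimiter: line 12 column 5"
        return f"syntax error: {err}"
    providers = config.get("models", {}).get("providers", {})
    if "deepseek" not in providers:
        return "deepseek provider block missing; check the key name"
    return "deepseek provider block found"

config_path = Path.home() / ".openclaw" / "openclaw.json"
if config_path.exists():
    print(check_config(config_path))
```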


Step 7: Start your first session

Everything’s in place. Launch a session with:

openclaw tui

The TUI starts, and you should see deepseek/deepseek-chat displayed as the active model.

:::warning If you’ve had OpenClaw running previously, restart the gateway first or the new config won’t be picked up:

openclaw gateway restart

Run this before starting any new TUI session after a config change.
:::

From here, OpenClaw works the same way it does with any other provider — tasks, messaging integrations, file operations, all of it. The only difference is the model on the backend and the API bill at the end of the month.

For a deeper look at what OpenClaw can actually do once it’s running, the OpenClaw first agent guide covers building out your first real workflow. If you’re curious about running a fully local model instead — no API key, no cloud — the Gemma 4 + OpenClaw + Ollama guide walks through that setup.


Frequently asked questions

Why doesn’t OpenClaw support DeepSeek natively?

OpenClaw’s built-in provider list covers the major commercial APIs — OpenAI, Anthropic, and Google Gemini. DeepSeek’s API is compatible with the OpenAI completions format, which is why the manual configuration in this guide works: you’re telling OpenClaw to talk to DeepSeek’s endpoint using the same protocol it already knows. Native support may be added in a future release.

Which DeepSeek model should I use with OpenClaw — chat or reasoner?

Start with deepseek-chat for most tasks. It handles scheduling, messaging, file operations, and web searches without issue. Use deepseek-reasoner when you need multi-step problem solving — complex data analysis, debugging, or anything where you’d normally reach for a thinking model. The reasoner costs the same per token but uses more of them, so it’s slower and pricier for simple tasks.

Does this config work with other non-native providers too?

Yes. If the provider exposes an OpenAI-compatible completions endpoint (Mistral, Together AI, Groq, and many others do), the same pattern works: set "api": "openai-completions", point baseUrl at their endpoint, add your key as an environment variable, and add the model IDs. OpenClaw’s config format is intentionally flexible about this.
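For example, a provider block for any OpenAI-compatible endpoint follows the same shape as the DeepSeek block in Step 3. Everything below is a placeholder (hypothetical provider name, URL, model id, and variable), not a real configuration:

```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "myprovider": {
        "baseUrl": "https://api.example.com/v1",
        "apiKey": "${MYPROVIDER_API_KEY}",
        "api": "openai-completions",
        "models": [
          {
            "id": "example-model",
            "name": "Example Model",
            "reasoning": false,
            "input": ["text"],
            "contextWindow": 128000,
            "maxTokens": 8192
          }
        ]
      }
    }
  }
}
```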

Can I switch between DeepSeek and other models mid-session?

Yes. List the models you want in the agents.defaults.models block and you can switch during a session without restarting the daemon. The model.primary field just sets the default at startup.

Is there a risk of my API key being exposed in the config file?

Not if you use the ${DEEPSEEK_API_KEY} environment variable syntax, which is what this guide uses. The config file itself never contains the raw key — OpenClaw reads it from the environment at runtime. Make sure you’re not committing openclaw.json to a public repository regardless, but the key itself won’t be in the file.