A Guide to Building a Docker Image That Runs OpenClaw Stably in a K8s Pod (From Source)


1. Overview

The Yoga AI mini smart mini-PC that 鼎道智联 recently launched in partnership with Lenovo ships with DingClaw built in. That design makes using OpenClaw remarkably painless: there is no manual deployment or configuration, it just works on power-on, which greatly lowers the barrier to entry.

As a developer who works with smart hardware and containerized deployment year-round, I have found in real rollouts that the flexibility of containerized deployment is critical for later product iteration. To let future DingClaw-equipped hardware extend its functionality more stably and flexibly under Docker, and to fit the runtime requirements of a k8s cloud environment, I recently dug into OpenClaw's Docker deployment options. This post records the pitfalls I hit while deploying OpenClaw on our company's k8s cloud, the process that finally got it running stably in Docker by building from source, and my reasoning along the way, in the hope of comparing notes with fellow developers.

2. Building the container from the official openclaw_zh image, and the problems encountered

The build process for the localized (Chinese) version of openclaw is as follows:

2.1 Pull the image:

docker pull justlikemaki/openclaw-docker-cn-im:latest

2.2 Create the file docker-compose.yml:

version: '3.8'

services:
  openclaw-gateway:
    container_name: openclaw-cn-1
    image: ${OPENCLAW_IMAGE}
    entrypoint: ["/bin/bash", "/usr/local/bin/init.sh"]
    cap_add:
      - CHOWN
      - SETUID
      - SETGID
      - DAC_OVERRIDE
    # Optional: UID:GID the container runs as (e.g. 1000:1000)
    # Defaults to root so init.sh can fix mounted-volume ownership, then drop privileges before starting the gateway
    user: ${OPENCLAW_RUN_USER:-0:0}
    environment:
      TZ: Asia/Shanghai
      HOME: /home/node
      TERM: xterm-256color
      # Model configuration
      MODEL_ID: ${MODEL_ID}
      BASE_URL: ${BASE_URL}
      API_KEY: ${API_KEY}
      API_PROTOCOL: ${API_PROTOCOL}
      CONTEXT_WINDOW: ${CONTEXT_WINDOW}
      MAX_TOKENS: ${MAX_TOKENS}
      # Channel configuration
      TELEGRAM_BOT_TOKEN: ${TELEGRAM_BOT_TOKEN}
      FEISHU_APP_ID: ${FEISHU_APP_ID}
      FEISHU_APP_SECRET: ${FEISHU_APP_SECRET}
      DINGTALK_CLIENT_ID: ${DINGTALK_CLIENT_ID}
      DINGTALK_CLIENT_SECRET: ${DINGTALK_CLIENT_SECRET}
      DINGTALK_ROBOT_CODE: ${DINGTALK_ROBOT_CODE}
      DINGTALK_CORP_ID: ${DINGTALK_CORP_ID}
      DINGTALK_AGENT_ID: ${DINGTALK_AGENT_ID}
      QQBOT_APP_ID: ${QQBOT_APP_ID}
      QQBOT_CLIENT_SECRET: ${QQBOT_CLIENT_SECRET}
      # WeCom (Enterprise WeChat) configuration
      WECOM_TOKEN: ${WECOM_TOKEN}
      WECOM_ENCODING_AES_KEY: ${WECOM_ENCODING_AES_KEY}
      # Workspace configuration
      WORKSPACE: ${WORKSPACE}
      # Gateway configuration
      OPENCLAW_GATEWAY_TOKEN: ${OPENCLAW_GATEWAY_TOKEN}
      OPENCLAW_GATEWAY_BIND: ${OPENCLAW_GATEWAY_BIND}
      OPENCLAW_GATEWAY_PORT: ${OPENCLAW_GATEWAY_PORT}
      OPENCLAW_BRIDGE_PORT: ${OPENCLAW_BRIDGE_PORT}
      OPENCLAW_GATEWAY_ALLOW_INSECURE: "true"
      NODE_TLS_REJECT_UNAUTHORIZED: "0"
    volumes:
      - ${OPENCLAW_DATA_DIR}:/home/node/.openclaw
      # Anonymous volume shadows the extensions directory so the plugins preinstalled in the image are used
      - /home/node/.openclaw/extensions
    ports:
      - "${OPENCLAW_GATEWAY_PORT}:18789"
      - "${OPENCLAW_BRIDGE_PORT}:18790"
    init: true
    #restart: unless-stopped
    restart: "no"
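Before the first start, it helps to confirm that the compose file and the `.env` interpolation are valid. A small sketch, guarded so it is also safe to run on a machine without Docker (`docker compose` is the v2 CLI; substitute `docker-compose` for the v1 binary used in step 2.5):

```shell
# Validate the compose file and variable interpolation before starting.
if command -v docker >/dev/null 2>&1; then
  if docker compose config -q; then
    STATUS="compose file OK"
  else
    STATUS="compose validation failed"     # bad YAML or missing .env values
  fi
else
  STATUS="docker not installed; skipping validation"
fi
echo "$STATUS"
```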

2.3 Create the file .env:

# Example OpenClaw Docker environment variable configuration
# Copy this file to .env and adjust the values

# Docker image (the image pulled in step 2.1; switch to the commented tag for a locally built image)
OPENCLAW_IMAGE=justlikemaki/openclaw-docker-cn-im:latest
#OPENCLAW_IMAGE=openclaw-gateway:1

# Model configuration
MODEL_ID=qwen-plus-latest
BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
API_KEY=ak_xxxx

# API protocol: openai-completions or anthropic-messages
# openai-completions: OpenAI-style protocol (OpenAI, Gemini, and similar models)
# anthropic-messages: Claude protocol (Claude models; supports Prompt Caching)
API_PROTOCOL=openai-completions
# Model context window size
CONTEXT_WINDOW=200000
# Maximum output tokens
MAX_TOKENS=8192

# Telegram configuration (optional; leave empty to disable)
TELEGRAM_BOT_TOKEN=

# Feishu configuration (optional; leave empty to disable)
FEISHU_APP_ID=xxxx
FEISHU_APP_SECRET=xxxx

# DingTalk configuration (optional; leave empty to disable)
DINGTALK_CLIENT_ID=
DINGTALK_CLIENT_SECRET=
DINGTALK_ROBOT_CODE=
DINGTALK_CORP_ID=
DINGTALK_AGENT_ID=

# QQ bot configuration (optional; leave empty to disable)
QQBOT_APP_ID=
QQBOT_CLIENT_SECRET=

# WeCom (Enterprise WeChat) configuration (optional; leave empty to disable)
WECOM_TOKEN=
WECOM_ENCODING_AES_KEY=

# Workspace path (do not change)
WORKSPACE=/home/node/.openclaw/workspace

# Mount directory (adjust to your host)
# OpenClaw data directory (holds the config file, workspace, and all other data)
OPENCLAW_DATA_DIR=/home/liulj/.openclaw

# Optional: UID:GID the container starts as
# Default 0:0 (root) lets init.sh fix mounted-directory ownership, then drop to the node user to run the service
# To align with a host user, set 1000:1000 or, on Linux, $(id -u):$(id -g)
OPENCLAW_RUN_USER=0:0

# Gateway configuration
## Gateway token used for authentication (change to your own value)
OPENCLAW_GATEWAY_TOKEN=123456
#OPENCLAW_GATEWAY_BIND=lan
#OPENCLAW_GATEWAY_BIND=loopback
#OPENCLAW_GATEWAY_BIND=custom
#OPENCLAW_GATEWAY_HOST=0.0.0.0

OPENCLAW_GATEWAY_PORT=18789
OPENCLAW_BRIDGE_PORT=18790
#OPENCLAW_GATEWAY_URL=ws://127.0.0.1:18789
#OPENCLAW_GATEWAY_PAIRING_REQUIRED=false
OPENCLAW_GATEWAY_BIND=lan
OPENCLAW_GATEWAY_URL=ws://127.0.0.1:18789
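Since the compose file bind-mounts `OPENCLAW_DATA_DIR` into the container at `/home/node/.openclaw`, creating the directory (and its `workspace` subdirectory) on the host before the first `docker-compose up` saves init.sh some of the ownership fixes. A small sketch; the `$HOME/.openclaw` fallback is an assumption, adjust it to match your `.env`:

```shell
# Create the host-side data directory that the compose file mounts.
OPENCLAW_DATA_DIR="${OPENCLAW_DATA_DIR:-$HOME/.openclaw}"
mkdir -p "$OPENCLAW_DATA_DIR/workspace"
ls -ld "$OPENCLAW_DATA_DIR"     # check the owner matches OPENCLAW_RUN_USER
```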

2.4 Create the file nameenv:

DOCKERNAME="openclaw-gateway"

2.5 Bring up the container:

docker-compose up -d

2.6 Start the container:

source nameenv && docker start "${DOCKERNAME}"
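A slightly more defensive version of the same step, which fails early if `nameenv` did not actually define the variable (the file from step 2.4 is recreated here so the sketch is self-contained, and the `docker start` line is commented out so it can run without a Docker daemon):

```shell
# Load the container name from nameenv and verify it before calling docker.
cat > nameenv <<'EOF'
DOCKERNAME="openclaw-gateway"
EOF
. ./nameenv
[ -n "${DOCKERNAME:-}" ] || { echo "DOCKERNAME not set in nameenv" >&2; exit 1; }
echo "starting container: ${DOCKERNAME}"
# docker start "${DOCKERNAME}"   # requires a running Docker daemon
```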

2.7 Problems encountered in k8s

When this container runs in a k8s pod, the command `openclaw devices list` fails to execute.

Device pairing therefore cannot proceed, so OpenClaw cannot be connected to from outside.

Since the source code of this localized openclaw build is not available, the root cause was never found.
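When a command fails only inside the pod, comparing the in-pod environment against a working local container is a reasonable first step. A hypothetical diagnosis session; the pod and namespace names are placeholders, and the commands are echoed rather than executed so the sketch does not require a cluster:

```shell
# Placeholder pod/namespace; run the echoed commands by hand against the cluster.
POD="openclaw-0"; NS="default"
for cmd in \
  "kubectl exec -n $NS $POD -- which openclaw" \
  "kubectl exec -n $NS $POD -- env" \
  "kubectl exec -n $NS $POD -- openclaw devices list"; do
  echo "$cmd"
done
```

Diffing the `env` output of the pod against a working `docker run` of the same image often surfaces missing PATH entries or state directories.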

3. Building the image from source and running it successfully on the k8s cloud

To debug the pod problem above, I downloaded the openclaw source and started debugging.

However, openclaw compiled from source ran normally in the k8s pod, and the problem could not be reproduced. So the final approach was to run the source-built openclaw in Docker and in k8s pods.

The source-based Docker build process is as follows:

3.1 Download the source:

mkdir -p /opt/openclaw

cd /opt/openclaw

git clone github.com/openclaw/op…

Note: the version used here is commit 98125e9982b712e129c4896891cc2e48ef2485a

3.2 Set up the build environment:

apt install -y build-essential cmake git pkg-config wget unzip

apt install -y curl git ca-certificates build-essential jq wget python3 python3-pip python3-venv nodejs npm

3.3 Build openclaw:

pnpm build
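Note that the apt packages in step 3.2 do not install pnpm itself, so it has to be set up before `pnpm build` can run. A hedged sketch of the full sequence; the clone directory name `/opt/openclaw/openclaw` is an assumption, and the script skips cleanly when the repository is absent:

```shell
# Guarded build sketch: installs pnpm (via corepack, which ships with recent
# Node.js, or via npm as a fallback), then installs deps and builds.
REPO_DIR="${REPO_DIR:-/opt/openclaw/openclaw}"   # assumed clone path
if [ -d "$REPO_DIR" ]; then
  cd "$REPO_DIR"
  corepack enable 2>/dev/null || npm install -g pnpm
  pnpm install        # resolve workspace dependencies first
  pnpm build          # then compile openclaw
  BUILD_STATUS="built"
else
  BUILD_STATUS="skipped: no repo at $REPO_DIR"
fi
echo "$BUILD_STATUS"
```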

3.4 Run openclaw

Create /usr/local/bin/init.sh:

#!/bin/bash
#rm -f /var/run/openclaw.pid
#/usr/local/bin/service_openclaw.sh --stop
export PNPM_HOME="/root/.local/share/pnpm"
case ":$PATH:" in
  *":$PNPM_HOME:"*) ;;
  *) export PATH="$PNPM_HOME:$PATH" ;;
esac
openclaw_which=$(which openclaw)
env_var=$(env)

#echo "openclaw which = [${openclaw_which}]" >> /var/log/openclaw.log
#echo "open env = [${env_var}]" >> /var/log/openclaw.log
export OPENCLAW_STATE_DIR=/home/node/.openclaw
export OPENCLAW_WORKSPACE=/home/node/.openclaw/workspace
openclaw gateway > /var/log/openclaw_running.log 2>&1 &
while true; do
    sleep 3600
done
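The `case ":$PATH:"` guard in init.sh is worth calling out: it makes the PNPM_HOME export idempotent, so sourcing the script more than once cannot grow PATH with duplicate entries. A standalone sketch of the same pattern, using a stand-in variable so the demo does not touch the real PATH:

```shell
# Prepend a directory to a PATH-like variable only if it is not already there.
PNPM_HOME="/root/.local/share/pnpm"
add_pnpm_to_path() {
  case ":$DEMO_PATH:" in
    *":$PNPM_HOME:"*) ;;                       # already present: no-op
    *) DEMO_PATH="$PNPM_HOME:$DEMO_PATH" ;;    # prepend exactly once
  esac
}
DEMO_PATH="/usr/bin"     # stand-in for PATH
add_pnpm_to_path
add_pnpm_to_path         # second call is a no-op
echo "$DEMO_PATH"        # /root/.local/share/pnpm:/usr/bin
```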

Edit the configuration file /home/node/.openclaw/openclaw.json:

{
  "meta": {
    "lastTouchedVersion": "2026.3.13",
    "lastTouchedAt": "2026-03-30T06:20:32.663Z"
  },
  "update": {
    "checkOnStart": false
  },
  "browser": {
    "executablePath": "/usr/bin/chromium",
    "headless": true,
    "noSandbox": true,
    "defaultProfile": "openclaw"
  },
  "models": {
    "mode": "merge",
    "providers": {
      "default": {
        "baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
        "apiKey": "sk-xxxxxx",
        "api": "openai-completions",
        "models": [
          {
            "id": "qwen-plus-latest",
            "name": "qwen-plus-latest",
            "reasoning": false,
            "input": [
              "text",
              "image"
            ],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 200000,
            "maxTokens": 8192
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "default/qwen-plus-latest"
      },
      "imageModel": {
        "primary": "default/qwen-plus-latest"
      },
      "workspace": "/home/node/.openclaw/workspace",
      "compaction": {
        "mode": "safeguard"
      },
      "elevatedDefault": "full",
      "maxConcurrent": 4,
      "subagents": {
        "maxConcurrent": 8
      },
      "sandbox": {
        "mode": "off"
      }
    }
  },
  "tools": {
    "profile": "full",
    "sessions": {
      "visibility": "all"
    },
    "fs": {
      "workspaceOnly": true
    }
  },
  "messages": {
    "ackReactionScope": "group-mentions",
    "tts": {
      "edge": {
        "voice": "zh-CN-XiaoxiaoNeural"
      }
    }
  },
  "commands": {
    "native": "auto",
    "nativeSkills": "auto",
    "restart": true,
    "ownerDisplay": "raw"
  },
  "channels": {},
  "gateway": {
    "port": 18789,
    "mode": "local",
    "bind": "lan",
    "controlUi": {
      "allowedOrigins": [
        "http://localhost:18789",
        "http://127.0.0.1:18789"
      ],
      "allowInsecureAuth": true,
      "dangerouslyDisableDeviceAuth": false
    },
    "auth": {
      "mode": "token",
      "token": "123456"
    }
  },
  "memory": {
    "backend": "qmd",
    "citations": "auto",
    "qmd": {
      "command": "/usr/local/bin/qmd",
      "includeDefaultMemory": true,
      "paths": [
        {
          "path": "/home/node/.openclaw/workspace",
          "name": "workspace",
          "pattern": "**/*.md"
        }
      ],
      "sessions": {
        "enabled": true
      },
      "update": {
        "interval": "5m",
        "debounceMs": 15000,
        "onBoot": true
      },
      "limits": {
        "maxResults": 16,
        "timeoutMs": 8000
      }
    }
  },
  "plugins": {
    "allow": [],
    "entries": {
      "feishu": {
        "enabled": false
      },
      "dingtalk": {
        "enabled": false
      },
      "qqbot": {
        "enabled": false
      },
      "wecom": {
        "enabled": false
      },
      "openclaw-lark": {
        "enabled": false
      }
    },
    "installs": {}
  }
}
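Before starting the gateway it is worth checking that the edited openclaw.json is well-formed JSON; python3 is already installed in step 3.2 (jq, from the same apt line, works equally well). A small sketch:

```shell
# Validate a JSON config file; prints OK or flags it as broken/missing.
validate_config() {
  if python3 -m json.tool "$1" > /dev/null 2>&1; then
    echo "$1: valid JSON"
  else
    echo "$1: invalid or missing" >&2
    return 1
  fi
}
validate_config /home/node/.openclaw/openclaw.json || true
```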

Start openclaw by running /usr/local/bin/init.sh.

Note: since /usr/local/bin/init.sh runs automatically every time the container starts, openclaw can be brought up successfully in the k8s pod.

4. Summary

This source-based Docker adaptation of OpenClaw also lays groundwork for products like the 鼎道智联/Lenovo Yoga AI mini that integrate DingClaw: in real-world scenarios, smart mini-PCs of this kind will very likely need feature extensions and environment adaptation inside containers, and k8s compatibility is unavoidable. There were plenty of pitfalls along the way, such as the official image failing to run `openclaw devices list` inside a k8s pod (with no source code for the localized build, the root cause could not be dug out), but after switching to building and deploying from source, not only did the problem stop reproducing, it also became much easier to adjust the configuration to the product's actual needs.

As a developer, I find this kind of pitfall-and-retrospective write-up, driven by real product needs, especially valuable: it solves the immediate problem of running OpenClaw in a k8s environment, and it banks hands-on experience for the containerized extension of future DingClaw products. If you are also doing containerized deployments of smart hardware plus AI components, I hope these steps and ideas provide some reference, and I welcome discussion of other deployment and optimization approaches.