Reproducing Lingbot-VLA Integration with RoboTwin 2.0 (WebSocket Mode)


Official documentation

RoboTwin
robotwin-platform.github.io/doc/usage/r…

Lingbot-vla
github.com/Robbyant/li… github.com/Robbyant/li…

Preface: Environment Checks

RoboTwin requires Python 3.10, while Lingbot-VLA requires Python 3.12.3. To avoid conflicts, create a separate Conda environment for each.
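Because two interpreters with different versions will coexist, it helps to fail fast when a script runs in the wrong environment. A minimal guard function (my convenience addition, not part of either project):

```shell
# Abort early if the active interpreter is not the one the project expects.
check_py() {
  local want="$1" got
  got=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')
  if [ "$got" = "$want" ]; then
    echo "ok: python $got"
  else
    echo "wrong env: python $got, expected $want" >&2
    return 1
  fi
}
# Inside the RoboTwin env:  check_py 3.10
# Inside the lingbot env:   check_py 3.12
```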

  • Check GPU and driver (NVIDIA & CUDA):
    nvidia-smi: NVIDIA-SMI 575.57.08  Driver Version: 575.57.08  CUDA Version: 12.9
    nvcc --version: nvcc: command not found
  • Check whether the system has Vulkan libraries preinstalled:
    ls -l /usr/lib/x86_64-linux-gnu/libvulkan.so*:
    lrwxrwxrwx 1 root root 20 Feb 10 2020 /usr/lib/x86_64-linux-gnu/libvulkan.so.1 -> libvulkan.so.1.2.131
    -rw-r--r-- 1 root root 346840 Feb 10 2020 /usr/lib/x86_64-linux-gnu/libvulkan.so.1.2.131
  • Check Conda status:
    conda --version: conda 25.11.1
    conda env list: base
    /mnt/public/lwb/miniconda3
  • Confirm the OS version:
    cat /etc/os-release: NAME="Ubuntu"
    VERSION="20.04.6 LTS (Focal Fossa)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 20.04.6 LTS"
    VERSION_ID="20.04"
    HOME_URL="www.ubuntu.com/"
    SUPPORT_URL="help.ubuntu.com/"
    BUG_REPORT_URL="bugs.launchpad.net/ubuntu/"
    PRIVACY_POLICY_URL="www.ubuntu.com/legal/terms…"
    VERSION_CODENAME=focal
    UBUNTU_CODENAME=focal

Part 1: RoboTwin Environment Setup and Asset Download

# 1. Check the Vulkan dependency (usually preinstalled on servers)
# Note: on a personal bare-metal machine, run sudo apt install libvulkan1 mesa-vulkan-drivers vulkan-tools
# On a dev machine without sudo, verify the underlying support directly:
vulkaninfo | grep "Vulkan Instance Version"
# If a version number is printed, skip the apt install step.

# 2. Create and activate the RoboTwin environment (Python 3.10 required)
conda create -n RoboTwin python=3.10 -y
conda activate RoboTwin

# 3. Install CUDA Toolkit 12.1 inside the conda env (provides the nvcc compiler)
conda install -c "nvidia/label/cuda-12.1.0" cuda-toolkit -y 

# 4. Verify nvcc installed correctly (the output should show 12.1)
nvcc --version

# 5. Clone the repository and install the base environment
git clone https://github.com/RoboTwin-Platform/RoboTwin.git
# Or use a mirror: git clone https://ghfast.top/https://github.com/RoboTwin-Platform/RoboTwin.git
cd RoboTwin
bash script/_install.sh

# 6. Download assets (run huggingface-cli login first if rate-limited)
export HF_ENDPOINT=https://hf-mirror.com
bash script/_download_assets.sh

Part 2: Generating a Lerobot-Format Dataset (Directly from the Official Data)

(Stay in the RoboTwin environment, inside the RoboTwin directory)

# 1. Create the data-processing directory
cd /mnt/public/lwb/test_project/RoboTwin
mkdir -p data/click_bell

# 2. Download only the minimal aloha-agilex data for the click_bell task
export HF_ENDPOINT=https://hf-mirror.com 
huggingface-cli download --repo-type dataset TianxingChen/RoboTwin2.0 \
  --include "dataset/click_bell/aloha-agilex_clean*" \
  --local-dir . --resume-download

# 3. Organize and unzip the data
mv dataset/click_bell/aloha-agilex_clean* data/click_bell/ 
cd data/click_bell 
unzip aloha-agilex_clean*.zip 
mv aloha-agilex_clean_50 aloha-agilex_clean # rename to the name the script expects

# 4. Run the first-stage data conversion (to HDF5)
cd ../../policy/pi0
bash process_data_pi0.sh click_bell aloha-agilex_clean 50

# 5. Clean up leftover folders from any interrupted runs (important)
rm -rf training_data/aloha-agilex_clean/episode_*

# 6. Copy the processed HDF5 data into the training staging directory
mkdir -p training_data/aloha-agilex_clean/ 
cp -r processed_data/click_bell-aloha-agilex_clean-50 training_data/aloha-agilex_clean/

# 7. Set up the uv environment and pin specific low-level dependency versions to break the version deadlock
pip install uv
conda install -c conda-forge "ffmpeg=6.*" pkg-config -y
pip install av --only-binary :all: -i https://mirrors.ustc.edu.cn/pypi/web/simple
rm -f uv.lock
rm -rf .venv

# 8. Run the final conversion script to generate the Lerobot dataset
export UV_INDEX_URL=https://mirrors.ustc.edu.cn/pypi/web/simple 
unset LEROBOT_HOME 
export HF_LEROBOT_HOME=/mnt/public/lwb/test_project/RoboTwin/policy/pi0/lerobot_data 
bash generate.sh ./training_data/aloha-agilex_clean/ click_bell_aloha_repo
# With HF_LEROBOT_HOME exported as above, the dataset is written to $HF_LEROBOT_HOME/click_bell_aloha_repo (without that export, the default would be ~/.cache/huggingface/lerobot/)
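A quick way to confirm generate.sh actually produced a dataset, assuming the LeRobot v2.x on-disk layout (meta/info.json alongside data/ chunks); this check is my addition, not part of the official pipeline:

```shell
# Succeed only if the directory looks like a LeRobot dataset root.
check_lerobot_repo() {
  local repo_dir="$1"
  if [ -f "$repo_dir/meta/info.json" ] && [ -d "$repo_dir/data" ]; then
    echo "looks like a LeRobot dataset: $repo_dir"
  else
    echo "missing meta/info.json or data/ under $repo_dir" >&2
    return 1
  fi
}
# check_lerobot_repo "$HF_LEROBOT_HOME/click_bell_aloha_repo"
```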

Part 3: Lingbot-VLA Environment Setup

# 1. Deactivate the previous environment and create the Lingbot environment (Python 3.12.3 required)
conda deactivate
conda create -n lingbot python=3.12.3 -y
conda activate lingbot

# 2. Pin the PyTorch core stack (2.8.0)
pip install torch==2.8.0 torchvision==0.23.0 torchaudio==2.8.0 --index-url https://download.pytorch.org/whl/cu128

# 3. Install Lerobot at a pinned commit
GIT_LFS_SKIP_SMUDGE=1 git clone https://github.com/huggingface/lerobot.git
cd lerobot
git checkout 0cf864870cf29f4738d3ade893e6fd13fbd7cdb5
pip install -e .
cd /mnt/public/lwb/test_project

# 4. Add the CUDA compiler and install Flash Attention
conda install -c "nvidia/label/cuda-12.1.0" cuda-toolkit -y
nvcc --version # confirm the output shows 12.1

pip install ninja
FLASH_ATTENTION_FORCE_BUILD=TRUE pip install flash-attn --no-build-isolation

# 5. Clone Lingbot-VLA via the accelerated mirror
git clone https://ghfast.top/https://github.com/robbyant/lingbot-vla.git
cd lingbot-vla/
# Globally rewrite github.com URLs to the ghfast.top mirror
git config --global url."https://ghfast.top/https://github.com/".insteadOf "https://github.com/"
# Re-initialize and fetch all submodules (now via the mirror, much faster)
git submodule update --init --recursive
 
pip install -e .
pip install -r requirements.txt

# 6. [Critical dependency fixes] Pin Numpy and xFormers (prevents low-level operator crashes)
# Force-downgrade and pin the key vision and file libraries
pip install "numpy==1.26.4" "fsspec==2025.3.0" "opencv-python-headless==4.9.0.80" "rerun-sdk==0.21.0"
# Install xformers without dependency resolution so it cannot uninstall Torch 2.8.0
pip install xformers==0.0.28.post3 --no-deps

# 7. Install the vision submodules (--no-deps is required here too, or it will undo the fixes above)
cd ./lingbotvla/models/vla/vision_models/lingbot-depth/
pip install -e . --no-deps
cd ../MoGe
pip install -e . --no-deps
cd ../../../../../

Part 4: Model Download, Training, and Evaluation

(Stay in the Lingbot environment, inside the lingbot-vla directory)

# 1. Point Hugging Face at the mirror endpoint
export HF_ENDPOINT=https://hf-mirror.com 

# 2. Download the 4B base model with the official script (skip if already downloaded)
python3 scripts/download_hf_model.py --repo_id robbyant/lingbot-vla-4b --local_dir lingbot-vla-4b

# [Path fix] Flatten the nested folder created by the download script so config.json sits at the top level
cd lingbot-vla-4b
mv lingbot-vla-4b/* .
rmdir lingbot-vla-4b
cd ..

# 3. Download the Qwen2.5-VL base weights with huggingface-cli (skip if already downloaded)
huggingface-cli download --repo-type model Qwen/Qwen2.5-VL-3B-Instruct --local-dir Qwen2.5-VL-3B-Instruct

# 4. Launch 8-GPU distributed post-training (note: global_batch_size must be a multiple of 8)
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True

bash train.sh tasks/vla/train_lingbotvla.py ./configs/vla/robotwin_load20000h.yaml \
  --model.model_path /mnt/public/lwb/test_project/lingbot-vla/lingbot-vla-4b \
  --data.train_path /mnt/public/lwb/test_project/RoboTwin/policy/pi0/lerobot_data/click_bell_aloha_repo \
  --train.output_dir /mnt/public/lwb/test_project/lingbot-vla/output_click_bell_aloha \
  --model.tokenizer_path /mnt/public/lwb/test_project/lingbot-vla/Qwen2.5-VL-3B-Instruct \
  --train.micro_batch_size 4 \
  --train.global_batch_size 32
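Why 32 and 4 fit together: under the usual data-parallel recipe (an assumption about the trainer, not taken from the Lingbot docs), the global batch must divide evenly into micro-batches across the 8 GPUs, and the quotient is the gradient-accumulation factor:

```shell
# grad_accum GLOBAL MICRO NUM_GPUS -> gradient-accumulation steps per optimizer step
grad_accum() {
  local g="$1" m="$2" n="$3"
  if [ $(( g % (m * n) )) -ne 0 ]; then
    echo "global_batch_size $g is not a multiple of micro*gpus = $(( m * n ))" >&2
    return 1
  fi
  echo $(( g / (m * n) ))
}
grad_accum 32 4 8   # prints 1: each optimizer step is a single pass, no accumulation
```

Doubling global_batch_size to 64 with the same micro batch would give an accumulation factor of 2.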
  
# ================= Run after training completes =================

# 5. Launch the policy server (global_step_8280 below is this run's final checkpoint; substitute your own step number)
python -m deploy.lingbot_robotwin_policy \
  --model_path /mnt/public/lwb/test_project/lingbot-vla/output_click_bell_aloha/checkpoints/global_step_8280/hf_ckpt \
  --use_length 50 \
  --port 8080
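Before starting the client in the second terminal, it can save a debugging round to confirm the server is actually accepting connections. A small probe using bash's /dev/tcp (my addition; the host and port match the command above):

```shell
# Print "up" if something is listening on host:port, "down" otherwise.
port_up() {
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo up
  else
    echo down
  fi
}
# port_up 127.0.0.1 8080
```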


Open a second terminal

# 1. Make sure you are in the RoboTwin environment
conda activate RoboTwin
# 2. Add the lightweight packages needed to read model formats (basic functionality only; no GPU memory used)
pip install safetensors transformers huggingface_hub
# 3. Enter the actual lerobot directory
cd /mnt/public/lwb/test_project/RoboTwin/policy/pi0/lerobot
# 4. Install it into the RoboTwin environment safely (with --no-deps as a safeguard)
pip install -e . --no-deps
# 5. Add the lightweight libraries needed to parse configs
pip install draccus einops
pip install datasets omegaconf
pip install jsonlines deepdiff
# 6. Read the transformers version from the lingbot environment and pin the same version here
LINGBOT_VER=$(/mnt/public/lwb/miniconda3/envs/lingbot/bin/pip show transformers | grep Version | awk '{print $2}')
echo "lingbot environment uses transformers $LINGBOT_VER, syncing..."
pip install transformers==$LINGBOT_VER
pip install diffusers
pip install psutil
pip install ipdb
pip install torchdata
pip install msgpack websockets blobfile
# 7. Return to the RoboTwin root directory
cd /mnt/public/lwb/test_project/RoboTwin

nano /mnt/public/lwb/test_project/RoboTwin/script/eval_policy_client.py
Comment out the line get_model = eval_function_decorator(policy_name, "get_model", conda_env=policy_conda_env) so that it reads:
# get_model = eval_function_decorator(policy_name, "get_model", conda_env=policy_conda_env)
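The same edit can be applied non-interactively; this sed one-liner is my shortcut for the manual nano step (assuming the line appears exactly once), commenting it out in place while keeping a .bak backup:

```shell
# Prefix the get_model line with '# ', preserving its indentation.
comment_get_model() {
  sed -i.bak 's/^\([[:space:]]*\)\(get_model = eval_function_decorator\)/\1# \2/' "$1"
}
# comment_get_model /mnt/public/lwb/test_project/RoboTwin/script/eval_policy_client.py
```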

# 5. Make sure the cross-project environment variables take effect, then launch the final run

Still in the second terminal (RoboTwin environment, project root), run:

# 1. Replace the hard-coded video encoder in the evaluation scripts (fixes the libx264 error)
sed -i 's/libx264/libopenh264/g' /mnt/public/lwb/test_project/RoboTwin/script/eval_policy.py
sed -i 's/libx264/libopenh264/g' /mnt/public/lwb/test_project/RoboTwin/envs/utils/images_to_video.py

# 2. Write the adapter aligned with both sides (LingBot server interface and RoboTwin executor)
cat << 'EOF' > policy/lingbot_wrapper.py
import sys
import numpy as np

# Force path priority so imports resolve to the intended repos
sys.path.insert(0, "/mnt/public/lwb/test_project/lingbot-vla")
from deploy.websocket_client_policy import WebsocketClientPolicy

def get_model(usr_args):
    port = usr_args.get("port", 8080)
    print(f"Connecting to the policy server via WebSocket (port: {port})...")
    return WebsocketClientPolicy(host="127.0.0.1", port=port)

def reset_model(model):
    # matches LingBot's reset interface
    model.reset("aloha-agilex")

def eval(TASK_ENV, model, observation):
    # force-enable the built-in video recorder
    TASK_ENV.eval_video_log = True

    # 1. Flatten the camera images and state into the Numpy layout LingBot expects
    raw_obs = observation.get('observation', observation)
    cam_high = raw_obs['head_camera']['rgb']
    cam_left = raw_obs['left_camera']['rgb'] if 'left_camera' in raw_obs else raw_obs['wrist_camera']['rgb']
    cam_right = raw_obs['right_camera']['rgb'] if 'right_camera' in raw_obs else raw_obs['wrist_camera']['rgb']
    state = observation.get('qpos', raw_obs.get('qpos', np.zeros(14, dtype=np.float32)))
    
    # 2. Fetch the language instruction via RoboTwin's own method
    instruction = TASK_ENV.get_instruction()
    
    payload = {
        "observation.images.cam_high": cam_high,
        "observation.images.cam_left_wrist": cam_left,
        "observation.images.cam_right_wrist": cam_right,
        "observation.state": state,
        "task": instruction
    }
    
    # 3. Run inference and unpack the action array
    response = model.infer(payload)
    action_data = response['action'] if isinstance(response, dict) else response
    actions = np.atleast_2d(action_data)
    
    # 4. Step through the actions with RoboTwin's executor (handles action chunking)
    for act in actions:
        TASK_ENV.take_action(act)
EOF

# 3. Clear and rebuild the Python paths to block stale project paths, then launch the final run
export PYTHONPATH=""
export PYTHONHOME=""
export PYTHONPATH=/mnt/public/lwb/test_project/RoboTwin:/mnt/public/lwb/test_project/lingbot-vla

python script/eval_policy.py \
  --config policy/Your_Policy/deploy_policy.yml \
  --overrides policy_name policy.lingbot_wrapper task_name click_bell task_config demo_clean ckpt_setting default seed 0 port 8080 eval_video_log True

(Screenshots: terminal 1 output, terminal 2 output, and the terminal 2 visualization.)

Part 5: Going Further: Multi-Task Training and Evaluation

Note: In Parts 1 through 4 we used click_bell (ring the bell) as the example and completed the full single-task loop from data to closed-loop validation. The official dataset also covers four more common tasks: open_microwave (open the microwave), stack_blocks_three (stack three blocks), place_shoe (place the shoe), and put_object_cabinet (put an object in the cabinet). This part downloads the data for those four tasks and runs SFT training for all five.

1. Additional Data Download and Conversion

In the RoboTwin environment (terminal 2), from the /mnt/public/lwb/test_project/RoboTwin directory, run:

# Enable the domestic mirror endpoints
export HF_ENDPOINT=https://hf-mirror.com 
export UV_INDEX_URL=https://mirrors.ustc.edu.cn/pypi/web/simple 
export HF_LEROBOT_HOME=/mnt/public/lwb/test_project/RoboTwin/policy/pi0/lerobot_data

# ==========================================
# Additional task 1: open_microwave (open the microwave)
# ==========================================
cd /mnt/public/lwb/test_project/RoboTwin 
mkdir -p data/open_microwave
huggingface-cli download --repo-type dataset TianxingChen/RoboTwin2.0 --include "dataset/open_microwave/aloha-agilex_clean*" --local-dir . --resume-download
mv dataset/open_microwave/aloha-agilex_clean* data/open_microwave/ 
cd data/open_microwave && unzip -q aloha-agilex_clean*.zip && mv aloha-agilex_clean_50 aloha-agilex_clean

cd ../../policy/pi0
bash process_data_pi0.sh open_microwave aloha-agilex_clean 50
rm -rf training_data/aloha-agilex_clean/*
mkdir -p training_data/aloha-agilex_clean/ 
cp -r processed_data/open_microwave-aloha-agilex_clean-50/* training_data/aloha-agilex_clean/
bash generate.sh ./training_data/aloha-agilex_clean/ open_microwave_aloha_repo
cd ../../

# ==========================================
# Additional task 2: stack_blocks_three (stack three blocks)
# ==========================================
mkdir -p data/stack_blocks_three
huggingface-cli download --repo-type dataset TianxingChen/RoboTwin2.0 --include "dataset/stack_blocks_three/aloha-agilex_clean*" --local-dir . --resume-download
mv dataset/stack_blocks_three/aloha-agilex_clean* data/stack_blocks_three/ 
cd data/stack_blocks_three && unzip -q aloha-agilex_clean*.zip && mv aloha-agilex_clean_50 aloha-agilex_clean

cd ../../policy/pi0
bash process_data_pi0.sh stack_blocks_three aloha-agilex_clean 50
rm -rf training_data/aloha-agilex_clean/*
mkdir -p training_data/aloha-agilex_clean/ 
cp -r processed_data/stack_blocks_three-aloha-agilex_clean-50/* training_data/aloha-agilex_clean/
bash generate.sh ./training_data/aloha-agilex_clean/ stack_blocks_three_aloha_repo
cd ../../

# ==========================================
# Additional task 3: place_shoe (place the shoe)
# ==========================================
mkdir -p data/place_shoe
huggingface-cli download --repo-type dataset TianxingChen/RoboTwin2.0 --include "dataset/place_shoe/aloha-agilex_clean*" --local-dir . --resume-download
mv dataset/place_shoe/aloha-agilex_clean* data/place_shoe/ 
cd data/place_shoe && unzip -q aloha-agilex_clean*.zip && mv aloha-agilex_clean_50 aloha-agilex_clean

cd ../../policy/pi0
bash process_data_pi0.sh place_shoe aloha-agilex_clean 50
rm -rf training_data/aloha-agilex_clean/*
mkdir -p training_data/aloha-agilex_clean/ 
cp -r processed_data/place_shoe-aloha-agilex_clean-50/* training_data/aloha-agilex_clean/
bash generate.sh ./training_data/aloha-agilex_clean/ place_shoe_aloha_repo
cd ../../

# ==========================================
# Additional task 4: put_object_cabinet (put an object in the cabinet)
# ==========================================
mkdir -p data/put_object_cabinet
huggingface-cli download --repo-type dataset TianxingChen/RoboTwin2.0 --include "dataset/put_object_cabinet/aloha-agilex_clean*" --local-dir . --resume-download
mv dataset/put_object_cabinet/aloha-agilex_clean* data/put_object_cabinet/ 
cd data/put_object_cabinet && unzip -q aloha-agilex_clean*.zip && mv aloha-agilex_clean_50 aloha-agilex_clean

cd ../../policy/pi0
bash process_data_pi0.sh put_object_cabinet aloha-agilex_clean 50
rm -rf training_data/aloha-agilex_clean/*
mkdir -p training_data/aloha-agilex_clean/ 
cp -r processed_data/put_object_cabinet-aloha-agilex_clean-50/* training_data/aloha-agilex_clean/
bash generate.sh ./training_data/aloha-agilex_clean/ put_object_cabinet_aloha_repo
cd ../../

2. Launching Multi-Task SFT Training

Switch to the Lingbot environment (terminal 1) and run the following from the lingbot-vla directory:

# ==========================================
#  Set global environment variables and base paths (run once)
# ==========================================
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
DATA_BASE="/mnt/public/lwb/test_project/RoboTwin/policy/pi0/lerobot_data"

# ==========================================
# Task 1: click_bell (ring the bell) - (done earlier; skip if already trained)
# ==========================================
nohup bash train.sh tasks/vla/train_lingbotvla.py ./configs/vla/robotwin_load20000h.yaml \
  --model.model_path /mnt/public/lwb/test_project/lingbot-vla/lingbot-vla-4b \
  --data.train_path ${DATA_BASE}/click_bell_aloha_repo \
  --train.output_dir /mnt/public/lwb/test_project/lingbot-vla/output_click_bell_aloha \
  --model.tokenizer_path /mnt/public/lwb/test_project/lingbot-vla/Qwen2.5-VL-3B-Instruct \
  --train.micro_batch_size 4 \
  --train.global_batch_size 32 > train_click_bell.log 2>&1 &

# ==========================================
# Task 2: open_microwave (open the microwave)
# ==========================================
nohup bash train.sh tasks/vla/train_lingbotvla.py ./configs/vla/robotwin_load20000h.yaml \
  --model.model_path /mnt/public/lwb/test_project/lingbot-vla/lingbot-vla-4b \
  --data.train_path ${DATA_BASE}/open_microwave_aloha_repo \
  --train.output_dir /mnt/public/lwb/test_project/lingbot-vla/output_open_microwave_aloha \
  --model.tokenizer_path /mnt/public/lwb/test_project/lingbot-vla/Qwen2.5-VL-3B-Instruct \
  --train.micro_batch_size 4 \
  --train.global_batch_size 32 > train_open_microwave.log 2>&1 &

# ==========================================
# Task 3: stack_blocks_three (stack three blocks)
# ==========================================
nohup bash train.sh tasks/vla/train_lingbotvla.py ./configs/vla/robotwin_load20000h.yaml \
  --model.model_path /mnt/public/lwb/test_project/lingbot-vla/lingbot-vla-4b \
  --data.train_path ${DATA_BASE}/stack_blocks_three_aloha_repo \
  --train.output_dir /mnt/public/lwb/test_project/lingbot-vla/output_stack_blocks_three_aloha \
  --model.tokenizer_path /mnt/public/lwb/test_project/lingbot-vla/Qwen2.5-VL-3B-Instruct \
  --train.micro_batch_size 4 \
  --train.global_batch_size 32 > train_stack_blocks_three.log 2>&1 &

# ==========================================
# Task 4: place_shoe (place the shoe)
# ==========================================
nohup bash train.sh tasks/vla/train_lingbotvla.py ./configs/vla/robotwin_load20000h.yaml \
  --model.model_path /mnt/public/lwb/test_project/lingbot-vla/lingbot-vla-4b \
  --data.train_path ${DATA_BASE}/place_shoe_aloha_repo \
  --train.output_dir /mnt/public/lwb/test_project/lingbot-vla/output_place_shoe_aloha \
  --model.tokenizer_path /mnt/public/lwb/test_project/lingbot-vla/Qwen2.5-VL-3B-Instruct \
  --train.micro_batch_size 4 \
  --train.global_batch_size 32 > train_place_shoe.log 2>&1 &

# ==========================================
# Task 5: put_object_cabinet (put an object in the cabinet)
# ==========================================
nohup bash train.sh tasks/vla/train_lingbotvla.py ./configs/vla/robotwin_load20000h.yaml \
  --model.model_path /mnt/public/lwb/test_project/lingbot-vla/lingbot-vla-4b \
  --data.train_path ${DATA_BASE}/put_object_cabinet_aloha_repo \
  --train.output_dir /mnt/public/lwb/test_project/lingbot-vla/output_put_object_cabinet_aloha \
  --model.tokenizer_path /mnt/public/lwb/test_project/lingbot-vla/Qwen2.5-VL-3B-Instruct \
  --train.micro_batch_size 4 \
  --train.global_batch_size 32 > train_put_object_cabinet.log 2>&1 &
# The jobs above run in the background; stream a log in real time with:
tail -f train_open_microwave.log

3. Multi-Task Closed-Loop Evaluation (Eval)

3.1 Launch the Multi-Task Policy Server (Terminal 1)

Stay in the Lingbot environment

# Replace XXXX with the global_step produced when the multi-task training finishes
python -m deploy.lingbot_robotwin_policy \
  --model_path /mnt/public/lwb/test_project/lingbot-vla/output_5tasks_aloha/checkpoints/global_step_XXXX/hf_ckpt \
  --use_length 50 \
  --port 8080

3.2 Run the Evaluation Tasks as Needed (Terminal 2)

In the RoboTwin environment (terminal 2)

export PYTHONPATH=""
export PYTHONHOME=""
export PYTHONPATH=/mnt/public/lwb/test_project/RoboTwin:/mnt/public/lwb/test_project/lingbot-vla

# Pick whichever tasks you want to evaluate and run:
python script/eval_policy.py --config policy/Your_Policy/deploy_policy.yml --overrides policy_name policy.lingbot_wrapper task_name click_bell task_config demo_clean ckpt_setting default seed 0 port 8080 eval_video_log True

python script/eval_policy.py --config policy/Your_Policy/deploy_policy.yml --overrides policy_name policy.lingbot_wrapper task_name open_microwave task_config demo_clean ckpt_setting default seed 0 port 8080 eval_video_log True

python script/eval_policy.py --config policy/Your_Policy/deploy_policy.yml --overrides policy_name policy.lingbot_wrapper task_name stack_blocks_three task_config demo_clean ckpt_setting default seed 0 port 8080 eval_video_log True

python script/eval_policy.py --config policy/Your_Policy/deploy_policy.yml --overrides policy_name policy.lingbot_wrapper task_name place_shoe task_config demo_clean ckpt_setting default seed 0 port 8080 eval_video_log True

python script/eval_policy.py --config policy/Your_Policy/deploy_policy.yml --overrides policy_name policy.lingbot_wrapper task_name put_object_cabinet task_config demo_clean ckpt_setting default seed 0 port 8080 eval_video_log True