Semantic Terrain Classification for Off-Road Autonomous Driving: Code Notes



GitHub: github.com/JHLee0513/s…

Paper: Semantic Terrain Classification for Off-Road Autonomous Driving

Data generation

The config file generate_bev_gt\kitti_4class_100x100_center_cfg.yaml is where you can change the mapping from semantic labels to the different risk classes, as well as the size of the costmap.
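For orientation, here is a minimal sketch (not code from the repo) of reading that config, using only the cfg keys that appear later in this walkthrough; anything beyond these keys is an assumption:

import yaml

# Sketch: load the generation config and pull out the keys used below.
with open("generate_bev_gt/kitti_4class_100x100_center_cfg.yaml") as f:
    cfg = yaml.safe_load(f)

map_cfg = cfg["costmap"]        # minx/maxx, miny/maxy and gridw/gridh (cell size)
moving = cfg["moving_classes"]  # semantic classes stripped out before aggregation
post = cfg["postprocessing"]    # thresholds consumed by postprocess()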

generate_bev_gt\generate_parallel.py generates the dataset.

The make_color_map function builds the colormap cmap used to map labels onto the costmap.

Step 1: obtain the colormap for each scan (corresponds to Figure 1).

Note: Python's zip function (www.runoob.com/python/pyth…) packs elements into tuples or unpacks them again.
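A minimal illustration of both directions:

pairs = list(zip([1, 2, 3], ["a", "b", "c"]))   # pack: [(1, 'a'), (2, 'b'), (3, 'c')]
nums, chars = zip(*pairs)                       # unpack: (1, 2, 3) and ('a', 'b', 'c')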


From the paper (paraphrased): for each scan, we aggregate it with the past t and future t scans, using a stride s, to construct a larger point set. t is set to a sufficiently large number (e.g. 71) to obtain dense traversability information over a large area around the robot. These parameters can be tuned according to the vehicle speed and the density of the LiDAR points. This corresponds to Step 1.

The corresponding parameters in the code:

sequence_length is the aggregation window, i.e. the past and future scans around the current one (t in the paper)

stride is the stride s

for i in range(len(scan_files)):
    # center a window of sequence_length scans (spaced by stride) around scan i
    start_idx = i - (sequence_length // 2 * stride)
    history = []

    data_idxs = [start_idx + _ * stride for _ in range(sequence_length)]
    if data_idxs[0] < 0 or data_idxs[-1] >= len(scan_files):
        # Out of range
        continue

    # collect (scan path, label path, pose) for every index in the window
    for data_idx in data_idxs:
        scan_file = scan_files[data_idx]
        basename = os.path.splitext(scan_file)[0]
        scan_path = os.path.join(input_folder, "velodyne", scan_file)
        label_path = os.path.join(input_folder, "labels", basename + ".label")
        history.append((scan_path, label_path, poses[data_idx]))
    history = history[::-1]  # newest scan first

    if len(history) < sequence_length:
        continue
    assert len(history) == sequence_length
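As a worked example (values picked here for illustration, not taken from the config): with sequence_length = 5 and stride = 2, the window around scan i = 10 is symmetric around it:

sequence_length, stride, i = 5, 2, 10
start_idx = i - (sequence_length // 2 * stride)                        # 10 - 4 = 6
data_idxs = [start_idx + k * stride for k in range(sequence_length)]   # [6, 8, 10, 12, 14]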

history thus holds a list of tuples; from it the point cloud file paths hist_scan_files, the label file paths hist_label_files and the poses hist_poses are obtained, which together with the colormap cmap make up one job.

All of this is passed into job_args:

job_args.append({
    'cfg': cfg,
    'cmap': cmap,
    'scan_files': hist_scan_files,
    'label_files': hist_label_files,
    'poses': hist_poses,
})

The gen_costmap function generates the costmap:

async_results = [pool.apply_async(gen_costmap, (job, FLAGS.mem_frac)) for job in job_args]  # use multiprocessing to compute the costmaps
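apply_async hands each job to a worker in the pool and immediately returns an AsyncResult; the results are then collected with .get(). A generic sketch of this pattern (not the repo's exact surrounding code; mem_frac stands in for FLAGS.mem_frac, and gen_costmap / job_args are the objects defined above):

from multiprocessing import Pool

with Pool(processes=8) as pool:   # worker count chosen arbitrarily here
    async_results = [pool.apply_async(gen_costmap, (job, mem_frac)) for job in job_args]
    results = [r.get() for r in async_results]   # blocks until every costmap is done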

Inside gen_costmap, for each scan in the window:

scan stores the point cloud (x, y, z and remission f)

labels stores the per-point labels

remissions stores the remission f

scan_ground_frame stores the scan in the ground frame (its height)

This per-scan information is stored into history:

history.appendleft({
    "scan": scan.copy(),
    "scan_ground_frame": scan_ground_frame,
    "points": points,
    "labels": labels,
    "remissions": remissions,
    "pose": poses[i].copy(),
    "filename": scan_files[i]
})

key_scan is the current (key) frame, i.e. the frame into which everything else will be transformed:

key_scan_id = len(history) // 2
key_scan = history[key_scan_id]

First, remove_moving_objects removes moving objects that would corrupt the aggregated labels:

new_history = remove_moving_objects(history,
                                    cfg["moving_classes"],
                                    key_scan_id)
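remove_moving_objects itself is not reproduced in these notes; conceptually it drops points whose label is in cfg["moving_classes"]. A rough sketch of that idea (the repo's actual implementation, including how key_scan_id is used, may differ):

import numpy as np

def remove_moving_objects_sketch(history, moving_classes):
    """Sketch: drop points labeled as a moving class from each scan in the window."""
    cleaned = []
    for frame in history:
        keep = ~np.isin(frame["labels"], moving_classes)
        cleaned.append({**frame,
                        "points": frame["points"][keep],
                        "labels": frame["labels"][keep],
                        "remissions": frame["remissions"][keep]})
    return cleaned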

join_pointclouds then aggregates the point clouds.

Each earlier scan's point cloud is transformed into the key frame's coordinate system using the relative pose:

for i, past in enumerate(history):
    # relative pose from the past frame to the key frame
    diff = np.matmul(key_pose_inv, past["pose"])
    tpoints = np.matmul(diff, past["points"].T).T
    tpoints[:, 3] = past["remissions"]   # restore remission in the 4th column
    tpoints = tpoints.reshape((-1))

    npoints = past["points"].shape[0]
    end = start + npoints
    assert(npoints == past["labels"].shape[0])
    concated_points[4 * start:4 * end] = tpoints
    concated_labels[start:end] = past["labels"]
    pc_ids[start:end] = i                # remember which scan each point came from
    start = end
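The loop relies on homogeneous coordinates: points are stored as 4-vectors (x, y, z, 1), poses are 4x4 matrices, and key_pose_inv @ past_pose maps a point from the past scan's frame into the key scan's frame. The same idea as a standalone helper (names here are illustrative):

import numpy as np

def to_key_frame(xyz, past_pose, key_pose):
    """Map Nx3 points from a past scan's frame into the key scan's frame."""
    points_h = np.hstack([xyz, np.ones((xyz.shape[0], 1))])  # Nx4 homogeneous points
    diff = np.linalg.inv(key_pose) @ past_pose               # past frame -> key frame
    return (diff @ points_h.T).T[:, :3]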

This yields the aggregated point cloud cat_points, the labels cat_labels, and pc_ids, which records which scan each point came from:

cat_points, cat_labels, pc_ids = join_pointclouds(new_history, key_scan["pose"])

Next, create_costmap generates the costmap.

The labels are first mapped to cost labels (cost_label) via the configured mapping.

The map grid is then built from the costmap config:

map_cfg = cfg["costmap"]
h = int(np.floor((map_cfg["maxy"] - map_cfg["miny"])/map_cfg["gridh"]))
w = int(np.floor((map_cfg["maxx"] - map_cfg["minx"])/map_cfg["gridw"]))
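As a concrete check (the extents below are an assumption based on the "100x100" in the config name, not values read from the repo): a 100 m x 100 m area with 1 m cells gives a 100 x 100 grid:

import numpy as np

map_cfg = {"minx": -50.0, "maxx": 50.0, "miny": -50.0, "maxy": 50.0,
           "gridw": 1.0, "gridh": 1.0}   # assumed values for illustration
h = int(np.floor((map_cfg["maxy"] - map_cfg["miny"]) / map_cfg["gridh"]))   # 100
w = int(np.floor((map_cfg["maxx"] - map_cfg["minx"]) / map_cfg["gridw"]))   # 100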

The aggregated points and labels are then processed with postprocess:

def postprocess(points, labels, cfg):

which produces the costmap.

From the paper (paraphrased), on point cloud stacking and ground height estimation:

For each point in the aggregated scan, we project it downward to find its position (x, y) on the traversability map, so each (x, y) cell of the map contains a pillar of points. We estimate a ground-height map by running an averaging filter kernel over the lowest z coordinate of the points labeled free and low-cost at each (x, y) position. This height map serves as the reference for the final traversability projection.
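A rough numpy sketch of this ground-height estimate: per-cell minimum z of the traversable points followed by an averaging kernel. This is only an approximation of what the repo does (scipy's uniform_filter and the gap filling are choices made here, not taken from the code):

import numpy as np
from scipy.ndimage import uniform_filter

def estimate_ground_height(i_inds, j_inds, z, traversable, h, w, kernel=5):
    """Sketch: lowest z of traversable points per cell, then a local average filter."""
    ground = np.full((h, w), np.nan)
    for i, j, zz in zip(i_inds[traversable], j_inds[traversable], z[traversable]):
        if np.isnan(ground[i, j]) or zz < ground[i, j]:
            ground[i, j] = zz
    ground = np.where(np.isnan(ground), np.nanmean(ground), ground)  # crude gap filling
    return uniform_filter(ground, size=kernel)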

The relevant part of the BinningPostprocess class:

    def process_pc(self, points, preds):
        ''' We assume the input prediction has 4 classes (pred.shape == nx4):
                + class 0 and 1 are traversable
                + class 2 and 3 are non-traversable
            We start off by estimating ground elevation by locally averaging z values
            of the traversable points. Then, we compute each point's distance to the estimated
            ground elevation value and correct the predictions as follows:
                + (LOGIC 0) Any point outside the 2d map boundary is classified as unknown.
                + (LOGIC 1) Any point above the sky_threshold is classified as sky (a new class).
                + (LOGIC 2) Any point in class 2 that is lower than g2_threshold is labeled as class 1.
                + (LOGIC 3) Any point in class 3 that is lower than g3_threshold is labeled as class 1.
            The output is a nx5 dimensional prediction tensor.
        '''
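A condensed numpy version of LOGIC 1-3 on hard labels (the real class works on the n x 4 prediction tensor and also handles LOGIC 0, so this is only an illustration of the rules; sky_class = 4 is an assumption consistent with the n x 5 output):

import numpy as np

def apply_height_logic(labels, height_above_ground, sky_threshold, g2_threshold, g3_threshold,
                       sky_class=4):
    """Sketch of LOGIC 1-3: demote low class-2/3 points to class 1, lift high points to sky."""
    out = labels.copy()
    out[(labels == 2) & (height_above_ground < g2_threshold)] = 1   # LOGIC 2
    out[(labels == 3) & (height_above_ground < g3_threshold)] = 1   # LOGIC 3
    out[height_above_ground > sky_threshold] = sky_class            # LOGIC 1
    return out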

This post-processing gives the corrected per-point classification:

cu_labels = postprocess(points, mapped_labels, cfg["postprocessing"])

The new labels are stored in cu_labels; mapped_labels is the tensor array of labels after the cost mapping.

The points are then projected down to get their cell coordinates in the map:

## projection
j_inds = np.floor((points[:, 0] - map_cfg["minx"]
                   ) / map_cfg["gridw"]).astype(np.int32)
i_inds = np.floor((points[:, 1] - map_cfg["miny"]
                   ) / map_cfg["gridh"]).astype(np.int32)

inrange = (i_inds >= 0) & (i_inds < h) & (j_inds >= 0) & (j_inds < w)

i_inds, j_inds, mapped_labels = [x[inrange] for x in [i_inds, j_inds, mapped_labels]]
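A quick sanity check of the index math, reusing the assumed 100 x 100 / 1 m grid from earlier: a point at x = 3.7, y = -12.2 falls into column 53, row 37:

import numpy as np

minx, miny, gridw, gridh = -50.0, -50.0, 1.0, 1.0   # assumed grid, as in the earlier sketch
x, y = 3.7, -12.2
j = int(np.floor((x - minx) / gridw))   # 53
i = int(np.floor((y - miny) / gridh))   # 37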

The costmap is then filled in: the loop walks through the classes in increasing cost order, so higher-cost classes overwrite lower-cost ones in any cell containing points of several classes, and cells with no points keep the initial value 255:

# Assume that cost of class j > cost of class i if j > i
# Project the points down. Higher-cost classes overwrite low-cost classes.
costmap = np.ones((h, w), np.uint8) * 255
for l in [0, 1, 2, 3]:
    hist, _, _ = np.histogram2d(i_inds[mapped_labels == l],
                                j_inds[mapped_labels == l],
                                bins=(h, w),
                                range=((0, h), (0, w)))
    costmap[hist != 0] = l

Finally, the costmap and the post-processed labels for the key_scan are extracted and returned:

#### extract postprocessed labels only for the key scan
if pc_ids is not None and key_scan_id >= 0:
    key_labels_ = cu_labels[pc_ids[not_void] == key_scan_id]

    scan_len = (pc_ids == key_scan_id).sum()
    key_labels = np.full(scan_len, -1)

    key_labels[not_void[pc_ids == key_scan_id]] = key_labels_.cpu().numpy()

    # 0,1,2,3,4 are reserved so use 5 for unknown?
    key_labels[key_labels < 0] = 5
    return costmap, key_labels.astype(np.uint32), costmap_pose