Build a New Year Greeting Gesture Red Packet: No More Freeloaded Red Packets [Try It] (Part 3)


I'm taking part in the "Spring Festival Creative Submission Contest"; for details, see: Spring Festival Creative Submission Contest

Result

[Screenshots: demo QR code, send page, claim page, and greeting-gesture recognition page]

Part 1: Sending the Red Packet

Article link

Part 2: Claiming the Red Packet

Article link

Part 3: Claiming the Red Packet with the New Year Greeting Gesture

To claim the red packet, the user must perform the New Year greeting gesture in front of the camera.

  • Approach

    • Use WeChat's CameraFrameListener to grab each camera frame
    • Send the frame to the backend
    • The backend forwards it to the human keypoint recognition service, which returns the keypoints
    • Determine from the keypoints whether the pose is the New Year greeting gesture
    • If it is, claim the red packet
    • If not, keep recognizing
  • Code

    • Mini program
    <view>
        <!-- camera -->
        <view style="">
                <camera device-position="front" flash="off" frame-size="small" @error="error" @initdone="xiangji"
                        style="width: 100%; height: 100vh;"></camera>
        </view>
        <!-- hint showing whether the pose is correct -->
        <view v-show="gesture_error" class="background_image"
                style="z-index: 999;margin-top: 170rpx;width: 530rpx;margin-left: 110rpx;">
                <u-alert type="warning" :show-icon="true" :title="gesture_title" :center="true"></u-alert>
        </view>
        <!-- decorative border -->
        <image class="background_image" src="https://img.yeting.wang/new_year/7.png"></image>
        <!-- modal shown after the red packet is claimed -->
        <u-modal :show="res_lingqushow" :confirmText="redPacketReceive.buttonContext"
                @confirm="resBut(redPacketReceive.buttonMethod)">
                <view class="slot-content">
                        {{redPacketReceive.message}}
                </view>
        </u-modal>
    </view>
    
    xiangji() {
        this.listener = this.ctx.onCameraFrame((frame) => {
            // throttle: only process a new frame after the previous one has finished
            if (this.flag == true) {
                this.flag = false
                console.log(frame.data instanceof ArrayBuffer, frame.width, frame.height)
                // convert the frame into [[r,g,b,a],[r,g,b,a],...] and let Python rebuild the image
                // upng could encode the image on the client, but it is extremely slow on real devices
                let data = new Uint8ClampedArray(frame.data);
                let list = []
                for (var i = 0; i < data.length; i += 4) {
                    list.push([data[i], data[i + 1], data[i + 2], data[i + 3]])
                }
                // send to the server
                uni.$u.http.post('/redPacket/infer', {
                        imgArray: list,
                        width: frame.width,
                        height: frame.height,
                        redPacketUserId: this.redPacketUserId,
                        redPacketId: this.redPacketId,
                }).then(data => {
                        console.log(data)
                        // check whether the pose was recognized
                        if (data.gestureStatus) {
                                // stop listening and show the result modal
                                this.listener.stop()
                                this.gesture_error = false
                                this.redPacketReceive = data
                                this.res_lingqushow = true
                        } else {
                                // not recognized: show a hint at the top and keep recognizing
                                this.flag = true
                                this.gesture_error = true
                                this.gesture_title = data.message
                                console.log("New Year greeting pose not recognized");
                        }
                }).catch(err => {
                        console.log("err:" + err)
                        setTimeout(() => {
                                this.flag = true
                        }, 3000);
                })
            }
        })
        this.listener.start({
                success: function(res) {
                        console.log("listener started");
                }
        });
    },
    
    • Backend
    // the code is a bit messy, but it works
    @Override
    public Result<?> infer(UserBo userBo, RedPacketInferVo redPacketInferVo) {
        long s = System.currentTimeMillis();
        // call the recognition service
        String post = HttpUtil.post(inferUrl + "/infer/array_buffer", JSONUtil.toJsonStr(redPacketInferVo));
        long e = System.currentTimeMillis();
        System.out.println("recognition took: " + (e - s) + " ms");
        // parse the result
        JSONObject jsonObject = JSONUtil.parseObj(post);
        if (Integer.valueOf(20000).equals(jsonObject.getInt("code"))) {
            // data may contain multiple people; only the first one is used
            JSONArray dataArray = jsonObject.getJSONArray("data");
            if (dataArray != null && dataArray.size() > 0) {
                // grab the shoulder, elbow and wrist keypoints
                JSONObject data = dataArray.getJSONObject(0);
                JSONObject left_shoulder = data.getJSONObject("left_shoulder");
                Double left_shoulder_x = left_shoulder.getDouble("x");
                Double left_shoulder_y = left_shoulder.getDouble("y");
                JSONObject right_shoulder = data.getJSONObject("right_shoulder");
                Double right_shoulder_x = right_shoulder.getDouble("x");
                Double right_shoulder_y = right_shoulder.getDouble("y");
                JSONObject left_elbow = data.getJSONObject("left_elbow");
                Double left_elbow_x = left_elbow.getDouble("x");
                Double left_elbow_y = left_elbow.getDouble("y");
                JSONObject right_elbow = data.getJSONObject("right_elbow");
                Double right_elbow_x = right_elbow.getDouble("x");
                Double right_elbow_y = right_elbow.getDouble("y");
                JSONObject left_wrist = data.getJSONObject("left_wrist");
                Double left_wrist_x = left_wrist.getDouble("x");
                Double left_wrist_y = left_wrist.getDouble("y");
                JSONObject right_wrist = data.getJSONObject("right_wrist");
                Double right_wrist_x = right_wrist.getDouble("x");
                Double right_wrist_y = right_wrist.getDouble("y");
                // check the pose
                if (Math.abs(left_shoulder_y - right_shoulder_y) < 20) {
                    if (Math.abs(left_shoulder_x - right_shoulder_x) > 70) {
                        if (Math.abs(left_wrist_x - right_wrist_x) < 70) {
                            if (Math.abs(left_wrist_y - right_wrist_y) < 20) {
                                if (right_elbow_y > right_shoulder_y || left_elbow_y > left_shoulder_y) {
                                    if (right_wrist_y < right_elbow_y || left_wrist_y < left_elbow_y) {
                                        if ((right_wrist_x > right_elbow_x && right_wrist_x > right_shoulder_x)
                                                || (left_wrist_x < left_elbow_x && left_wrist_x < left_shoulder_x)) {
                                            System.out.println("pose correct");
                                            // pose recognized: call the red packet claiming interface
                                            return receive(userBo, new RedPacketVo()
                                                    .setRedPacketId(redPacketInferVo.getRedPacketId())
                                                    .setUserId(redPacketInferVo.getRedPacketUserId())
                                            );
                                        } else {
                                            System.out.println("hands are not in the middle");
                                        }
                                    } else {
                                        System.out.println("wrist is not above the elbow");
                                    }
                                } else {
                                    System.out.println("elbow is not below the shoulder");
                                }
                            } else {
                                System.out.println("wrist heights differ too much");
                            }
                        } else {
                            System.out.println("wrists are too far apart");
                        }
                    } else {
                        System.out.println("shoulders are too close together");
                    }
                } else {
                    System.out.println("shoulder heights differ too much");
                }
            }
        }
        return Result.success(new RedPacketReceiveDto()
                .setRedPacketId(redPacketInferVo.getRedPacketId())
                .setGestureStatus(false)
                .setStatus(false)
                .setMessage("New Year greeting pose not recognized")
        );
    }
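The deeply nested ifs above are hard to follow, so here is the same decision logic flattened into a single predicate. This is a Python sketch for readability (not the author's code), using the same pixel thresholds (20 and 70) and keypoint names as the Java above; y grows downward in image coordinates.

```python
# Flattened version of the Java pose check above (thresholds in pixels).
# Each keypoint is a dict like {"x": ..., "y": ...}; y grows downward.
def is_greeting_pose(kp):
    ls, rs = kp["left_shoulder"], kp["right_shoulder"]
    le, re = kp["left_elbow"], kp["right_elbow"]
    lw, rw = kp["left_wrist"], kp["right_wrist"]
    return (
        abs(ls["y"] - rs["y"]) < 20                      # shoulders level
        and abs(ls["x"] - rs["x"]) > 70                  # shoulders wide enough apart
        and abs(lw["x"] - rw["x"]) < 70                  # wrists close together
        and abs(lw["y"] - rw["y"]) < 20                  # wrists level
        and (re["y"] > rs["y"] or le["y"] > ls["y"])     # an elbow below its shoulder
        and (rw["y"] < re["y"] or lw["y"] < le["y"])     # a wrist above its elbow
        # hands in the middle (left/right are mirrored by the front camera)
        and ((rw["x"] > re["x"] and rw["x"] > rs["x"])
             or (lw["x"] < le["x"] and lw["x"] < ls["x"]))
    )
```

Writing the conditions this way makes it easy to see what each rejection branch in the Java code is actually testing.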
    
    • Human keypoint recognition
    import json
    import time
    
    from django.http import HttpResponse
    
    # Create your views here.
    from django.views.decorators.csrf import csrf_exempt
    import det_keypoint_unite_infer2 as keypoint
    
    # recognize a base64-encoded image
    @csrf_exempt
    def base64(request):
        res = {}
        if request.method == 'GET':
            res['message'] = keypoint.main()
            return HttpResponse(json.dumps(res))
        else:
            json_str = request.body
            body = json.loads(json_str)
            img = body.get("img")
            data = keypoint.base64(img)
            res['message'] = 'success'
            res['code'] = 20000
            res['data'] = data
            return HttpResponse(json.dumps(res))
    
    # recognize an image sent as [[r,g,b,a],[r,g,b,a],[r,g,b,a]]
    @csrf_exempt
    def array_buffer(request):
        res = {}
        if request.method == 'GET':
            res['message'] = keypoint.main()
            return HttpResponse(json.dumps(res))
        else:
            print(time.time())
            json_str = request.body
            body = json.loads(json_str)
            img_array = body['imgArray']
            width = body['width']
            height = body['height']
            data = keypoint.array_buffer(img_array, width, height)
            print(time.time())
            res['message'] = 'success'
            res['code'] = 20000
            res['data'] = data
            return HttpResponse(json.dumps(res))
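To make the contract between the mini program and the `array_buffer` view concrete, here is a small helper that builds the JSON body the view expects. It is illustrative only: the helper name and the endpoint URL in the comment are assumptions, and the view simply ignores extra fields such as redPacketUserId.

```python
import json

# Hypothetical helper (not in the original source) that builds the payload
# consumed by the array_buffer view above.
def build_infer_payload(pixels, width, height):
    # pixels: flat list of [r, g, b, a] quadruples, row by row
    assert len(pixels) == width * height, "pixel count must match dimensions"
    return json.dumps({"imgArray": pixels, "width": width, "height": height})

# e.g. requests.post("http://127.0.0.1:8000/infer/array_buffer",
#                    data=build_infer_payload(pixels, w, h))
```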
    
    # math, time and numpy are used below; the detector classes come from PaddleDetection's deploy code
    import math
    import time
    import numpy as np

    # predict keypoints using the given detection results
    def predict_with_given_det(image, det_res, keypoint_detector,
                               keypoint_batch_size, det_threshold,
                               keypoint_threshold, run_benchmark):
        rec_images, records, det_rects = keypoint_detector.get_person_from_rect(
            image, det_res, det_threshold)
        keypoint_vector = []
        score_vector = []
        rect_vector = det_rects
        batch_loop_cnt = math.ceil(float(len(rec_images)) / keypoint_batch_size)
    
        for i in range(batch_loop_cnt):
            start_index = i * keypoint_batch_size
            end_index = min((i + 1) * keypoint_batch_size, len(rec_images))
            batch_images = rec_images[start_index:end_index]
            batch_records = np.array(records[start_index:end_index])
            if run_benchmark:
                # warmup
                keypoint_result = keypoint_detector.predict(
                    batch_images, keypoint_threshold, repeats=10, add_timer=False)
                # run benchmark
                keypoint_result = keypoint_detector.predict(
                    batch_images, keypoint_threshold, repeats=10, add_timer=True)
            else:
                keypoint_result = keypoint_detector.predict(batch_images,
                                                            keypoint_threshold)
            orgkeypoints, scores = translate_to_ori_images(keypoint_result,
                                                           batch_records)
            keypoint_vector.append(orgkeypoints)
            score_vector.append(scores)
    
        keypoint_res = {}
        keypoint_res['keypoint'] = [
            np.vstack(keypoint_vector).tolist(), np.vstack(score_vector).tolist()
        ] if len(keypoint_vector) > 0 else [[], []]
        keypoint_res['bbox'] = rect_vector
        return keypoint_res
    
    
    # top-down joint prediction
    def topdown_unite_predict(detector,
                              topdown_keypoint_detector,
                              image_list,
                              keypoint_batch_size=1,
                              save_res=False):
        det_timer = detector.get_timer()
        for i, img_file in enumerate(image_list):
            det_timer.preprocess_time_s.start()
            image, _ = img_file
            det_timer.preprocess_time_s.end()
            # detect pedestrians
            print("pedestrian detection start: {}".format(time.time()))
            results = detector.predict([image])

            print("pedestrian detection end: {}".format(time.time()))
            # skip the frame if no pedestrian was detected
            if results['boxes_num'] == 0:
                continue
            # predict keypoints
            print("keypoint detection start: {}".format(time.time()))
            keypoint_res = predict_with_given_det(
                image, results, topdown_keypoint_detector, keypoint_batch_size, 0.5,
                0.5, False)
            print("keypoint detection end: {}".format(time.time()))
            # draw_pose(img_file, keypoint_res)
            skeletons, scores = keypoint_res['keypoint']
            skeletons = np.array(skeletons)
            kpt_nums = 17
            if len(skeletons) > 0:
                kpt_nums = skeletons.shape[1]
            if kpt_nums == 17:  # plot coco keypoint
                EDGES = [(0, 1), (0, 2), (1, 3), (2, 4), (3, 5), (4, 6), (5, 7), (6, 8),
                         (7, 9), (8, 10), (5, 11), (6, 12), (11, 13), (12, 14),
                         (13, 15), (14, 16), (11, 12)]
            else:  # plot mpii keypoint
                EDGES = [(0, 1), (1, 2), (3, 4), (4, 5), (2, 6), (3, 6), (6, 7), (7, 8),
                         (8, 9), (10, 11), (11, 12), (13, 14), (14, 15), (8, 12),
                         (8, 13)]
            NUM_EDGES = len(EDGES)
            # (int(np.mean(skeletons[j][i, 0])), int(np.mean(skeletons[j][i, 1])))
            res = []
            for j in range(len(skeletons)):
                res.append({})
            skeleton_index = {
                0: 'nose',
                1: 'left_eye',
                2: 'right_eye',
                3: 'left_ear',
                4: 'right_ear',
                5: 'left_shoulder',
                6: 'right_shoulder',
                7: 'left_elbow',
                8: 'right_elbow',
                9: 'left_wrist',
                10: 'right_wrist',
                11: 'left_crotch',
                12: 'right_crotch',
                13: 'left_knee',
                14: 'right_knee',
                15: 'left_ankle',
                16: 'right_ankle'
            }
            # i is the keypoint index, j is the person index
            # (iterate over keypoints, not edges; NUM_EDGES only matches by coincidence for COCO)
            for i in range(kpt_nums):
                for j in range(len(skeletons)):
                    skeleton_res = res[j]
                    skeleton_res[skeleton_index[i]] = {
                        'x': skeletons[j][i, 0],
                        'y': skeletons[j][i, 1]
                    }
            return res
    
    # paths to the pedestrian detection and keypoint models
    det_model_dir = ''
    keypoint_model_dir = ''
    # inference device: GPU or CPU
    device = 'GPU'
    # pedestrian detection model
    # load the pedestrian detection config
    pred_config = PredictConfig(det_model_dir)
    # detector type: use DetectorPicoDet for the lightweight PicoDet model
    detector_func = 'Detector'
    if pred_config.arch == 'PicoDet':
        detector_func = 'DetectorPicoDet'
    # instantiate the detector
    detector = eval(detector_func)(pred_config,
                                   det_model_dir,
                                   device=device)
    # keypoint detection model
    # load the keypoint detection config
    pred_config = PredictConfig_KeyPoint(keypoint_model_dir)
    assert KEYPOINT_SUPPORT_MODELS[
               pred_config.
                   arch] == 'keypoint_topdown', 'Detection-Keypoint unite inference only supports topdown models.'
    # load the keypoint detection model
    topdown_keypoint_detector = KeyPoint_Detector(
        pred_config,
        keypoint_model_dir,
        device=device)
    
    # predict from a base64-encoded image
    def base64(base64_image):
        img_list = [decode_base64_image(base64_image, {})]
        # top-down joint prediction
        keypoint_res = topdown_unite_predict(detector, topdown_keypoint_detector, img_list)
        return keypoint_res

    # predict from an [[r,g,b,a],...] pixel array
    def array_buffer(img_array, width, height):
        img_list = [decode_array_buffer_image(img_array, width, height, {})]
        # top-down joint prediction
        keypoint_res = topdown_unite_predict(detector, topdown_keypoint_detector, img_list)
        return keypoint_res
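`decode_array_buffer_image` is called above but its body isn't shown. A plausible reconstruction (an assumption, not the original implementation): reshape the [[r,g,b,a],...] list from the mini program into an H x W x 3 uint8 array, dropping alpha and assuming BGR channel order since PaddleDetection's preprocessing is OpenCV-based.

```python
import numpy as np

# Hypothetical reconstruction of decode_array_buffer_image (not shown in the
# original source): turn the [[r,g,b,a], ...] list sent by the mini program
# into an H x W x 3 uint8 image plus its info dict, matching how the result
# is unpacked as `image, _ = img_file` in topdown_unite_predict.
def decode_array_buffer_image(img_array, width, height, im_info):
    rgba = np.asarray(img_array, dtype=np.uint8).reshape(height, width, 4)
    bgr = rgba[:, :, [2, 1, 0]]  # drop alpha, swap RGB -> BGR
    return bgr, im_info
```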
    

Summary

Technical points: the red packet money-splitting algorithm, preventing over-claiming and duplicate payouts when grabbing red packets, and recognizing the New Year greeting gesture

Takeaways: distributed locks, allocating red packet amounts, and handling ArrayBuffer data in the mini program
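For readers who skipped Parts 1 and 2, red packet splitting is commonly done with the "double average" method; the sketch below illustrates that method, assuming amounts are tracked in cents (fen). It is not necessarily the exact algorithm from the earlier articles.

```python
import random

# "Double average" red packet split (sketch): each draw is roughly uniform in
# [1, 2 * remaining / people_left], so every participant has the same expected
# amount; the last person takes whatever is left.
def split_red_packet(total_fen, count):
    assert total_fen >= count, "need at least 1 fen per packet"
    amounts = []
    remaining = total_fen
    for people_left in range(count, 1, -1):
        # cap the draw so everyone after us can still get at least 1 fen
        max_draw = min(2 * remaining // people_left, remaining - (people_left - 1))
        amount = random.randint(1, max_draw)
        amounts.append(amount)
        remaining -= amount
    amounts.append(remaining)  # last person gets the remainder
    return amounts
```

In production this runs under a distributed lock (or the amounts are pre-split and popped from a Redis list) so two grabs can never spend the same fen twice, which is the "preventing duplicate payouts" point above.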