Mini Program Event Tracking and Reporting Scheme

This scheme combines automatic event tracking with manual event tracking:

    • When the mini program starts, the Sensors Data SDK automatically collects events and reports them to the backend API
    • For other custom events, user behavior is tracked manually via the track() method, and the SDK reports the event data to the backend API

Overall, the pipeline has three parts: the front end collects tracking events and reports them; the server receives the reported event logs, parses them, and pushes them to Kafka; other business applications consume the Kafka messages and store them in the database.

Main flow:

Front-end integration with the Sensors Data SDK

manual.sensorsdata.cn/sa/latest/t…

Sensors Data format documentation

manual.sensorsdata.cn/sa/latest/t…

Sensors Data format example


[
    {
        "distinct_id":"1694073858838-7820301-0dfbf1ac243d86-11754052", //相当于用户ID
        "lib":{
            "$lib":"BytedanceMini",
            "$lib_method":"code",
            "$lib_version":"0.12.0"
        },
        "properties":{
            "$lib":"BytedanceMini",
            "$lib_version":"0.12.0",
            "$timezone_offset":-480,
            "$network_type":"WIFI",
            "$manufacturer":"devtools",
            "$model":"iPhone 12",
            "$brand":"DEVTOOLS",
            "$screen_width":390,
            "$screen_height":844,
            "$os":"devtools",
            "$os_version":"14",
            "$mp_client_app_version":"6.6.3",
            "$mp_client_basic_library_version":"2.76.0",//小程序基础库
            "tenantId":"0", //租户ID
            "deviceType":"tt-mp", //小程序类型
            "$latest_scene":"byte-990001",
            "name":"call-up", //自定义事件类型
            "$is_first_day":true
        },
        "anonymous_id":"1694073858838-7820301-0dfbf1ac243d86-11754052",
        "type":"track",
        "event":"click", //事件类型
        "_track_id":672541593,
        "time":1694077501593,
        "_nocache":"6127186213991",
        "_flush_time":1694077503002
    }
]
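
For later processing on the backend, the key fields of this payload can be mapped onto a Java class. The following is a minimal illustrative sketch using Jackson and Lombok; the class name SensorsTrackEvent and the selection of fields are assumptions for this post, not part of the Sensors SDK.

import java.util.Map;

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;

import lombok.Data;

@Data
@JsonIgnoreProperties(ignoreUnknown = true)
public class SensorsTrackEvent {

    @JsonProperty("distinct_id")
    private String distinctId;       // equivalent to a user ID

    @JsonProperty("anonymous_id")
    private String anonymousId;

    private String type;             // e.g. "track"

    private String event;            // event type, e.g. "click"

    private Long time;               // event timestamp in milliseconds

    @JsonProperty("_track_id")
    private Long trackId;

    private Map<String, Object> properties;  // device, network and custom properties
}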

The data reported from the front end is URL-encoded and Base64-encoded. After receiving it, the backend decodes it to recover the original data format, sends it to the Kafka message queue via Spring Cloud Stream, and finally the business listener consumes the messages and writes them to the relevant databases.
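
To make that concrete, the sketch below reproduces the encoding in the forward direction (JSON → Base64 → URL-encode → a data_list= form field), which can be handy for testing the receiving endpoint locally. The TrackPayloadCodec class and buildRequestBody method are hypothetical helpers, not part of the Sensors SDK.

import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class TrackPayloadCodec {

    // Encode a JSON event array the same way the consumer below expects to decode it:
    // JSON -> Base64 -> URL-encode -> "data_list=" form field
    public static String buildRequestBody(String eventJsonArray) throws Exception {
        String base64 = Base64.getEncoder()
                .encodeToString(eventJsonArray.getBytes(StandardCharsets.UTF_8));
        return "data_list=" + URLEncoder.encode(base64, "UTF-8");
    }
}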

Processing logic of the backend endpoint that receives the logs:

    @Autowired
    private StreamKafkaProducer streamKafkaProducer;

    @PostMapping("/sensor-auto-track/up")
    public Result getAutoTrackLog(@RequestBody String data) {

        // Send to the Kafka message queue via Spring Cloud Stream
        EventMessage eventMessage = EventMessage.builder()
            ..
            // fill in the other properties
            .content(data)
            .build();

        streamKafkaProducer.sendStreamMsg(eventMessage, "trackLog-out-0");
        return Result.ok();
    }
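
The StreamKafkaProducer injected above is not shown in this post. Below is a minimal sketch assuming it is a thin wrapper around Spring Cloud Stream's StreamBridge, whose send(bindingName, payload) routes the message to the output binding (here "trackLog-out-0") configured further down; the internals are an assumption.

import org.springframework.cloud.stream.function.StreamBridge;
import org.springframework.stereotype.Component;

@Component
public class StreamKafkaProducer {

    private final StreamBridge streamBridge;

    public StreamKafkaProducer(StreamBridge streamBridge) {
        this.streamBridge = streamBridge;
    }

    // Send the event message to the given output binding, e.g. "trackLog-out-0";
    // Spring Cloud Stream routes it to the Kafka topic configured for that binding
    public void sendStreamMsg(EventMessage eventMessage, String bindingName) {
        streamBridge.send(bindingName, eventMessage);
    }
}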

Consuming the message:

import java.io.IOException;
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.function.Consumer;

import org.apache.commons.codec.binary.Base64;
import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Component;

import lombok.extern.slf4j.Slf4j;

@Component
@Slf4j
public class KafkaLogStreamConsumer {

    @Bean
    Consumer<EventMessage> trackLog() { // the method name must match the custom binding name used when producing the message
        log.info("Initializing subscription");
        return eventMessage -> {
            log.info("Consumed message via stream => {}", eventMessage);

            String data = eventMessage.getContent();
            String dataList = "";

            try {
                // The reported body is URL-encoded; decode it first
                dataList = URLDecoder.decode(data, "utf-8");
            } catch (IOException e) {
                log.error("Failed to URL-decode the reported data", e);
            }

            log.info(">>> Raw log: " + dataList.replace("data_list=", ""));

            // Strip the "data_list=" form field prefix, then Base64-decode to recover the original JSON
            String based = dataList.replace("data_list=", "");
            byte[] base64Data = Base64.decodeBase64(based);
            String str = new String(base64Data, StandardCharsets.UTF_8);
            log.info(">>> Decoded log: " + str);

            // TODO write to the database

        };
    }
}
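
The persistence step above is left as a TODO. A minimal sketch of one way to fill it in, under assumptions: the decoded JSON array is parsed into the SensorsTrackEvent class sketched earlier and each event is inserted with Spring's JdbcTemplate; the track_event table and its columns are hypothetical.

import java.util.List;

import org.springframework.jdbc.core.JdbcTemplate;

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

public class TrackLogWriter {

    private final JdbcTemplate jdbcTemplate;
    private final ObjectMapper objectMapper = new ObjectMapper();

    public TrackLogWriter(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Parse the decoded JSON array ("str" in the consumer above) and insert one row per event
    public void save(String decodedJson) throws Exception {
        List<SensorsTrackEvent> events = objectMapper.readValue(
                decodedJson, new TypeReference<List<SensorsTrackEvent>>() {});

        for (SensorsTrackEvent event : events) {
            jdbcTemplate.update(
                    "INSERT INTO track_event (distinct_id, event, event_time, properties) VALUES (?, ?, ?, ?)",
                    event.getDistinctId(),
                    event.getEvent(),
                    event.getTime(),
                    objectMapper.writeValueAsString(event.getProperties()));
        }
    }
}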

Spring Cloud Stream for Kafka configuration

spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092
      bindings:
        trackLog-out-0:
          destination: track-log-topic
          contentType: application/json
          group: track_group
          binder: kafka
        trackLog-in-0:
          destination: track-log-topic
          contentType: application/json
          group: track_group
          binder: kafka