Building a Frontend Logging and Monitoring Platform from Scratch

Background

I'm a frontend developer who started working in 2015. I've found that most companies pay little attention to frontend instrumentation; even the teams that do it reasonably well only report product metrics or frontend exceptions to the backend through an API, which makes it hard for frontend engineers to look up their own business logs. So, after a period of study, I set out to build my own frontend log platform, and wrote this article so I can revisit and consolidate what I learned.

Prerequisites

  • Start containers with docker-compose: docker-compose up -d
  • Stop containers with docker-compose: docker-compose down
  • List containers with docker-compose: docker-compose ps
  • Enter a container: docker container exec -it <container-name> bash
  • View a container's logs: docker container logs <container-name>
  • Create a Kafka consumer: kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic quickstart-events --from-beginning
  • Create a Kafka producer: kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092

Data Flow

graph LR
Frontend data --> Node.js service --> local log file --> Filebeat --> Kafka --> Logstash --> ES --> MySQL aggregation --> dashboard
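To make the flow concrete, here is the shape of one log record travelling through the pipeline. It mirrors what the Node.js service below actually writes; in a real platform the payload would also carry error stacks, performance timings, user context, and so on:

// One log record, as produced by the fileLogs() helper later in this article.
// Each record is appended as a single JSON line to <year>/<month>/<day>/<hour>/<minute>.log
const record = {
  date: new Date(),        // when the event was recorded
  message: 'test message', // the reported payload (mocked in this demo)
}
console.log(JSON.stringify(record))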

Project Setup

Create an empty folder

mkdir elk && cd elk

Create docker-compose.yml with the following content

version: "2.2"

services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es02\n"\
          "    dns:\n"\
          "      - es02\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es03\n"\
          "    dns:\n"\
          "      - es03\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt http://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" http://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata01:/usr/share/elasticsearch/data
      - ./config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es02,es03
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt http://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  es02:
    depends_on:
      - es01
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata02:/usr/share/elasticsearch/data
      - ./config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    environment:
      - node.name=es02
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es02/es02.key
      - xpack.security.http.ssl.certificate=certs/es02/es02.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es02/es02.key
      - xpack.security.transport.ssl.certificate=certs/es02/es02.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt http://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  es03:
    depends_on:
      - es02
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata03:/usr/share/elasticsearch/data
      - ./config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    environment:
      - node.name=es03
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es02
      - bootstrap.memory_lock=true
      - xpack.security.enabled=false
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es03/es03.key
      - xpack.security.http.ssl.certificate=certs/es03/es03.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es03/es03.key
      - xpack.security.transport.ssl.certificate=certs/es03/es03.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}

    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt http://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=http://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  head:
    image: mobz/elasticsearch-head:5
    ports:
      - 9100:9100

  logstash:
    image: logstash:${ELASTIC_STACK_VERSION}
    ports:
      - 5300:5000
    volumes: 
      - ./logstash/pipeline/:/usr/share/logstash/pipeline
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml

  zookeeper:
    image: 'bitnami/zookeeper:latest'
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes

  kafka:
    image: 'bitnami/kafka:latest'
    ports:
      - '9093:9093'
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT
      - KAFKA_CFG_LISTENERS=CLIENT://:9092,EXTERNAL://:9093
      - KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka:9092,EXTERNAL://localhost:9093
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=CLIENT
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper

volumes:
  certs:
    driver: local
  esdata01:
    driver: local
  esdata02:
    driver: local
  esdata03:
    driver: local
  kibanadata:
    driver: local

Create the .env environment variable file

# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=abcd1234

# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=abcd1234

# Version of Elastic products
STACK_VERSION=8.0.1

# Set the cluster name
CLUSTER_NAME=docker-cluster

# Set to 'basic' or 'trial' to automatically start the 30-day trial
LICENSE=basic
#LICENSE=trial

# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200
#ES_PORT=127.0.0.1:9200

# Port to expose Kibana to the host
KIBANA_PORT=5601
#KIBANA_PORT=80

# Increase or decrease based on the available host memory (in bytes)
MEM_LIMIT=1073741824

# Project namespace (defaults to the current folder name if not set)
#COMPOSE_PROJECT_NAME=myproject

ELASTIC_STACK_VERSION=8.4.3

Create the Elasticsearch config file

Its relative path is ./config/elasticsearch.yml. The content below enables CORS so the elasticsearch-head UI can connect:

cluster.name: "docker-cluster"
network.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"

Create the Logstash config files

  • The config file
mkdir -p logstash/config && cd logstash/config

Create a logstash.yml file with the following content

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://es01:9200" ]
xpack.monitoring.enabled: true
  • Go back to the logstash directory under the elk root
  • The Logstash pipeline file
mkdir -p pipeline && cd pipeline

Create a logstash.conf file with the following content; it connects to Kafka and writes the Kafka messages to Elasticsearch

input {
    kafka {
        id => "my_plugin_id"
        bootstrap_servers =>["kafka:9092"]
        topics => ["my-topic"]
        group_id => "filebeat"
        auto_offset_reset => "earliest"
        type => "pengclikafka"
        ssl_endpoint_identification_algorithm => ""
    }
}

output {
    elasticsearch {
        hosts => ["es01:9200"]
        index => "logstash-system-localhost-%{+YYYY.MM.dd}"
    }
}

Start docker-compose

Run the following in the directory containing docker-compose.yml

docker-compose up -d

Check the containers

Wait a couple of minutes, then run

docker-compose ps

If the output looks like the following, the containers are running normally. (Note: the es nodes report unhealthy because we set xpack.security.enabled=false while the healthcheck still greps for the "missing authentication credentials" response, which only appears when security is enabled; the nodes themselves are fine.)

elk-es01-1          "/bin/tini -- /usr/l…"   es01                running (unhealthy)   0.0.0.0:9200->9200/tcp, 9300/tcp
elk-es02-1          "/bin/tini -- /usr/l…"   es02                running (unhealthy)   9200/tcp, 9300/tcp
elk-es03-1          "/bin/tini -- /usr/l…"   es03                running (unhealthy)   9200/tcp, 9300/tcp
elk-head-1          "/bin/sh -c 'grunt s…"   head                running               0.0.0.0:9100->9100/tcp
elk-kafka-1         "/opt/bitnami/script…"   kafka               running               9092/tcp, 0.0.0.0:9093->9093/tcp
elk-kibana-1        "/bin/tini -- /usr/l…"   kibana              running (healthy)     0.0.0.0:5601->5601/tcp
elk-logstash-1      "/usr/local/bin/dock…"   logstash            running
elk-setup-1         "/bin/tini -- /usr/l…"   setup               running (healthy)     9200/tcp, 9300/tcp
elk-zookeeper-1     "/opt/bitnami/script…"   zookeeper           running               2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, 8080/tcp

If a container fails

If a container fails to start, inspect its logs with the command below and search the error message online.

docker container logs <container-id>

Access elasticsearch-head

Open http://localhost:9100/ and connect elasticsearch-head to the ES cluster.

(Screenshot: elasticsearch-head connected to the cluster.)

Pipeline Walkthrough

Node.js service

The frontend calls a Node.js API, which writes the reported data to a local file.

  • Files are stored in the pattern /year/month/day/hour/minute.log
const express = require('express')
const fs = require('fs')
const path = require('path')
const os = require('os')
const app = express()

// Break the current time into year/month/day/hour/minute strings
const getDate = () => {
  const date = new Date();
  const year = date.getFullYear().toString()
  const month = (date.getMonth() + 1).toString();
  const day = date.getDate().toString();
  const hour = date.getHours().toString();
  const minutes = date.getMinutes().toString();
  return {
    date, year, month, day, hour, minutes
  }
}

// Build the log directory path (relative to the cwd) and the minute-based file name
const getAbsolutePath = () => {
  const { year, month, day, hour, minutes } = getDate()
  const absolutePath = path.join(year, month, day, hour)
  return [absolutePath, minutes]
}

// Create the directory (and any missing parents) if it does not exist
const checkAndMdkirPath = (dirpath, mode = 0o777) => {
  try {
    if (!fs.existsSync(dirpath)) {
      fs.mkdirSync(dirpath, { recursive: true, mode })
    }
    return true
  } catch (err) {
    console.error('Failed to create log directory:', err)
    return false
  }
}

// Build a mocked log record
const getLogs = () => {
  const date = new Date();
  const message = 'test message'
  return JSON.stringify({ date, message })
}

// Append one record to the current minute's log file
const fileLogs = () => {
  const [absolutePath, filepath] = getAbsolutePath()
  const mkdirsuccess = checkAndMdkirPath(absolutePath)
  if (!mkdirsuccess) return
  const logs = getLogs()
  fs.appendFile(`${absolutePath}/${filepath}.log`, logs + os.EOL, (err) => {
    if (err) throw err;
    console.log('The file has been saved!');
  });
  return logs
}

// Visiting localhost:3000 writes a record to disk and returns it to the browser
app.get('/', function (req, res) {
  const logs = fileLogs()
  res.send(logs)
})

// Listen on port 3000
app.listen(3000)

Start the Node.js service with node ./index.js, then open localhost:3000; the browser shows the response.

(Screenshot: the browser response.) A year/month/day/hour directory tree now appears under the project, and the file is named after the current minute, e.g. 27.log. (Screenshot: the generated directory tree.)
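The sample server above only exposes a GET route for testing, but a real frontend reporter would POST its payload. Here is a minimal browser-side sketch; the /log endpoint and the payload fields are hypothetical, not part of the server above:

// Report a frontend event to the Node.js log service.
// navigator.sendBeacon survives page unloads; otherwise fall back to fetch.
function reportLog(payload) {
  const body = JSON.stringify({ date: new Date(), ...payload })
  if (navigator.sendBeacon) {
    navigator.sendBeacon('http://localhost:3000/log', body)
  } else {
    fetch('http://localhost:3000/log', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body,
      keepalive: true, // let the request outlive the page
    })
  }
}

// Example: report uncaught errors
window.addEventListener('error', (e) => {
  reportLog({ type: 'js-error', message: e.message })
})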

filebeat

Filebeat watches the files written by the Node.js service in the previous step and picks up their content. Why not use Logstash here? Filebeat is more lightweight than Logstash and uses less memory; as long as no data cleansing is needed (cleansing is where Logstash has the edge), Filebeat is the better choice.

# Filebeat download for macOS; for other versions see the link above
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.4.3-darwin-x86_64.tar.gz
tar xzvf filebeat-8.4.3-darwin-x86_64.tar.gz

graph LR
Local log files --> Filebeat reads the logs

Extract the archive, cd into the resulting folder, and edit filebeat.yml to the following, so Filebeat reads the log files under the Node.js project and prints them to the console.

filebeat.inputs:
- type: filestream
  paths:
    - /absolute/path/to/the/nodejs/project/**/*.log
output.console:
  pretty: true

Run the command

./filebeat -e -c filebeat.yml

The output looks like this:

(Screenshot: Filebeat console output.) The message field contains our mocked user log. Now press Ctrl+C to stop the Filebeat process and run the command again: the console prints nothing, because Filebeat has already consumed the existing log data. To replay the earlier content, delete Filebeat's registry with rm -rf data/registry/filebeat. The registry data looks like this:

{"op":"set","id":1}
{"k":"filestream::.global::native::19630011-16777223","v":{"cursor":null,"meta":{"source":"/Users/xxx/tools/fe-log-server/2022/10/14/17/27.log","identifier_name":"native"},"ttl":0,"updated":[281470681743360,18446744011573954816]}}
{"op":"set","id":2}
{"k":"filestream::.global::native::19630011-16777223","v":{"updated":[2061957913080,1665972877],"cursor":null,"meta":{"source":"/Users/xxx/tools/fe-log-server/2022/10/14/17/27.log","identifier_name":"native"},"ttl":1800000000000}}
{"op":"set","id":3}
{"k":"filestream::.global::native::19630011-16777223","v":{"updated":[2061958151080,1665972877],"cursor":{"offset":61},"meta":{"identifier_name":"native","source":"/Users/xxx/tools/fe-log-server/2022/10/14/17/27.log"},"ttl":1800000000000}}

Here the offset field records how far into the file Filebeat has consumed; you can also edit this field by hand and restart Filebeat to observe the effect.

kafka

Consider what happens when the log volume becomes very large: writing logs straight into ES can easily cause backlogs and data loss. To solve this, we introduce a message broker, Kafka. If you started docker-compose as described at the top, Kafka is already running, so let's test it.

  • First, enter the Kafka container
  1. Find the Kafka container name with docker-compose ps; suppose it is elk-kafka-1
  2. Run the following command to enter the container
docker container exec -it elk-kafka-1 bash
  • Create a producer
kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092
  • Create a consumer
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic quickstart-events --from-beginning

You can then type anything into the producer terminal, and the Kafka consumer will print the corresponding messages.

(Screenshots: the producer and consumer terminals.)

  • Create a topic
kafka-topics.sh --bootstrap-server localhost:9092 --create --topic my-topic --partitions 1 \
  --replication-factor 1 --config max.message.bytes=64000 --config flush.messages=1
  • List the topics
kafka-topics.sh --list --bootstrap-server localhost:9092

(Screenshot: the topic list.) For other operations, see 真的,Kafka 入门一篇文章就够了.
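The console scripts above are all this walkthrough needs, but if you later want a Node.js process to talk to Kafka directly, a minimal sketch using the kafkajs package could look like the following (an assumption on my part, since this article's pipeline ships logs through Filebeat instead; localhost:9093 is the EXTERNAL listener exposed in docker-compose):

// npm i kafkajs
const { Kafka } = require('kafkajs')

const kafka = new Kafka({ clientId: 'elk-demo', brokers: ['localhost:9093'] })

async function main() {
  // Produce one message to the quickstart topic
  const producer = kafka.producer()
  await producer.connect()
  await producer.send({
    topic: 'quickstart-events',
    messages: [{ value: JSON.stringify({ message: 'hello from kafkajs' }) }],
  })
  await producer.disconnect()

  // Consume it back from the beginning of the topic
  const consumer = kafka.consumer({ groupId: 'demo-group' })
  await consumer.connect()
  await consumer.subscribe({ topic: 'quickstart-events', fromBeginning: true })
  await consumer.run({
    eachMessage: async ({ message }) => {
      console.log('received:', message.value.toString())
    },
  })
}

main().catch(console.error)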

Filebeat ships logs to Kafka

  • Delete Filebeat's offset registry
rm -rf data/registry/filebeat
  • Change Filebeat's filebeat.yml to the following
filebeat.inputs:
- type: filestream
  paths: 
    - '/Users/pengcli/tools/fe-log-server/**/*.log'
output.kafka:
  # initial brokers for reading cluster metadata
  hosts: ["localhost:9093"]

  # message topic selection + partitioning
  topic: 'my-topic'

  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
  • Restart Filebeat
./filebeat -e -c filebeat.yml
  • Enter the Kafka container and start a consumer
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --from-beginning
  • Visit localhost:3000 to hit the Node.js service. Node.js first writes the log record to a local file; Filebeat then picks up the file change (in production you could set Filebeat's input to containers) and ships the data to Kafka. The consumer prints the log as follows. (Screenshot: consumer output.)
  • Data flow
graph LR
Frontend data --> Node.js service --> local log file --> Filebeat --> Kafka

Logstash reads the Kafka data and writes it to ES

Logstash is used much like Filebeat, except it adds a filter stage in the middle that can clean and transform the data. This project doesn't use any filtering yet, so Filebeat could actually replace Logstash here; I chose Logstash to get some extra practice. It consumes the Kafka data and finally writes it into ES.

  • Enter the Logstash container (look up the container name first, then exec in)
docker container exec -it elk-logstash-1 bash
  • Consume the Kafka data
logstash -f /usr/share/logstash/pipeline/logstash.conf

The file /usr/share/logstash/pipeline/logstash.conf is mounted through the volumes entry of the logstash service in docker-compose:

  logstash:
    ...
    volumes: 
      - ./logstash/pipeline/:/usr/share/logstash/pipeline
      - xxx

We already created this pipeline file in the repository root; its path is ./logstash/pipeline/logstash.conf, with the following content:

input {
    kafka {
        id => "my_plugin_id"
        bootstrap_servers =>["kafka:9092"]
        topics => ["my-topic"]
        group_id => "filebeat"
        auto_offset_reset => "earliest"
        type => "pengclikafka"
        ssl_endpoint_identification_algorithm => ""
    }
}

output {
    elasticsearch {
        hosts => ["es01:9200"]
        index => "logstash-system-localhost-%{+YYYY.MM.dd}"
    }
}

In short: read the Kafka topic my-topic from the beginning and write the records into ES under the index logstash-system-localhost-%{+YYYY.MM.dd}, where %{+YYYY.MM.dd} is a Logstash date expression (so the actual index name becomes e.g. logstash-system-localhost-2022.10.17).

Note: when the services are started with docker-compose, bootstrap_servers is kafka:9092; the name kafka is the service name we defined in the docker-compose file.

  • If you hit the error Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting., delete the .lock file with the command below
rm -r ./data/.lock

Displaying the data in Kibana

OK, so far we have successfully written data through the Node.js service into the ES cluster. Now we need to configure a Kibana data view; open http://localhost:5601/app/management/kibana/dataViews

(Screenshot: the Data Views page.) Click the create data view button in the top-right corner to open the editor.

(Screenshot: the data view editor.) Once that's done, you can browse the data in Kibana Discover.

(Screenshot: Discover showing the log documents.)

Querying ES data from Node.js

So far we can write data into the ES cluster and display it, which already amounts to a working frontend log platform. But if we want a frontend monitoring platform that shows performance metrics, error tracking, and so on, we also need to build a frontend dashboard, which means Node.js has to read data back from the ES cluster. Here's how:

const { Client } = require('@elastic/elasticsearch')

const client = new Client({
  node: 'http://localhost:9200',
})

// Express handler: return every document in one day's index
const essearch = async (req, res, next) => {
  try {
    const result = await client.search({
      index: 'logstash-system-localhost-2022.10.13',
      body: {
        query: {
          match_all: {},
        },
      },
    })
    res.json(result)
  } catch (err) {
    res.json(err)
  }
}

If you want to verify that the query part is correct, you can run it in the Kibana Dev Tools console at http://localhost:5601/app/dev_tools#/console

(Screenshot: the Dev Tools console.) This article won't go further into Kibana query syntax; see Elasticsearch Query DSL查询入门. You could also write a timer that fetches data from ES every minute and persists it into MySQL. I won't go into the details here, but a sketch follows.
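A minimal sketch of that timer idea, assuming the Logstash indices created above and leaving the MySQL write as a stub:

const { Client } = require('@elastic/elasticsearch')

const client = new Client({ node: 'http://localhost:9200' })

// Count the documents ingested during the last minute across all
// logstash-system-localhost-* indices (the index pattern used above)
async function aggregateLastMinute() {
  const result = await client.search({
    index: 'logstash-system-localhost-*',
    body: {
      size: 0, // we only need the hit count, not the documents
      query: {
        range: { '@timestamp': { gte: 'now-1m' } },
      },
    },
  })
  const total = result.hits.total
  const count = typeof total === 'number' ? total : total.value
  console.log('log records in the last minute:', count)
  // TODO: persist `count` into MySQL here (e.g. with the mysql2 package)
}

// Run once a minute
setInterval(() => aggregateLastMinute().catch(console.error), 60 * 1000)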

Wrapping up

Those are the detailed steps for building a frontend log platform, or a frontend monitoring platform. In a production environment I actually expect it to be simpler: package tools like the ELK stack into k8s (ready-made setups exist, so it's mostly a matter of writing some config files), have Node.js write logs into the container's log files, and the rest of the pipeline stays the same.

References

陈辰's book: 从零开始搭建前端监控平台 (Building a Frontend Monitoring Platform from Scratch)