Requirements Document: Automated Topic Creation on the Cloud


一、Description

  1. The topic-creation tool used in the self-hosted zone no longer fits how the cloud-hosted ELK stack works.

  2. Cloud PaaS services must be provisioned through their APIs.

  3. Alibaba Cloud's APIs require an AK/SK credential pair; Huawei Cloud works the same way.

  4. Creating a topic in the self-hosted zone involves only two steps: creating the Kafka topic and creating the Logstash configuration file.

  5. Kafka runs in the self-hosted (off-cloud) data center.




二、Requirements

  1. Workflow

    1. The user fills in the parameters in a Jenkins job, then clicks Build to run it.
    2. Three parameters are passed in: environment, log-volume parameter, and topic name.
    3. The automation task creates the topic in Kafka.
    4. The cloud API is called to generate the Logstash pipeline configuration.
    5. The index pattern for the topic is created in ES (this step is still open).
    6. Done.

Open question:

· If no logs have entered ELK yet, how can the index pattern be created automatically?
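One possible answer to the open question: Kibana's saved-objects API (Kibana 7.x) stores an index pattern as a saved object with a title glob, and unlike the Discover UI wizard it does not appear to require a matching live index. A minimal sketch, assuming a reachable Kibana endpoint; the URL, the `@timestamp` time field, and all names here are illustrative assumptions, not confirmed by this document:

```python
import json

def build_index_pattern_request(kibana_url: str, topic_name: str) -> dict:
    """Build (but do not send) the request that creates a Kibana index
    pattern for the given topic via the saved-objects API."""
    return {
        "method": "POST",
        "url": f"{kibana_url}/api/saved_objects/index-pattern/{topic_name}",
        # Kibana rejects API writes that lack the kbn-xsrf header
        "headers": {"kbn-xsrf": "true", "Content-Type": "application/json"},
        "body": json.dumps({
            "attributes": {
                # glob matching the topic's indices, e.g. <topic>-2024.01.01
                "title": f"{topic_name}-*",
                "timeFieldName": "@timestamp",
            }
        }),
    }

req = build_index_pattern_request("http://kibana.example:5601", "demo-topic")
```

The actual HTTP call (e.g. with `requests.post`) would then use these fields; building the payload separately keeps the step easy to dry-run from Jenkins.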

  2. User entry point

Implemented via Jenkins: the user fills in the job parameters and clicks Build, which triggers the topic creation.

There are three parameters:

environment (unit), log mode (log_mode), and topic name (topic_name)

Notes:

  1. unit is a choice parameter with only 2 values: ALI, VOL.
  2. log_mode describes the log volume and has only 2 values: small, huge. It maps to the Logstash pipeline to use.
  3. topic_name is the topic name, a string entered by the user.
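Validating these three parameters up front keeps bad input out of Kafka and Logstash. A minimal sketch: the allowed values come from the notes above, while the error messages and the topic-name pattern (Kafka's legal character set and 249-character limit) are assumptions added here.

```python
import re

ALLOWED_UNITS = {"ALI", "VOL"}
ALLOWED_LOG_MODES = {"small", "huge"}

def validate_params(unit: str, log_mode: str, topic_name: str) -> None:
    """Raise ValueError if any of the three Jenkins parameters is invalid."""
    if unit not in ALLOWED_UNITS:
        raise ValueError(f"unit must be one of {sorted(ALLOWED_UNITS)}, got {unit!r}")
    if log_mode not in ALLOWED_LOG_MODES:
        raise ValueError(f"log_mode must be one of {sorted(ALLOWED_LOG_MODES)}, got {log_mode!r}")
    # Kafka topic names allow only ASCII alphanumerics, '.', '_' and '-'
    if not re.fullmatch(r"[A-Za-z0-9._-]{1,249}", topic_name):
        raise ValueError(f"invalid topic name: {topic_name!r}")

validate_params("ALI", "small", "app-access-log")  # passes silently
```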

  3. Implementation

Python
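The Logstash input config that the automation must emit can be rendered from a template, mirroring the shell heredoc in the KS script in section three. The field values are copied from that script; treating the bootstrap servers and name suffix as per-environment inputs is an assumption.

```python
# Template mirroring the heredoc in the self-hosted KS script; {{ }} are
# literal braces in str.format, {topic} etc. are substitution slots.
LOGSTASH_INPUT_TEMPLATE = """\
input {{
   kafka {{
       codec => "plain"
       topics => ["{topic}"]
       bootstrap_servers => "{bootstrap_servers}"
       max_poll_interval_ms => "3000000"
       session_timeout_ms => "6000"
       heartbeat_interval_ms => "2000"
       auto_offset_reset => "latest"
       group_id => "{topic}-{suffix}elk"
       type => "{topic}"
     }}
}}
"""

def render_logstash_input(topic: str, bootstrap_servers: str, suffix: str) -> str:
    """Render the Logstash kafka-input config for one topic."""
    return LOGSTASH_INPUT_TEMPLATE.format(
        topic=topic, bootstrap_servers=bootstrap_servers, suffix=suffix)

conf = render_logstash_input(
    "demo-topic", "172.25.106.31:9092,172.25.106.30:9092", "ksa")
```

The rendered string would then be written to `<topic>-ksa.conf` (or the cloud API equivalent), exactly as the shell version does with `cat << EOF`.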




三、Reference Material

The scripts used to create topics in the self-hosted zone are as follows:

  1. KS environment

Shell script:

```bash
#!/usr/bin/env bash

CONF_PATH=/etc/ansible/ks/logstash.conf
for var in "$@"
do
    echo "$var"
done

for topic in "$@"
do
cat << EOF > ${CONF_PATH}/${topic}-ksa.conf
input {
   kafka  {
       codec => "plain"
       topics => ["${topic}"]
       bootstrap_servers => "172.25.106.31:9092,172.25.106.30:9092,172.25.106.29:9092,172.25.106.23:9092,172.25.106.22:9092"
       max_poll_interval_ms => "3000000"
       session_timeout_ms => "6000"
       heartbeat_interval_ms => "2000"
       auto_offset_reset => "latest"
       group_id => "${topic}-ksaelk"
       type => "${topic}"
     }
}
EOF
echo "created: " ${CONF_PATH}/${topic}-ksa.conf
done

for topic in "$@"
do
#ansible kslogstash -m copy -a "src=${CONF_PATH}/${topic}-ksa.conf dest=/etc/logstash/conf.d/${topic}-ksa.conf" -f 100
ansible -i /data/elktopic/hosts/ks_logstashhosts small -m copy -a "src=${CONF_PATH}/${topic}-ksa.conf dest=/etc/logstash/conf.d/${topic}-ksa.conf" -f 100
echo "synced: " ${CONF_PATH}/${topic}-ksa.conf "to" /etc/logstash/conf.d/${topic}-ksa.conf

# create topic
echo "creating topic ${topic}..."
#ansible ksshelkkafka1 -m shell -a "/opt/kafka/bin/kafka-topics.sh --zookeeper 172.25.106.29:2181 --create --replication-factor 2 --partitions 15 --topic ${topic}"
ansible -i /data/elktopic/hosts/ks_kafkahosts new_one -m shell -a "/opt/kafka/bin/kafka-topics.sh --zookeeper 172.25.106.29:2181 --create --replication-factor 2 --partitions 15 --topic ${topic}"
echo "topic ${topic} created"

done

echo "restarting logstash..."
#ansible kslogstash -m shell -a "supervisorctl restart ks_logstash" -f 100
ansible -i /data/elktopic/hosts/ks_logstashhosts small -m shell -a "supervisorctl restart ks_logstash" -f 100
echo "restarted logstash"
```

  2. WG environment

Shell script:

```bash
#!/usr/bin/env bash

CONF_PATH=/etc/ansible/wg/logstash.conf
for var in "$@"
do
    echo "$var"
done

for topic in "$@"
do
cat << EOF > ${CONF_PATH}/${topic}-wga.conf
input {
   kafka  {
       codec => "plain"
       topics => ["${topic}"]
       bootstrap_servers => "172.21.241.185:9092,172.21.241.186:9092,172.21.241.187:9092,172.21.241.188:9092,172.21.241.189:9092"
       max_poll_interval_ms => "3000000"
       session_timeout_ms => "6000"
       heartbeat_interval_ms => "2000"
       auto_offset_reset => "latest"
       group_id => "${topic}-wgaelk"
       type => "${topic}"
     }
}
EOF
echo "created: " ${CONF_PATH}/${topic}-wga.conf
done

for topic in "$@"
do
#ansible wglogstash -m copy -a "src=${CONF_PATH}/${topic}-wga.conf dest=/opt/elk/wg_logstash/logstash/config/conf.d/${topic}-wga.conf" -f 100
ansible -i /data/elktopic/hosts/wg_logstashhosts final -m copy -a "src=${CONF_PATH}/${topic}-wga.conf dest=/opt/elk/wg_logstash/logstash/config/conf.d/${topic}-wga.conf" -f 100
echo "synced: " ${CONF_PATH}/${topic}-wga.conf "to" /opt/elk/wg_logstash/logstash/config/conf.d/${topic}-wga.conf

# create topic
echo "creating topic ${topic}..."
#ansible wgelkkafka1 -m shell -a "/opt/kafka/bin/kafka-topics.sh --zookeeper 172.21.241.185:2181 --create --replication-factor 2 --partitions 15 --topic ${topic}"
ansible -i /data/elktopic/hosts/wg_kafkahosts new_one -m shell -a "/opt/kafka/bin/kafka-topics.sh --zookeeper 172.21.241.185:2181 --create --replication-factor 2 --partitions 15 --topic ${topic}"
echo "topic ${topic} created"

done
#ansible wglogstash -m shell -a "chown -R elk:elk /opt/elk/wg_logstash/logstash/config/conf.d/"

#/data/projects/xops/restart_logstash.sh
echo "restarting logstash..."
#ansible wglogstash -m shell -a "supervisorctl restart wg_logstash" -f 100
ansible -i /data/elktopic/hosts/wg_logstashhosts final -m shell -a "supervisorctl restart wg_logstash" -f 100
echo "restarted logstash"
```

  3. Script logic

The self-hosted topic-creation scripts follow this logic:

  • Generate the Logstash input config file

  • Upload the input file to the Logstash instances

  • Create the topic in Kafka

  • Restart the Logstash service

  • Done
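The steps above can be sketched as a single Python flow that the new automation would drive. The ansible inventory paths and `kafka-topics.sh` arguments are copied from the KS script; collecting them as a command list (step 1, writing the config file, happens locally before these run) is an assumption for illustration.

```python
def build_commands(topic: str) -> list:
    """Return the shell commands for steps 2-4 of the KS flow, in order."""
    conf = f"/etc/ansible/ks/logstash.conf/{topic}-ksa.conf"
    dest = f"/etc/logstash/conf.d/{topic}-ksa.conf"
    return [
        # step 2: upload the input file to the logstash hosts
        f'ansible -i /data/elktopic/hosts/ks_logstashhosts small -m copy '
        f'-a "src={conf} dest={dest}" -f 100',
        # step 3: create the topic in Kafka
        f'ansible -i /data/elktopic/hosts/ks_kafkahosts new_one -m shell '
        f'-a "/opt/kafka/bin/kafka-topics.sh --zookeeper 172.25.106.29:2181 '
        f'--create --replication-factor 2 --partitions 15 --topic {topic}"',
        # step 4: restart logstash
        'ansible -i /data/elktopic/hosts/ks_logstashhosts small -m shell '
        '-a "supervisorctl restart ks_logstash" -f 100',
    ]

cmds = build_commands("demo-topic")
```

Each command could then be run with `subprocess.run(cmd, shell=True, check=True)`, or simply printed for a dry run from the Jenkins console.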




四、Notes

Deploy the open-source ES exporter on the 3 nodes.