Kafka + Nginx log collection (single node)


1. Configure the Java environment

Upload the archive and extract it (the target directory must exist first):
mkdir -p /usr/local/java
tar -zxvf jdk-8u201-linux-x64.tar.gz -C /usr/local/java
Configure the environment variables (the tarball unpacks into a jdk1.8.0_201 subdirectory, so JAVA_HOME points there rather than at /usr/local/java itself):
vi /etc/profile

export JAVA_HOME=/usr/local/java/jdk1.8.0_201
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib:$CLASSPATH
export JAVA_PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin
export PATH=$PATH:${JAVA_PATH}
Apply the changes:
source /etc/profile
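The exports above can be sanity-checked without re-logging in. A minimal sketch (assuming the JDK unpacked into jdk1.8.0_201, per the tarball above) that replays them in the current shell and counts how many Java bin directories landed on PATH:

```shell
# Replay the profile exports and verify PATH composition.
# The JAVA_HOME path is an assumption based on the jdk-8u201 tarball layout.
JAVA_HOME=/usr/local/java/jdk1.8.0_201
JRE_HOME=${JAVA_HOME}/jre
JAVA_PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin
PATH=$PATH:${JAVA_PATH}
# Split PATH on ':' and count entries under JAVA_HOME (expect 2: bin and jre/bin)
hits=$(echo "$PATH" | tr ':' '\n' | grep -c "^${JAVA_HOME}")
echo "java dirs on PATH: $hits"   # prints "java dirs on PATH: 2"
```

After a real `source /etc/profile`, `java -version` should also resolve from any directory.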

2. Install ZooKeeper

Upload the archive.

Extract it:

tar -zxvf zookeeper-3.4.14.tar.gz -C /opt/baixw/servers/

Create the data and log directories, then rename the sample config:

# Create the ZooKeeper data directory
mkdir -p /opt/baixw/servers/zookeeper-3.4.14/data
# Create the ZooKeeper log directory
mkdir -p /opt/baixw/servers/zookeeper-3.4.14/data/logs
# Go to the ZooKeeper config directory
cd /opt/baixw/servers/zookeeper-3.4.14/conf
# Rename the sample config
mv zoo_sample.cfg zoo.cfg

Edit the configuration file:

# Edit zoo.cfg
vim zoo.cfg

# Point dataDir at the directory created above
dataDir=/opt/baixw/servers/zookeeper-3.4.14/data

Configure the environment variables (ZOO_LOG_DIR points at the log directory created above):

vi /etc/profile

# ZOOKEEPER
export ZOOKEEPER_HOME=/opt/baixw/servers/zookeeper-3.4.14
export PATH=$PATH:${ZOOKEEPER_HOME}/bin
export ZOO_LOG_DIR=/opt/baixw/servers/zookeeper-3.4.14/data/logs

Apply the changes:

source /etc/profile

Start the server and verify:

zkServer.sh status   # before the start this should report that it is not running
zkServer.sh start
zkServer.sh status   # a healthy single node reports "Mode: standalone"
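If the status output is ambiguous, a quick port probe tells whether anything is listening on ZooKeeper's default client port 2181. A sketch using bash's built-in `/dev/tcp` (no extra tools assumed; run it with bash, not sh):

```shell
# Probe ZooKeeper's default client port. /dev/tcp is a bash feature;
# the probe falls back cleanly when nothing is listening.
if (exec 3<>/dev/tcp/localhost/2181) 2>/dev/null; then
  zk_state="listening"
else
  zk_state="not listening"
fi
echo "port 2181: $zk_state"
```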

3. Install Kafka

Upload the archive and extract it:

tar -zxvf kafka_2.12-1.0.2.tgz -C /opt/baixw/servers/

Configure the environment variables, then apply them with source /etc/profile:

vim /etc/profile

# KAFKA
export KAFKA_HOME=/opt/baixw/servers/kafka_2.12-1.0.2
export PATH=$PATH:${KAFKA_HOME}/bin

Edit the configuration file:

cd /opt/baixw/servers/kafka_2.12-1.0.2/config
vi server.properties

# Note: the key is log.dirs (plural)
log.dirs=/opt/baixw/servers/kafka_2.12-1.0.2/kafka-logs
zookeeper.connect=localhost:2181/myKafka
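The two edits (the `log.dirs` data directory and the `/myKafka` chroot on the ZooKeeper connect string) can also be applied non-interactively with sed. A sketch that dry-runs the substitutions on an inline copy of the stock defaults, so it is safe to run anywhere; point the same expressions at the real `server.properties` on the broker host:

```shell
# Stock defaults stand in for server.properties so this sketch is a dry run.
props='log.dirs=/tmp/kafka-logs
zookeeper.connect=localhost:2181'
# Rewrite the log directory and append the /myKafka chroot to the ZK connect string.
props=$(echo "$props" \
  | sed 's|^log.dirs=.*|log.dirs=/opt/baixw/servers/kafka_2.12-1.0.2/kafka-logs|' \
  | sed 's|^zookeeper.connect=.*|zookeeper.connect=localhost:2181/myKafka|')
echo "$props"
```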

Start Kafka (ZooKeeper must already be running):

kafka-server-start.sh -daemon /opt/baixw/servers/kafka_2.12-1.0.2/config/server.properties
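Once the broker is up, the topic the nginx module will write to ("access", matching the `kafka_topic` directive configured later) can be created up front rather than relying on auto-creation. A guarded sketch so it is a no-op on hosts where the Kafka CLI is not on PATH:

```shell
# Pre-create the "access" topic. Kafka 1.0.x topic tools talk to ZooKeeper,
# so the connect string (with the /myKafka chroot) matches server.properties.
if command -v kafka-topics.sh >/dev/null 2>&1; then
  kafka-topics.sh --zookeeper localhost:2181/myKafka \
    --create --topic access --partitions 1 --replication-factor 1
  topic_msg="create attempted"
else
  topic_msg="kafka CLI not on PATH"
fi
echo "$topic_msg"
```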

4. Install Nginx and integrate it with Kafka

Installing Nginx

Before installing, make sure gcc, pcre-devel, zlib-devel and openssl-devel are present on the system:
yum -y install gcc pcre-devel zlib-devel openssl openssl-devel
Download Nginx
https://nginx.org/download/

Open the link above, pick the version you need, right-click and copy the download link.

Download Nginx:
cd /opt/baixw/servers/softwore
wget https://nginx.org/download/nginx-1.9.9.tar.gz

Configuring Nginx with Kafka

Integrating the nginx-kafka module

Install the nginx-kafka plugin

With this module, nginx can write request data directly into Kafka.

1. Install git
	yum install -y git
2. Clone the Kafka C client (librdkafka) source into /opt/baixw/servers:
cd /opt/baixw/servers/
git clone https://github.com/edenhill/librdkafka
3. Enter librdkafka, then build and install it:
cd librdkafka
yum install -y gcc gcc-c++ pcre-devel zlib-devel
./configure

make
make install
4. Install the nginx-kafka integration plugin: clone its source into /opt/baixw/servers:
cd /opt/baixw/servers/
git clone https://github.com/brg-liuwei/ngx_kafka_module
5. Build Nginx with the module
## Extract
tar -zxvf nginx-1.9.9.tar.gz -C /opt/baixw/servers/

## Enter the nginx directory
cd /opt/baixw/servers/nginx-1.9.9

## Configure with the kafka module
./configure --add-module=/opt/baixw/servers/ngx_kafka_module/

## Compile and install
make
make install

Note: the compiled files do not stay under /opt/baixw/servers/nginx-1.9.9; "make install" places them under /usr/local/nginx.

6. Edit the nginx configuration file; the full file is shown below:
mv /usr/local/nginx/conf/nginx.conf /usr/local/nginx/conf/nginx-kafka.conf
vi /usr/local/nginx/conf/nginx-kafka.conf
#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log  logs/access.log  main;
    sendfile        on;
    #tcp_nopush     on;
    #keepalive_timeout  0;
    keepalive_timeout  65;
    #gzip  on;
   
    ## kafka configuration
    kafka;
    ## broker list for the cluster: kafka_broker_list ip|host:port ...;
    kafka_broker_list bigDate1:9092;
    ## the location blocks below map URLs to Kafka topics

    server {
        listen       80;
        server_name  node-6.xiaoniu.com;
        #charset koi8-r;
        #access_log  logs/host.access.log  main;

        ## requests to this URL are produced to the "access" topic
        location = /kafka/access {
                kafka_topic access;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

    }

}
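Before starting, the edited file can be validated: `nginx -t` parses the configuration and exits, which catches typos in the `kafka_*` directives early. A guarded sketch (binary path assumed from the "make install" step above):

```shell
# Validate the config without starting the server.
NGINX=/usr/local/nginx/sbin/nginx
if [ -x "$NGINX" ]; then
  "$NGINX" -t -c /usr/local/nginx/conf/nginx-kafka.conf
  nginx_check="ran nginx -t"
else
  nginx_check="nginx binary not found"
fi
echo "$nginx_check"
```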
7. Start nginx
/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx-kafka.conf 

Starting fails because the librdkafka shared library cannot be found:

error while loading shared libraries: librdkafka.so.1: cannot open shared object file: No such file or directory

The cause: librdkafka was installed into /usr/local/lib, which the dynamic loader does not search by default.

8. Register the shared library
# Have the loader pick up libraries under /usr/local/lib at boot
echo "/usr/local/lib" >> /etc/ld.so.conf

# Rebuild the loader cache now
ldconfig
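To confirm the fix took, the loader cache can be inspected directly; a sketch that falls back with a message on machines where librdkafka was never installed:

```shell
# ldconfig -p lists every library in the loader cache.
lib_line=$(ldconfig -p 2>/dev/null | grep librdkafka || true)
if [ -n "$lib_line" ]; then
  echo "found: $lib_line"
else
  echo "librdkafka not in loader cache"
fi
```

After this check passes, restarting nginx with the command above should no longer fail.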
9. Test

Before testing, start nginx, make sure the host is reachable (ping works) and the relevant ports are open. Then post data to nginx and watch whether a Kafka consumer receives it, for example the console consumer: kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic access --from-beginning

curl localhost/kafka/access -d "test nginx to kafka"
curl localhost/kafka/access -d "baixw111"