Pacemaker+Corosync-01: A Highly Available, Load-Balanced Web Service Cluster with HAProxy



I. Goal

Use Pacemaker and Corosync together with HAProxy to build a highly available, load-balanced web service cluster.

II. Environment

Four servers: two cluster nodes (.31/.32) and two web backends (.33/.34):

10.100.100.31 master31 master31.com
10.100.100.32 slave32 slave32.com
10.100.100.33 web33 web33.com
10.100.100.34 web34 web34.com
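These mappings must land in /etc/hosts on both cluster nodes (steps 2 and 3 below do it by hand). A minimal idempotent sketch — `HOSTS_FILE` and `add_host` are illustrative names, not from the article, and `HOSTS_FILE` defaults to a local demo file so it can be tried safely; point it at /etc/hosts on the real nodes:

```shell
# Append the cluster name mappings, skipping entries that already
# exist, so the script is safe to re-run on either node.
# NOTE: HOSTS_FILE and add_host are illustrative names, not from the article.
HOSTS_FILE="${HOSTS_FILE:-./hosts.demo}"   # use /etc/hosts on the real nodes
touch "$HOSTS_FILE"

add_host() {
    # $1 = IP, $2 = short name, $3 = FQDN
    grep -q "^$1 " "$HOSTS_FILE" || echo "$1 $2 $3" >> "$HOSTS_FILE"
}

add_host 10.100.100.31 master31 master31.com
add_host 10.100.100.32 slave32  slave32.com
add_host 10.100.100.33 web33    web33.com
add_host 10.100.100.34 web34    web34.com
```

Because each entry is only appended when its IP is absent, running the script twice leaves the file unchanged.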

III. Main Steps

1. Make sure every host has internet access so that packages can be installed.

2. On the .31 host, set up name resolution

vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.100.100.31 master31 master31.com
10.100.100.32 slave32 slave32.com
10.100.100.33 web33 web33.com
10.100.100.34 web34 web34.com

3. On the .32 host, set up the same name resolution

vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.100.100.31 master31 master31.com
10.100.100.32 slave32 slave32.com
10.100.100.33 web33 web33.com
10.100.100.34 web34 web34.com

4. On .31, set up passwordless SSH login to .32

ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub root@10.100.100.32

5. On both servers: the usual prep of disabling the firewall and SELinux

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

6. On both servers: install pacemaker, corosync, and related packages

yum install pcs pacemaker corosync fence-agents-all -y
systemctl start pcsd.service
systemctl enable pcsd.service

7. On both servers: set a password for the hacluster user created by the packages

echo "pwd123" | passwd --stdin hacluster

8. On .31: authenticate the cluster nodes

[root@master31 ~]# pcs cluster auth master31.com slave32.com
Username: hacluster    <--- enter the username hacluster; it must be this user
Password:              <--- enter the password pwd123 set above and press Enter
slave32.com: Authorized
master31.com: Authorized

9. On .31: create a cluster named x_cluster and add both nodes to it.

[root@master31 ~]# pcs cluster setup --start --name x_cluster master31.com slave32.com
Destroying cluster on nodes: master31.com, slave32.com...    <--- everything below is informational output; no input needed
master31.com: Stopping Cluster (pacemaker)...
slave32.com: Stopping Cluster (pacemaker)...
master31.com: Successfully destroyed cluster
slave32.com: Successfully destroyed cluster

Sending 'pacemaker_remote authkey' to 'master31.com', 'slave32.com'
slave32.com: successful distribution of the file 'pacemaker_remote authkey'
master31.com: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
master31.com: Succeeded
slave32.com: Succeeded

Starting cluster on nodes: master31.com, slave32.com...
master31.com: Starting Cluster (corosync)...
slave32.com: Starting Cluster (corosync)...
master31.com: Starting Cluster (pacemaker)...
slave32.com: Starting Cluster (pacemaker)...

Synchronizing pcsd certificates on nodes master31.com, slave32.com...
slave32.com: Success
master31.com: Success
Restarting pcsd on the nodes in order to reload the certificates...
slave32.com: Success
master31.com: Success

10. On .31: enable the cluster at boot

[root@master31 ~]# pcs cluster enable --all
master31.com: Cluster Enabled
slave32.com: Cluster Enabled

11. On .31: check the cluster status

[root@master31 ~]# pcs cluster status
Cluster Status:    <--- everything below is informational output; no input needed
 Stack: corosync
 Current DC: master31.com (version 1.1.21-4.el7-f14e36fd43) - partition WITHOUT quorum
 Last updated: Mon Jul 27 22:58:37 2020
 Last change: Mon Jul 27 22:55:26 2020 by hacluster via crmd on master31.com
 2 nodes configured
 0 resources configured

PCSD Status:
  slave32.com: Online     <--- must be Online
  master31.com: Online    <--- must be Online

12. On both servers: check that the heartbeat is healthy

[root@master31 ~]# corosync-cfgtool -s
Printing ring status.    <--- everything below is informational output
Local node ID 1
RING ID 0
        id      = 10.100.100.31
        status  = ring 0 active with no faults

13. On both servers: check the corosync membership

[root@master31 ~]# pcs status corosync
Membership information    <--- everything below is informational output

Nodeid      Votes Name
     1          1 master31.com (local)         <--- with two nodes configured, both must be listed here
     2          1 slave32.com

[root@slave32 ~]# pcs status corosync
Membership information

Nodeid      Votes Name
     1          1 master31.com
     2          1 slave32.com (local)
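The membership listings above lend themselves to a scripted sanity check: count the node lines and require two. A sketch, assuming the output format shown above — a canned sample stands in for piping live `pcs status corosync` output:

```shell
# Count voting members in corosync membership output. On a real node
# you would pipe `pcs status corosync` in; here a canned sample
# (copied from the listing above) stands in.
sample='Membership information
----------------------
    Nodeid      Votes Name
         1          1 master31.com (local)
         2          1 slave32.com'

# Member lines start with a numeric Nodeid followed by a vote count.
members=$(printf '%s\n' "$sample" | grep -cE '^[[:space:]]*[0-9]+[[:space:]]+[0-9]+[[:space:]]')
echo "members=$members"    # expect 2 once both nodes have joined
```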

14. On .31: disable STONITH fencing, set the no-quorum policy to ignore, and verify the configuration

[root@master31 ~]# pcs property set stonith-enabled=false
[root@master31 ~]# pcs property set no-quorum-policy=ignore
[root@master31 ~]# crm_verify -L -V    <--- if the configuration is valid, this prints nothing

15. On .31: configure the virtual IP resource

# Note: the command below is one long line.

[root@master31 ~]# pcs resource create xvip ocf:heartbeat:IPaddr2 ip=10.100.100.30 nic='ens33' cidr_netmask='24' broadcast='10.100.100.255' op monitor interval=5s timeout=20s on-fail=restart

Note: xvip is the resource name, ens33 is your NIC name, the virtual IP (VIP) is 10.100.100.30, and on-fail=restart restarts the resource when its monitor fails.
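Once the resource starts, the VIP shows up in `ip addr` output (step 17 checks this by hand). A small helper makes the check scriptable; this is a sketch — `has_vip` is an illustrative name, not from the article — and it is fed a sample line here instead of live `ip addr` output:

```shell
# Succeed if the given VIP appears in `ip addr`-style text on stdin.
# On a live node:  ip addr | has_vip 10.100.100.30
# NOTE: has_vip is a hypothetical helper, not part of any tool.
has_vip() {
    grep -q "inet $1/" -
}

# Sample line as it appears once IPaddr2 has brought the VIP up:
sample='inet 10.100.100.30/24 brd 10.100.100.255 scope global secondary ens33'

if printf '%s\n' "$sample" | has_vip 10.100.100.30; then
    echo "VIP present"
fi
```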

16. (Optional) What if you created the resource wrong? Just delete it, e.g.:

pcs resource delete xvip

17. On .31: check that the virtual IP has been brought up

[root@master31 ~]# ip addr |grep 10.100.100.30
    inet 10.100.100.30/24 brd 10.100.100.255 scope global secondary ens33

18. On both servers: install haproxy

yum install -y haproxy

19. On both servers: edit the haproxy configuration so that either server can load-balance to the two web servers

cp /etc/haproxy/haproxy.cfg{,.bak}
vim /etc/haproxy/haproxy.cfg

global
    log         127.0.0.1 local2

chroot      /var/lib/haproxy
pidfile     /var/run/haproxy.pid
maxconn     4000
user        haproxy
group       haproxy
daemon

stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
frontend  main *:80
    acl url_static       path_beg       -i /static /images /javascript /stylesheets
    acl url_static       path_end       -i .jpg .gif .png .css .js

use_backend static          if url_static
default_backend             xweb

#---------------------------------------------------------------------
backend static
    balance     roundrobin
    server      static 127.0.0.1:4331 check

# The xweb backend referenced by default_backend above must also be
# defined; it balances across the two web servers:
backend xweb
    balance     roundrobin
    server      web33 10.100.100.33:80 check
    server      web34 10.100.100.34:80 check
#---------------------------------------------------------------------

20. Set up a web service on each of the two web servers (omitted here); make sure each web server responds directly on its IP at the default port 80.

21. Restart the haproxy service on both .31 and .32

systemctl restart haproxy.service

22. Verify by browsing to http://10.100.100.31 and http://10.100.100.32; both should load-balance requests across the two web servers .33 and .34.
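Step 22's check can be made slightly more rigorous: fetch several pages and count distinct responses; round-robin over two backends should give exactly two. A sketch using canned responses — on the live setup, replace the list with repeated `curl -s http://10.100.100.31/` calls, and it assumes each backend serves a distinct page (e.g. its hostname):

```shell
# Canned responses standing in for:
#   for i in 1 2 3 4; do curl -s http://10.100.100.31/; done
# (assumes each backend serves a distinct page, e.g. its hostname)
responses='web33
web34
web33
web34'

# Round-robin across two healthy backends yields two distinct pages.
distinct=$(printf '%s\n' "$responses" | sort -u | wc -l)
echo "distinct backends: $distinct"
```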

23. On .32: stop the haproxy service. Note that haproxy on .32 should end up stopped, since Pacemaker will manage the service from here on.

systemctl stop haproxy

24. On .31: define a resource that monitors haproxy

[root@master31 ~]# pcs resource create haproxy systemd:haproxy op monitor interval="5s"

Note: this monitors the haproxy service, checking every 5 seconds (the second "haproxy" in "create haproxy" is the resource name and could likely be customized); if the check finds the service stopped, Pacemaker tries to restart it.

25. On .31: require the haproxy and xvip resources to run on the same node

[root@master31 ~]# pcs constraint colocation add xvip haproxy INFINITY

26. On .31: define the start order (bring up the virtual IP xvip first, then haproxy)

[root@master31 ~]# pcs constraint order xvip then haproxy

27. On .31: check the pcs status

[root@master31 ~]# pcs status
Cluster name: x_cluster    <--- everything below is informational output; no input needed
Stack: corosync
Current DC: master31.com (version 1.1.21-4.el7-f14e36fd43) - partition with quorum
Last updated: Wed Jul 29 22:10:07 2020
Last change: Wed Jul 29 22:09:22 2020 by root via cibadmin on master31.com

2 nodes configured
2 resources configured

Online: [ master31.com slave32.com ]

Full list of resources:

 xvip       (ocf::heartbeat:IPaddr2):       Started slave32.com    <--- the virtual IP .30 was brought up on .32
 haproxy    (systemd:haproxy):      Started slave32.com

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

28. Based on the result of step 27, go to the host that received the VIP .30 (here, .32) and confirm the virtual IP was really created there

[root@slave32 ~]# ip addr |grep 10.100.100.30
    inet 10.100.100.30/24 brd 10.100.100.255 scope global secondary ens33    <--- indeed, the VIP .30 shows up on .32

29. From a machine outside the cluster, browse to http://10.100.100.30 and confirm that requests are load-balanced across the web pages on .33 and .34 (they must be).

30. Based on step 27, go to the host holding the VIP .30 (here, .32), break the haproxy configuration, and stop the haproxy service — in effect simulating a failure of .32. (If you repeat the experiment, first check which node holds the VIP and simulate the failure there.)

[root@slave32 ~]# mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak1
[root@slave32 ~]# systemctl stop haproxy

31. From the outside machine, browse to http://10.100.100.30 again. The page is unreachable for roughly ten seconds, after which it comes back, because the virtual IP has automatically failed over to .31.

32. On .31, check whether the virtual IP has floated over

[root@master31 ~]# ip addr |grep 10.100.100.30
    inet 10.100.100.30/24 brd 10.100.100.255 scope global secondary ens33    <--- as expected, it has floated over

That concludes the experiment, and it went fairly well.

----------xok----------------END---------------2020-07-29 22:27:52--------------