This deployment uses Ansible to manage hosts in batches: the managed hosts receive a hand-written playbook that installs a Kubernetes (k8s) cluster, so the whole rollout is automated.
Ansible is an automation tool written in Python that combines the strengths of earlier configuration tools (Puppet, CFEngine, Chef, Func, Fabric) to provide batch system configuration, batch software deployment, and batch command execution. Ansible works through modules and has no deployment logic of its own: the modules it invokes do the real work, while Ansible supplies the framework around them. Kubernetes is an open-source container-cluster management platform that automates the deployment, scaling, and maintenance of containerized applications.
I. Prepare the lab servers
The experiment needs four virtual machines, each with at least 2 CPUs and 2 GB of RAM, a 20 GB disk, and a minimal install of CentOS 7.9.
The ansible host only runs the playbooks against k8s-master, k8s-node1, and k8s-node2; it is not itself part of the k8s cluster.
ansible 192.168.100.10
k8s-master 192.168.100.100
k8s-node1 192.168.100.101
k8s-node2 192.168.100.102
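The guide addresses the machines by IP throughout. If you also want the hosts to resolve each other by name, the table above can be mirrored into /etc/hosts on every machine; a sketch (the temp file is only so the snippet is safe to run anywhere - on the real servers you would append to /etc/hosts itself):

```shell
# Map the lab hostnames to their IPs. On the real servers this text would
# be appended to /etc/hosts; a temp file stands in for it here.
hosts_file=$(mktemp)          # replace with /etc/hosts on the actual machines
cat >> "$hosts_file" <<'EOF'
192.168.100.10  ansible
192.168.100.100 k8s-master
192.168.100.101 k8s-node1
192.168.100.102 k8s-node2
EOF
cat "$hosts_file"
```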
II. Configure the Ansible host
1. Configure domestic (mirror) yum repositories
# Replace the default yum repository with a domestic mirror
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos7_base.repo
# Add the EPEL extra repository
curl -o /etc/yum.repos.d/epel.repo http://mirrors.cloud.tencent.com/repo/epel-7.repo
# Rebuild the yum cache
yum clean all && yum makecache
2. Install Ansible
[root@ansible ~]# yum -y install ansible
3. Set up passwordless SSH login to the k8s hosts
# Generate a key pair with ssh-keygen, then push the public key to the three k8s hosts
# The command prompts for a file path and a passphrase; press Enter three times to accept the empty defaults
[root@ansible ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:cKaTjeYFwCKY8VNEUod2QfHu9iYE23xTkGbAt/gk8p8 root@ansible
The key's randomart image is:
+---[RSA 2048]----+
|oo.=*o=+.. . |
|+..o+o... * |
| .oo .o += o |
| . o@o o . |
| *BS+ . |
| o.+= + |
| ..oo o |
| ...E |
| o. |
+----[SHA256]-----+
# Output like the above means the key pair was generated successfully
# Copy the public key to the other nodes
[root@ansible ~]# ssh-copy-id root@192.168.100.100
[root@ansible ~]# ssh-copy-id root@192.168.100.101
[root@ansible ~]# ssh-copy-id root@192.168.100.102
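The interactive prompts above can also be skipped entirely. A sketch of the non-interactive form (a temp directory is used here so an existing key is not clobbered; on the ansible host you would keep the default /root/.ssh/id_rsa path):

```shell
# -N '' sets an empty passphrase, -f the output path, -q suppresses the banner
keydir=$(mktemp -d)           # on the real host, use the default ~/.ssh path
ssh-keygen -t rsa -b 2048 -N '' -f "$keydir/id_rsa" -q
# The three ssh-copy-id calls can then be collapsed into a loop:
# for ip in 192.168.100.100 192.168.100.101 192.168.100.102; do
#     ssh-copy-id -i "$keydir/id_rsa.pub" "root@$ip"
# done
ls "$keydir"
```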
4. Add the k8s nodes to /etc/ansible/hosts on the Ansible server
[root@ansible ~]# vim /etc/ansible/hosts
[master]
192.168.100.100
[nodes]
192.168.100.101
192.168.100.102
# Test that the hosts are reachable; green SUCCESS output means they are
[root@ansible ~]# ansible all -m ping
192.168.100.100 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
192.168.100.101 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
192.168.100.102 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
III. Deploy the k8s master node
- Download the offline yum packages to the ansible host
# Create a directory to hold the offline yum packages
[root@ansible ~]# mkdir /root/src
# Enter the directory
[root@ansible ~]# cd /root/src
# Download the resource bundle
[root@ansible ~]# curl http://tools.k8s.ycshell.com/IT/Linux/kubernetes/1.23.4/download.sh|sh
- Create the playbook that deploys the master node, then run it
[root@ansible ~]# vim deply-k8s-master.yaml
---
- hosts: master
  tasks:
    # Stop the firewall
    - name: stop firewalld
      service:
        name: firewalld
        state: stopped
        enabled: no
    # Disable SELinux and swap
    - name: disable selinux
      shell: setenforce 0 && swapoff -a && sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
    # Copy the offline docker yum bundle to the host and unpack it
    - name: unarchive docker file
      unarchive:
        copy: yes
        src: "/root/src/docker-ce_v20.10.12.tar.gz"
        dest: "/root"
        mode: "0755"
    # Copy the offline kubelet/kubeadm/kubectl yum bundle to the host and unpack it
    - name: unarchive kubectl file
      unarchive:
        copy: yes
        src: "/root/src/kubelet_kubeadm_kubectl_v1.23.4.tar.gz"
        dest: "/root"
        mode: "0755"
    # Copy the kubernetes image archive
    - name: copy kubernetes init images
      copy:
        src: "/root/src/kubernetes_init_images_package_v1.23.4.tar"
        dest: "/root"
        mode: "0755"
    # Copy the flannel image archive
    - name: copy flannel images
      copy:
        src: "/root/src/flannel.tar"
        dest: "/root"
        mode: "0755"
    # Copy the flannel manifest
    - name: copy flannel yaml
      copy:
        src: "/root/src/kube-flannel.yml"
        dest: "/root"
        mode: "0755"
    # Install docker
    - name: install docker rpm
      yum:
        name: "{{ packages }}"
        state: present
      vars:
        packages:
          - /root/docker-ce/audit-libs-python-2.8.5-4.el7.x86_64.rpm
          - /root/docker-ce/checkpolicy-2.5-8.el7.x86_64.rpm
          - /root/docker-ce/containerd.io-1.4.12-3.1.el7.x86_64.rpm
          - /root/docker-ce/container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm
          - /root/docker-ce/docker-ce-20.10.12-3.el7.x86_64.rpm
          - /root/docker-ce/docker-ce-cli-20.10.12-3.el7.x86_64.rpm
          - /root/docker-ce/docker-ce-rootless-extras-20.10.12-3.el7.x86_64.rpm
          - /root/docker-ce/docker-scan-plugin-0.12.0-3.el7.x86_64.rpm
          - /root/docker-ce/fuse3-libs-3.6.1-4.el7.x86_64.rpm
          - /root/docker-ce/fuse-overlayfs-0.7.2-6.el7_8.x86_64.rpm
          - /root/docker-ce/libcgroup-0.41-21.el7.x86_64.rpm
          - /root/docker-ce/libsemanage-python-2.5-14.el7.x86_64.rpm
          - /root/docker-ce/policycoreutils-python-2.5-34.el7.x86_64.rpm
          - /root/docker-ce/python-IPy-0.75-6.el7.noarch.rpm
          - /root/docker-ce/setools-libs-3.3.8-4.el7.x86_64.rpm
          - /root/docker-ce/slirp4netns-0.4.3-4.el7_8.x86_64.rpm
    # Create the docker config directory
    - name: create docker config dir
      file:
        path: /etc/docker
        state: directory
    # Copy the docker config file to the remote host
    - name: change docker config
      copy:
        src: "/root/src/daemon.json"
        dest: "/etc/docker/daemon.json"
        backup: yes
    # Start docker and enable it at boot
    - name: start docker
      service:
        name: docker
        state: started
        enabled: yes
    # Set the kernel parameters k8s needs
    - name: change kernel
      blockinfile:
        path: "/etc/sysctl.d/k8s.conf"
        block: "net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1"
        marker: "# {mark} diy change kernel"
        create: yes
    # Apply the kernel settings
    - name: system shell
      shell: sysctl --system
    # Load the kubernetes images
    - name: load kubernetes images
      shell: docker load -i /root/kubernetes_init_images_package_v1.23.4.tar
    # Load the flannel image
    - name: load flannel images
      shell: docker load -i /root/flannel.tar
    # Install the offline kubernetes yum packages
    - name: install kubernetes rpm
      yum:
        name: "{{ packages }}"
        state: present
      vars:
        packages:
          - /root/kubernetes/67ffa375b03cea72703fe446ff00963919e8fce913fbc4bb86f06d1475a6bdf9-cri-tools-1.19.0-0.x86_64.rpm
          - /root/kubernetes/7a0d50ba594f62deddd266db3400d40a3b745be71f10684faa9c1632aca50d6b-kubelet-1.23.4-0.x86_64.rpm
          - /root/kubernetes/ae22dad233f0617861909955e30f527067e6f5535c1d1a9cda7b3a288fe62cd2-kubectl-1.23.4-0.x86_64.rpm
          - /root/kubernetes/c8a17896ac2f24c43770d837f9f751acf161d6c33694b5dad42f5f638c6dd626-kubeadm-1.23.4-0.x86_64.rpm
          - /root/kubernetes/conntrack-tools-1.4.4-7.el7.x86_64.rpm
          - /root/kubernetes/db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm
          - /root/kubernetes/libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm
          - /root/kubernetes/libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm
          - /root/kubernetes/libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm
          - /root/kubernetes/socat-1.7.3.2-2.el7.x86_64.rpm
    # Start kubelet and enable it at boot
    - name: start kubelet
      service:
        name: kubelet
        state: started
        enabled: yes
    # Initialize k8s (the advertise address must be the master's own IP)
    - name: init kubernetes
      shell: kubeadm init --kubernetes-version v1.23.4 --apiserver-advertise-address 192.168.100.100 --pod-network-cidr=10.244.0.0/16 --token-ttl 0
      register: init_k8s_result
    # Print the init output
    - name: echo init result
      debug:
        var: init_k8s_result.stdout_lines
    # Set up the kubectl config for root
    - name: create config kubeadm
      shell: mkdir -p $HOME/.kube && cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && chown $(id -u):$(id -g) $HOME/.kube/config
    # Install the flannel network plugin
    - name: install flannel
      shell: kubectl apply -f /root/kube-flannel.yml
# Run the playbook
[root@ansible ~]# ansible-playbook deply-k8s-master.yaml
Wait for the output
PLAY RECAP *******************************************************************************************************************************************************************************************************************************************************
192.168.100.100 : ok=22 changed=20 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Save the cluster join token
# Write a helper script
[root@ansible src]# vim getNodeToken.sh
#!/bin/bash
echo kubeadm join 192.168.100.100:6443 \
    --token $(kubeadm token list | grep -v 'TOKEN' | awk '{print $1}') \
    --discovery-token-ca-cert-hash sha256:$(openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //')
# Run the script on the master host
[root@ansible src]# ansible master -m script -a '/root/src/getNodeToken.sh'
192.168.100.100 | CHANGED => {
    "changed": true,
    "rc": 0,
    "stderr": "Shared connection to 192.168.100.100 closed.\r\n",
    "stderr_lines": [
        "Shared connection to 192.168.100.100 closed."
    ],
    "stdout": "kubeadm join 192.168.100.100:6443 --token rpku21.59wgbdlhmz6hwubo --discovery-token-ca-cert-hash sha256:a37fc8db1a94049dd2a38ed789c43372cb4ea32772d1291ee5e5c5b4a13e72ec\r\n",
    "stdout_lines": [
        "kubeadm join 192.168.100.100:6443 --token rpku21.59wgbdlhmz6hwubo --discovery-token-ca-cert-hash sha256:a37fc8db1a94049dd2a38ed789c43372cb4ea32772d1291ee5e5c5b4a13e72ec"
    ]
}
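The long pipeline in getNodeToken.sh is only computing a SHA-256 digest of the cluster CA's public key, which is what the --discovery-token-ca-cert-hash flag expects. The sketch below demonstrates that same pipeline against a throwaway self-signed certificate standing in for /etc/kubernetes/pki/ca.crt (which exists only on the master):

```shell
# Generate a disposable "CA" cert so the hash pipeline can run anywhere
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
    -keyout "$tmp/ca.key" -out "$tmp/ca.crt" -days 1 2>/dev/null
# Extract the public key, DER-encode it, and take its sha256 digest
ca_hash=$(openssl x509 -pubkey -in "$tmp/ca.crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "$ca_hash"
```

On the master itself, kubeadm can also print the whole join command directly with `kubeadm token create --print-join-command`, which avoids the manual openssl pipeline (note this creates a new token rather than reusing an existing one).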
Save the kubeadm join line from stdout; it will be pasted into the join-cluster task of deply-k8s-nodes.yaml.
If every task status is ok, the master deployment succeeded.
IV. Deploy the k8s worker nodes
[root@ansible ~]# vim deply-k8s-nodes.yaml
Note: replace the kubeadm join command in the join-cluster task below with the stdout saved from the master.
---
- hosts: nodes
  tasks:
    # Stop the firewall
    - name: stop firewalld
      service:
        name: firewalld
        state: stopped
        enabled: no
    # Disable SELinux and swap
    - name: disable selinux
      shell: setenforce 0 && swapoff -a && sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
    # Copy the offline docker yum bundle to the host and unpack it
    - name: unarchive docker file
      unarchive:
        copy: yes
        src: "/root/src/docker-ce_v20.10.12.tar.gz"
        dest: "/root"
        mode: "0755"
    # Copy the offline kubelet/kubeadm/kubectl yum bundle to the host and unpack it
    - name: unarchive kubectl file
      unarchive:
        copy: yes
        src: "/root/src/kubelet_kubeadm_kubectl_v1.23.4.tar.gz"
        dest: "/root"
        mode: "0755"
    # Copy the kubernetes image archive
    - name: copy kubernetes init images
      copy:
        src: "/root/src/kubernetes_init_images_package_v1.23.4.tar"
        dest: "/root"
        mode: "0755"
    # Install docker
    - name: install docker rpm
      yum:
        name: "{{ packages }}"
        state: present
      vars:
        packages:
          - /root/docker-ce/audit-libs-python-2.8.5-4.el7.x86_64.rpm
          - /root/docker-ce/checkpolicy-2.5-8.el7.x86_64.rpm
          - /root/docker-ce/containerd.io-1.4.12-3.1.el7.x86_64.rpm
          - /root/docker-ce/container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm
          - /root/docker-ce/docker-ce-20.10.12-3.el7.x86_64.rpm
          - /root/docker-ce/docker-ce-cli-20.10.12-3.el7.x86_64.rpm
          - /root/docker-ce/docker-ce-rootless-extras-20.10.12-3.el7.x86_64.rpm
          - /root/docker-ce/docker-scan-plugin-0.12.0-3.el7.x86_64.rpm
          - /root/docker-ce/fuse3-libs-3.6.1-4.el7.x86_64.rpm
          - /root/docker-ce/fuse-overlayfs-0.7.2-6.el7_8.x86_64.rpm
          - /root/docker-ce/libcgroup-0.41-21.el7.x86_64.rpm
          - /root/docker-ce/libsemanage-python-2.5-14.el7.x86_64.rpm
          - /root/docker-ce/policycoreutils-python-2.5-34.el7.x86_64.rpm
          - /root/docker-ce/python-IPy-0.75-6.el7.noarch.rpm
          - /root/docker-ce/setools-libs-3.3.8-4.el7.x86_64.rpm
          - /root/docker-ce/slirp4netns-0.4.3-4.el7_8.x86_64.rpm
    # Create the docker config directory
    - name: create docker config dir
      file:
        path: /etc/docker
        state: directory
    # Copy the docker config file to the remote host
    - name: change docker config
      copy:
        src: "/root/src/daemon.json"
        dest: "/etc/docker/daemon.json"
        backup: yes
    # Start docker and enable it at boot
    - name: start docker
      service:
        name: docker
        state: started
        enabled: yes
    # Set the kernel parameters k8s needs
    - name: change kernel
      blockinfile:
        path: "/etc/sysctl.d/k8s.conf"
        block: "net.bridge.bridge-nf-call-ip6tables = 1\nnet.bridge.bridge-nf-call-iptables = 1"
        marker: "# {mark} diy change kernel"
        create: yes
    # Apply the kernel settings
    - name: system shell
      shell: sysctl --system
    # Load the kubernetes images
    - name: load kubernetes images
      shell: docker load -i /root/kubernetes_init_images_package_v1.23.4.tar
    # Install the offline kubernetes yum packages
    - name: install kubernetes rpm
      yum:
        name: "{{ packages }}"
        state: present
      vars:
        packages:
          - /root/kubernetes/67ffa375b03cea72703fe446ff00963919e8fce913fbc4bb86f06d1475a6bdf9-cri-tools-1.19.0-0.x86_64.rpm
          - /root/kubernetes/7a0d50ba594f62deddd266db3400d40a3b745be71f10684faa9c1632aca50d6b-kubelet-1.23.4-0.x86_64.rpm
          - /root/kubernetes/ae22dad233f0617861909955e30f527067e6f5535c1d1a9cda7b3a288fe62cd2-kubectl-1.23.4-0.x86_64.rpm
          - /root/kubernetes/c8a17896ac2f24c43770d837f9f751acf161d6c33694b5dad42f5f638c6dd626-kubeadm-1.23.4-0.x86_64.rpm
          - /root/kubernetes/conntrack-tools-1.4.4-7.el7.x86_64.rpm
          - /root/kubernetes/db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm
          - /root/kubernetes/libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm
          - /root/kubernetes/libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm
          - /root/kubernetes/libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm
          - /root/kubernetes/socat-1.7.3.2-2.el7.x86_64.rpm
    # Start kubelet and enable it at boot
    - name: start kubelet
      service:
        name: kubelet
        state: started
        enabled: yes
    # Join the cluster (replace the token and hash with the values saved from the master)
    - name: join kubernetes cluster
      shell: kubeadm join 192.168.100.100:6443 --token 6fd7p4.clxhxkchi337pqwu --discovery-token-ca-cert-hash sha256:8712f295356d88874bdaeb440223a3c01427e1af40dc0c923701dac806cb7930
      register: join_k8s_result
    # Print the join output
    - name: echo join result
      debug:
        var: join_k8s_result.stdout_lines
# Run the playbook
[root@ansible ~]# ansible-playbook deply-k8s-nodes.yaml
Wait for the output
PLAY RECAP *******************************************************************************************************************************************************************************************************************************************************
192.168.100.101 : ok=17 changed=15 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
192.168.100.102 : ok=17 changed=15 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
If every task status is ok, the node deployment succeeded.
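The node playbook repeats most of the master playbook verbatim. If the playbooks are to be maintained long-term, the shared tasks could be factored out with include_tasks; a hypothetical sketch (common-tasks.yml and the join_command variable are illustrative names, not files created in this guide):

```yaml
---
- hosts: nodes
  tasks:
    # firewalld, selinux, docker, kubelet ... tasks moved into one shared file
    - include_tasks: common-tasks.yml
    # join command supplied at run time, e.g. -e "join_command='kubeadm join ...'"
    - name: join kubernetes cluster
      shell: "{{ join_command }}"
```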
V. Verify
Log in to the k8s-master host
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES                  AGE    VERSION
k8s-master   Ready    control-plane,master   22m    v1.23.4
k8s-node1    Ready    <none>                 6m9s   v1.23.4
k8s-node2    Ready    <none>                 6m9s   v1.23.4