Kubernetes: Building the Base Environment on CentOS 7 (Part 3)



Environment preparation

Prepare three virtual machines; set up each one by following Part 1 and Part 2 of this series (Kubernetes: Building the Base Environment on CentOS 7).

I. Configuration of the three virtual machines

  1. Server configuration

     Server IP      Domain name          Alias     Role    Login user  Password  CPU      Memory
     192.168.1.55   master55.xincan.cn   master55  master  root        root      2 cores  4 GB
     192.168.1.56   slave56.xincan.cn    slave56   slave   root        root      4 cores  8 GB
     192.168.1.57   slave57.xincan.cn    slave57   slave   root        root      4 cores  8 GB

  2. Tool versions

II. Set the hostname of each VM

  1. Set the hostnames of the three VMs to master55.xincan.cn, slave56.xincan.cn, and slave57.xincan.cn respectively
    • Make sure the hostname and alias of all three servers follow the same naming convention
# master55 server
[root@localhost ~]# vi /etc/hostname
master55.xincan.cn
[root@localhost ~]#

# slave56 server
[root@localhost ~]# vi /etc/hostname
slave56.xincan.cn
[root@localhost ~]#

# slave57 server
[root@localhost ~]#  vi /etc/hostname
slave57.xincan.cn
[root@localhost ~]#
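
Alternatively, hostnamectl writes /etc/hostname and applies the new name without a re-login; a minimal sketch (run the matching command on each server):

hostnamectl set-hostname master55.xincan.cn   # on master55
hostnamectl set-hostname slave56.xincan.cn    # on slave56
hostnamectl set-hostname slave57.xincan.cn    # on slave57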

III. Let the three machines reach each other by domain name and alias

  1. Edit /etc/hosts and add the following entries on all three servers
    • 192.168.1.55 master55.xincan.cn master55
    • 192.168.1.56 slave56.xincan.cn slave56
    • 192.168.1.57 slave57.xincan.cn slave57
[root@localhost /]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.55 master55.xincan.cn master55
192.168.1.56 slave56.xincan.cn  slave56
192.168.1.57 slave57.xincan.cn  slave57
[root@localhost /]#
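
The same entries can also be appended in one command instead of editing the file interactively; a minimal sketch, run on each of the three servers:

cat << EOF >> /etc/hosts
192.168.1.55 master55.xincan.cn master55
192.168.1.56 slave56.xincan.cn  slave56
192.168.1.57 slave57.xincan.cn  slave57
EOF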

IV. Reboot all three VMs by running reboot

  1. Reconnect to each of the three servers; the prompt now shows the server alias instead of localhost
 # master55
[root@master55 ~]#

 # slave56
[root@slave56 ~]#

 # slave57
[root@slave57 ~]#

V. Time synchronization across the three servers

  1. Install ntp on all three servers; output ending with Complete! means the installation succeeded
[root@localhost ~]# sudo yum install -y ntp
Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: mirrors.huaweicloud.com
 * extras: mirrors.huaweicloud.com
 * updates: mirrors.ustc.edu.cn
base                                                                                                           | 3.6 kB  00:00:00
extras                                                                                                         | 2.9 kB  00:00:00
updates                                                                                                        | 2.9 kB  00:00:00
(1/4): base/7/x86_64/group_gz                                                                                  | 153 kB  00:00:00
(2/4): extras/7/x86_64/primary_db                                                                              | 194 kB  00:00:00
(3/4): updates/7/x86_64/primary_db                                                                             | 2.1 MB  00:00:01
(4/4): base/7/x86_64/primary_db                                                                                | 6.1 MB  00:00:05
Resolving Dependencies
--> Running transaction check
---> Package ntp.x86_64 0:4.2.6p5-29.el7.centos will be installed
--> Processing Dependency: ntpdate = 4.2.6p5-29.el7.centos for package: ntp-4.2.6p5-29.el7.centos.x86_64
--> Processing Dependency: libopts.so.25()(64bit) for package: ntp-4.2.6p5-29.el7.centos.x86_64
--> Running transaction check
---> Package autogen-libopts.x86_64 0:5.18-5.el7 will be installed
---> Package ntpdate.x86_64 0:4.2.6p5-29.el7.centos will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=========================================================================================================================================
 Package                Arch              Version                    Repository                   Size
=====================================================================================================================================================
Installing:
 ntp                    x86_64           4.2.6p5-29.el7.centos       base                        548 k
Installing for dependencies:
 autogen-libopts         x86_64          5.18-5.el7                  base                         66 k
 ntpdate                 x86_64          4.2.6p5-29.el7.centos       base                         86 k

Transaction Summary
=========================================================================================================================================
Install  1 Package (+2 Dependent packages)

Total download size: 701 k
Installed size: 1.6 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/base/packages/ntpdate-4.2.6p5-29.el7.centos.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY- ETA
Public key for ntpdate-4.2.6p5-29.el7.centos.x86_64.rpm is not installed
(1/3): ntpdate-4.2.6p5-29.el7.centos.x86_64.rpm                                                                               |  86 kB  00:00:00
(2/3): autogen-libopts-5.18-5.el7.x86_64.rpm                                                                                  |  66 kB  00:00:00
(3/3): ntp-4.2.6p5-29.el7.centos.x86_64.rpm                                                                                   | 548 kB  00:00:00
-----------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                956 kB/s | 701 kB  00:00:00
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Importing GPG key 0xF4A80EB5:
 Userid     : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"
 Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
 Package    : centos-release-7-6.1810.2.el7.centos.x86_64 (@anaconda)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : autogen-libopts-5.18-5.el7.x86_64                                                                            1/3
  Installing : ntpdate-4.2.6p5-29.el7.centos.x86_64                                                                         2/3
  Installing : ntp-4.2.6p5-29.el7.centos.x86_64                                                                             3/3
  Verifying  : ntp-4.2.6p5-29.el7.centos.x86_64                                                                             1/3
  Verifying  : ntpdate-4.2.6p5-29.el7.centos.x86_64                                                                         2/3
  Verifying  : autogen-libopts-5.18-5.el7.x86_64                                                                            3/3

Installed:
  ntp.x86_64 0:4.2.6p5-29.el7.centos

Dependency Installed:
  autogen-libopts.x86_64 0:5.18-5.el7                                     ntpdate.x86_64 0:4.2.6p5-29.el7.centos

Complete!
[root@localhost ~]#
  2. On all three servers, check the current system time and set the timezone to Asia/Shanghai
[root@localhost /]# date
Thu Jun  4 05:28:48 UTC 2020
[root@localhost /]# ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
[root@localhost /]# date
Thu Jun  4 13:29:01 CST 2020
[root@localhost /]# 
  3. Configure ntp on master55 to sync with Alibaba Cloud's time server
    • Run vi /etc/ntp.conf
    • Comment out the four existing server lines and add server ntp.aliyun.com iburst below them
    • Start the service with sudo systemctl start ntpd; after a short wait run ntpq -p — a line prefixed with * means synchronization succeeded
    • systemctl start ntpd — start ntp
    • systemctl restart ntpd — restart ntp
    • systemctl enable ntpd.service — enable ntp at boot
    • ntpdc -c loopinfo — show the offset from the time server
    • Note: if hardware resources allow, dedicate a separate server as the time server and have the other servers sync from it
[root@master55 /]# vi /etc/ntp.conf

# server 0.centos.pool.ntp.org iburst
# server 1.centos.pool.ntp.org iburst
# server 2.centos.pool.ntp.org iburst
# server 3.centos.pool.ntp.org iburst

server ntp.aliyun.com iburst

[root@master55 /]#
[root@master55 /]# sudo systemctl start ntpd
[root@master55 /]# systemctl enable ntpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
[root@localhost /]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*203.107.6.88    10.137.38.86     2 u   37   64    1   53.476   -5.668   2.224
[root@master55 /]#
  4. Configure ntp on slave56 to sync with master55
    • Run vi /etc/ntp.conf
    • Comment out the four existing server lines and add server master55.xincan.cn iburst below them
    • Start the service with sudo systemctl start ntpd; after a short wait run ntpq -p — a line prefixed with * means synchronization succeeded
    • systemctl start ntpd — start ntp
    • systemctl restart ntpd — restart ntp
    • systemctl enable ntpd.service — enable ntp at boot
    • ntpdc -c loopinfo — show the offset from the time server
    • Note: if hardware resources allow, dedicate a separate server as the time server and have the other servers sync from it
[root@slave56 /]# vi /etc/ntp.conf

# server 0.centos.pool.ntp.org iburst
# server 1.centos.pool.ntp.org iburst
# server 2.centos.pool.ntp.org iburst
# server 3.centos.pool.ntp.org iburst

server master55.xincan.cn iburst

[root@slave56 /]#
[root@slave56 /]# sudo systemctl start ntpd
[root@slave56 /]# systemctl enable ntpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
[root@slave56 /]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*master55.xincan 203.107.6.88     3 u   12   64    1    0.367   10.659   0.054
[root@slave56 /]#
  5. Configure ntp on slave57 to sync with master55
    • Run vi /etc/ntp.conf
    • Comment out the four existing server lines and add server master55.xincan.cn iburst below them
    • Start the service with sudo systemctl start ntpd; after a short wait run ntpq -p — a line prefixed with * means synchronization succeeded
    • systemctl start ntpd — start ntp
    • systemctl restart ntpd — restart ntp
    • systemctl enable ntpd.service — enable ntp at boot
    • ntpdc -c loopinfo — show the offset from the time server
    • Note: if hardware resources allow, dedicate a separate server as the time server and have the other servers sync from it
[root@slave57 /]# vi /etc/ntp.conf

# server 0.centos.pool.ntp.org iburst
# server 1.centos.pool.ntp.org iburst
# server 2.centos.pool.ntp.org iburst
# server 3.centos.pool.ntp.org iburst

server master55.xincan.cn iburst

[root@slave57 /]#
[root@slave57 /]# sudo systemctl start ntpd
[root@slave57 /]# systemctl enable ntpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
[root@slave57 /]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*master55.xincan 203.107.6.88     3 u   12   64    1    0.367   10.659   0.054
[root@slave57 /]#
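
If a slave's clock is far off, ntpd can take a while to converge. An optional one-shot sync with ntpdate (pulled in earlier as an ntp dependency) speeds this up; a minimal sketch, run on slave56 and slave57:

systemctl stop ntpd                  # ntpdate cannot bind port 123 while ntpd is running
ntpdate master55.xincan.cn           # one-shot sync against the master's ntpd
systemctl start ntpd                 # resume the ntp daemon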

VI. Configure the Kubernetes bridge settings on all three servers

  1. Edit the file /etc/sysctl.d/k8s.conf
  2. Add the following lines
    • net.bridge.bridge-nf-call-ip6tables = 1
    • net.bridge.bridge-nf-call-iptables = 1
    • net.ipv4.ip_forward=1
[root@master55 /]# sudo bash -c 'cat << EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
EOF'

[root@master55 /]#

# Reload all sysctl configuration files so the new settings take effect
[root@master55 /]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
* Applying /etc/sysctl.conf ...
[root@master55 /]#
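
The net.bridge.* keys above only exist once the br_netfilter kernel module is loaded (it is loaded again in the ipvs section below). A minimal sketch to load it now and keep it loaded across reboots (the file name k8s.conf under /etc/modules-load.d/ is just a convention):

modprobe br_netfilter
cat << EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF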

VII. Configure the Kubernetes yum repository and disable SELinux on all three servers

  1. On each of the three servers, create the file kubernetes.repo under /etc/yum.repos.d/ with the following content
[root@master55 /]# sudo bash -c 'cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF'

[root@master55 /]#

  2. Disable SELinux on each of the three servers
[root@master55 /]# sudo setenforce 0
setenforce: SELinux is disabled
[root@master55 /]#
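
setenforce 0 only affects the running system (the output above shows SELinux is already disabled here); to make sure it stays off after a reboot, the config file can be updated as well. A minimal sketch:

sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config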

VIII. List the repository files on each server; output like the following means the setup succeeded

[root@master55 /]# cd /etc/yum.repos.d/ && ll
drwxr-xr-x. 2 root root  187 Jun 16 13:05 backup
-rw-r--r--. 1 root root 2523 Jun 16  2018 CentOS-Base.repo
-rw-r--r--. 1 root root 2424 Oct 19  2019 docker-ce.repo
-rw-r--r--. 1 root root  272 Jun 16 16:34 kubernetes.repo

IX. Set up passwordless SSH login between the three servers

  1. On master55, run ssh-keygen -t rsa and press Enter through every prompt; a public key file id_rsa.pub is generated under /root/.ssh/

[root@master55 /]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:I0PR5fMj01uGGb1Z3pbRFjzwTIFb2ONyZ1M2I9OVTNY root@master55.xincan.cn
The key's randomart image is:
+---[RSA 2048]----+
|      .. ..  .X**|
|       ...   *=%E|
|      .   o . B=X|
|     .     + * X*|
|      o S o * Bo*|
|       o . o = . |
|            .    |
|                 |
|                 |
+----[SHA256]-----+
[root@master55 /]#     
  2. Append the public key to authorized_keys
[root@master55 /]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[root@master55 /]# cd ~/.ssh/
[root@master55 .ssh]# ll
total 12
-rw-r--r-- 1 root root  405 Jun 16 17:03 authorized_keys
-rw------- 1 root root 1675 Jun 16 17:01 id_rsa
-rw-r--r-- 1 root root  405 Jun 16 17:01 id_rsa.pub
[root@master55 .ssh]#  
  3. Change the permissions of authorized_keys:
[root@master55 /]# chmod 600 ~/.ssh/authorized_keys
[root@master55 /]# 
  4. Distribute the ~/.ssh directory from master55 to the home directory of slave56 and slave57:
  • scp -r ~/.ssh/ root@slave56:~/.ssh/
  • scp -r ~/.ssh/ root@slave57:~/.ssh/
  • When prompted, type yes and then enter the login password of slave56 / slave57
  • Note: if slave56 or slave57 already has a .ssh directory, copy the newly generated files into it one by one instead of copying the whole directory; otherwise a nested .ssh directory is created inside the existing one and passwordless login will fail later
[root@master55 .ssh]# scp -r ~/.ssh/ root@slave56:~/.ssh/
The authenticity of host 'slave56 (192.168.1.56)' can't be established.
ECDSA key fingerprint is SHA256:KhL6Vyv6q5fHHcZ3+xoLn6W/mZ7SBAFD+n/TCXEHtSM.
ECDSA key fingerprint is MD5:71:35:87:3d:ff:73:04:fc:d7:a2:07:30:68:b8:62:5b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave56,192.168.1.56' (ECDSA) to the list of known hosts.
root@slave56's password:
id_rsa                                                                                             100% 1675     1.1MB/s   00:00
id_rsa.pub                                                                                         100%  405   282.4KB/s   00:00
authorized_keys                                                                                    100%  405   277.0KB/s   00:00
known_hosts                                                                                        100%  182   104.6KB/s   00:00
[root@master55 .ssh]# scp -r ~/.ssh/ root@slave57:~/.ssh/
The authenticity of host 'slave57 (192.168.1.57)' can't be established.
ECDSA key fingerprint is SHA256:Gfz+xXR217Yb2ZWOIMsRzSe+iynRvpxLnt98cI4kBRA.
ECDSA key fingerprint is MD5:8b:1d:cd:1d:24:79:de:80:c3:53:7c:d3:87:e0:d4:96.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave57,192.168.1.57' (ECDSA) to the list of known hosts.
root@slave57's password:
id_rsa                                                                                             100% 1675     1.0MB/s   00:00
id_rsa.pub                                                                                         100%  405   304.6KB/s   00:00
authorized_keys                                                                                    100%  405   352.7KB/s   00:00
known_hosts                                                                                        100%  364   271.2KB/s   00:00
[root@master55 .ssh]# 
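
As an alternative to copying the whole directory, ssh-copy-id (shipped with openssh-clients) appends the public key to the remote authorized_keys and avoids the nested-directory pitfall mentioned above; a minimal sketch, run on master55:

ssh-copy-id root@slave56
ssh-copy-id root@slave57
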
  5. Verify passwordless login between master55, slave56, and slave57
  • Verify from master55
[root@master55 /]# ssh root@slave56
Last login: Tue Jun 16 15:11:10 2020 from 192.168.1.182
[root@slave56 ~]# exit
logout
Connection to slave56 closed.
[root@master55 /]# ssh root@slave57
Last login: Tue Jun 16 15:11:23 2020 from 192.168.1.182
[root@slave57 ~]# exit
logout
Connection to slave57 closed.
[root@master55 /]#

Verify from slave56; the first connection only asks you to confirm the host fingerprint (type yes), later connections log in directly

[root@slave56 ~]# ssh root@master55
The authenticity of host 'master55 (192.168.1.55)' can't be established.
ECDSA key fingerprint is SHA256:Dv4+42UAUC3FCEqZjwxJECtUHMgAYUtD2UsRASyffFw.
ECDSA key fingerprint is MD5:fe:0b:32:39:20:9c:e1:3e:67:b7:3d:42:a1:22:df:2a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master55,192.168.1.55' (ECDSA) to the list of known hosts.
Last login: Tue Jun 16 15:58:21 2020 from 192.168.1.182
[root@master55 ~]# exit
logout
Connection to master55 closed.
[root@slave56 ~]# ssh root@master55
Last login: Tue Jun 16 17:17:38 2020 from 192.168.1.56
[root@master55 ~]# exit
logout
Connection to master55 closed.
[root@slave56 ~]# ssh root@slave57
The authenticity of host 'slave57 (192.168.1.57)' can't be established.
ECDSA key fingerprint is SHA256:Gfz+xXR217Yb2ZWOIMsRzSe+iynRvpxLnt98cI4kBRA.
ECDSA key fingerprint is MD5:8b:1d:cd:1d:24:79:de:80:c3:53:7c:d3:87:e0:d4:96.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave57,192.168.1.57' (ECDSA) to the list of known hosts.
Last login: Tue Jun 16 17:15:27 2020 from 192.168.1.55
[root@slave57 ~]# exit
logout
Connection to slave57 closed.
[root@slave56 ~]# ssh root@slave57
Last login: Tue Jun 16 17:17:59 2020 from 192.168.1.56
[root@slave57 ~]# exit
logout
Connection to slave57 closed.
[root@slave56 ~]#

Verify from slave57; the first connection only asks you to confirm the host fingerprint (type yes), later connections log in directly

[root@slave57 /]# ssh root@master55
The authenticity of host 'master55 (192.168.1.55)' can't be established.
ECDSA key fingerprint is SHA256:Dv4+42UAUC3FCEqZjwxJECtUHMgAYUtD2UsRASyffFw.
ECDSA key fingerprint is MD5:fe:0b:32:39:20:9c:e1:3e:67:b7:3d:42:a1:22:df:2a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master55,192.168.1.55' (ECDSA) to the list of known hosts.
Last login: Tue Jun 16 17:17:42 2020 from 192.168.1.56
[root@master55 ~]# exit
logout
Connection to master55 closed.
[root@slave57 yum.repos.d]# ssh root@master55
Last login: Tue Jun 16 17:19:31 2020 from 192.168.1.57
[root@master55 ~]# exit
logout
Connection to master55 closed.
[root@slave57 yum.repos.d]# ssh root@slave56
Last login: Tue Jun 16 17:15:11 2020 from 192.168.1.55
[root@slave56 ~]# exit
logout
Connection to slave56 closed.
[root@slave57 /]#

X. Tune kernel parameters for the Kubernetes installation (apply on every node)

  1. Tune the kernel parameters on master55, slave56, and slave57; the master node is used as the example below
    • Write the following lines into the kubernetes.conf file
    • vm.swappiness=0 — forbid the use of swap; it is only used when the system hits OOM
    • vm.overcommit_memory=1 — do not check whether enough physical memory is available
    • vm.panic_on_oom=0 — do not panic on OOM; let the OOM killer handle it

[root@master /]# cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
  2. Copy the generated kubernetes.conf file to /etc/sysctl.d/kubernetes.conf (only needed if it was created somewhere else; the heredoc above already writes it there)
[root@master /]# cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
  3. Apply the new settings immediately
[root@master /]# sysctl -p /etc/sysctl.d/kubernetes.conf
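
kubeadm's preflight checks also expect swap to be turned off entirely (vm.swappiness=0 alone does not disable it). A minimal sketch, assuming the swap entry in /etc/fstab should stay commented out permanently:

swapoff -a                                   # turn swap off for the running system
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab    # comment out swap entries so it stays off after reboot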

XI. Prerequisites for enabling ipvs in kube-proxy

  1. Load the br_netfilter module
[root@master /]# modprobe br_netfilter
  2. Make the ipvs kernel modules load at boot
[root@master /]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
[root@master /]#
  3. Make the module file executable, load the modules, and verify they are present
[root@master /]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack

XII. Install kubelet, kubeadm, and kubectl on all three servers

  1. Download and install on all three servers
    • The command below installs a pinned version
    • You can also omit the version to install the latest release
[root@master /]#  yum install -y kubelet-1.18.2 kubeadm-1.18.2 kubectl-1.18.2
  2. Enable and start kubelet on all three servers (it will keep restarting until kubeadm init/join runs; that is expected)
[root@master /]# systemctl enable kubelet && sudo systemctl start kubelet
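
Optionally confirm on every node that the pinned versions were actually installed; a minimal sketch:

kubeadm version -o short           # expect v1.18.2
kubelet --version
kubectl version --client --short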

XIII. Initialize Kubernetes with master55 as the control-plane node

  1. Run the following command to initialize the cluster
  • sudo kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.2 --apiserver-advertise-address 192.168.1.55 --pod-network-cidr=10.244.0.0/16 --token-ttl 0
[root@master55 /]# sudo kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.2 --apiserver-advertise-address 192.168.1.55 --pod-network-cidr=10.244.0.0/16 --token-ttl 0
W0616 17:24:47.742105    8831 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master55.xincan.cn kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.55]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master55.xincan.cn localhost] and IPs [192.168.1.55 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master55.xincan.cn localhost] and IPs [192.168.1.55 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0616 17:29:47.640484    8831 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0616 17:29:47.646613    8831 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 31.505848 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master55.xincan.cn as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master55.xincan.cn as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 991hr9.scqkkyphn1cjjcl7
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.55:6443 --token 991hr9.scqkkyphn1cjjcl7 \
    --discovery-token-ca-cert-hash sha256:1dcf2607e09f83160ce9bc99a941d9a6bd74e99b6b8d3adb63af800ffee19baf
[root@master55 /]#
  2. As instructed by the init output, run the following commands on master55
    • mkdir -p $HOME/.kube
    • sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    • sudo chown $(id -u):$(id -g) $HOME/.kube/config
    • Seeing the commands above plus the printed join command means the control-plane initialized successfully
[root@master55 /]# mkdir -p $HOME/.kube
[root@master55 /]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master55 /]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@master55 /]#
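
If the join command is misplaced later (or a token expires when --token-ttl 0 is not used), it can be regenerated on master55 at any time:

kubeadm token create --print-join-command
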
  3. As instructed by the init output, run the following command on slave56 and slave57
    • kubeadm join 192.168.1.55:6443 --token 991hr9.scqkkyphn1cjjcl7 --discovery-token-ca-cert-hash sha256:1dcf2607e09f83160ce9bc99a941d9a6bd74e99b6b8d3adb63af800ffee19baf
    • Output ending with "Run 'kubectl get nodes' on the control-plane..." means the node joined successfully
W0616 17:50:09.914108    4585 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

XIV. kubectl command auto-completion (strongly recommended)

  1. Of course, if you insist on typing every command out by hand, hats off to you
  2. Install the completion tools
[root@master55 ~]# yum install -y epel-release bash-completion
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
kubernetes/signature                                                                                                                                                   |  454 B  00:00:00
kubernetes/signature                                                                                                                                                   | 1.4 kB  00:00:00 !!!
Resolving Dependencies
--> Running transaction check
---> Package bash-completion.noarch 1:2.1-6.el7 will be updated
---> Package bash-completion.noarch 1:2.1-8.el7 will be an update
---> Package epel-release.noarch 0:7-11 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=========================================================================================================================================
 Package                     Arch                       Version                        Repository                                 Size
=========================================================================================================================================
Installing:
 epel-release                noarch                     7-11                           extras                                     15 k
Updating:
 bash-completion             noarch                     1:2.1-8.el7                    base                                       87 k

Transaction Summary
=========================================================================================================================================
Install  1 Package
Upgrade  1 Package

Total download size: 101 k
Downloading packages:
No Presto metadata available for base
(1/2): bash-completion-2.1-8.el7.noarch.rpm                                                                     |  87 kB  00:00:00
(2/2): epel-release-7-11.noarch.rpm                                                                             |  15 kB  00:00:00
-----------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                         225 kB/s | 101 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : epel-release-7-11.noarch                                                                           1/3
  Updating   : 1:bash-completion-2.1-8.el7.noarch                                                                 2/3
  Cleanup    : 1:bash-completion-2.1-6.el7.noarch                                                                 3/3
  Verifying  : 1:bash-completion-2.1-8.el7.noarch                                                                 1/3
  Verifying  : epel-release-7-11.noarch                                                                           2/3
  Verifying  : 1:bash-completion-2.1-6.el7.noarch                                                                 3/3

Installed:
  epel-release.noarch 0:7-11

Updated:
  bash-completion.noarch 1:2.1-8.el7

Complete!
[root@master55 ~]# 
  3. Enable the completion scripts
[root@master55 ~]# source /usr/share/bash-completion/bash_completion
[root@master55 ~]#
[root@master55 ~]# source <(kubectl completion bash)
[root@master55 ~]#
[root@master55 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
# Type kubectl, then press Tab twice
[root@master55 ~]# kubectl
alpha       apply       certificate    convert      delete       edit        get       options      proxy        scale          uncordon
annotate    attach      cluster-info   cordon       describe     exec        kustomize patch        replace      set            version
api-resources  auth     completion     cp           diff         explain     label     plugin       rollout      taint          wait
api-versions   autoscale config        create       drain        expose      logs      port-forward   run        top
[root@master55 ~]# kubectl
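
Optionally, a short alias can reuse the same completion; a minimal sketch (the alias name k is just a convention):

echo "alias k=kubectl" >> ~/.bashrc
echo "complete -F __start_kubectl k" >> ~/.bashrc
source ~/.bashrc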

XV. List all Kubernetes nodes

  1. On master55, list all nodes; all three servers are currently NotReady
    • That is because no network plugin has been installed yet
[root@master55 /]# kubectl get nodes
NAME                 STATUS     ROLES    AGE   VERSION
master55.xincan.cn   NotReady   master   16h   v1.18.3
slave56.xincan.cn    NotReady   <none>   15h   v1.18.3
slave57.xincan.cn    NotReady   <none>   16h   v1.18.3
[root@master55 /]#

XVI. List all Pods in all namespaces

  1. The coredns Pods stay in Pending; a Kubernetes network plugin needs to be installed
[root@master55 /]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   coredns-7ff77c879f-hz59h                     0/1     Pending   0          16h
kube-system   coredns-7ff77c879f-kkdpn                     0/1     Pending   0          16h
kube-system   etcd-master55.xincan.cn                      1/1     Running   2          16h
kube-system   kube-apiserver-master55.xincan.cn            1/1     Running   2          16h
kube-system   kube-controller-manager-master55.xincan.cn   1/1     Running   2          16h
kube-system   kube-proxy-kdxlv                             1/1     Running   2          16h
kube-system   kube-proxy-mxm5n                             1/1     Running   2          16h
kube-system   kube-proxy-sdnxb                             1/1     Running   2          15h
kube-system   kube-scheduler-master55.xincan.cn            1/1     Running   2          16h
[root@master55 /]#

XVII. Install the Kubernetes network plugin

  1. We use the Calico network plugin here (it comes with enterprise-grade support)

  2. On master55, create a directory for the downloaded plugin manifest; here we download calico-3.13.1.yaml

[root@master55 /]# mkdir k8s
[root@master55 /]# cd k8s/
[root@master55 k8s]# mkdir calico && cd calico
[root@master55 calico]# wget https://kuboard.cn/install-script/calico/calico-3.13.1.yaml
--2020-06-17 17:42:44--  https://kuboard.cn/install-script/calico/calico-3.13.1.yaml
Resolving kuboard.cn (kuboard.cn)... 119.3.92.138, 122.112.240.69
Connecting to kuboard.cn (kuboard.cn)|119.3.92.138|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 21079 (21K) [application/octet-stream]
Saving to: ‘calico-3.13.1.yaml’

100%[================================================================================>] 21,079      --.-K/s   in 0s

2020-06-17 17:42:51 (221 MB/s) - ‘calico-3.13.1.yaml’ saved [21079/21079]

[root@master55 calico]# ls
calico-3.13.1.yaml
[root@master55 calico]#
  3. Apply calico-3.13.1.yaml
[root@master55 calico]# kubectl apply -f calico-3.13.1.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
[root@master55 calico]#

XVIII. Check the nodes and Pods again

  1. Wait a little while

  2. List the nodes; their status is now Ready

[root@master55 /]# kubectl get nodes
NAME                 STATUS   ROLES    AGE   VERSION
master55.xincan.cn   Ready    master   16h   v1.18.3
slave56.xincan.cn    Ready    <none>   16h   v1.18.3
slave57.xincan.cn    Ready    <none>   16h   v1.18.3
[root@master55 /]# 
  3. List the Pods; they are all Running
[root@master55 /]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5b8b769fcd-cbqpr     1/1     Running   0          6m21s
kube-system   calico-node-fnv55                            1/1     Running   0          6m21s
kube-system   calico-node-rbpc8                            1/1     Running   0          6m21s
kube-system   calico-node-xrsbf                            1/1     Running   0          6m21s
kube-system   coredns-7ff77c879f-hz59h                     1/1     Running   0          16h
kube-system   coredns-7ff77c879f-kkdpn                     1/1     Running   0          16h
kube-system   etcd-master55.xincan.cn                      1/1     Running   2          16h
kube-system   kube-apiserver-master55.xincan.cn            1/1     Running   2          16h
kube-system   kube-controller-manager-master55.xincan.cn   1/1     Running   2          16h
kube-system   kube-proxy-kdxlv                             1/1     Running   2          16h
kube-system   kube-proxy-mxm5n                             1/1     Running   2          16h
kube-system   kube-proxy-sdnxb                             1/1     Running   2          16h
kube-system   kube-scheduler-master55.xincan.cn            1/1     Running   2          16h
[root@master55 /]# 

XIX. Enable IPVS in kube-proxy (replace iptables with IPVS)

  1. List the ConfigMaps in the kube-system namespace
[root@master /]# kubectl -n kube-system get cm
NAME                                 DATA   AGE
calico-config                        4      2d18h
coredns                              1      2d19h
extension-apiserver-authentication   6      2d19h
kube-proxy                           2      2d19h
kube-root-ca.crt                     1      2d19h
kubeadm-config                       2      2d19h
kubelet-config-1.21                  1      2d19h
[root@master /]#
  2. Edit the kube-proxy ConfigMap and change mode: "" to mode: "ipvs", then save and exit
[root@master /]# kubectl -n kube-system edit cm kube-proxy

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    bindAddressHardFail: false
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    # mode: ""
    mode: "ipvs" # 增加ipvs
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    showHiddenMetricsForVersion: ""
    udpIdleTimeout: 0s
    winkernel:
      enableDSR: false
 ...........
 
 configmap/kube-proxy edited
[root@master /]#
  3. Delete all existing kube-proxy Pods in the kube-system namespace so they get recreated with the new mode
[root@master /]# kubectl -n kube-system get pod | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-57fpz" deleted
pod "kube-proxy-dkd9d" deleted
pod "kube-proxy-fznb9" deleted
pod "kube-proxy-p7k4j" deleted
[root@master /]#
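
On reasonably recent kubectl versions the same restart can be done in one command instead of deleting the Pods individually; a minimal sketch:

kubectl -n kube-system rollout restart daemonset kube-proxy
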
  4. Check that the new kube-proxy Pods are running
[root@master /]# kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-5s47c                                   1/1     Running   0          46s
kube-proxy-8jbf5                                   1/1     Running   0          37s
kube-proxy-cf7gl                                   1/1     Running   0          47s
kube-proxy-rsrvp                                   1/1     Running   0          49s
[root@master /]#
  5. Check the log of one of the kube-proxy Pods; the line Using ipvs Proxier. means IPVS was enabled successfully
[root@master /]# kubectl -n kube-system logs kube-proxy-5s47c
I0527 01:53:23.503945       1 node.go:172] Successfully retrieved node IP: 192.168.1.81
I0527 01:53:23.504014       1 server_others.go:140] Detected node IP 192.168.1.81
I0527 01:53:23.521393       1 server_others.go:206] kube-proxy running in dual-stack mode, IPv4-primary
I0527 01:53:23.521436       1 server_others.go:274] Using ipvs Proxier.
I0527 01:53:23.521456       1 server_others.go:276] creating dualStackProxier for ipvs.
W0527 01:53:23.521471       1 server_others.go:512] detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6
W0527 01:53:23.521839       1 proxier.go:445] IPVS scheduler not specified, use rr by default
W0527 01:53:23.522056       1 proxier.go:445] IPVS scheduler not specified, use rr by default
W0527 01:53:23.522089       1 ipset.go:113] ipset name truncated; [KUBE-6-LOAD-BALANCER-SOURCE-CIDR] -> [KUBE-6-LOAD-BALANCER-SOURCE-CID]
W0527 01:53:23.522108       1 ipset.go:113] ipset name truncated; [KUBE-6-NODE-PORT-LOCAL-SCTP-HASH] -> [KUBE-6-NODE-PORT-LOCAL-SCTP-HAS]
I0527 01:53:23.522380       1 server.go:643] Version: v1.21.1
I0527 01:53:23.524435       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0527 01:53:23.524731       1 config.go:315] Starting service config controller
I0527 01:53:23.524755       1 shared_informer.go:240] Waiting for caches to sync for service config
I0527 01:53:23.525167       1 config.go:224] Starting endpoint slice config controller
I0527 01:53:23.525177       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
W0527 01:53:23.527235       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
W0527 01:53:23.534166       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
I0527 01:53:23.625536       1 shared_informer.go:247] Caches are synced for service config
I0527 01:53:23.629189       1 shared_informer.go:247] Caches are synced for endpoint slice config
[root@master mysql]#
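
Optionally, the resulting IPVS rules can be inspected directly with ipvsadm (install it first if it is missing); a minimal sketch:

yum install -y ipvsadm
ipvsadm -Ln          # list IPVS virtual servers and their real-server backends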

This completes the Kubernetes installation.