Installing a Kubernetes 1.15.0 Cluster with kubeadm


Environment Preparation and Setup

Official documentation: kubernetes.io/docs/setup/…

Machine preparation
Prepare at least two machines: one as the master and one as a worker.

Role     Hostname   Configuration
master   test01v    OS: Linux (CentOS 7; other distributions work similarly, see the official docs); CPU >= 2 cores, memory >= 2 GB
worker   test02v    OS: Linux (CentOS 7; other distributions work similarly, see the official docs); CPU >= 2 cores, memory >= 2 GB

The system used in this article: Linux ${test01v_hostname} 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux

Configure hostnames in /etc/hosts

[root@test01v ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
${test01v_ip} ${test01v_hostname}
${test02v_ip} ${test02v_hostname}

[root@test02v ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
${test01v_ip} ${test01v_hostname}
${test02v_ip} ${test02v_hostname}

Verify that the MAC address and product_uuid are unique on every node

[root@test01v ~]# ifconfig -a    # check the MAC address
[root@test01v ~]# cat /sys/class/dmi/id/product_uuid    # check the product_uuid

Disable the firewall

[root@test01v ~]# systemctl stop firewalld.service
[root@test01v ~]# systemctl disable firewalld.service
[root@test01v ~]# firewall-cmd --state

Disable SELinux

Edit /etc/selinux/config and set SELINUX=disabled, then reboot the machine.
[root@test01v ~]# sestatus
SELinux status:                 disabled 
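The same change can be made non-interactively; a minimal sketch, assuming the stock CentOS 7 config file:

# Set SELINUX=disabled in place (takes effect after the reboot)
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config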

Disable swap

Edit /etc/fstab and comment out the swap entry, then reboot the machine.
[root@test01v ~]# vim /etc/fstab 
#/dev/mapper/cl-swap     swap                    swap    defaults        0 0
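Swap can also be turned off right away, without waiting for the reboot; a sketch assuming GNU sed:

# Turn swap off for the current boot
swapoff -a
# Comment out any active swap entry so it stays off across reboots
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab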

Install Docker

Docker must be installed on both the master node and the worker nodes.

# Remove any old Docker packages
yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

# Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2

# Add the Docker CE repository (Aliyun mirror) and refresh the yum cache
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sudo yum makecache fast

# Install Docker 18.09 (use 18.09; Kubernetes 1.15 does not yet support the latest Docker 19.x)
yum -y install docker-ce-18.09.9-3.el7 docker-ce-cli-18.09.9-3.el7 containerd.io

# Start Docker and enable it at boot
sudo systemctl start docker
sudo systemctl enable docker

# Configure a registry mirror (substitute your own Aliyun accelerator URL) and the systemd cgroup driver
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["${用自己的阿里云镜像地址}"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo systemctl daemon-reload
sudo systemctl restart docker

# Put SELinux into permissive mode for the current session and verify (complements the SELinux step above)
setenforce 0
getenforce
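A quick sanity check that Docker came back up with the intended settings:

# Should report "Cgroup Driver: systemd" and show the Registry Mirrors section
docker info | grep -iE 'cgroup driver|registry mirrors'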

Installing the Kubernetes cluster

Step 1: Base installation

This base installation is required on both the master node and the worker nodes.

Add the package repository

The repository address in the official docs is unreachable from mainland China, so this article uses the Aliyun mirror instead. Run the following:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

Install kubelet, kubeadm, and kubectl

Run the install command (the --disableexcludes=kubernetes flag bypasses the exclude=kube* line in the repo file, which is there to keep a routine yum update from upgrading these packages unintentionally):
[root@test01v ~]# yum install -y kubelet-1.15.0-0 kubeadm-1.15.0-0 kubectl-1.15.0-0 --disableexcludes=kubernetes

Output:

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.15.0-0 will be installed
--> Processing Dependency: kubernetes-cni >= 0.7.5 for package: kubeadm-1.15.0-0.x86_64
---> Package kubectl.x86_64 0:1.15.0-0 will be installed
---> Package kubelet.x86_64 0:1.15.0-0 will be installed
--> Running transaction check
---> Package kubernetes-cni.x86_64 0:1.1.1-0 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

============================================================================================================================================================
 Package                                  Arch                             Version                               Repository                            Size
============================================================================================================================================================
Installing:
 kubeadm                                  x86_64                           1.15.0-0                              kubernetes                           8.9 M
 kubectl                                  x86_64                           1.15.0-0                              kubernetes                           9.5 M
 kubelet                                  x86_64                           1.15.0-0                              kubernetes                            22 M
Installing for dependencies:
 kubernetes-cni                           x86_64                           1.1.1-0                               kubernetes                            15 M

Transaction Summary
============================================================================================================================================================
Install  3 Packages (+1 Dependent package)

Total download size: 55 M
Installed size: 239 M
Downloading packages:
(1/4): 7143f62ad72a1eb1849d5c1e9490567d405870d2c00ab2b577f1f3bdf9f547ba-kubeadm-1.15.0-0.x86_64.rpm                                  | 8.9 MB  00:00:28     
(2/4): 3d5dd3e6a783afcd660f9954dec3999efa7e498cac2c14d63725fafa1b264f14-kubectl-1.15.0-0.x86_64.rpm                                  | 9.5 MB  00:00:30     
(3/4): 14083ac8b11792469524dae98ebb6905b3921923937d6d733b8abb58113082b7-kubernetes-cni-1.1.1-0.x86_64.rpm                            |  15 MB  00:00:52     
(4/4): 557c2f4e11a3ab262c72a52d240f2f440c63f539911ff5e05237904893fc36bb-kubelet-1.15.0-0.x86_64.rpm                                  |  22 MB  00:01:17     
------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                       532 kB/s |  55 MB  00:01:46     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : kubernetes-cni-1.1.1-0.x86_64                                                                                                            1/4 
  Installing : kubelet-1.15.0-0.x86_64                                                                                                                  2/4 
  Installing : kubectl-1.15.0-0.x86_64                                                                                                                  3/4 
  Installing : kubeadm-1.15.0-0.x86_64                                                                                                                  4/4 
  Verifying  : kubelet-1.15.0-0.x86_64                                                                                                                  1/4 
  Verifying  : kubernetes-cni-1.1.1-0.x86_64                                                                                                            2/4 
  Verifying  : kubeadm-1.15.0-0.x86_64                                                                                                                  3/4 
  Verifying  : kubectl-1.15.0-0.x86_64                                                                                                                  4/4 

Installed:
  kubeadm.x86_64 0:1.15.0-0                          kubectl.x86_64 0:1.15.0-0                          kubelet.x86_64 0:1.15.0-0                         

Dependency Installed:
  kubernetes-cni.x86_64 0:1.1.1-0                                                                                                                           

Complete!

Check the installed versions

[root@test01v ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:37:41Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

[root@test01v ~]# kubectl version --client
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

[root@test01v ~]# kubelet --version
Kubernetes v1.15.0

Enable and start kubelet

[root@test01v ~]# systemctl enable kubelet && systemctl start kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
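At this point kubelet restarts in a crash loop every few seconds. That is expected: it has no configuration to run with until kubeadm init (or kubeadm join) writes one. You can observe this with:

[root@test01v ~]# systemctl status kubelet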

Adjust kernel network parameters

Run the following to write the configuration:

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
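On some systems the net.bridge.* keys only appear once the br_netfilter kernel module is loaded; a sketch of loading it now and on every boot, assuming a stock CentOS 7 kernel:

# Load the bridge netfilter module immediately
modprobe br_netfilter
# Have systemd load it automatically at boot
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF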

Apply the configuration:
[root@test01v ~]# sysctl --system

Output:

* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...

Step 2: Initialize the master

Run this step only on the master node.

Generate the init configuration

1. Generate the default init YAML file:
[root@test01v ~]# kubeadm config print init-defaults > kubeadm-init.yaml

2. Edit the YAML file (see the sed sketch after the version check below):
1) Change advertiseAddress: 1.2.3.4 to this machine's address, ${test01v_ip}
2) Change imageRepository: k8s.gcr.io to imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
3) Check that the Kubernetes version is correct, and fix it if not:

[root@test01v ~]# cat kubeadm-init.yaml | grep kubernetesVersion
kubernetesVersion: v1.15.0
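For reference, a sketch of making the first two edits non-interactively (the IP is this article's placeholder):

# Point the advertise address at the master's own IP
sed -i "s/advertiseAddress: 1.2.3.4/advertiseAddress: ${test01v_ip}/" kubeadm-init.yaml
# Switch the image repository to the Aliyun mirror
sed -i "s#imageRepository: k8s.gcr.io#imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers#" kubeadm-init.yaml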

The complete file after editing:

[root@test01v ~]# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: ${test01v_ip}
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: ${test01v_hostname}
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
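Before pulling, you can list the exact images this configuration resolves to:

[root@test01v ~]# kubeadm config images list --config kubeadm-init.yaml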

Pull the images

[root@test01v ~]# kubeadm config images pull --config kubeadm-init.yaml

Output:
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.15.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.15.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.15.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.15.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1
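To confirm the images landed in the local Docker cache:

[root@test01v ~]# docker images | grep registry.cn-hangzhou.aliyuncs.com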

Run the initialization

[root@test01v ~]# kubeadm init --config kubeadm-init.yaml

Output:
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [${test01v_hostname} localhost] and IPs [${test01v_ip} 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [${test01v_hostname} localhost] and IPs [${test01v_ip} 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [${test01v_hostname} kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 ${test01v_ip}]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.005492 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ${test01v_hostname} as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node ${test01v_hostname} as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join ${test01v_hostname}:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:0c04326d85789cd8db0d2213f9da29f960f2b118fd8da8c6bfc8f9862ffe345e

Configure the environment so the current user can run kubectl

[root@test01v ~]# mkdir -p $HOME/.kube
[root@test01v ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@test01v ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@test01v ~]# kubectl get node
NAME                          STATUS     ROLES    AGE   VERSION
${test01v_hostname}   NotReady   master   12m   v1.15.0

Note: the node shows `NotReady` here because the pod network has not been configured yet.
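You can also check the system pods at this stage; the CoreDNS pods stay Pending until a network add-on is installed:

[root@test01v ~]# kubectl get pods -n kube-system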

Step 3: Configure the pod network

1. Download the manifest

[root@test01v ~]# wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml --no-check-certificate

[root@test01v ~]# cat kubeadm-init.yaml | grep serviceSubnet:
Output:
  serviceSubnet: 10.96.0.0/12

Open calico.yaml and change 192.168.0.0/16 to 10.96.0.0/12.
Note that the CIDR in calico.yaml must stay consistent with kubeadm-init.yaml: either edit kubeadm-init.yaml before initializing, or edit calico.yaml afterwards, as done here. (Strictly speaking, this value in calico.yaml, CALICO_IPV4POOL_CIDR, is the pod network CIDR, so it should match a podSubnet entry rather than the service subnet, and ideally the pod and service ranges should not overlap.)
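To locate the setting inside the manifest (CALICO_IPV4POOL_CIDR is the variable name used by the Calico v3.8 manifest):

[root@test01v ~]# grep -A1 CALICO_IPV4POOL_CIDR calico.yaml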

2. Apply the network manifest

[root@test01v ~]# kubectl apply -f calico.yaml
Output:
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
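It can take a minute or two for the calico-node and CoreDNS pods to reach Running; you can watch their progress with:

[root@test01v ~]# kubectl get pods -n kube-system -w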

3. Verify
The master node's status should now show Ready:

[root@test01v ~]# kubectl get node
NAME                          STATUS   ROLES    AGE   VERSION
${test01v_hostname}   Ready    master   25m   v1.15.0

Step 4: Add worker nodes

Some of the following steps run on the master and some on the worker node; the hostname in the shell prompt shows which is which.

The join command has the following form (run it on the machine being added):
kubeadm join --token [TOKEN] [k8s-master-ip]:6443 --discovery-token-ca-cert-hash sha256:[SHA256]

1. Get the TOKEN value needed for the join

[root@test01v ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION   EXTRA GROUPS
abcdef.0123456789abcdef   18h       2023-01-31T11:12:41+08:00   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token

Note:
Each token is only valid for 24 hours. If no valid token exists, create a new one in either of the following two ways.
Option 1:

[root@test01v ~]# kubeadm token create

Option 2:

[root@test01v ~]# kubeadm token create --print-join-command
Output:
kubeadm join ${test01v_ip}:6443 --token mg4o13.4ilr1oi605tj850w     --discovery-token-ca-cert-hash sha256:363b5b8525ddb86f4dc157f059e40c864223add26ef53d0cfc9becc3cbae8ad3

2. Get the SHA256 value needed for the join

[root@test01v ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
Output:
0c04326d85789cd8db0d2213f9da29f960f2b118fd8da8c6bfc8f9862ffe345e

3. Run the join command (on the node being added)

[root@test02v ~]# kubeadm join --token abcdef.0123456789abcdef ${test01v_ip}:6443 --discovery-token-ca-cert-hash sha256:0c04326d85789cd8db0d2213f9da29f960f2b118fd8da8c6bfc8f9862ffe345e

Output:
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

4. Check that the join succeeded
On the master, check the node status. A newly joined node shows NotReady at first and turns Ready after about a minute:

[root@test01v ~]# kubectl get node
NAME                          STATUS     ROLES    AGE     VERSION
${test01v_hostname}   Ready      master   6h49m   v1.15.0
${test02v_hostname}   Ready      <none>   3m57s   v1.15.0
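Optionally, confirm that the per-node daemons (kube-proxy and calico-node) were scheduled onto the new worker:

[root@test01v ~]# kubectl get pods -n kube-system -o wide | grep ${test02v_hostname}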