[Notes] Learning k8s: Building a k8s Cluster with kubeadm


1. Pinned versions

  • Docker 18.09.0

  • kubeadm-1.14.0-0

  • kubelet-1.14.0-0

  • kubectl-1.14.0-0

  • k8s.gcr.io images:

    • k8s.gcr.io/kube-apiserver:v1.14.0
    • k8s.gcr.io/kube-controller-manager:v1.14.0
    • k8s.gcr.io/kube-scheduler:v1.14.0
    • k8s.gcr.io/kube-proxy:v1.14.0
    • k8s.gcr.io/pause:3.1
    • k8s.gcr.io/etcd:3.3.10
    • k8s.gcr.io/coredns:1.3.1
  • calico:v3.9

2. Prepare three CentOS machines

Machine requirements: at least 2 cores and 2 GB RAM; since Docker will be installed, the kernel must be version 3.10 or higher.

The k8s docs state the requirements: kubernetes.io/docs/setup/…

2 GB or more of RAM per machine (any less will leave little room for your apps).
2 CPUs or more.

I prepared one 4-core/8 GB machine and two temporary 2-core/2 GB machines.

(screenshot: node.png)

3. Set the hostname and the hosts file

1.`On the master node`
hostnamectl set-hostname k8s-master
2.`On each of the two worker nodes, respectively`
hostnamectl set-hostname k8s-work-1
hostnamectl set-hostname k8s-work-2
3.`On all three machines`
vim /etc/hosts
10.0.16.13 k8s-master
10.0.4.14 k8s-work-1
10.0.4.2 k8s-work-2
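An optional sanity check that each hosts entry resolves and answers (names as configured above):

  for h in k8s-master k8s-work-1 k8s-work-2; do ping -c 1 $h; done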

4. Configure the system prerequisites

1.Disable the firewall
  systemctl stop firewalld && systemctl disable firewalld
  tips: a hard-won lesson here. Because these are cloud machines, firewall rules also have to be opened in the cloud provider's console... kubeadm join kept hanging, and it took a lot of searching before I thought to try the provider's console.

2.Disable SELinux
  setenforce 0
  sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
  
3.Disable swap
  swapoff -a
  sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
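A quick way to confirm swap is really off (the Swap line should be all zeros):

  free -m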
  
4.Set iptables ACCEPT rules
  iptables -F && iptables -X && iptables \
    -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT

5.Set the kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
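A small extra step that is safe to run: on CentOS 7 the two bridge sysctls above only exist once the br_netfilter module is loaded, so load it first and verify the values took effect:

  modprobe br_netfilter
  sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables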
​

5. Install Docker

1. `Remove old installations`
 yum remove -y docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-selinux \
                  docker-engine-selinux \
                  docker-engine
​
2.`Update yum`
  yum update -y
  
3.`Install the required dependencies`
  yum install -y yum-utils device-mapper-persistent-data lvm2
  
4.`Add the yum repo`
  yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  
5.`Refresh the yum cache`
  yum makecache fast
  
6.`Install docker 18.09.0`
  yum install -y docker-ce-18.09.0 docker-ce-cli-18.09.0 containerd.io
  
7.`Start docker and enable it at boot`
  systemctl start docker && systemctl enable docker
 
8.`Configure a registry mirror`
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["阿里云镜像加速地址"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
​
9.`Verify that docker installed correctly`
  docker info
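To confirm the pinned version specifically:

  # should print 18.09.0
  docker version --format '{{.Server.Version}}'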

6. Install kubelet, kubectl, kubeadm

Official docs: kubernetes.io/docs/setup/…

1.`Configure the yum repo (use the Aliyun mirror)`
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
​
2.`Install kubectl, kubelet, and kubeadm at version 1.14.0-0`
  yum install -y kubectl-1.14.0-0 kubelet-1.14.0-0 kubeadm-1.14.0-0
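To verify that all three binaries landed at the expected version:

  kubeadm version
  kubelet --version
  kubectl version --client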
  
3.`Use the same cgroup driver for docker and kubelet`
  # Set docker's cgroup driver: add this key to /etc/docker/daemon.json
  "exec-opts": ["native.cgroupdriver=systemd"]
  # Reload the config and restart docker:
  systemctl daemon-reload && systemctl restart docker
  # kubelet must use the same driver. On 1.14 kubeadm detects it from docker during init;
  # if 10-kubeadm.conf sets one explicitly, make sure it is systemd as well:
  sed -i "s/cgroup-driver=cgroupfs/cgroup-driver=systemd/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
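A quick check that docker picked up the new driver (it should report systemd):

  docker info | grep -i "cgroup driver"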
  
4.`Enable kubelet at boot and start it`
  # It's fine if it fails to start at this point; kubeadm init will restart it
  systemctl enable kubelet && systemctl start kubelet

7. Manually pull the k8s component images from a domestic mirror

1. `Check which images kubeadm needs`
  kubeadm config images list
# ====================================================================================
k8s.gcr.io/kube-apiserver:v1.14.0
k8s.gcr.io/kube-controller-manager:v1.14.0
k8s.gcr.io/kube-scheduler:v1.14.0
k8s.gcr.io/kube-proxy:v1.14.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
# ====================================================================================
​
2.`Write a script to pull the images from a domestic mirror`
# Create a kubeadm.sh script that pulls each image, retags it, and removes the mirror tag
  vi kubeadm.sh
# ====================================================================================
#!/bin/bash
set -e
KUBE_VERSION=v1.14.0
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
CORE_DNS_VERSION=1.3.1
GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers
images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})
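# pull each image from the Aliyun mirror, retag it to the k8s.gcr.io name kubeadm
# expects, then remove the mirror-tagged copy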
for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi $ALIYUN_URL/$imageName
done
# ====================================================================================
​
3.`Run the script to pull the images`
  sh ./kubeadm.sh
  # list the images
  docker images
  
#====================================================================================
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.14.0             5cd54e388aba        3 years ago         82.1MB
k8s.gcr.io/kube-controller-manager   v1.14.0             b95b1efa0436        3 years ago         158MB
k8s.gcr.io/kube-scheduler            v1.14.0             00638a24688b        3 years ago         81.6MB
k8s.gcr.io/kube-apiserver            v1.14.0             ecf910f40d6e        3 years ago         210MB
k8s.gcr.io/coredns                   1.3.1               eb516548c180        3 years ago         40.3MB
k8s.gcr.io/etcd                      3.3.10              2c4adeb21b4f        3 years ago         258MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        4 years ago         742kB
​
#====================================================================================

8. Initialize the master node

8.1 Initialize the master node with kubeadm init

Official docs: kubernetes.io/docs/refere…

1.`Initialize the master node`
  # Reset any previous cluster state (skip if you've never deployed here before)
  kubeadm reset
  # Initialize
  kubeadm init --kubernetes-version=1.14.0 --apiserver-advertise-address=10.0.16.13 --pod-network-cidr=10.244.0.0/16
​
# Note: be sure to save the kubeadm join command printed at the end.
# ====================================================================================
Your Kubernetes control-plane has initialized successfully!
​
To start using your cluster, you need to run the following as a regular user:
​
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
​
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
​
Then you can join any number of worker nodes by running the following on each as root:
​
kubeadm join 10.0.16.13:6443 --token  glcp46.15givrbmknthpiql \
    --discovery-token-ca-cert-hash sha256:53f9d887bc638279f0d4976d6df98cd4fbf4ba9d7d0fd531d4a8c5871ba239bc
# =====================================================================================
​
2.`Run the commands suggested in the output above`
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
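A quick way to confirm kubectl can now reach the new control plane:

  kubectl cluster-info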
​
3.`Save the join command for when the worker nodes join later`
  kubeadm join 10.0.16.13:6443 --token glcp46.15givrbmknthpiql \
      --discovery-token-ca-cert-hash sha256:53f9d887bc638279f0d4976d6df98cd4fbf4ba9d7d0fd531d4a8c5871ba239bc
​
4.`Check whether startup succeeded (the component pods should be up)`
  kubectl get pods -n kube-system
​
5.`Health check`
  curl -k https://localhost:6443/healthz
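Besides the raw healthz endpoint (which should return "ok"), the built-in component summary gives a quick view of the scheduler, controller-manager, and etcd:

  kubectl get componentstatuses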

8.2 Deploy the calico network plugin

For pod networking, k8s simply delegates to third-party plugins; just pick one of the many available network plugins.

kubernetes.io/docs/concep…

1.`Calico network plugin`
  https://projectcalico.docs.tigera.io/getting-started/kubernetes/quickstart
​
2.`Check which images are needed and pull them manually`
  curl https://docs.projectcalico.org/v3.9/manifests/calico.yaml | grep image
  
3.`Write a script to pull the images`
vim calico_pull.sh
# =====================================================================================
#!/bin/bash
set -e
CALICO_VERSION=$1
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/tiangeer
images=(calico/cni:${CALICO_VERSION}
calico/pod2daemon-flexvol:${CALICO_VERSION}
calico/node:${CALICO_VERSION}
calico/kube-controllers:${CALICO_VERSION})
for imageName in ${images[@]} ; do
  docker pull $imageName
  #docker tag  $imageName $ALIYUN_URL/$imageName
  #docker push $ALIYUN_URL/$imageName
  #docker rmi $ALIYUN_URL/$imageName
done
# =====================================================================================
# the script takes the Calico image tag as an argument; use the tag shown by the grep above, e.g.:
sh ./calico_pull.sh v3.9.1
​
4.`Download the calico yaml, adjust the config, and apply it`
  wget https://docs.projectcalico.org/v3.9/manifests/calico.yaml
  # My 4-core/8 GB machine runs other workloads and already had docker containers,
  # so applying the stock yaml failed with: Readiness probe failed: calico/node is not ready: BIRD is not ready: BGP not established with
  # Fix: edit calico.yaml and add two lines to the calico-node container's env so calico picks the right interface:
- name: IP_AUTODETECTION_METHOD
  value: "interface=eth*"
  # Also check that CALICO_IPV4POOL_CIDR in the manifest (192.168.0.0/16 by default) matches the --pod-network-cidr=10.244.0.0/16 passed to kubeadm init
  # Apply it
  kubectl apply -f calico.yaml
5.`Confirm the installation succeeded`
  kubectl get pods --all-namespaces -w
​

(screenshot: calico.png)

9. Join the worker nodes to the cluster

1.`Run on each of the two worker nodes`
kubeadm join 10.0.16.13:6443 --token glcp46.15givrbmknthpiql \
    --discovery-token-ca-cert-hash sha256:53f9d887bc638279f0d4976d6df98cd4fbf4ba9d7d0fd531d4a8c5871ba239bc
​
2.`A pitfall I hit here: the join hung indefinitely`
  # Resolution: what I found online suggested an expired token, but replacing it didn't help. Others blamed the firewall, yet I had already disabled firewalld. The actual cause: on cloud servers the firewall must also be opened in the cloud provider's console.
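If the token really has expired (the default lifetime is 24 hours), a fresh join command can be generated on the master:

  # run on the master; prints a complete kubeadm join command with a new token
  kubeadm token create --print-join-command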
  
3.`Verify the cluster from the master node`
  kubectl get nodes
# =====================================================================================
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   15h   v1.14.0
k8s-work-1   Ready    <none>   13h   v1.14.0
k8s-work-2   Ready    <none>   13h   v1.14.0
# =====================================================================================
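As an optional smoke test beyond kubectl get nodes, scheduling a throwaway pod confirms that scheduling, image pulls, and pod networking work end to end (the pod name here is arbitrary):

  kubectl run nginx-test --image=nginx --restart=Never
  kubectl get pod nginx-test -o wide   # should land on a worker and reach Running
  kubectl delete pod nginx-test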
