Setting Up a Kubernetes Cluster (2020)


The official documentation is already very thorough and offers multiple solutions for each class of component, but for a newcomer, too many choices are a burden in themselves. The official docs also aim to be universal, and some of their steps are unfriendly in our environment; for example, downloads of certain components simply time out.

Environment

Three ECS instances were provisioned on Alibaba Cloud, running Alibaba Cloud Linux 2.1903 LTS (64-bit):

k8s-master 192.168.0.10

k8s-node001 192.168.0.11

k8s-node002 192.168.0.12
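If these hostnames do not resolve between the instances (an assumption; your VPC's DNS setup may already handle this), one simple option is to map them in /etc/hosts on every node:

```shell
# Hypothetical step: map the cluster hostnames on every node.
# Skip this if your VPC DNS already resolves these names.
cat >> /etc/hosts <<EOF
192.168.0.10 k8s-master
192.168.0.11 k8s-node001
192.168.0.12 k8s-node002
EOF
```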

Installing a Container Runtime (CRI)

We'll use Docker as the container runtime.

Install the required packages:

yum install yum-utils device-mapper-persistent-data lvm2

Add the Docker repository, using Alibaba Cloud's mirror:

sudo yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Install Docker CE:

yum update -y && yum install -y \
    containerd.io-1.2.13 \
    docker-ce-19.03.11 \
    docker-ce-cli-19.03.11

Create the /etc/docker directory:

mkdir /etc/docker

Configure the Docker daemon:

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
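As an optional sanity check (a sketch, assuming python3 is installed), you can validate the file before restarting Docker, since a malformed daemon.json prevents dockerd from starting:

```shell
# Validate a Docker daemon.json; prints a confirmation when it parses cleanly.
# The path argument is optional and defaults to Docker's standard location.
validate_docker_config() {
  local f="${1:-/etc/docker/daemon.json}"
  python3 -m json.tool "$f" > /dev/null && echo "valid JSON: $f"
}
```

For example, run `validate_docker_config` after editing the file; a nonzero exit status means the JSON is broken.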

Reload systemd and restart Docker:

systemctl daemon-reload
systemctl restart docker

Enable Docker to start on boot:

sudo systemctl enable docker

Installing kubeadm, kubelet, and kubectl

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Set SELinux to permissive mode (effectively disabling it)
setenforce 0

sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable --now kubelet
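One more preparation step worth doing here: kubeadm's preflight checks fail if swap is enabled. Alibaba Cloud ECS images generally ship with swap off, so this may be a no-op, but it is cheap insurance. A sketch, with the fstab path parameterized for illustration:

```shell
# Disable swap: turn it off now, and comment out swap entries in fstab so it
# stays off after reboot. The fstab path defaults to the real /etc/fstab.
disable_swap() {
  swapoff -a 2>/dev/null
  sed -i '/ swap / s/^/#/' "${1:-/etc/fstab}"
}
```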
  • Setting SELinux to permissive mode via setenforce 0 and the sed command above effectively disables it. This is required to allow containers to access the host filesystem, which pod networks need, for example. You have to do this until SELinux support is improved in the kubelet.

  • Some RHEL/CentOS 7 users have run into problems where traffic is routed incorrectly because iptables is bypassed. You should make sure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config:

    cat <<EOF > /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl --system

  • Make sure the br_netfilter module is loaded before this step. You can check with lsmod | grep br_netfilter; to load it explicitly, run modprobe br_netfilter.

Creating the Cluster with kubeadm

kubeadm init \
    --image-repository registry.aliyuncs.com/google_containers \
    --pod-network-cidr=10.244.0.0/16 \
    --apiserver-advertise-address=192.168.0.10
  • --image-repository: use Alibaba Cloud's mirror as the image repository

  • --pod-network-cidr: we'll use flannel as the CNI, which requires specifying the pod CIDR (10.244.0.0/16 is flannel's default)

    W1002 12:41:00.980139   52250 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [init] Using Kubernetes version: v1.19.2
    [preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [ingress-builder kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.10]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [ingress-builder localhost] and IPs [192.168.0.10 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [ingress-builder localhost] and IPs [192.168.0.10 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 17.502648 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node ingress-builder as control-plane by adding the label "node-role.kubernetes.io/master=''"
    [mark-control-plane] Marking the node ingress-builder as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: i64mn8.84z1ncfc155b0goc
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy

    Your Kubernetes control-plane has initialized successfully!

    To start using your cluster, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: kubernetes.io/docs/concep…

    Then you can join any number of worker nodes by running the following on each as root:

    kubeadm join 192.168.0.10:6443 --token i64mn8.84z1ncfc155b0goc \
        --discovery-token-ca-cert-hash sha256:cf2bd244d6dc653980ae0fc3430a2656f4c1b6fcf0bfffbfde92058a0842330d
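As an aside, the flags passed to kubeadm init above can equivalently be captured in a config file. A sketch using the v1beta2 kubeadm API that ships with v1.19 (field names per that API):

```shell
# Equivalent of: kubeadm init --image-repository ... --pod-network-cidr ...
#                             --apiserver-advertise-address ...
cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.10
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.2
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
EOF
kubeadm init --config kubeadm-config.yaml
```

A config file is easier to keep in version control and reuse when rebuilding the cluster.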

Configure kubectl as instructed by the output above:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check pod status:

# kubectl -n kube-system get pods
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-6d56c8448f-clpjd                  0/1     Pending   0          3m34s
coredns-6d56c8448f-cvtbj                  0/1     Pending   0          3m34s
etcd-ingress-builder                      1/1     Running   0          3m46s
kube-apiserver-ingress-builder            1/1     Running   0          3m46s
kube-controller-manager-ingress-builder   1/1     Running   0          3m46s
kube-proxy-9m9xq                          1/1     Running   0          3m34s
kube-scheduler-ingress-builder            1/1     Running   0          3m46s

All pods are Running except the coredns pods, which are Pending because no CNI plugin has been installed yet.

Installing flannel

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Check the pods again; coredns is now running:

# kubectl -n kube-system get pods
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-6d56c8448f-clpjd                  1/1     Running   0          7m48s
coredns-6d56c8448f-cvtbj                  1/1     Running   0          7m48s
etcd-ingress-builder                      1/1     Running   0          8m
kube-apiserver-ingress-builder            1/1     Running   0          8m
kube-controller-manager-ingress-builder   1/1     Running   0          8m
kube-flannel-ds-gksrt                     1/1     Running   0          63s
kube-proxy-9m9xq                          1/1     Running   0          7m48s
kube-scheduler-ingress-builder            1/1     Running   0          8m

Following the hint in the kubeadm init output, copy the join command and run it on each worker node to add it to the cluster. (Note that bootstrap tokens expire after 24 hours by default; if yours has expired, generate a fresh join command on the master with kubeadm token create --print-join-command.)

kubeadm join 192.168.0.10:6443 --token i64mn8.84z1ncfc155b0goc \
    --discovery-token-ca-cert-hash sha256:cf2bd244d6dc653980ae0fc3430a2656f4c1b6fcf0bfffbfde92058a0842330d
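The --discovery-token-ca-cert-hash value is a SHA-256 hash of the cluster CA's public key. If you ever need to reconstruct it, say after the init output has scrolled away, the standard openssl pipeline from the Kubernetes docs works; here it is wrapped in a small helper, with the default path being kubeadm's usual CA location:

```shell
# Compute the discovery token CA cert hash from a PEM certificate.
# Defaults to kubeadm's CA at /etc/kubernetes/pki/ca.crt; run on the master.
ca_cert_hash() {
  openssl x509 -pubkey -in "${1:-/etc/kubernetes/pki/ca.crt}" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
```

For example, `echo "sha256:$(ca_cert_hash)"` reproduces the value used in the join command.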

View the cluster nodes:

# kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
k8s-master    Ready    master   10m   v1.19.2
k8s-node001   Ready    <none>   1m    v1.19.2
k8s-node002   Ready    <none>   1m    v1.19.2

Setting Up kubectl Auto-Completion

For bash users:

# echo "source <(kubectl completion bash)" >> ~/.bashrc
# source ~/.bashrc

For zsh users:

# echo "source <(kubectl completion zsh)" >> ~/.zshrc
# source ~/.zshrc