Configure Kubernetes with Kubeadm


This guide walks you through using kubeadm to set up a Kubernetes cluster.

1. Environment

The following prerequisites are needed before you start:

  • One or more machines
  • Hardware with 2GB or more RAM, 2 or more CPUs, hard disk of at least 30GB
  • Available network connection among machines in the cluster
  • Accessible to the external network for pulling the images
  • Swap partition and firewall turned off
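The last prerequisite can be scripted; a minimal sketch for Ubuntu (the ufw command is distribution-specific, use systemctl stop firewalld on CentOS):

```shell
# Disable swap now and keep it off across reboots
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab  # comment out swap entries

# Disable the firewall (Ubuntu's ufw; adapt for your distribution)
ufw disable
```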

In our test, we use ubuntu-18.04-image-based Docker containers as nodes. Details are listed below; your own environment may differ from ours:

node_name      ip_address                 role
master-node    <master-node_ip_address>   Kubernetes Master
node1          <node1_ip_address>         Kubernetes Node
node2          <node2_ip_address>         Kubernetes Node

Note that the IP addresses are placeholders; substitute the values from your environment.

Add the IP-hostname mapping to /etc/hosts on all nodes:

<master-node_ip_address> master-node
<node1_ip_address> node1
<node2_ip_address> node2
... ...

For example, the file should be appended like below:

172.18.0.3 master-node
172.18.0.4 node1
172.18.0.5 node2
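You can verify the entries afterwards with getent, which resolves names through /etc/hosts (the hostnames follow the table above):

```shell
# Each lookup should print the IP and hostname recorded in /etc/hosts;
# no output for a name means its entry is missing or mistyped.
for host in master-node node1 node2; do
    getent hosts "$host"
done
```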

2. Install Docker

First, you need to install docker on all nodes in the cluster.

For example, on the master-node, run the commands below as root:

curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun # official convenience script, using the Aliyun mirror
service docker start

Check with the command below; you should see that Docker is already running:

service docker status
 * Docker is running

Then run vim /etc/default/docker and set the proxy for Docker in the file:

export http_proxy="<your_http_proxy>:<port>"
export https_proxy="<your_https_proxy>:<port>"

Replace the proxy placeholders with the correct values for your environment.
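Note that /etc/default/docker is read by the SysV init script. If your host manages Docker with systemd instead, the usual place for the proxy is a systemd drop-in unit (a sketch; the proxy placeholders are the same as above):

```shell
# Create a drop-in that passes the proxy to the Docker daemon
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf << 'EOF'
[Service]
Environment="HTTP_PROXY=<your_http_proxy>:<port>"
Environment="HTTPS_PROXY=<your_https_proxy>:<port>"
EOF
systemctl daemon-reload
systemctl restart docker
```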

Run vim /etc/docker/daemon.json and add the following to the JSON file to avoid a storage-driver error:

{
    "storage-driver": "vfs"
}

Now run service docker restart and verify with docker version. If you see output like below, everything is OK so far:

Client: Docker Engine - Community
 Version:           20.10.8
 API version:       1.41
 Go version:        go1.16.6
 Git commit:        3967b7d
 Built:             Fri Jul 30 19:54:08 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.8
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.6
  Git commit:       75249d8
  Built:            Fri Jul 30 19:52:16 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.9
  GitCommit:        e25210fe30a0a703442421b0f60afac609f950a3
 runc:
  Version:          1.0.1
  GitCommit:        v1.0.1-0-g4144b63
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Run docker run hello-world as a smoke test. If you get the output below, you have installed Docker successfully; otherwise, check the network proxy if the image cannot be pulled:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b8dfde127a29: Pull complete
Digest: sha256:61bd3cb6014296e214ff4c6407a5a7e7092dfa8eefdbbec539e133e97f63e09f
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Then install Docker on the other nodes by repeating the steps above.

3. Install Kubeadm

kubelet is the agent service that runs on every node, kubectl is the Kubernetes command-line client, and kubeadm is a Kubernetes deployment tool. On Ubuntu, we install them with apt-get; you can do the same on CentOS machines using yum.

apt-get install kubelet kubeadm kubectl

If the install process stops with a message like Unable to locate kubelet, add the Kubernetes repository and refresh the apt sources:

echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list
apt-get update

When updating the apt sources, if you encounter a GPG error, fix it as below:

apt-key adv --keyserver-options http-proxy="<your_http_proxy>" --keyserver keyserver.ubuntu.com --recv-keys <the_public_key>

Please replace the placeholders with the correct values for your environment. The key can be found in the GPG error message.

Run the install command again after fixing the errors. Then check with the command kubeadm version; you should get output like below:

kubeadm version: &version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:37:34Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}

The GitVersion field is the version number of your kubeadm; please make sure kubelet, kubeadm, and kubectl are installed at the same version.
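One way to keep the three packages aligned is to install an explicit, matching version and hold it (the version string below is an example; list the versions available in your repository with apt-cache madison kubeadm):

```shell
# Install the same version of all three packages
apt-get install -y kubelet=1.22.2-00 kubeadm=1.22.2-00 kubectl=1.22.2-00
# Stop apt-get upgrade from moving one package ahead of the others
apt-mark hold kubelet kubeadm kubectl
```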

The same steps need to be done on the other nodes.

4. Configure Kubernetes Master

Turn to the master-node and use the kubeadm init command to bootstrap the cluster control plane:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
echo '{"exec-opts": ["native.cgroupdriver=systemd"], "storage-driver": "vfs"}' | sudo tee /etc/docker/daemon.json # tee overwrites the file, so keep the storage-driver setting from section 2
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet
sudo kubeadm reset

sudo kubeadm init \
--apiserver-advertise-address=<master-node_ip_address> \
--image-repository <your_control_plane_image> \
--kubernetes-version <your_kubeadm_GitVersion> \
--service-cidr=<your_service_cidr> \
--pod-network-cidr=<your_pod_network_cidr>

  • <master-node_ip_address>: IP address of your master-node; you can look it up with cat /etc/hosts
  • <your_control_plane_image>: image repository to pull the control-plane images from
  • <your_kubeadm_GitVersion>: the version number of your Kubernetes, such as v1.22.2
  • <your_service_cidr>: IP range for cluster services; the default is 10.96.0.0/12
  • <your_pod_network_cidr>: pod network range. The control plane automatically assigns subnets on this network to the nodes; the value should have a format like 10.244.0.0/16

You can also do a quick start by simply using the default values:

kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU # if images cannot be pulled, append --image-repository=registry.aliyuncs.com/google_containers to use the repository mirror in China

After a while, you can get the successful initiation message like below:

Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Moreover, note that there will be a kubeadm join command with a token and hash in the output. Please record it and use that command later to join the other nodes to the cluster.
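If you lose that output, you can regenerate the join command on the master-node at any time (the default token lifetime is 24 hours, so older tokens may have expired):

```shell
# Creates a new token and prints the full kubeadm join command for it
kubeadm token create --print-join-command
```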

Then make the master-node workable:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl taint node <master_node_name> node-role.kubernetes.io/master:NoSchedule-
cd /run
mkdir flannel
cd flannel
echo "FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true" >> subnet.env

If your kubectl cannot reach the external network, first download the YAML file with wget, then kubectl apply the local copy.

Check as below; you can see the master-node and the kube-system pods:

kubectl get nodes
NAME      STATUS   ROLES                  AGE   VERSION
icx-111   Ready    control-plane,master   57s   v1.22.2

kubectl get pods -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-558bd4d5db-pbpl5          1/1     Running   0          19m
kube-system   coredns-558bd4d5db-v549l          1/1     Running   0          19m
kube-system   etcd-icx-113                      1/1     Running   0          19m
kube-system   kube-apiserver-icx-113            1/1     Running   0          19m
kube-system   kube-controller-manager-icx-113   1/1     Running   0          19m
kube-system   kube-proxy-cnnrw                  1/1     Running   0          19m
kube-system   kube-scheduler-icx-113            1/1     Running   0          19m

5. Configure the Nodes

Pull the flannel image:

docker pull lizhenliang/flannel:v0.11.0-amd64

Then run the kubeadm join command; you can find it in the output of the kubeadm init command above:

kubeadm join <master-node_ip_address>:<port> --token <token> \
    --discovery-token-ca-cert-hash <hash>

Do the same on the other nodes, then run kubectl get nodes on the master-node; you should get a response like below:

NAME          STATUS   ROLES    AGE      VERSION
master-node   Ready    master   31m17s   v1.22.2
node1         Ready    <none>   22m31s   v1.22.2
node2         Ready    <none>   17m25s   v1.22.2
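If a node still shows NotReady, it is usually pulling the flannel and pause images; you can block until every node reports Ready (the timeout value is an arbitrary choice):

```shell
# Waits until all nodes have the Ready condition, or fails after 5 minutes
kubectl wait --for=condition=Ready nodes --all --timeout=300s
```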

Now you can continue your work on the Kubernetes cluster.