Installing k8s on Ubuntu and Importing the Cluster into Rancher


Preface

I'm still new to k8s, and this article is part experience report, part notes. It covers how to deploy a k8s cluster on Ubuntu, in roughly the following steps:

  • Adjust the Ubuntu configuration
  • Install a container runtime
  • Install kubeadm, kubectl, and kubelet
  • Configure the master node
  • Configure networking
  • Join the slave nodes to the cluster

Adjusting the Ubuntu configuration

swapoff -a
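
Note that swapoff -a only disables swap until the next reboot, and the kubelet refuses to run with swap enabled. To make the change permanent, the swap entries in /etc/fstab also need to be commented out; a minimal sketch:

swapoff -a                                      # disable swap for the current boot
sed -i '/^[^#].*\sswap\s/ s/^/#/' /etc/fstab    # comment out swap entries so the change survives reboots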

Open the required ports

Port             | Purpose
-----------------|-------------------------------
6443             | kube-apiserver
8472             | VXLAN overlay (K3s / Flannel)
10250            | kubelet
10254, 9099      | health checks
30000-32767      | default NodePort range
2379, 2380, 2381 | etcd
8443, 8080       | Rancher
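
If the host runs a firewall, these ports have to be opened explicitly. A sketch using ufw (an assumption; adapt it if you use iptables or a cloud security group instead):

ufw allow 6443/tcp          # kube-apiserver
ufw allow 8472/udp          # VXLAN overlay
ufw allow 10250/tcp         # kubelet
ufw allow 10254/tcp         # health check
ufw allow 9099/tcp          # health check
ufw allow 30000:32767/tcp   # NodePort range
ufw allow 2379:2381/tcp     # etcd
ufw allow 8080/tcp          # Rancher
ufw allow 8443/tcp          # Rancher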

Installing containerd

apt remove containerd                     # remove any previously installed containerd
apt update && apt install containerd.io   # refresh package lists and install containerd.io
rm /etc/containerd/config.toml            # remove the old config file
systemctl restart containerd              # restart the service
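
It's worth confirming the runtime is actually healthy before continuing; a quick check:

systemctl status containerd --no-pager   # should report active (running)
ctr version                              # client and server versions should both print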

Installing kubeadm, kubectl, and kubelet

These are the main k8s components and must be installed on every host. A brief overview:

  • kubelet: the core k8s node agent; it runs on every Kubernetes node and makes sure the containers on that node are running
  • kubeadm: a tool for bootstrapping a Kubernetes cluster quickly
  • kubectl: the command-line tool that talks to the Kubernetes API Server and is used to manage the cluster
  1. Allow apt to fetch packages over HTTPS
 apt-get update && apt-get install -y apt-transport-https
  2. Download the repository's GPG key
 curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
  3. Add the k8s package mirror
 cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
 deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
 EOF
  4. Refresh the package lists
 apt-get update
  5. Install kubelet, kubeadm, and kubectl
 apt-get install -y kubelet kubeadm kubectl
  6. Start the kubelet service and enable it at boot
 systemctl start kubelet
 systemctl enable kubelet
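
Since a routine apt upgrade could later bump these packages out of sync with the rest of the cluster, the kubeadm docs recommend pinning them:

apt-mark hold kubelet kubeadm kubectl   # freeze the installed versions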

Configuring the master node

  1. Before installing, you can run kubeadm init phase preflight to perform the pre-flight checks and make sure the system is ready before the actual init.

  2. Prepare the images

    To speed up cluster creation, all the required images can be pre-loaded. kubeadm config images list prints the list of images needed:

 registry.k8s.io/kube-apiserver:v1.27.4
 registry.k8s.io/kube-controller-manager:v1.27.4
 registry.k8s.io/kube-scheduler:v1.27.4
 registry.k8s.io/kube-proxy:v1.27.4
 registry.k8s.io/pause:3.9
 registry.k8s.io/etcd:3.5.7-0
 registry.k8s.io/coredns/coredns:v1.10.1

They can therefore be pulled ahead of time:

 kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers
  3. Initialize the master node
 sudo kubeadm init --apiserver-advertise-address xx.xx.xx.xx --pod-network-cidr 10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers

--apiserver-advertise-address: the address the apiserver advertises. Be sure to use your internal (private) IP here!

--pod-network-cidr: the pod network CIDR for the cluster

Once the installation succeeds, you will see the following output:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.4.16:6443 --token 768u8u.m1ymddxtotmtr4cx --discovery-token-ca-cert-hash sha256:409bb48a2585dc3d773a71d26a22d2e034c41d8910fdebca2e80844b61dd6997

At this point, just follow the prompts:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubeadm join 10.0.4.16:6443 --token 768u8u.m1ymddxtotmtr4cx --discovery-token-ca-cert-hash sha256:409bb48a2585dc3d773a71d26a22d2e034c41d8910fdebca2e80844b61dd6997

This is the join command for worker nodes; save it, as you will need it later.
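
Note that the token in this command expires after 24 hours by default. If you add a node later, you can generate a fresh join command on the master instead of reusing the saved one:

kubeadm token create --print-join-command   # prints a complete "kubeadm join ..." command with a new token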

  4. Some errors

    Of course, things often don't go that smoothly, and you may well run into the following error:

 [kubelet-check] Initial timeout of 40s passed.
 
 Unfortunately, an error has occurred:
         timed out waiting for the condition
 
 This error is likely caused by:
         - The kubelet is not running
         - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
 
 If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
         - 'systemctl status kubelet'
         - 'journalctl -xeu kubelet'
 
 Additionally, a control plane component may have crashed or exited when started by the container runtime.
 To troubleshoot, list all containers using your preferred container runtimes CLI.
 Here is one example how you may list all running Kubernetes containers by using crictl:
         - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
         Once you have found the failing container, you can inspect its logs with:
         - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
 error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
 To see the stack trace of this error execute with --v=5 or higher

Don't panic when this happens: the message itself tells you how to track down the underlying problem.

For example, running 'journalctl -xeu kubelet' may reveal that the sandbox (pause) image containerd is trying to pull is not the version we downloaded.

In that case, regenerate the config file with containerd config default > /etc/containerd/config.toml and point it at the image we already pulled:

# sandbox_image = "registry.k8s.io/pause:3.6"   # original image
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9"  # replacement image

If you need to reinstall, run kubeadm reset first and then run the init command again.
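
A minimal sketch of the whole fix, reusing the Aliyun mirror and init flags from above:

containerd config default > /etc/containerd/config.toml
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
systemctl restart containerd

kubeadm reset -f   # wipe the failed attempt
kubeadm init --apiserver-advertise-address xx.xx.xx.xx \
  --pod-network-cidr 10.244.0.0/16 \
  --image-repository registry.aliyuncs.com/google_containers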

Configuring the network

If you now check the nodes with kubectl, you will find they are all NotReady:

root@VM-4-16-ubuntu:/home/ubuntu# kubectl get nodes
NAME              STATUS   ROLES           AGE     VERSION
vm-4-16-ubuntu    NotReady control-plane   3d21h   v1.27.3

Run kubectl describe node <nodename> to inspect a node and you will see it complaining about the network. We now need to install a network plugin for k8s; I chose Flannel, with the manifest below (save it as kube-flannel.yml):

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.1.2
        # image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.22.0
        # image: docker.io/rancher/mirrored-flannelcni-flannel:v0.22.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.22.0
        # image: docker.io/rancher/mirrored-flannelcni-flannel:v0.22.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

Install it with kubectl:

kubectl apply -f kube-flannel.yml
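
Before re-checking the nodes, you can confirm the Flannel pods themselves come up (the namespace matches the manifest above):

kubectl -n kube-flannel get pods -o wide   # the kube-flannel-ds pods should reach Running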

Check the node status again and it is now Ready:

root@VM-4-16-ubuntu:/home/ubuntu# kubectl get nodes
NAME              STATUS   ROLES           AGE     VERSION
vm-4-16-ubuntu    Ready    control-plane   3d21h   v1.27.3

Joining the slave nodes to the cluster

  1. First, repeat the steps above on each node:

    • Adjust the Ubuntu configuration
    • Install the container runtime
    • Install kubeadm, kubectl, and kubelet
    • Fix the containerd config file (the sandbox image)
    • Install the network plugin
  2. Run the command returned when the master node was initialized:

 kubeadm join 10.0.4.16:6443 --token 768u8u.m1ymddxtotmtr4cx --discovery-token-ca-cert-hash sha256:409bb48a2585dc3d773a71d26a22d2e034c41d8910fdebca2e80844b61dd6997
  3. Check the cluster status:
 root@VM-4-16-ubuntu:/home/ubuntu# kubectl get nodes
 NAME              STATUS   ROLES           AGE     VERSION
 vm-16-14-ubuntu   Ready    <none>          3d16h   v1.27.3
 vm-4-16-ubuntu    Ready    control-plane   3d21h   v1.27.3
 vm-4-2-ubuntu     Ready    <none>          3d16h   v1.27.3
 
 root@VM-4-16-ubuntu:/home/ubuntu# kubectl get cs
 NAME                 STATUS    MESSAGE                         ERROR
 scheduler            Healthy   ok
 controller-manager   Healthy   ok
 etcd-0               Healthy   {"health":"true","reason":""}

The newly joined nodes are now visible.
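
The worker nodes show <none> under ROLES. That is purely cosmetic, but if you want a proper role label, a hedged example (the node name is taken from the output above):

kubectl label node vm-16-14-ubuntu node-role.kubernetes.io/worker=worker   # ROLES column will now show "worker"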

Importing into Rancher

A quick introduction to Rancher:

Rancher is an open-source container management platform that provides a whole suite of tools to simplify working with k8s.

  1. Install Rancher

    Here I took the simple route and ran it directly with Docker (see the note after this command for the first-login password):

 sudo docker run -d --name=rancher --restart=unless-stopped -p 8081:80 -p 8443:443 --privileged -v /data/rancher:/var/lib/rancher rancher/rancher:stable
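
With recent Rancher releases (v2.6+), the first login asks for a bootstrap password that Rancher writes to the container log; assuming the container name used above, it can be retrieved with:

docker logs rancher 2>&1 | grep "Bootstrap Password:"   # prints the generated admin password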
  2. Import the cluster into Rancher

    • After logging in, click Import Existing

    • Choose the generic option (import any Kubernetes cluster)

    • Enter a name for the new cluster

    • Run the registration command Rancher generates on the master node