Setting up a k8s cluster on CentOS


Reference

centos7使用kubeadm部署k8s_centos7 kubeadm-CSDN博客

The article below mainly follows that link, with a few additions of my own.

Environment

CentOS 7

VMware Workstation

Three virtual machines created with VMware Workstation.

Common part (run on all three VMs: master and both workers)

Initialization script

# Back up the default yum repo and switch to the Aliyun mirror
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
sudo curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum makecache
sudo yum update -y

# Common tools
sudo yum install net-tools vim telnet lrzsz wget -y

# Turn off the firewall (acceptable for a lab setup like this one)
systemctl stop firewalld.service
systemctl disable firewalld.service

Set the hostnames

Run the following on each of the three VMs respectively.

On the master:

hostnamectl set-hostname k8s-master

On worker1:

hostnamectl set-hostname k8s-work1

On worker2:

hostnamectl set-hostname k8s-work2

Set up hostname resolution

The three IPs below are those of the master and the two workers; this lets the nodes resolve each other by hostname.

cat >> /etc/hosts << EOF
192.168.195.137 k8s-master
192.168.195.138 k8s-work1
192.168.195.139 k8s-work2
EOF
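A quick sanity check (my own addition, not in the original article): from each node, ping the others by hostname to confirm the /etc/hosts entries work.

ping -c 2 k8s-master
ping -c 2 k8s-work1
ping -c 2 k8s-work2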

Time synchronization

yum install ntpdate -y
ntpdate time.windows.com
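Note that ntpdate only syncs the clock once. If you want the clocks to stay in sync, one option (a small sketch of my own; adjust the interval and NTP server to taste) is a periodic cron job:

# Append a half-hourly time sync to root's crontab without overwriting existing entries
(crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate time.windows.com") | crontab -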

Disable SELinux

setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config

Disable the swap partition

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
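You can confirm swap is really off (the Swap line should show 0) and that the fstab entry was commented out:

free -m
grep swap /etc/fstab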

Pass bridged IPv4 traffic to iptables

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
modprobe br_netfilter
sysctl --system
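To verify that the module is loaded and the settings are active (and, optionally, make br_netfilter load automatically on every boot):

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward

# Optional: load br_netfilter at boot so the settings survive a reboot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf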

Enable IPVS

yum -y install ipset ipvsadm
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Remove any existing Docker installation

sudo yum remove docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-ce-rootless-extras


sudo rm -rf /var/lib/docker
sudo rm -rf /var/lib/containerd


sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

Install Docker

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

sudo yum install docker-ce-19.03.11 docker-ce-cli-19.03.11 containerd.io -y

Most important of all: the registry mirror configuration.

The docker.uat.xxx.com entry below needs to be replaced with your own registry mirror address. I built this cluster at work, so I used the mirror my company runs. I will update this post once I find a publicly usable one; if you know of one, please share it in the comments.

This step actually cost me quite a lot of time: around mid-2024 some public mirror registries had objectionable images uploaded to them, and since then almost none of them have been usable inside China. In the end I got a mirror address from our company's ops team.

mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
    "registry-mirrors": ["https://docker.uat.xxx.com"],
    "insecure-registries" : ["production.cloudflare.docker.com"],
    "features": {
        "buildkit": false
    },

    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m"
    },
    "storage-driver": "overlay2",
    "storage-opts": [
        "overlay2.override_kernel_check=true"
    ]
}
EOF
sudo systemctl enable docker.service
sudo systemctl daemon-reload
sudo systemctl restart docker
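Once Docker is back up, it is worth checking that the mirror and the systemd cgroup driver were actually picked up:

docker info | grep -i "cgroup driver"
docker info | grep -A 1 -i "registry mirrors"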

Install kubeadm, kubelet, and kubectl

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
vim /etc/sysconfig/kubelet

# Change the following two settings
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
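If you prefer not to edit the file interactively, the same two settings can be written in one go with a heredoc (note this overwrites anything else in /etc/sysconfig/kubelet):

cat > /etc/sysconfig/kubelet << EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
EOF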

Enable kubelet

systemctl enable kubelet
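Optionally confirm the installed versions match what we asked for:

kubeadm version -o short
kubelet --version
kubectl version --client --short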

That concludes the common part. What follows must be run only on the master, or only on the workers.

Master node

Note the IP address below: 192.168.195.137 is the master VM's IP.

kubeadm init \
  --apiserver-advertise-address=192.168.195.137 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16

After this command finishes, read the console output carefully; it contains the commands to run next.

The following commands from that output need to be run on the master:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
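At this point kubectl can talk to the API server. The master will report NotReady until the CNI plugin is deployed further down; that is expected:

kubectl get nodes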

Worker nodes

The same kubeadm init output also prints a join command; it looks like the following and needs to be run on both workers:

kubeadm join 192.168.195.137:6443 --token wi4kwb.b4rzexl60zjb90cs \
    --discovery-token-ca-cert-hash sha256:9fb1bebec662c1063e6c2fec9851bb43eb2a68b8f937fc7d790ae6a402dbc752
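The token and hash above come from my own kubeadm init output; yours will be different. If you lose them, or the token expires (the default TTL is 24 hours), you can print a fresh join command on the master:

kubeadm token create --print-join-command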

Deploy the CNI network plugin on the master

vim kube-flannel.yml

Fill the file with the following content:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: crpi-76p8c80pxtcpef3f.cn-hangzhou.personal.cr.aliyuncs.com/sljflannel/flannel:flannel-cni-plugin-v1.0.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: crpi-76p8c80pxtcpef3f.cn-hangzhou.personal.cr.aliyuncs.com/sljflannel/flannel:flannel-v0.15.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: crpi-76p8c80pxtcpef3f.cn-hangzhou.personal.cr.aliyuncs.com/sljflannel/flannel:flannel-v0.15.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
Then save the file and apply it:

kubectl apply -f kube-flannel.yml
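After applying, you can watch the flannel pods come up and the nodes flip to Ready (image pulls may take a minute or two):

kubectl get pods -n kube-system -l app=flannel -o wide
kubectl get nodes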

Verify the k8s cluster

# All system pods should eventually be Running
kubectl get pods -n kube-system

# Deploy a test nginx and expose it as a NodePort service
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods
kubectl get svc


Then access it from the host machine:

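Since the nginx service is a NodePort, it is reachable on that port via any node's IP. A small sketch: look the port up on the master, then curl it (the IP here is the master's from earlier; any node IP works, and you can also open the same URL in a browser on the host):

NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl http://192.168.195.137:${NODE_PORT}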

Summary

  1. Set up a k8s cluster locally.
  2. Pitfall: none of the registry mirrors I could find online still worked, so in the end I used my company's own.