Deploying Kubernetes with the hybridnet plugin (underlay network)


One-click install doc: juejin.cn/post/707068…

Reference: mp.weixin.qq.com/s/ySnENeuII…

Prerequisites

  • The usual prerequisites for deploying a Kubernetes cluster with kubeadm

  • Linux hosts capable of running Kubernetes, e.g. Debian, RedHat, and their derivatives

  • At least 2GB of RAM and 2 CPUs per host

  • Unobstructed network connectivity between all hosts

  • A unique hostname, MAC address, and product_uuid per host; hostnames must resolve correctly

  • Open the ports used by Kubernetes, or simply disable iptables

  • Disable the swap device on every host

  • Time synchronized across all hosts
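The CPU and memory requirements above can be sanity-checked on each host with a short snippet (a sketch; the 2-CPU/2GB thresholds come from the kubeadm prerequisites):

```shell
# Report this host's CPU count and memory in MB; kubeadm wants >=2 CPUs and >=2GB RAM.
cpus=$(nproc)
mem_mb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 ))
echo "cpus=${cpus} mem_mb=${mem_mb}"

# product_uuid must be unique per host -- compare this value across machines:
# cat /sys/class/dmi/id/product_uuid
```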

Node ports that must be open

(screenshot: table of the required node ports)

Machine inventory

192.168.31.201 node01 node01.wangfei.haidian
192.168.31.202 node02 node02.wangfei.haidian
192.168.31.203 node03 node03.wangfei.haidian
192.168.31.250 master01 master01.wangfei.haidian

Deployment plan

  • step1: preparation
  • step2: install a container runtime
    • docker-ce + cri-dockerd
    • or containerd
  • step3: install kubelet, kubectl, and kubeadm
  • step4: create the cluster
    • run kubeadm init on the first node to bring up the control plane
    • join any additional control-plane nodes with kubeadm join
    • join the worker nodes

Configure the operating system

Time synchronization

apt-get install -y chrony
chronyc sources

Check swap devices

systemctl --type swap

# disable swap
swapoff -a

# comment out the swap entry in /etc/fstab to make this permanent
vim /etc/fstab
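swapoff -a only disables swap until the next reboot; the swap line in /etc/fstab must also be commented out. The sed below demonstrates the edit on a sample copy (a sketch; on a real host point it at /etc/fstab itself):

```shell
# Sample fstab for demonstration; on a real host operate on /etc/fstab.
cat > /tmp/fstab.demo <<'EOF'
UUID=abcd-1234 /         ext4 defaults 0 1
/swap.img      none      swap sw       0 0
EOF

# Comment out every uncommented line whose filesystem type is swap.
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/# \1/' /tmp/fstab.demo
cat /tmp/fstab.demo
```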

Install Docker

step1: set the hostname on every machine

# set the hostname on each machine
hostnamectl set-hostname node01
hostnamectl set-hostname node02
hostnamectl set-hostname node03

# add the entries below to /etc/hosts on every machine
# e.g. 172.30.66.169 kubeapi.magedu.com kubeapi

192.168.31.201 node01 node01.wangfei.haidian
192.168.31.202 node02 node02.wangfei.haidian
192.168.31.203 node03 node03.wangfei.haidian
192.168.31.250 master01 master01.wangfei.haidian

kubeapi.magedu.com is a dedicated name for the apiserver; it will also be used later for high availability.

step2: install and start Docker on every machine (reference: www.cnblogs.com/jiumo/p/159…)

curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
apt update
apt -y install apt-transport-https ca-certificates curl software-properties-common

add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"

apt -y install docker-ce

step3: configure /etc/docker/daemon.json

{
  "registry-mirrors": [
    "https://registry.docker-cn.com"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "200m"
  },
  "storage-driver": "overlay2"
}

This configures a domestic registry mirror for faster image pulls, and sets the systemd cgroup driver to match the kubelet's default.
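A syntax error in daemon.json prevents Docker from starting at all, so it is worth validating the file before restarting the daemon. A minimal check, assuming python3 is available (shown against a demo path; use /etc/docker/daemon.json on the real hosts):

```shell
# Write the config to a demo path and validate its JSON syntax.
cat > /tmp/daemon.json.demo <<'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# json.tool exits non-zero on a parse error, so "OK" prints only for valid JSON.
python3 -m json.tool /tmp/daemon.json.demo >/dev/null && echo "daemon.json OK"
```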

step4: start and enable Docker

systemctl daemon-reload
systemctl start docker.service
systemctl enable docker.service
systemctl status docker.service

Install cri-dockerd

step1: download and install the binary

# official download
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.1/cri-dockerd-0.3.1.amd64.tgz

tar xvf cri-dockerd-0.3.1.amd64.tgz

cp cri-dockerd/cri-dockerd /usr/local/bin/

step2: configure cri-dockerd

docker pull registry.aliyuncs.com/google_containers/pause:3.9

vim /lib/systemd/system/cri-docker.service

[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
ExecStart=/usr/local/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target

step3: configure cri-docker.socket

vim /etc/systemd/system/cri-docker.socket

[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service
[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target

step4: reload and restart the services

# after configuring, reload systemd and restart cri-docker.service
systemctl daemon-reload && systemctl enable cri-docker.service &&  systemctl restart cri-docker.service && systemctl status cri-docker.service

Install kubelet, kubeadm, and kubectl

apt-get update && apt-get install -y apt-transport-https

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

apt-get update
apt-cache madison kubeadm
apt-get install -y kubelet=1.24.10-00 kubeadm=1.24.10-00 kubectl=1.24.10-00

It is a good idea to pin these versions afterwards with apt-mark hold kubelet kubeadm kubectl, so a routine apt upgrade cannot break the cluster.

Initialize the cluster

Check the image list for this Kubernetes version

kubeadm config images list --kubernetes-version v1.24.10


step1: pull the images (run on all nodes)

cat <<EOF >images-download.sh

#!/bin/bash
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.24.10
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.24.10
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.10
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.24.10
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.6-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.6
EOF


bash images-download.sh


step2: initialize the master node (master only)

kubeadm init \
  --apiserver-advertise-address=192.168.31.250 \
  --apiserver-bind-port=6443 \
  --kubernetes-version=v1.24.10 \
  --pod-network-cidr=10.200.0.0/16 \
  --service-cidr=172.31.5.0/24 \
  --service-dns-domain=cluster.local \
  --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
  --ignore-preflight-errors=swap \
  --cri-socket unix:///var/run/cri-dockerd.sock

step3: as prompted by the kubeadm init output, set up the kubeconfig file that cluster administrators use to authenticate to the cluster (master only)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

step4: add the worker nodes to the cluster

Run on all worker nodes (the token and hash come from your own kubeadm init output):

kubeadm join 192.168.31.250:6443 --token nsg8d5.05a7vajb4qvfzjje --discovery-token-ca-cert-hash sha256:ad884fc30673f106eb830875ebfaa147ac930cc1e4271da9576aea2dcc245ea7 --cri-socket unix:///var/run/cri-dockerd.sock

If the token has expired, generate a fresh join command on the master with kubeadm token create --print-join-command.

Distribute the kubeconfig file

Run on all worker nodes:

mkdir /root/.kube -p

Run on the master node:

scp /root/.kube/config node01:/root/.kube/config
scp /root/.kube/config node02:/root/.kube/config
scp /root/.kube/config node03:/root/.kube/config


Deploy the hybridnet network component

hybridnet supports both overlay and underlay networking.

step1: install Helm on the master node

mkdir -p $HOME/bin
wget https://get.helm.sh/helm-v3.3.1-linux-amd64.tar.gz
tar -xvzf helm-v3.3.1-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/
helm version

step2: add the Helm repo

helm repo add hybridnet https://alibaba.github.io/hybridnet/

helm repo update

step3: configure the overlay pod network (the pod CIDR specified at kubeadm init)

Note: if --set init.cidr=10.200.0.0/16 is omitted, the default 100.64.0.0/16 is used.

This value must match the --pod-network-cidr=10.200.0.0/16 passed to kubeadm init.

helm install hybridnet hybridnet/hybridnet -n kube-system --set init.cidr=10.200.0.0/16


step4: verify the pods

kubectl get pod -A

Many pods are stuck in Pending.

Label the nodes (by default some hybridnet components are scheduled onto nodes carrying the master role label):

kubectl label node node01.wangfei.haidian node-role.kubernetes.io/master=
kubectl label node node02.wangfei.haidian node-role.kubernetes.io/master=
kubectl label node node03.wangfei.haidian node-role.kubernetes.io/master=

Now all pods are Running.

Create the underlay network and associate it with the nodes

Add the underlay network label to the nodes:

kubectl label node node01.wangfei.haidian network=underlay-nethost
kubectl label node node02.wangfei.haidian network=underlay-nethost
kubectl label node node03.wangfei.haidian network=underlay-nethost

Create the Network and Subnet:

The hosts are on the 192.168.31.0/24 network.

---
apiVersion: networking.alibaba.com/v1
kind: Network
metadata:
  name: underlay-network1
spec:
  netID: 0
  type: Underlay
  nodeSelector:
    network: "underlay-nethost"

---
apiVersion: networking.alibaba.com/v1
kind: Subnet
metadata:
  name: underlay-network1 
spec:
  network: underlay-network1
  netID: 0
  range:
    version: "4"
    cidr: "192.168.31.0/24"
    gateway: "192.168.31.1"     # external gateway address
    start: "192.168.31.100"
    end: "192.168.31.200"


Test: create an overlay pod

kubectl create ns myserver

Create the overlay Deployment and Service:

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-tomcat-app1-deployment-overlay-label
  name: myserver-tomcat-app1-deployment-overlay
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-tomcat-app1-overlay-selector
  template:
    metadata:
      labels:
        app: myserver-tomcat-app1-overlay-selector
    spec:
      nodeName: node02.wangfei.haidian
      containers:
      - name: myserver-tomcat-app1-container
        #image: tomcat:7.0.93-alpine 
        image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v1 
        imagePullPolicy: IfNotPresent
        ##imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
#        resources:
#          limits:
#            cpu: 0.5
#            memory: "512Mi"
#          requests:
#            cpu: 0.5
#            memory: "512Mi"

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-tomcat-app1-service-overlay-label
  name: myserver-tomcat-app1-service-overlay
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30003
  selector:
    app: myserver-tomcat-app1-overlay-selector

The overlay pod gets an address from the 10.200.0.0/16 pod network, so from outside the cluster it must be accessed through the Service NodePort (30003 here).

Create an underlay pod

kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-tomcat-app1-deployment-underlay-label
  name: myserver-tomcat-app1-deployment-underlay
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-tomcat-app1-underlay-selector
  template:
    metadata:
      labels:
        app: myserver-tomcat-app1-underlay-selector
      annotations: # choose Underlay or Overlay networking for this pod
        networking.alibaba.com/network-type: Underlay
    spec:
      #nodeName: k8s-node2.example.com
      containers:
      - name: myserver-tomcat-app1-container
        #image: tomcat:7.0.93-alpine 
        image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v2 
        imagePullPolicy: IfNotPresent
        ##imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
#        resources:
#          limits:
#            cpu: 0.5
#            memory: "512Mi"
#          requests:
#            cpu: 0.5
#            memory: "512Mi"

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-tomcat-app1-service-underlay-label
  name: myserver-tomcat-app1-service-underlay
  namespace: myserver
spec:
#  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    #nodePort: 40003
  selector:
    app: myserver-tomcat-app1-underlay-selector

The created pod received the address 192.168.31.100, which belongs to the 192.168.31.0/24 network the hosts themselves are on, so it is directly reachable from the physical network.