Installing Kubernetes 1.30 from Binaries

Set Up the Virtual Machines

Install Vagrant

brew install vagrant

Install VirtualBox

Downloads – Oracle VM VirtualBox

Download the Ubuntu 22.04 image

vagrant box add ubuntu https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64-vagrant.box

Vagrantfile

# -*- mode: ruby -*-
# vi: set ft=ruby :

boxes = [
  {
    name: "k8s-master1",
    eth1: "192.168.28.11",
    mem: "2048",
    cpu: "2",
    sshport: 22_211,
    synced_folder: "./master1"
  },
  {
    name: "k8s-master2",
    eth1: "192.168.28.12",
    mem: "2048",
    cpu: "2",
    sshport: 22_212,
    synced_folder: "./master2"
  },
  {
    name: "k8s-master3",
    eth1: "192.168.28.13",
    mem: "2048",
    cpu: "2",
    sshport: 22_213,
    synced_folder: "./master3"
  },
  {
    name: "k8s-node1",
    eth1: "192.168.28.101",
    mem: "1024",
    cpu: "2",
    sshport: 22_101,
    synced_folder: "./node1"
  },
  {
    name: "k8s-node2",
    eth1: "192.168.28.102",
    mem: "1024",
    cpu: "2",
    sshport: 22_102,
    synced_folder: "./node2"
  }
]

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu"

  boxes.each do |opts|
    config.vm.define opts[:name] do |config|
      config.vm.hostname = opts[:name]
      config.vm.network "private_network", ip: opts[:eth1]
      config.vm.network "forwarded_port",
                        guest: 22,
                        host: 2222,
                        id: "ssh",
                        disabled: "true"
      config.vm.network "forwarded_port", guest: 22, host: opts[:sshport]
      config.vm.synced_folder opts[:synced_folder], "/vagrant", type: "rsync"

      config.vm.provider "virtualbox" do |v|
        v.memory = opts[:mem]
        v.cpus = opts[:cpu]
        v.name = opts[:name]
      end
    end
  end
end

Start the Virtual Machines

# Start a VM
$ vagrant up {vm-name}

# SSH into a VM
$ vagrant ssh {vm-name}

Welcome to Ubuntu 22.04.4 LTS (GNU/Linux 5.15.0-102-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/pro

  System information as of Wed Apr 24 06:52:55 UTC 2024

  System load:  0.4130859375      Processes:               102
  Usage of /:   4.3% of 38.70GB   Users logged in:         0
  Memory usage: 11%               IPv4 address for enp0s3: 10.0.2.15
  Swap usage:   0%                IPv4 address for enp0s8: 192.168.28.11


Expanded Security Maintenance for Applications is not enabled.

36 updates can be applied immediately.
9 of these updates are standard security updates.
To see these additional updates run: apt list --upgradable

Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status


Last login: Wed Apr 24 06:38:40 2024 from 10.0.2.2
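Rather than starting each VM one by one, the whole fleet can be brought up in a loop. This is a minimal sketch; the node names are copied from the `boxes` array in the Vagrantfile above.

```shell
# Bring up every VM defined in the Vagrantfile in one pass.
nodes="k8s-master1 k8s-master2 k8s-master3 k8s-node1 k8s-node2"

for n in $nodes; do
  echo "vagrant up $n"   # drop the echo to actually boot each VM
done
```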

Update the APT Sources

vim /etc/apt/sources.list

deb http://mirrors.aliyun.com/ubuntu/ jammy main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-backports main restricted universe multiverse

$ apt-get update

System Tuning

Load kernel modules

$ cat > /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

$ systemctl restart systemd-modules-load.service

# Verify the modules are loaded
$ lsmod | grep -e ip_vs -e nf_conntrack -e br_netfilter -e overlay

Adjust kernel parameters

$ cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

$ sysctl --system

Install ipvsadm

$ apt-get install ipvsadm ipset sysstat conntrack -y

Edit /etc/hosts

$ vim /etc/hosts

192.168.28.11  k8s-master1
192.168.28.101 k8s-node1
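The example above lists only one master and one node. With all five VMs from the Vagrantfile, the full entry set can be staged in a local file and then appended; the extra IPs are taken from the `boxes` array and are an assumption about your layout.

```shell
# Stage host entries for all five VMs (IPs from the Vagrantfile above).
cat > hosts.k8s <<'EOF'
192.168.28.11  k8s-master1
192.168.28.12  k8s-master2
192.168.28.13  k8s-master3
192.168.28.101 k8s-node1
192.168.28.102 k8s-node2
EOF

# Then append to the real file with elevated privileges:
# sudo sh -c 'cat hosts.k8s >> /etc/hosts'
```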

Install runc

Install on the node machines

$ wget https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64
$ chmod +x runc.amd64
$ cp runc.amd64 /usr/local/bin/runc
$ runc -v

runc version 1.1.12
commit: v1.1.12-0-g51d5e946
spec: 1.0.2-dev
go: go1.20.13
libseccomp: 2.5.4

Install containerd

Install on the node machines

$ wget https://github.com/containerd/containerd/releases/download/v1.7.15/containerd-1.7.15-linux-amd64.tar.gz
$ tar xf containerd-1.7.15-linux-amd64.tar.gz
$ cp bin/* /usr/local/bin/
$ containerd  -v

containerd github.com/containerd/containerd v1.7.15 926c9586fe4a6236699318391cd44976a98e31f1

Configure the systemd service

$ vim /etc/systemd/system/containerd.service

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity

# Comment out TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target

$ mkdir -p /etc/containerd
$ containerd config default | tee /etc/containerd/config.toml
$ vim /etc/containerd/config.toml

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] 
# 修改此处为true
SystemdCgroup = true

[plugins."io.containerd.grpc.v1.cri"]
# 修改为国内镜像
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"

Start the service

$ systemctl daemon-reload
$ systemctl start containerd
$ systemctl enable containerd

$ systemctl status containerd

● containerd.service - containerd container runtime
     Loaded: loaded (/etc/systemd/system/containerd.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2024-04-24 16:34:50 CST; 53s ago
       Docs: https://containerd.io
   Main PID: 1128 (containerd)
      Tasks: 7
     Memory: 13.9M
        CPU: 970ms
     CGroup: /system.slice/containerd.service
             └─1128 /usr/local/bin/containerd

Run a test image

$ ctr images pull docker.io/library/alpine:latest
$ ctr run -t --net-host docker.io/library/alpine:latest container1 sh

Install the Certificate Tools (cfssl)

$ apt-get install golang-cfssl

Install etcd

Where to install

etcd can run on the master nodes or on dedicated nodes; the cluster must have an odd number of members (e.g. 3, 5, or 7).
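The odd-member-count recommendation follows from etcd's quorum rule: an n-member cluster needs a majority of (n/2)+1 members to stay writable, so adding an even member buys no extra fault tolerance.

```shell
# Quorum size and tolerated failures for common cluster sizes.
# Note 4 members tolerate the same 1 failure as 3 members do.
for n in 1 2 3 4 5 6 7; do
  echo "$n members: quorum $((n / 2 + 1)), tolerates $(((n - 1) / 2)) failure(s)"
done
```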

Generate the CA certificate

Generate the CA config

$ mkdir -p /etc/kubernetes/pki/etcd
$ cd /etc/kubernetes/pki/etcd
$ cfssl print-defaults config > ca-config.json
$ cat ca-config.json

{
    "signing": {
        "default": {
            "expiry": "876000h"
        },
        "profiles": {
            "etcd": {
                "expiry": "876000h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
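The 876000h expiry used in this profile is simply 100 years expressed in hours:

```shell
# 876000 hours / (24 * 365 hours per year) = 100 years
echo $((876000 / (24 * 365)))   # prints 100
```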

Generate the CA certificate signing request (CSR) file

$ cfssl print-defaults csr > ca-csr.json
$ cat ca-csr.json

{
    "CN": "etcd",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Beijing",
            "L": "Beijing",
            "O": "Kubernetes",
            "OU": "Etcd"
        }
    ]
}

Generate the self-signed CA certificate

$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
$ ls *.pem

ca-key.pem ca.pem

Generate the server certificate

CSR file

$ cfssl print-defaults csr > etcd-csr.json
$ cat etcd-csr.json

{
    "CN": "etcd",
    "hosts": [
        "127.0.0.1",
        "192.168.28.11",
        "k8s-master1"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Beijing",
            "L": "Beijing",
            "O": "Kubernetes",
            "OU": "Etcd"
        }
    ]
}

Generate the certificate

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd etcd-csr.json |cfssljson -bare etcd
$ ls etcd*.pem

etcd-key.pem etcd.pem

Generate the peer certificate

CSR file

$ cfssl print-defaults csr > etcd-peer-csr.json
$ cat etcd-peer-csr.json

{
    "CN": "etcd",
    "hosts": [
        "127.0.0.1",
        "192.168.28.11",
        "k8s-master1"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Beijing",
            "L": "Beijing",
            "O": "Kubernetes",
            "OU": "Etcd"
        }
    ]
}

Generate the certificate

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd etcd-peer-csr.json | cfssljson -bare etcd-peer
$ ls etcd-peer*.pem

etcd-peer-key.pem etcd-peer.pem

Download

$ wget https://github.com/etcd-io/etcd/releases/download/v3.5.13/etcd-v3.5.13-linux-amd64.tar.gz
$ tar xf etcd-v3.5.13-linux-amd64.tar.gz
$ cp etcd-v3.5.13-linux-amd64/etcd* /usr/local/bin/
$ etcdctl version

etcdctl version: 3.5.13
API version: 3.5

Create the service

$ vim /etc/systemd/system/etcd.service

[Unit]
Description=Etcd Service
Documentation=https://etcd.io
After=network.target

[Service]
ExecStart=/usr/local/bin/etcd \
    --name=k8s-master1 \
    --data-dir=/var/lib/etcd \
    --wal-dir=/var/lib/etcd \
    --snapshot-count=10000 \
    --advertise-client-urls=https://192.168.28.11:2379 \
    --initial-advertise-peer-urls=https://192.168.28.11:2380 \
    --initial-cluster=k8s-master1=https://192.168.28.11:2380 \
    --listen-client-urls=https://127.0.0.1:2379,https://192.168.28.11:2379 \
    --listen-metrics-urls=http://127.0.0.1:2381 \
    --listen-peer-urls=https://192.168.28.11:2380 \
    --client-cert-auth=true \
    --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem \
    --key-file=/etc/kubernetes/pki/etcd/etcd-key.pem \
    --cert-file=/etc/kubernetes/pki/etcd/etcd.pem \
    --peer-client-cert-auth=true \
    --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem \
    --peer-cert-file=/etc/kubernetes/pki/etcd/etcd-peer.pem \
    --peer-key-file=/etc/kubernetes/pki/etcd/etcd-peer-key.pem \
    --experimental-initial-corrupt-check=true \
    --initial-cluster-state=new

Type=notify
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target

Start the service

$ systemctl daemon-reload
$ systemctl start etcd
$ systemctl enable etcd

# Tail the etcd logs
$ journalctl -u etcd.service -f

Generate the client certificate

$ cd /etc/kubernetes/pki/etcd
$ cfssl print-defaults csr > etcd-client-csr.json
$ cat etcd-client-csr.json

{
    "CN": "etcd",
    "hosts": [
        "127.0.0.1",
        "192.168.28.11",
        "k8s-master1"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Beijing",
            "L": "Beijing",
            "O": "Kubernetes",
            "OU": "Etcd"
        }
    ]
}

$ cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd etcd-client-csr.json | cfssljson -bare etcd-client
$ ls etcd-client*.pem

etcd-client-key.pem etcd-client.pem

List cluster members

$ etcdctl --endpoints='192.168.28.11:2379' --cacert=/etc/kubernetes/pki/etcd/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd-client.pem --key=/etc/kubernetes/pki/etcd/etcd-client-key.pem member list -w table

Check endpoint status

$ etcdctl --endpoints='192.168.28.11:2379' --cacert=/etc/kubernetes/pki/etcd/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd-client.pem --key=/etc/kubernetes/pki/etcd/etcd-client-key.pem endpoint status -w table
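etcdctl also honors environment variables for these connection flags (ETCDCTL_ENDPOINTS, ETCDCTL_CACERT, ETCDCTL_CERT, ETCDCTL_KEY), which keeps repeated invocations short. A sketch, using the paths configured above:

```shell
# Export the connection settings once; etcdctl reads these variables.
export ETCDCTL_ENDPOINTS=https://192.168.28.11:2379
export ETCDCTL_CACERT=/etc/kubernetes/pki/etcd/ca.pem
export ETCDCTL_CERT=/etc/kubernetes/pki/etcd/etcd-client.pem
export ETCDCTL_KEY=/etc/kubernetes/pki/etcd/etcd-client-key.pem

# The commands above then shrink to (guarded so the snippet is harmless
# on a machine without etcdctl installed):
if command -v etcdctl >/dev/null; then
  etcdctl member list -w table
  etcdctl endpoint status -w table
fi
```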

Download the Core Kubernetes Components

Master components: kube-apiserver, kube-controller-manager, kube-scheduler, kubectl

Node components: kubelet, kube-proxy

$ wget https://dl.k8s.io/v1.30.0/bin/linux/amd64/kube-apiserver
$ wget https://dl.k8s.io/v1.30.0/bin/linux/amd64/kube-controller-manager
$ wget https://dl.k8s.io/v1.30.0/bin/linux/amd64/kube-scheduler
$ wget https://dl.k8s.io/v1.30.0/bin/linux/amd64/kubelet
$ wget https://dl.k8s.io/v1.30.0/bin/linux/amd64/kubectl
$ wget https://dl.k8s.io/v1.30.0/bin/linux/amd64/kube-proxy

$ chmod +x kube*

# Install
$ cp kube* /usr/local/bin/

Install kube-apiserver

Where to install

On every master node. The apiserver is stateless, so multiple instances can be placed behind a load balancer for high availability.

Generate the Kubernetes cluster CA certificate

Generate the CA config

$ mkdir -p /etc/kubernetes/pki/ca
$ cd /etc/kubernetes/pki/ca
$ cfssl print-defaults config > ca-config.json
$ cat ca-config.json

{
    "signing": {
        "default": {
            "expiry": "876000h"
        },
        "profiles": {
            "kubernetes": {
                "expiry": "876000h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}

Generate the CA certificate signing request (CSR) file

$ cfssl print-defaults csr > ca-csr.json
$ cat ca-csr.json

{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Beijing",
            "L": "Beijing",
            "O": "Kubernetes",
            "OU": "System"
        }
    ]
}

Generate the self-signed CA certificate

$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
$ ls *.pem

ca-key.pem ca.pem

Generate the server certificate

CSR file

$ mkdir -p /etc/kubernetes/pki/apiserver
$ cd /etc/kubernetes/pki/apiserver
$ cfssl print-defaults csr > apiserver-csr.json
$ cat apiserver-csr.json

{
    "CN": "kubernetes",
    "hosts": [
        "127.0.0.1",
        "192.168.28.11",
        "10.96.0.1",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Beijing",
            "L": "Beijing",
            "O": "Kubernetes",
            "OU": "System"
        }
    ]
}

Generate the certificate

$ cfssl gencert -ca=/etc/kubernetes/pki/ca/ca.pem -ca-key=/etc/kubernetes/pki/ca/ca-key.pem -config=/etc/kubernetes/pki/ca/ca-config.json -profile=kubernetes apiserver-csr.json | cfssljson -bare apiserver
$ ls apiserver*.pem

apiserver-key.pem  apiserver.pem

Generate the kubelet client certificate

CSR file

$ cd /etc/kubernetes/pki/apiserver
$ cfssl print-defaults csr > kubelet-client-csr.json
$ cat kubelet-client-csr.json

{
    "CN": "system:kubelet",
    "hosts": [
	    "127.0.0.1",
        "192.168.28.11",
        "k8s-master1"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Beijing",
            "L": "Beijing",
            "O": "Kubernetes",
            "OU": "System"
        }
    ]
}

Generate the certificate

$ cfssl gencert -ca=/etc/kubernetes/pki/ca/ca.pem -ca-key=/etc/kubernetes/pki/ca/ca-key.pem -config=/etc/kubernetes/pki/ca/ca-config.json -profile=kubernetes kubelet-client-csr.json | cfssljson -bare kubelet-client
$ ls kubelet-client*.pem

kubelet-client-key.pem  kubelet-client.pem

Generate the etcd client certificate

CSR file

$ cd /etc/kubernetes/pki/apiserver
$ cfssl print-defaults csr > etcd-client-csr.json
$ cat etcd-client-csr.json

{
    "CN": "system:etcd",
    "hosts": [
	    "127.0.0.1",
        "192.168.28.11",
        "k8s-master1"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Beijing",
            "L": "Beijing",
            "O": "Kubernetes",
            "OU": "System"
        }
    ]
}

Generate the certificate

$ cfssl gencert -ca=/etc/kubernetes/pki/etcd/ca.pem -ca-key=/etc/kubernetes/pki/etcd/ca-key.pem -config=/etc/kubernetes/pki/etcd/ca-config.json -profile=etcd etcd-client-csr.json | cfssljson -bare etcd-client
$ ls etcd-client*.pem

etcd-client-key.pem  etcd-client.pem

Generate the service account certificate

CSR file

$ mkdir /etc/kubernetes/pki/sa
$ cd /etc/kubernetes/pki/sa
$ cfssl print-defaults csr > sa-csr.json
$ cat sa-csr.json

{
    "CN": "system:serviceAccount",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Beijing",
            "L": "Beijing",
            "O": "Kubernetes",
            "OU": "System"
        }
    ]
}

Generate the certificate

$ cfssl gencert -ca=/etc/kubernetes/pki/ca/ca.pem -ca-key=/etc/kubernetes/pki/ca/ca-key.pem -config=/etc/kubernetes/pki/ca/ca-config.json -profile=kubernetes sa-csr.json | cfssljson -bare sa
$ ls sa*.pem

sa-key.pem  sa.pem

Create the service

$ vim /etc/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
    --bind-address=192.168.28.11  \
    --secure-port=6443 \
    --advertise-address=192.168.28.11 \
    --allow-privileged=true  \
    --authorization-mode=AlwaysAllow \
    --client-ca-file=/etc/kubernetes/pki/ca/ca.pem \
    --enable-admission-plugins=NodeRestriction \
    --enable-bootstrap-token-auth=true \
    --etcd-cafile=/etc/kubernetes/pki/etcd/ca.pem  \
    --etcd-certfile=/etc/kubernetes/pki/apiserver/etcd-client.pem  \
    --etcd-keyfile=/etc/kubernetes/pki/apiserver/etcd-client-key.pem  \
    --etcd-servers=https://192.168.28.11:2379 \
    --kubelet-certificate-authority=/etc/kubernetes/pki/ca/ca.pem \
    --kubelet-client-certificate=/etc/kubernetes/pki/apiserver/kubelet-client.pem \
    --kubelet-client-key=/etc/kubernetes/pki/apiserver/kubelet-client-key.pem \
    --service-account-issuer=https://kubernetes.default.svc.cluster.local \
    --service-account-signing-key-file=/etc/kubernetes/pki/sa/sa-key.pem \
    --service-account-key-file=/etc/kubernetes/pki/sa/sa.pem \
    --tls-cert-file=/etc/kubernetes/pki/apiserver/apiserver.pem \
    --tls-private-key-file=/etc/kubernetes/pki/apiserver/apiserver-key.pem \
    --kubelet-preferred-address-types=InternalIP,Hostname \
    --service-cluster-ip-range=10.96.0.0/16  \
    --service-node-port-range=30000-60000 \
    --requestheader-client-ca-file=/etc/kubernetes/pki/ca/ca.pem

Type=notify
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target

Start the service

$ systemctl daemon-reload
$ systemctl start kube-apiserver.service
$ systemctl enable kube-apiserver.service
$ systemctl status kube-apiserver.service

Install kube-controller-manager

Where to install

All master nodes; only the elected leader is active, while the others stand by as hot spares.

Generate the certificate

CSR file

$ mkdir -p /etc/kubernetes/pki/controller-manager
$ cd /etc/kubernetes/pki/controller-manager
$ cfssl print-defaults csr > controller-manager-csr.json
$ cat controller-manager-csr.json

{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Beijing",
            "L": "Beijing",
            "O": "Kubernetes",
            "OU": "System"
        }
    ]
}

Generate the certificate

$ cfssl gencert -ca=/etc/kubernetes/pki/ca/ca.pem -ca-key=/etc/kubernetes/pki/ca/ca-key.pem -config=/etc/kubernetes/pki/ca/ca-config.json -profile=kubernetes controller-manager-csr.json | cfssljson -bare controller-manager
$ ls *.pem

controller-manager-key.pem  controller-manager.pem

Set up the kubeconfig

Configure the cluster entry

Use kubectl config set-cluster to write the cluster information into the kubeconfig file: the certificate authority, certificates, and the kube-apiserver address.

  • --certificate-authority points at the cluster certificate authority (CA), which verifies the certificate presented by kube-apiserver
  • --embed-certs embeds the certificates into the generated kubeconfig file, so no separate certificate paths need to be referenced
  • --server is the kube-apiserver address
  • --kubeconfig is the path and name of the kubeconfig file to generate

$ kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca/ca.pem \
     --embed-certs=true \
     --server=https://192.168.28.11:6443 \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

Configure the context

Use kubectl config set-context to configure the context for the controller manager.

$ kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

  • The context is named system:kube-controller-manager@kubernetes
  • --cluster=kubernetes names the cluster
  • --user=system:kube-controller-manager selects the user identity, a special identity that carries the controller manager's permissions

Configure the user credentials

kubectl config set-credentials sets the client credentials for the controller-manager component.

$ kubectl config set-credentials system:kube-controller-manager \
       --client-certificate=/etc/kubernetes/pki/controller-manager/controller-manager.pem \
       --client-key=/etc/kubernetes/pki/controller-manager/controller-manager-key.pem \
       --embed-certs=true \
       --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

  • system:kube-controller-manager is the name of the credentials entry
  • --client-certificate is the client certificate path
  • --client-key is the certificate's private key
  • --embed-certs embeds the certificate and key directly into the generated kubeconfig file

Set the default context

Tell kubectl to use the given context. A context is the configuration kubectl uses to decide which Kubernetes cluster to connect to and which credentials to use.

$ kubectl config use-context system:kube-controller-manager@kubernetes \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

Create the service

$ vim /etc/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
    --authentication-kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
    --authorization-kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
    --bind-address=127.0.0.1 \
    --client-ca-file=/etc/kubernetes/pki/ca/ca.pem \
    --cluster-name=kubernetes \
    --cluster-signing-cert-file=/etc/kubernetes/pki/ca/ca.pem \
    --cluster-signing-key-file=/etc/kubernetes/pki/ca/ca-key.pem \
    --controllers=*,bootstrapsigner,tokencleaner \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \
    --requestheader-client-ca-file=/etc/kubernetes/pki/ca/ca.pem \
    --root-ca-file=/etc/kubernetes/pki/ca/ca.pem \
    --service-account-private-key-file=/etc/kubernetes/pki/sa/sa-key.pem \
    --use-service-account-credentials=true \
    --tls-cert-file=/etc/kubernetes/pki/controller-manager/controller-manager.pem \
    --tls-private-key-file=/etc/kubernetes/pki/controller-manager/controller-manager-key.pem \
    --leader-elect=true \
    --node-monitor-grace-period=40s \
    --node-monitor-period=5s \
    --allocate-node-cidrs=true \
    --cluster-cidr=10.244.0.0/16 \
    --node-cidr-mask-size=24

Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target

Start the service

$ systemctl daemon-reload
$ systemctl start kube-controller-manager.service
$ systemctl enable kube-controller-manager.service
$ systemctl status kube-controller-manager.service

● kube-controller-manager.service - Kubernetes Controller Manager
     Loaded: loaded (/etc/systemd/system/kube-controller-manager.service; enabled; vendor preset: enabled)
     Active: active (running) since 6s ago
       Docs: https://github.com/kubernetes/kubernetes
   Main PID: 3875 (kube-controller)
      Tasks: 6 (limit: 2309)
     Memory: 17.5M
        CPU: 413ms
     CGroup: /system.slice/kube-controller-manager.service
             └─3875 /usr/local/bin/kube-controller-manager --authentication-kubeconfig=/etc/kubernetes/controller-manager.kubeconfig --authorizati>
     ...

Install kube-scheduler

Where to install

Master nodes

Generate the certificate

CSR file

$ mkdir -p /etc/kubernetes/pki/scheduler
$ cd /etc/kubernetes/pki/scheduler
$ cfssl print-defaults csr > scheduler-csr.json
$ cat scheduler-csr.json

{
    "CN": "system:kube-scheduler",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Beijing",
            "L": "Beijing",
            "O": "Kubernetes",
            "OU": "System"
        }
    ]
}

Generate the certificate

$ cfssl gencert -ca=/etc/kubernetes/pki/ca/ca.pem -ca-key=/etc/kubernetes/pki/ca/ca-key.pem -config=/etc/kubernetes/pki/ca/ca-config.json -profile=kubernetes scheduler-csr.json | cfssljson -bare scheduler
$ ls *.pem

scheduler-key.pem  scheduler.pem

Set up the kubeconfig

Configure the cluster entry

$ kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca/ca.pem \
     --embed-certs=true \
     --server=https://192.168.28.11:6443 \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Configure the user credentials

$ kubectl config set-credentials system:kube-scheduler \
     --client-certificate=/etc/kubernetes/pki/scheduler/scheduler.pem \
     --client-key=/etc/kubernetes/pki/scheduler/scheduler-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Configure the context

$ kubectl config set-context system:kube-scheduler@kubernetes \
     --cluster=kubernetes \
     --user=system:kube-scheduler \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Set the default context

$ kubectl config use-context system:kube-scheduler@kubernetes \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Create the service

$ vim /etc/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
    --kubeconfig=/etc/kubernetes/scheduler.kubeconfig \
    --authentication-kubeconfig=/etc/kubernetes/scheduler.kubeconfig \
    --authorization-kubeconfig=/etc/kubernetes/scheduler.kubeconfig \
    --bind-address=127.0.0.1 \
    --leader-elect=true

Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target

Start the service

$ systemctl daemon-reload
$ systemctl start kube-scheduler.service
$ systemctl enable kube-scheduler.service
$ systemctl status kube-scheduler.service

● kube-scheduler.service - Kubernetes Scheduler
     Loaded: loaded (/etc/systemd/system/kube-scheduler.service; enabled; vendor preset: enabled)
     Active: active (running) since 7s ago
       Docs: https://github.com/kubernetes/kubernetes
   Main PID: 4180 (kube-scheduler)
      Tasks: 7 (limit: 2309)
     Memory: 13.6M
        CPU: 902ms
     CGroup: /system.slice/kube-scheduler.service
             └─4180 /usr/local/bin/kube-scheduler

Configure admin.kubeconfig

Where to generate

Master nodes

Generate the admin certificate

$ mkdir -p /etc/kubernetes/pki/admin
$ cd /etc/kubernetes/pki/admin

$ cat > admin-csr.json << EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "System"
    }
  ]
}
EOF
$ cfssl gencert \
   -ca=/etc/kubernetes/pki/ca/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca/ca-key.pem \
   -config=/etc/kubernetes/pki/ca/ca-config.json \
   -profile=kubernetes \
   admin-csr.json | cfssljson -bare admin

Configure the kubeconfig

$ kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca/ca.pem \
  --embed-certs=true \
  --server=https://192.168.28.11:6443 \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig
$ kubectl config set-credentials kubernetes-admin \
  --client-certificate=/etc/kubernetes/pki/admin/admin.pem \
  --client-key=/etc/kubernetes/pki/admin/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig
$ kubectl config set-context kubernetes-admin@kubernetes \
  --cluster=kubernetes \
  --user=kubernetes-admin \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig
$ kubectl config use-context kubernetes-admin@kubernetes  \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

Copy the config into place

$ mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config

Check cluster status

$ kubectl get cs

NAME                 STATUS    MESSAGE   ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   ok

Copy to the node machines

Copy the relevant certificates and configuration from the master node to each node machine:

master node path -> node path
/etc/kubernetes/pki/ca/ca.pem->/etc/kubernetes/pki/ca/ca.pem
/etc/kubernetes/pki/ca/ca-key.pem->/etc/kubernetes/pki/ca/ca-key.pem
/etc/kubernetes/pki/ca/ca-config.json->/etc/kubernetes/pki/ca/ca-config.json
/etc/kubernetes/bootstrap-kubelet.kubeconfig->/etc/kubernetes/bootstrap-kubelet.kubeconfig

Install kubelet

Where to install

Master and node machines

Generate the certificate

CSR file (192.168.28.101 below is this machine's IP; change it on each node)

$ mkdir -p /etc/kubernetes/pki/kubelet
$ cd /etc/kubernetes/pki/kubelet
$ cfssl print-defaults csr > kubelet-csr.json
$ cat kubelet-csr.json

{
    "CN": "system:kubelet",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
        "127.0.0.1",
        "192.168.28.101", // 本机ip
    ],
    "names": [
        {
            "C": "CN",
            "ST": "Beijing",
            "L": "Beijing",
            "O": "Kubernetes",
            "OU": "System"
        }
    ]
}

Generate the certificate

$ cfssl gencert -ca=/etc/kubernetes/pki/ca/ca.pem -ca-key=/etc/kubernetes/pki/ca/ca-key.pem -config=/etc/kubernetes/pki/ca/ca-config.json -profile=kubernetes kubelet-csr.json | cfssljson -bare kubelet
$ ls *.pem

kubelet-key.pem  kubelet.pem

Set up the kubeconfig

Configure the cluster entry

$ kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca/ca.pem \
     --embed-certs=true \
     --server=https://192.168.28.11:6443 \
     --kubeconfig=/etc/kubernetes/kubelet.kubeconfig

Configure the user credentials

$ kubectl config set-credentials system:kubelet \
     --client-certificate=/etc/kubernetes/pki/kubelet/kubelet.pem \
     --client-key=/etc/kubernetes/pki/kubelet/kubelet-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/kubelet.kubeconfig

Configure the context

$ kubectl config set-context system:kubelet@kubernetes \
     --cluster=kubernetes \
     --user=system:kubelet \
     --kubeconfig=/etc/kubernetes/kubelet.kubeconfig

Set the default context

$ kubectl config use-context system:kubelet@kubernetes \
     --kubeconfig=/etc/kubernetes/kubelet.kubeconfig

Configuration file

$ vim /etc/kubernetes/kubelet-conf.yaml

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s

Create the service

$ vim /etc/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=network-online.target containerd.service
Wants=network-online.target
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \
    --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \
    --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
    --config=/etc/kubernetes/kubelet-conf.yaml \
    --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
    --node-labels=node.kubernetes.io/node=k8s-node1 \
    --resolv-conf=/run/systemd/resolve/resolv.conf \
    --tls-cert-file=/etc/kubernetes/pki/kubelet/kubelet.pem \
    --tls-private-key-file=/etc/kubernetes/pki/kubelet/kubelet-key.pem
[Install]
WantedBy=multi-user.target

Start the service

$ systemctl daemon-reload
$ systemctl start kubelet.service
$ systemctl enable kubelet.service
$ systemctl status kubelet.service

● kubelet.service - Kubernetes Kubelet
     Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
     Active: active (running) since 5s ago
       Docs: https://github.com/kubernetes/kubernetes
   Main PID: 3452 (kubelet)
      Tasks: 11 (limit: 1102)
     Memory: 19.8M
        CPU: 388ms
     CGroup: /system.slice/kubelet.service
             └─3452 /usr/local/bin/kubelet

安装kube-proxy

安装位置

master节点、node节点

生成证书

$ mkdir -p /etc/kubernetes/pki/proxy
$ cd /etc/kubernetes/pki/proxy

$ cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "System"
    }
  ]
}
EOF
$ cfssl gencert \
   -ca=/etc/kubernetes/pki/ca/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca/ca-key.pem \
   -config=/etc/kubernetes/pki/ca/ca-config.json \
   -profile=kubernetes \
   kube-proxy-csr.json | cfssljson -bare kube-proxy
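
这里的 CN 必须是 system:kube-proxy,kube-proxy 依赖这个身份匹配内置的 RBAC 规则。下面用 openssl 生成一个字段相同的示意 CSR 并打印主题,仅用于演示各字段的含义,并非安装步骤:

```shell
# 示意:用 openssl 生成与上面 CSR 字段一致的请求并检查主题(非安装步骤)
workdir=$(mktemp -d)
openssl req -new -newkey rsa:2048 -nodes \
  -keyout "$workdir/kube-proxy-demo-key.pem" \
  -out "$workdir/kube-proxy-demo.csr" \
  -subj "/C=CN/ST=Beijing/L=Beijing/O=Kubernetes/OU=System/CN=system:kube-proxy" \
  2>/dev/null
# 打印 CSR 主题,其中应包含 CN = system:kube-proxy
openssl req -in "$workdir/kube-proxy-demo.csr" -noout -subject
```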

设置集群项

配置集群信息

$ kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca/ca.pem \
  --embed-certs=true \
  --server=https://192.168.28.11:6443 \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

配置环境上下文

$ kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/pki/proxy/kube-proxy.pem \
  --client-key=/etc/kubernetes/pki/proxy/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

配置用户项

$ kubectl config set-context kube-proxy@kubernetes \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

设置默认环境

$ kubectl config use-context kube-proxy@kubernetes \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

配置文件

$ vim /etc/kubernetes/kube-proxy.yaml

apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 10.244.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
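
mode 设为 ipvs 时,内核需要加载 ip_vs 相关模块,否则 kube-proxy 会回退到 iptables 模式。可以用下面的脚本粗查(示意,模块列表按常见配置给出,内核内建的模块不会出现在 /proc/modules 中):

```shell
# 示意:读取 /proc/modules,检查 ipvs 模式常用的内核模块是否已加载
loaded=0
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do
  if grep -q "^$m " /proc/modules 2>/dev/null; then
    echo "$m: 已加载"
    loaded=$((loaded + 1))
  else
    echo "$m: 未加载,可尝试 modprobe $m"
  fi
done
echo "已加载模块数: $loaded"
```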

创建服务

$ vim /etc/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml

[Install]
WantedBy=multi-user.target

启动服务

$ systemctl daemon-reload
$ systemctl start kube-proxy.service
$ systemctl enable kube-proxy.service
$ systemctl status kube-proxy.service

● kube-proxy.service - Kubernetes Kube Proxy
     Loaded: loaded (/etc/systemd/system/kube-proxy.service; enabled; vendor preset: enabled)
     Active: active (running) since 7s ago
       Docs: https://github.com/kubernetes/kubernetes
   Main PID: 5474 (kube-proxy)
      Tasks: 5 (limit: 1102)
     Memory: 11.6M
        CPU: 346ms
     CGroup: /system.slice/kube-proxy.service
             └─5474 /usr/local/bin/kube-proxy --config=/etc/kubernetes/kube-proxy.yaml

安装网络插件

安装Calico operator

$ wget https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/tigera-operator.yaml
$ kubectl create -f tigera-operator.yaml

namespace/tigera-operator created
...
deployment.apps/tigera-operator created

安装Calico自定义资源

$ wget https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/custom-resources.yaml

$ vim custom-resources.yaml

spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16 # 此处修改为 Pod 的 CIDR
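
官方 custom-resources.yaml 中 cidr 默认为 192.168.0.0/16,也可以直接用 sed 批量替换。下面在 /tmp 下用一份同结构的示意文件演示替换逻辑:

```shell
# 示意:构造一份与官方默认值同结构的片段,把 cidr 替换为本集群的 Pod CIDR
cat > /tmp/custom-resources-demo.yaml << 'EOF'
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 192.168.0.0/16
EOF
sed -i 's|192.168.0.0/16|10.244.0.0/16|' /tmp/custom-resources-demo.yaml
grep 'cidr:' /tmp/custom-resources-demo.yaml
```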

查看安装状态

$ watch kubectl get pods -n calico-system

NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-bc6b6f4d5-xtkkh   1/1     Running   0          14m
calico-node-cr5z8                         1/1     Running   0          14m
calico-typha-69c7c64497-bnr8j             1/1     Running   0          14m
csi-node-driver-ghvgq                     2/2     Running   0          14m

安装CoreDNS

见官方文档:deployment/kubernetes at master · coredns/deployment (github.com)

相关注意事项:

  • 在 Ubuntu 中需要将 Corefile 的 forward 指向 /run/systemd/resolve/resolv.conf:系统默认的 /etc/resolv.conf 指向 systemd-resolved 的本机存根地址 127.0.0.53,CoreDNS 若直接 forward 到它会形成解析环路,被 loop 插件检测到后 Pod 会反复 CrashLoopBackOff
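
对应的 Corefile 片段示意如下(其余插件假设按官方部署脚本的默认值生成,仅 forward 一行按上面的说明修改):

```
.:53 {
    errors
    health {
        lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /run/systemd/resolve/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
```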

安装Metrics Server

下载安装文件

$ wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

修改配置

$ vim components.yaml

# 增加启动参数:
...
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=10250
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/ca/ca.pem
        image: registry.k8s.io/metrics-server/metrics-server:v0.7.1
...

# 增加挂载
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - name: ca-ssl
          mountPath: /etc/kubernetes/pki

# 定义挂载
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/pki

查看

$ kubectl top node

NAME          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master1   144m         7%     1164Mi          62%
k8s-node1     101m         5%     655Mi           76%