1 Pre-installation preparation
This walkthrough uses a single CentOS cloud host on Tencent Cloud. Kubernetes needs a container runtime (containerd), which can be installed together with Docker:
yum install docker-ce
This command installs docker-ce, docker-ce-cli, and containerd.
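Note that docker-ce is not in the stock CentOS repositories, so the command above only works once the Docker CE repo has been added. A minimal sketch, assuming the official Docker repo URL:
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
systemctl enable --now containerd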
2 Install kubeadm, kubelet and kubectl
Installing them directly with yum fails because the packages cannot be found, so first add a repo file at /etc/yum.repos.d/kubernetes.repo:
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet
systemctl start kubelet
setenforce 0
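As an optional sanity check, the installed versions can be printed before going any further:
kubeadm version
kubectl version --client
kubelet --version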
Before running kubeadm init, systemctl status kubelet shows the service as failing. This is expected: the kubelet needs the configuration files that kubeadm init generates. Next, run:
kubeadm init --image-repository=registry.aliyuncs.com/google_containers
This fails with an error:
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Following the hints, journalctl -xeu kubelet reveals the cause:
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = DeadlineExceeded desc = failed to get sandbox image \"registry.k8s.io/pause:3.9\": failed to pull image \"registry.k8s.io/pause:3.9\": failed to pull and unpack image \"registry.k8s.io/pause:3.9\
The fix is to edit /etc/containerd/config.toml. Opening it shows the file is essentially empty; in that case, dump the effective configuration with containerd config dump, write that output into config.toml, and then change sandbox_image:
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
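A sketch of those steps (the sed pattern assumes the dumped config still references registry.k8s.io/pause:3.9, as in the error above, and that sandbox_image sits in the CRI plugin section):
containerd config dump > /tmp/config.toml
sed -i 's#registry.k8s.io/pause:3.9#registry.aliyuncs.com/google_containers/pause:3.9#' /tmp/config.toml
cp /tmp/config.toml /etc/containerd/config.toml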
After making the change, remember to restart containerd with systemctl restart containerd, then run:
kubeadm reset
kubeadm init --image-repository=registry.aliyuncs.com/google_containers
This time initialization succeeds; follow the instructions printed in the output to finish setting things up.
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.16.12:6443 --token 7y60uv.xrauoinoozq8blec \
--discovery-token-ca-cert-hash sha256:21e20ee465f0eb9f407e702fbd12fba3e5c63ebab6bd0c8509f0a9d605a24b83
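A pod network still has to be deployed before the node becomes Ready. As one example among the addons listed above, flannel can be installed from its published manifest (note that flannel's defaults expect the cluster to have been initialized with --pod-network-cidr=10.244.0.0/16):
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml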
After the network plugin is installed, run:
kubectl get nodes
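It is also worth confirming that the system Pods, including the network plugin's Pods, all reach the Running state:
kubectl get pods -A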
3 Define a Pod and a Service
Save the following as redis.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    app: redis
spec:
  containers:
  - name: redis
    image: redis:latest
    ports:
    - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  type: NodePort
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
kubectl apply -f redis.yaml
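To see why the Pod is not running, check its status and the node's taints (a typical sequence; output omitted):
kubectl get pods
kubectl describe pod redis          # the Events section shows the scheduling failure
kubectl describe node | grep Taints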
The Pod stays in the Pending state because the current node (the control plane) carries a taint:
Taints: node-role.kubernetes.io/control-plane:NoSchedule
A matching toleration has to be added to the YAML before the Pod can be scheduled: a tolerations entry with operator: "Exists" and effect: "NoSchedule".
- key: the taint key; if specified, the Pod only tolerates taints with that key.
- operator: either Exists (no value needed; with no key it tolerates any taint, with a key it tolerates any taint carrying that key) or Equal (a value field is required, and only taints with that exact key and value are tolerated).
- effect: the taint effect to tolerate: NoSchedule (Pods without a matching toleration are not scheduled onto the node), PreferNoSchedule (the scheduler tries to avoid, but does not strictly forbid, placing such Pods on the node), or NoExecute (such Pods are not scheduled onto the node and are evicted if already running there).
apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    app: redis
spec:
  containers:
  - name: redis
    image: redis:latest
    ports:
    - containerPort: 6379
  tolerations:
  - operator: "Exists"
    effect: "NoSchedule"
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  type: NodePort
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
After re-applying the manifest, the Pod can now be scheduled and leaves the Pending state.
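A sketch of the re-apply and verification (deleting the old Pod first is the simplest way to pick up the new spec):
kubectl delete pod redis
kubectl apply -f redis.yaml
kubectl get pod redis
kubectl get svc redis-service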
4 Accessing the api-server directly
Everything above accesses the api-server indirectly through kubectl. How can it be called directly?
- cat ~/.kube/config and locate client-certificate-data, client-key-data, and the server address: server: https://10.0.16.12:6443
- Decode the data: echo '<base64_encoded_data>' | base64 --decode > ~/.kube/client.crt (repeat for client-key-data to produce ~/.kube/client.key)
- curl -k --cert ~/.kube/client.crt --key ~/.kube/client.key https://127.0.0.1:6443/api/v1/pods
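The same credentials can also be extracted without copying base64 blobs by hand (a sketch; the jsonpath indexes assume the kubeconfig contains a single user entry):
kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' | base64 --decode > ~/.kube/client.crt
kubectl config view --raw -o jsonpath='{.users[0].user.client-key-data}' | base64 --decode > ~/.kube/client.key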