4. K8s First Steps: Deploying the Cluster's Common Components

Check the required images

$ kubeadm config images list
W0825 16:17:37.456192   13906 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.8
k8s.gcr.io/kube-controller-manager:v1.18.8
k8s.gcr.io/kube-scheduler:v1.18.8
k8s.gcr.io/kube-proxy:v1.18.8
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
Watch out for a pitfall here

By default, when you run the k8s initialization, it pulls the required images on the fly as part of the process. Once you notice the "k8s.gcr.io" prefix, you'll see the problem: these images are largely blocked (from mainland China), so first try pulling them using the image addresses listed above.

If that fails, go find substitute images first, and then (note this step) use docker tag to rename each substitute to the corresponding image name above, because a substitute image's name will certainly not match the required one.
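The pull-then-retag pattern can be sketched generically. This is only a sketch: the mirror repository below (registry.aliyuncs.com/google_containers, a commonly used public mirror) is an assumption, not something from this post; substitute whichever mirror you actually find.

```shell
#!/bin/bash
# Sketch of the generic pull/tag/rmi pattern. The mirror repository here is
# an assumption (a commonly used public mirror); replace it with yours.
MIRROR="registry.aliyuncs.com/google_containers"

# Map an upstream image name to its mirror equivalent, e.g.
# k8s.gcr.io/pause:3.2 -> registry.aliyuncs.com/google_containers/pause:3.2
mirror_name() {
  echo "$1" | sed "s#^k8s.gcr.io/#${MIRROR}/#"
}

# Pull each required image from the mirror, retag it to the name
# kubeadm expects, then remove the mirror-named tag.
for img in $(kubeadm config images list 2>/dev/null); do
  src=$(mirror_name "$img")
  docker pull "$src" && docker tag "$src" "$img" && docker rmi "$src"
done
```

If a single mirror carries all the images under the expected names, kubeadm also accepts an --image-repository flag on kubeadm init, which makes it pull directly from that registry and skips the retagging dance entirely.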

I prepared a script for this (since I installed the latest version, it took a long search to find matching substitutes); remember to change the versions to match yours:

#!/bin/bash
images=(etcd:3.4.3-0 coredns:1.6.7 pause:3.2)
for imageName in "${images[@]}" ; do
    docker pull registry.cn-shenzhen.aliyuncs.com/image-kubernetes/$imageName
    docker tag registry.cn-shenzhen.aliyuncs.com/image-kubernetes/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-shenzhen.aliyuncs.com/image-kubernetes/$imageName
done

# These were hard to find; they correspond exactly to the current latest version
docker pull mesosphere/kube-proxy-amd64:v1.18.8_d2iq.0
docker tag mesosphere/kube-proxy-amd64:v1.18.8_d2iq.0 k8s.gcr.io/kube-proxy:v1.18.8
docker rmi mesosphere/kube-proxy-amd64:v1.18.8_d2iq.0

docker pull mesosphere/kube-scheduler-amd64:v1.18.8_d2iq.0
docker tag mesosphere/kube-scheduler-amd64:v1.18.8_d2iq.0 k8s.gcr.io/kube-scheduler:v1.18.8
docker rmi mesosphere/kube-scheduler-amd64:v1.18.8_d2iq.0

docker pull mesosphere/kube-controller-manager-amd64:v1.18.8_d2iq.0
docker tag mesosphere/kube-controller-manager-amd64:v1.18.8_d2iq.0 k8s.gcr.io/kube-controller-manager:v1.18.8
docker rmi mesosphere/kube-controller-manager-amd64:v1.18.8_d2iq.0

docker pull gotok8s/kube-apiserver:v1.18.8
docker tag gotok8s/kube-apiserver:v1.18.8 k8s.gcr.io/kube-apiserver:v1.18.8
docker rmi gotok8s/kube-apiserver:v1.18.8

Save the content above to k8s-install.sh, make it executable, and run it

chmod +x k8s-install.sh && ./k8s-install.sh

# Then pray every pull succeeds in one go, the crowning miracle of your life
# After tracking down and tidying up all the images:
docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-scheduler            v1.18.8             600e066d21b8        8 days ago          112MB
k8s.gcr.io/kube-controller-manager   v1.18.8             d27cd511adc1        8 days ago          203MB
k8s.gcr.io/kube-proxy                v1.18.8             1f29d96a89c2        8 days ago          132MB
k8s.gcr.io/kube-apiserver            v1.18.8             92d040a0dca7        11 days ago         173MB
k8s.gcr.io/pause                     3.2                 80d28bedfe5d        6 months ago        683kB
k8s.gcr.io/coredns                   1.6.7               67da37a9a360        6 months ago        43.8MB
k8s.gcr.io/etcd                      3.4.3-0             303ce5db0e90        10 months ago       288MB
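Rather than comparing the table above against the kubeadm list by eye, a short loop can confirm that every expected image is now present locally. This is a sketch; it assumes kubeadm and docker are both on the PATH.

```shell
#!/bin/bash
# For each image kubeadm expects, ask docker whether it exists locally;
# report any that are still missing.
missing=0
for img in $(kubeadm config images list 2>/dev/null); do
  if ! docker image inspect "$img" >/dev/null 2>&1; then
    echo "MISSING: $img"
    missing=1
  fi
done
if [ "$missing" -eq 0 ]; then
  echo "all required images are present"
fi
```

docker image inspect exits non-zero when the image is absent, so the loop flags exactly the images you still need to fetch and retag.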

With that, all the environment dependencies and components needed by the master and node machines are in place.

Next, take a backup of this virtual machine image, and from this snapshot clone two more copies to serve as the node machines.