Deploying Kubernetes with kubeadm


Ways to deploy Kubernetes

  • kubeadm: quick to set up and good for learning, but not suitable for production.
  • Professional tooling such as kops, SaltStack, or Ansible: production-grade, but not well suited for learning.
  • Fully containerized deployment (running the kubelet itself in a container): not recommended, because a containerized kubelet is hampered by resource isolation.

Deploying with kubeadm

kubeadm's approach: run the kubelet directly on the host, and deploy all the other Kubernetes components as containers.
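
On an initialized master you can see this split directly: the kubelet is a systemd service on the host, while the control-plane components run as containers started from static pod manifests. A quick check (assuming Docker as the container runtime):

```shell
# kubelet runs directly on the host as a systemd service
systemctl status kubelet --no-pager | head -n 3

# the other components are containers, created from static pod
# manifests that kubeadm writes under /etc/kubernetes/manifests
ls /etc/kubernetes/manifests/
docker ps --format '{{.Names}}' | grep -E 'apiserver|scheduler|controller-manager|etcd'
```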

For kubeadm's prerequisites and installation, see the official documentation.

For installing Docker, taking Ubuntu as an example, the official convenience script can be used.
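
The convenience-script install amounts to the following (the official method from get.docker.com; it is worth inspecting the downloaded script before running it as root):

```shell
# download and run Docker's official convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
```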

The kubeadm commands themselves are simple; the two to master are kubeadm init and kubeadm join.

This open-source project is a good entry point for learning Kubernetes cluster setup, so I set myself the following goals:

  • Trace kubeadm's configuration flow through the source code, to understand how a Kubernetes cluster fits together.
  • Write a custom kubeadm.yaml configuration file and run kubeadm init --config kubeadm.yaml, so that the deployment goes through smoothly even from within mainland China.
  • Use kubeadm to deploy a working Kubernetes cluster.

Environment

Ubuntu 16.04 LTS

kubeadm v1.11.5 kubelet v1.11.5 kubectl v1.11.5

 curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
 cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
 deb http://apt.kubernetes.io/ kubernetes-xenial main
 EOF
 apt update && apt install -y docker.io kubeadm=1.11.5-00  kubectl=1.11.5-00  kubelet=1.11.5-00
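
If packages.cloud.google.com is unreachable from your network (the mainland-China case this post is concerned with), a common workaround is to switch the apt source to a domestic mirror. The Aliyun mirror below is one such option; it is an assumption on my part, not part of the original setup, so verify it against your environment:

```shell
# alternative apt source via the Aliyun mirror (assumed mirror path; verify before use)
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt update && apt install -y kubeadm=1.11.5-00 kubectl=1.11.5-00 kubelet=1.11.5-00
# pin the versions so a routine 'apt upgrade' does not move them
apt-mark hold kubeadm kubectl kubelet
```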

Sample output of kubeadm init:

 root@ubuntu-s-2vcpu-2gb-sfo2-01:~# kubeadm init
 [init] using Kubernetes version: v1.11.5
 [preflight] running pre-flight checks
 I1207 06:04:21.290234    6088 kernel_validator.go:81] Validating kernel version
 I1207 06:04:21.290363    6088 kernel_validator.go:96] Validating kernel config
 [preflight/images] Pulling images required for setting up a Kubernetes cluster
 [preflight/images] This might take a minute or two, depending on the speed of your internet connection
 [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
 [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
 [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
 [preflight] Activating the kubelet service
 [certificates] Generated ca certificate and key.
 [certificates] Generated apiserver certificate and key.
 [certificates] apiserver serving cert is signed for DNS names [ubuntu-s-2vcpu-2gb-sfo2-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 104.248.184.139]
 [certificates] Generated apiserver-kubelet-client certificate and key.
 [certificates] Generated sa key and public key.
 [certificates] Generated front-proxy-ca certificate and key.
 [certificates] Generated front-proxy-client certificate and key.
 [certificates] Generated etcd/ca certificate and key.
 [certificates] Generated etcd/server certificate and key.
 [certificates] etcd/server serving cert is signed for DNS names [ubuntu-s-2vcpu-2gb-sfo2-01 localhost] and IPs [127.0.0.1 ::1]
 [certificates] Generated etcd/peer certificate and key.
 [certificates] etcd/peer serving cert is signed for DNS names [ubuntu-s-2vcpu-2gb-sfo2-01 localhost] and IPs [104.248.184.139 127.0.0.1 ::1]
 [certificates] Generated etcd/healthcheck-client certificate and key.
 [certificates] Generated apiserver-etcd-client certificate and key.
 [certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
 [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
 [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
 [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
 [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
 [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
 [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
 [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
 [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
 [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
 [init] this might take a minute or longer if the control plane images have to be pulled
 [apiclient] All control plane components are healthy after 41.502318 seconds
 [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
 [kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
 [markmaster] Marking the node ubuntu-s-2vcpu-2gb-sfo2-01 as master by adding the label "node-role.kubernetes.io/master=''"
 [markmaster] Marking the node ubuntu-s-2vcpu-2gb-sfo2-01 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
 [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "ubuntu-s-2vcpu-2gb-sfo2-01" as an annotation
 [bootstraptoken] using token: 9jgjk7.d1p4rc5vdonyhd64
 [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
 [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
 [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
 [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
 [addons] Applied essential addon: CoreDNS
 [addons] Applied essential addon: kube-proxy
 ​
 Your Kubernetes master has initialized successfully!

 To start using your cluster, you need to run the following as a regular user:

   mkdir -p $HOME/.kube
   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
   sudo chown $(id -u):$(id -g) $HOME/.kube/config

 You should now deploy a pod network to the cluster.
 Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
   https://kubernetes.io/docs/concepts/cluster-administration/addons/

 You can now join any number of machines by running the following on each node
 as root:

   kubeadm join 104.248.184.139:6443 --token 9jgjk7.d1p4rc5vdonyhd64 --discovery-token-ca-cert-hash sha256:d66686836fb65111ca9b3ac7088a0e92da373ab8cb61bb93cee8454a08119043
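
The bootstrap token in that join command is short-lived (24 hours by default), so nodes added later need a fresh one. A new join command can be printed on the master at any time:

```shell
# print a complete join command with a newly created token
kubeadm token create --print-join-command
# inspect existing tokens and their expiry
kubeadm token list
```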

Exploring kubeadm's execution and configuration flow
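
A good starting point: the bracketed prefixes in the init output above ([preflight], [certificates], [kubeconfig], [controlplane], [etcd], ...) map onto kubeadm's internal phases. In v1.11 these phases are exposed as subcommands under the alpha group, so each step can be run, and its source located, individually:

```shell
# list the individual init phases (v1.11; later releases moved this to 'kubeadm init phase')
kubeadm alpha phase --help
# e.g. certificate generation only, i.e. the [certificates] step of init
kubeadm alpha phase certs all --help
```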

Customizing the kubeadm.yaml file

kubeadm-v1.11.yaml

 apiVersion: kubeadm.k8s.io/v1alpha1
 kind: MasterConfiguration
 controllerManagerExtraArgs:
   horizontal-pod-autoscaler-use-rest-clients: "true"
   horizontal-pod-autoscaler-sync-period: "10s"
   node-monitor-grace-period: "10s"
 apiServerExtraArgs:
   runtime-config: "api/all=true"
 kubernetesVersion: "stable-1.11"
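
After running init with this file, the extra arguments should appear as command-line flags in the generated static pod manifests, which is an easy way to confirm the configuration took effect:

```shell
# extra args end up as flags on the static pod containers
grep horizontal-pod-autoscaler /etc/kubernetes/manifests/kube-controller-manager.yaml
grep runtime-config /etc/kubernetes/manifests/kube-apiserver.yaml
```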

Run:

 kubeadm init --config kubeadm-v1.11.yaml
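
As the init output earlier suggests, the required images can also be pulled ahead of time; this makes the init itself faster and surfaces registry connectivity problems early:

```shell
# show which images this configuration will need, then pre-pull them
kubeadm config images list --config kubeadm-v1.11.yaml
kubeadm config images pull --config kubeadm-v1.11.yaml
```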

Deploying a Kubernetes cluster with kubeadm

Install the network plugin

 sysctl net.bridge.bridge-nf-call-iptables=1

 kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
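
Note that the sysctl set above lasts only until reboot. To persist it, drop the setting into a sysctl configuration file (the path below is the conventional one):

```shell
# persist the bridge sysctl across reboots
cat <<'EOF' > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
# reload all sysctl files; may report an error if br_netfilter is not loaded
sysctl --system || true
```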

Check that the system pods come up:

 kubectl get pods -n kube-system
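
Once the Weave pods are Running, the node should report Ready. For a single-node setup you also need to remove the master taint that init applied (shown in the [markmaster] step above) before ordinary pods will schedule:

```shell
kubectl get nodes
# allow workloads on the master (single-node clusters only)
kubectl taint nodes --all node-role.kubernetes.io/master-
```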