kubeadm init runs through 14 phases:
- NewPreflightPhase: runs a series of preflight checks to validate the system state before making any changes; errors can be ignored with --ignore-preflight-errors=<list-of-errors>
- NewCertsPhase: generates a self-signed CA to provision identities for every component in the cluster
- NewKubeConfigPhase: creates the configuration directory and the default or user-specified kubeconfig files that the kubelet, controller-manager and scheduler use to connect to the API server
- NewKubeletStartPhase: starts the kubelet on the node
- NewControlPlanePhase: bootstraps the control-plane node by generating the static Pod manifests for the apiserver, controller-manager and scheduler
- NewEtcdPhase: handles etcd; when no external etcd is provided, it generates a static Pod manifest for etcd
- NewWaitControlPlanePhase: a hidden phase that runs after the control-plane and etcd phases; it waits for the control-plane components to come up, and if the kubelet fails to start or the control plane crashes it aborts the remaining flow
- NewUploadConfigPhase: uploads the configuration
- NewUploadCertsPhase: uploads the certificates
- NewMarkControlPlanePhase: marks the master node, i.e. adds the control-plane taint
- NewBootstrapTokenPhase: generates the bootstrap token and the ConfigMap carrying the CA certificate; nodes can later join the cluster with the generated token via kubeadm join
- NewKubeletFinalizePhase: updates kubelet-related settings after TLS bootstrap; concretely, it replaces the certificate in the kubeconfig the kubelet uses to talk to kube-apiserver with the one signed and returned by kube-controller-manager
- NewAddonPhase: installs a DNS server (CoreDNS) and the kube-proxy addon via the API server
- NewShowJoinCommandPhase: prints the message for a successful initialization together with follow-up instructions, such as how worker nodes can join the cluster
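All of these phases are appended to a sequential workflow runner, and a failure in any phase aborts every phase after it. Below is a toy sketch of that pattern using only the standard library; the phase and runner types are illustrative stand-ins, not kubeadm's actual workflow package:
// A toy illustration of the sequential phase-runner pattern behind kubeadm's
// workflow.Runner: phases run in registration order, and the first error
// aborts everything after it (which is how a failed wait-control-plane
// phase stops the remaining flow).
package main

import (
	"errors"
	"fmt"
)

// phase mirrors the shape of a kubeadm workflow phase: a name plus a Run func.
type phase struct {
	name string
	run  func() error
}

// runner executes phases strictly in the order they were appended.
type runner struct{ phases []phase }

func (r *runner) appendPhase(p phase) { r.phases = append(r.phases, p) }

func (r *runner) run() error {
	for _, p := range r.phases {
		fmt.Printf("[%s] running\n", p.name)
		if err := p.run(); err != nil {
			return fmt.Errorf("phase %q failed: %w", p.name, err)
		}
	}
	return nil
}

func main() {
	r := &runner{}
	r.appendPhase(phase{"preflight", func() error { return nil }})
	r.appendPhase(phase{"wait-control-plane", func() error { return errors.New("kubelet never came up") }})
	r.appendPhase(phase{"addon", func() error { return nil }}) // never reached
	if err := r.run(); err != nil {
		fmt.Println(err)
	}
}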
NewPreflightPhase()
Its job: create a kubeadm workflow phase that runs preflight checks for a new control-plane node. Some checks only raise warnings, while others are treated as errors that make kubeadm exit, unless the problem is fixed or the user passes --ignore-preflight-errors=<list-of-errors>.
The source lives in app/cmd/phases/init/preflight.go, where the main logic is easy to follow: it calls RunInitNodeChecks(), followed by RunPullImagesCheck():
// runPreflight executes preflight checks logic.
func runPreflight(c workflow.RunData) error {
...
if err := preflight.RunInitNodeChecks(utilsexec.New(), data.Cfg(), data.IgnorePreflightErrors(), false, false); err != nil {
return err
}
...
return preflight.RunPullImagesCheck(utilsexec.New(), data.Cfg(), data.IgnorePreflightErrors())
}
RunInitNodeChecks
Stepping into the function, the source is in app/preflight/checks.go.
The core logic sits in the InitNodeChecks() function:
func InitNodeChecks(execer utilsexec.Interface, cfg *kubeadmapi.InitConfiguration, ignorePreflightErrors sets.Set[string], isSecondaryControlPlane bool, downloadCerts bool) ([]Checker, error) {
if !isSecondaryControlPlane {
// First, check if we're root separately from the other preflight checks and fail fast
if err := RunRootCheckOnly(ignorePreflightErrors); err != nil {
return nil, err
}
}
manifestsDir := filepath.Join(kubeadmconstants.KubernetesDir, kubeadmconstants.ManifestsSubDirName)
checks := []Checker{
NumCPUCheck{NumCPU: kubeadmconstants.ControlPlaneNumCPU},
// Linux only
// TODO: support other OS, if control-plane is supported on it.
MemCheck{Mem: kubeadmconstants.ControlPlaneMem},
KubernetesVersionCheck{KubernetesVersion: cfg.KubernetesVersion, KubeadmVersion: kubeadmversion.Get().GitVersion},
FirewalldCheck{ports: []int{int(cfg.LocalAPIEndpoint.BindPort), kubeadmconstants.KubeletPort}},
PortOpenCheck{port: int(cfg.LocalAPIEndpoint.BindPort)},
PortOpenCheck{port: kubeadmconstants.KubeSchedulerPort},
PortOpenCheck{port: kubeadmconstants.KubeControllerManagerPort},
FileAvailableCheck{Path: kubeadmconstants.GetStaticPodFilepath(kubeadmconstants.KubeAPIServer, manifestsDir)},
FileAvailableCheck{Path: kubeadmconstants.GetStaticPodFilepath(kubeadmconstants.KubeControllerManager, manifestsDir)},
FileAvailableCheck{Path: kubeadmconstants.GetStaticPodFilepath(kubeadmconstants.KubeScheduler, manifestsDir)},
FileAvailableCheck{Path: kubeadmconstants.GetStaticPodFilepath(kubeadmconstants.Etcd, manifestsDir)},
HTTPProxyCheck{Proto: "https", Host: cfg.LocalAPIEndpoint.AdvertiseAddress},
}
...
}
- First, when this is not a secondary control-plane node, the root check (RunRootCheckOnly) runs separately from the other preflight checks so that kubeadm can fail fast.
- Next, manifestsDir is assembled; since kubeadm runs the control-plane components as static Pods by default, several of the checks below target the manifest files in this directory.
- Then come the checks on basic system resources:
- NumCPUCheck: verifies the CPU core count; an init node deployed by kubeadm needs at least 2 CPUs by default, i.e. the constant kubeadmconstants.ControlPlaneNumCPU = 2. The relevant code:
// Check number of CPUs required by kubeadm
func (ncc NumCPUCheck) Check() (warnings, errorList []error) {
	numCPU := runtime.NumCPU()
	if numCPU < ncc.NumCPU {
		errorList = append(errorList, errors.Errorf("the number of available CPUs %d is less than the required %d", numCPU, ncc.NumCPU))
	}
	return warnings, errorList
}
- MemCheck: likewise verifies available memory; the minimum requirement is 1700 MB (kubeadmconstants.ControlPlaneMem).
- KubernetesVersionCheck: verifies versions, making sure kubelet and kubectl match the control-plane version installed via kubeadm; otherwise there is a risk of version skew, leading to unexpected errors and problems.
- FirewalldCheck: inspects the firewall. If firewalld is active, make sure the required ports are open, otherwise the cluster may not run properly.
- PortOpenCheck: verifies the required ports are free (a minimal sketch of this check appears after this list):

| Protocol | Direction | Port(s) | Purpose | Used by |
|----------|-----------|---------|---------|---------|
| TCP | Inbound | 6443 | api-server | All components |
| TCP | Inbound | 2379-2380 | etcd | api-server, etcd |
| TCP | Inbound | 10250 | kubelet | kubelet, control-plane components |
| TCP | Inbound | 10259 | scheduler | scheduler |
| TCP | Inbound | 10257 | controller-manager | controller-manager |

- FileAvailableCheck: availability checks for the static Pod manifest files
- HTTPProxyCheck: availability check for the network proxy
...
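Before moving on, here is a minimal, self-contained sketch of the Checker pattern these checks follow, with a simplified PortOpenCheck that just tries to bind the port. The interface shape and the ignore handling are simplified assumptions; the real definitions live in app/preflight/checks.go:
// A sketch of the preflight Checker pattern, assuming simplified semantics:
// each check returns warnings and errors, RunChecks aggregates them, and
// checks named in the ignore set are downgraded to warnings.
package main

import (
	"fmt"
	"net"
)

type Checker interface {
	Name() string
	Check() (warnings, errorList []error)
}

// PortOpenCheck reports an error when the port is already bound, by
// attempting to listen on it, similar in spirit to kubeadm's PortOpenCheck.
type PortOpenCheck struct{ port int }

func (p PortOpenCheck) Name() string { return fmt.Sprintf("Port-%d", p.port) }

func (p PortOpenCheck) Check() (warnings, errorList []error) {
	ln, err := net.Listen("tcp", fmt.Sprintf(":%d", p.port))
	if err != nil {
		return nil, []error{fmt.Errorf("port %d is in use", p.port)}
	}
	ln.Close()
	return nil, nil
}

func RunChecks(checks []Checker, ignore map[string]bool) error {
	var failed int
	for _, c := range checks {
		warnings, errs := c.Check()
		for _, w := range warnings {
			fmt.Printf("\t[WARNING %s]: %v\n", c.Name(), w)
		}
		for _, e := range errs {
			if ignore[c.Name()] {
				fmt.Printf("\t[WARNING %s]: %v (ignored)\n", c.Name(), e)
				continue
			}
			fmt.Printf("\t[ERROR %s]: %v\n", c.Name(), e)
			failed++
		}
	}
	if failed > 0 {
		return fmt.Errorf("%d preflight check(s) failed", failed)
	}
	return nil
}

func main() {
	checks := []Checker{PortOpenCheck{port: 6443}, PortOpenCheck{port: 10259}}
	if err := RunChecks(checks, map[string]bool{}); err != nil {
		fmt.Println(err)
	}
}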
RunPullImagesCheck
If the required images cannot be found on the system, this method pulls the images kubeadm needs.
Stepping into the function, the source is in app/preflight/checks.go; the core logic is in RunPullImagesCheck():
// RunPullImagesCheck will pull images kubeadm needs if they are not found on the system
func RunPullImagesCheck(execer utilsexec.Interface, cfg *kubeadmapi.InitConfiguration, ignorePreflightErrors sets.Set[string]) error {
containerRuntime, err := utilruntime.NewContainerRuntime(utilsexec.New(), cfg.NodeRegistration.CRISocket)
if err != nil {
return &Error{Msg: err.Error()}
}
checks := []Checker{
ImagePullCheck{
runtime: containerRuntime,
imageList: images.GetControlPlaneImages(&cfg.ClusterConfiguration),
sandboxImage: images.GetPauseImage(&cfg.ClusterConfiguration),
imagePullPolicy: cfg.NodeRegistration.ImagePullPolicy,
},
}
return RunChecks(checks, os.Stderr, ignorePreflightErrors)
}
containerRuntime wraps the container runtime's CRI interface and is used to determine which runtime is in use; the list of required images and the image pull policy are then retrieved.
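As a rough illustration of that flow, here is a self-contained sketch under simplified assumptions: fakeRuntime stands in for the CRI-backed runtime, the policy strings mirror the usual IfNotPresent/Always/Never semantics, and the image names are purely illustrative:
// A toy version of the ImagePullCheck flow: images missing locally are
// pulled unless the policy is Never, in which case absence is an error.
package main

import "fmt"

// fakeRuntime simulates a CRI runtime with a local image store.
type fakeRuntime struct{ present map[string]bool }

func (f *fakeRuntime) ImageExists(img string) bool { return f.present[img] }
func (f *fakeRuntime) PullImage(img string) error  { f.present[img] = true; return nil }

func checkImages(rt *fakeRuntime, images []string, policy string) []error {
	var errs []error
	for _, img := range images {
		exists := rt.ImageExists(img)
		switch {
		case policy == "Never" && !exists:
			errs = append(errs, fmt.Errorf("image %s is absent and pull policy is Never", img))
		case policy == "Always" || !exists:
			if err := rt.PullImage(img); err != nil {
				errs = append(errs, fmt.Errorf("failed to pull %s: %v", img, err))
			}
		}
	}
	return errs
}

func main() {
	rt := &fakeRuntime{present: map[string]bool{"registry.k8s.io/pause:3.9": true}}
	images := []string{"registry.k8s.io/kube-apiserver:v1.28.0", "registry.k8s.io/pause:3.9"}
	for _, err := range checkImages(rt, images, "IfNotPresent") {
		fmt.Println(err)
	}
}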
NewCertsPhase()
This phase generates a self-signed CA certificate to provision identities for every component in the cluster.
The certificate directory can be set with the --cert-dir flag (default /etc/kubernetes/pki), and users may supply their own CA certificate and key. Opening the cluster's /etc/kubernetes/pki directory reveals several files:
Let's look at how these files are generated; the source is in app/cmd/phases/init/certs.go.
Running kubeadm init --help shows the certs subphases in the compiled binary:
- /ca: generates the self-signed Kubernetes CA that provisions identities for the other Kubernetes components
- /apiserver: generates the certificate for the API server
- /apiserver-kubelet-client: generates the certificate the API server uses to connect to the kubelet
- /front-proxy-ca: generates the self-signed CA that provisions identities for the front proxy
- /front-proxy-client: generates the client certificate for the front proxy
- /etcd-ca: generates the self-signed CA that provisions identities for etcd
- /etcd-server: generates the etcd serving certificate
- /etcd-peer: generates the certificate for etcd peer-to-peer communication
- /etcd-healthcheck-client: generates the certificate for etcd health-check probes
- /apiserver-etcd-client: generates the certificate the API server uses to access etcd
- /sa: generates the private key, and its public key, used for signing service account tokens
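To make "generate a self-signed CA" concrete, here is a minimal standard-library sketch of the idea; it is only an illustration, not kubeadm's actual helper code, though the subject name and the 10-year validity mirror kubeadm's defaults:
// Generate a self-signed CA: a certificate that is both subject and issuer,
// carrying IsCA and the cert-signing key usage.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Generate the CA key pair.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Describe a CA certificate: IsCA plus the cert-signing key usage.
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "kubernetes"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0), // kubeadm CAs default to 10 years
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Self-signed: the template serves as its own parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("ca.crt")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}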
NewKubeConfigPhase()
This phase creates all the kubeconfig files needed to establish the control plane, plus the admin kubeconfig file; they can be found in the cluster directory /etc/kubernetes:
The workflow in detail: kubeconfig files are written to /etc/kubernetes/ so that the kubelet, controller-manager and scheduler can connect to the API server, each with its own identity; a standalone kubeconfig file named admin.conf is also generated for administrative operations.
Looking at the source in /app/cmd/phases/init/kubeconfig.go:
var (
kubeconfigFilePhaseProperties = map[string]struct {
name string
short string
long string
}{
kubeadmconstants.AdminKubeConfigFileName: {
name: "admin",
short: "Generate a kubeconfig file for the admin to use and for kubeadm itself",
long: "Generate the kubeconfig file for the admin and for kubeadm itself, and save it to %s file.",
},
kubeadmconstants.KubeletKubeConfigFileName: {
name: "kubelet",
short: "Generate a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes",
long: cmdutil.LongDesc(`
Generate the kubeconfig file for the kubelet to use and save it to %s file.
Please note that this should *only* be used for cluster bootstrapping purposes. After your control plane is up,
you should request all kubelet credentials from the CSR API.`),
},
kubeadmconstants.ControllerManagerKubeConfigFileName: {
name: "controller-manager",
short: "Generate a kubeconfig file for the controller manager to use",
long: "Generate the kubeconfig file for the controller manager to use and save it to %s file",
},
kubeadmconstants.SchedulerKubeConfigFileName: {
name: "scheduler",
short: "Generate a kubeconfig file for the scheduler to use",
long: "Generate the kubeconfig file for the scheduler to use and save it to %s file.",
},
}
)
// NewKubeConfigPhase creates a kubeadm workflow phase that creates all kubeconfig files necessary to establish the control plane and the admin kubeconfig file.
func NewKubeConfigPhase() workflow.Phase {
return workflow.Phase{
Name: "kubeconfig",
Short: "Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file",
Long: cmdutil.MacroCommandLongDescription,
Phases: []workflow.Phase{
{
Name: "all",
Short: "Generate all kubeconfig files",
InheritFlags: getKubeConfigPhaseFlags("all"),
RunAllSiblings: true,
},
NewKubeConfigFilePhase(kubeadmconstants.AdminKubeConfigFileName),
NewKubeConfigFilePhase(kubeadmconstants.KubeletKubeConfigFileName),
NewKubeConfigFilePhase(kubeadmconstants.ControllerManagerKubeConfigFileName),
NewKubeConfigFilePhase(kubeadmconstants.SchedulerKubeConfigFileName),
},
Run: runKubeConfig,
}
}
- kubeconfig: generates all the kubeconfig files necessary to establish the control plane and the admin kubeconfig file
- admin: generates a kubeconfig file for the admin to use, and for kubeadm itself
- kubelet: generates a kubeconfig file for the kubelet, used only for cluster bootstrapping purposes
- controller-manager: generates a kubeconfig file for the controller manager
- scheduler: generates a kubeconfig file for the scheduler
As the core code shows, the kubeconfig phase is equivalent to all: it calls the all subphase (RunAllSiblings: true) to generate the kubeconfig files for admin, kubelet, controller-manager and scheduler in one go.
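To see what one of these files amounts to, here is a sketch that builds and writes a kubeconfig with client-go's clientcmd API. The server address and the inline certificate bytes are placeholders, and kubeadm's own helpers differ in detail:
// Build a kubeconfig in memory and write it to disk; the cluster, user and
// context names follow the familiar admin.conf layout.
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	cfg := clientcmdapi.NewConfig()
	cfg.Clusters["kubernetes"] = &clientcmdapi.Cluster{
		Server:                   "https://192.168.0.10:6443",      // placeholder endpoint
		CertificateAuthorityData: []byte("<ca.crt bytes>"),         // placeholder
	}
	cfg.AuthInfos["kubernetes-admin"] = &clientcmdapi.AuthInfo{
		ClientCertificateData: []byte("<client cert bytes>"), // placeholder
		ClientKeyData:         []byte("<client key bytes>"),  // placeholder
	}
	cfg.Contexts["kubernetes-admin@kubernetes"] = &clientcmdapi.Context{
		Cluster:  "kubernetes",
		AuthInfo: "kubernetes-admin",
	}
	cfg.CurrentContext = "kubernetes-admin@kubernetes"
	if err := clientcmd.WriteToFile(*cfg, "admin.conf"); err != nil {
		panic(err)
	}
}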
NewKubeletStartPhase()
Creates a kubeadm workflow phase that starts the kubelet on the node:
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
Its main steps:
- write the env variables to /var/lib/kubelet/kubeadm-flags.env
- write the configuration to /var/lib/kubelet/config.yaml
- then start the kubelet
Source:
// runKubeletStart executes kubelet start logic.
func runKubeletStart(c workflow.RunData) error {
...
// Write env file with flags for the kubelet to use. We do not need to write the --register-with-taints for the control-plane,
// as we handle that ourselves in the mark-control-plane phase
// TODO: Maybe we want to do that some time in the future, in order to remove some logic from the mark-control-plane phase?
if err := kubeletphase.WriteKubeletDynamicEnvFile(&data.Cfg().ClusterConfiguration, &data.Cfg().NodeRegistration, false, data.KubeletDir()); err != nil {
return errors.Wrap(err, "error writing a dynamic environment file for the kubelet")
}
// Write the kubelet configuration file to disk.
if err := kubeletphase.WriteConfigToDisk(&data.Cfg().ClusterConfiguration, data.KubeletDir(), data.PatchesDir(), data.OutputWriter()); err != nil {
return errors.Wrap(err, "error writing kubelet configuration to disk")
}
// Try to start the kubelet service in case it's inactive
if !data.DryRun() {
fmt.Println("[kubelet-start] Starting the kubelet")
kubeletphase.TryStartKubelet()
}
return nil
}
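As a rough picture of the first step, the sketch below writes a kubeadm-flags.env-style file containing a single KUBELET_KUBEADM_ARGS line, which is what the kubelet's systemd drop-in sources. The helper name and the flag values are illustrative assumptions:
// Write a kubeadm-flags.env-style file: one quoted KUBELET_KUBEADM_ARGS line.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sort"
	"strings"
)

func writeKubeletEnvFile(dir string, flags map[string]string) error {
	keys := make([]string, 0, len(flags))
	for k := range flags {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic output, like kubeadm's flag writer
	args := make([]string, 0, len(keys))
	for _, k := range keys {
		args = append(args, fmt.Sprintf("--%s=%s", k, flags[k]))
	}
	content := fmt.Sprintf("KUBELET_KUBEADM_ARGS=%q\n", strings.Join(args, " "))
	return os.WriteFile(filepath.Join(dir, "kubeadm-flags.env"), []byte(content), 0644)
}

func main() {
	flags := map[string]string{
		"container-runtime-endpoint": "unix:///var/run/containerd/containerd.sock", // illustrative
		"pod-infra-container-image":  "registry.k8s.io/pause:3.9",                  // illustrative
	}
	if err := writeKubeletEnvFile(".", flags); err != nil {
		panic(err)
	}
}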
NewControlPlanePhase()
This phase bootstraps the control-plane node by creating its static Pod manifests. During execution it prints:
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
As the output shows, it creates the static Pod manifests for the control-plane components and places them in the /etc/kubernetes/manifests directory:
func runControlPlanePhase(c workflow.RunData) error {
data, ok := c.(InitData)
if !ok {
return errors.New("control-plane phase invoked with an invalid data struct")
}
fmt.Printf("[control-plane] Using manifest folder %q\n", data.ManifestDir())
return nil
}
func runControlPlaneSubphase(component string) func(c workflow.RunData) error {
return func(c workflow.RunData) error {
data, ok := c.(InitData)
if !ok {
return errors.New("control-plane phase invoked with an invalid data struct")
}
cfg := data.Cfg()
fmt.Printf("[control-plane] Creating static Pod manifest for %q\n", component)
return controlplane.CreateStaticPodFiles(data.ManifestDir(), data.PatchesDir(), &cfg.ClusterConfiguration, &cfg.LocalAPIEndpoint, data.DryRun(), component)
}
}
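What CreateStaticPodFiles ultimately produces is a plain v1 Pod serialized to YAML. The sketch below composes and writes such a manifest with the Kubernetes API types; the image tag, flags and output path are illustrative, and the real manifests carry many more options:
// Compose a minimal kube-apiserver static Pod and serialize it to YAML.
package main

import (
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "kube-apiserver", Namespace: "kube-system"},
		Spec: corev1.PodSpec{
			HostNetwork: true, // control-plane static Pods share the host network
			Containers: []corev1.Container{{
				Name:    "kube-apiserver",
				Image:   "registry.k8s.io/kube-apiserver:v1.28.0", // illustrative tag
				Command: []string{"kube-apiserver", "--advertise-address=192.168.0.10"},
			}},
		},
	}
	data, err := yaml.Marshal(&pod)
	if err != nil {
		panic(err)
	}
	// kubeadm writes these under /etc/kubernetes/manifests/
	if err := os.WriteFile("kube-apiserver.yaml", data, 0600); err != nil {
		panic(err)
	}
}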
NewEtcdPhase()
This phase handles etcd. After the previous phase has created the control-plane static Pod manifests, if no external etcd service was provided, an additional static Pod manifest is generated for etcd and placed in the same directory, /etc/kubernetes/manifests:
func runEtcdPhaseLocal() func(c workflow.RunData) error {
return func(c workflow.RunData) error {
data, ok := c.(InitData)
if !ok {
return errors.New("etcd phase invoked with an invalid data struct")
}
cfg := data.Cfg()
// Add etcd static pod spec only if external etcd is not configured
if cfg.Etcd.External == nil {
// creates target folder if doesn't exist already
if !data.DryRun() {
// Create the etcd data directory
if err := etcdutil.CreateDataDirectory(cfg.Etcd.Local.DataDir); err != nil {
return err
}
} else {
fmt.Printf("[etcd] Would ensure that %q directory is present\n", cfg.Etcd.Local.DataDir)
}
fmt.Printf("[etcd] Creating static Pod manifest for local etcd in %q\n", data.ManifestDir())
if err := etcdphase.CreateLocalEtcdStaticPodManifestFile(data.ManifestDir(), data.PatchesDir(), cfg.NodeRegistration.Name, &cfg.ClusterConfiguration, &cfg.LocalAPIEndpoint, data.DryRun()); err != nil {
return errors.Wrap(err, "error creating local etcd static pod manifest file")
}
} else {
klog.V(1).Infoln("[etcd] External etcd mode. Skipping the creation of a manifest for local etcd")
}
return nil
}
}
NewWaitControlPlanePhase()
In this phase, the kubelet watches the /etc/kubernetes/manifests directory and creates the Pods at system startup; only once the control-plane Pods are all up and running does the kubeadm init workflow continue:
func runWaitControlPlanePhase(c workflow.RunData) error {
...
fmt.Printf("[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory %q. This can take up to %v\n", data.ManifestDir(), timeout)
if err := waiter.WaitForKubeletAndFunc(waiter.WaitForAPI); err != nil {
context := struct {
Error string
Socket string
}{
Error: fmt.Sprintf("%v", err),
Socket: data.Cfg().NodeRegistration.CRISocket,
}
kubeletFailTempl.Execute(data.OutputWriter(), context)
return errors.New("couldn't initialize a Kubernetes cluster")
}
return nil
}
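Conceptually, WaitForAPI polls the API server's health endpoint until it responds or a timeout expires. Below is a standalone sketch of that loop; the endpoint, the TLS handling and the 4-minute timeout are simplifying assumptions (the real waiter goes through a Kubernetes client):
// Poll a health endpoint until it answers 200 OK or the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForAPI(healthzURL string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The API server serves TLS with a cluster CA we don't verify here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(healthzURL)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // control plane is answering
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("API server did not become healthy within %v", timeout)
}

func main() {
	if err := waitForAPI("https://127.0.0.1:6443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}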