Scheduler
Let's look directly at the definition of Scheduler:
// pkg/scheduler/scheduler.go
// Scheduler watches for new unscheduled pods. It attempts to find
// nodes that they fit on and writes bindings back to the api server.
type Scheduler struct {
    // It is expected that changes made via Cache will be observed
    // by NodeLister and Algorithm.
    Cache internalcache.Cache

    Extenders []framework.Extender

    // NextPod should be a function that blocks until the next pod
    // is available. We don't use a channel for this, because scheduling
    // a pod may take some amount of time and we don't want pods to get
    // stale while they sit in a channel.
    NextPod func() *framework.QueuedPodInfo

    // Error is called if there is an error. It is passed the pod in
    // question, and the error
    Error func(*framework.QueuedPodInfo, error)

    // SchedulePod tries to schedule the given pod to one of the nodes in the node list.
    // Return a struct of ScheduleResult with the name of suggested host on success,
    // otherwise will return a FitError with reasons.
    SchedulePod func(ctx context.Context, fwk framework.Framework, state *framework.CycleState, pod *v1.Pod) (ScheduleResult, error)

    // Close this to shut down the scheduler.
    StopEverything <-chan struct{}

    // SchedulingQueue holds pods to be scheduled
    SchedulingQueue internalqueue.SchedulingQueue

    // Profiles are the scheduling profiles.
    Profiles profile.Map

    // client is used to talk to the API server.
    client clientset.Interface
    // nodeInfoSnapshot is the snapshot of node infos used during a scheduling cycle.
    nodeInfoSnapshot *internalcache.Snapshot
    // percentageOfNodesToScore caps how many nodes are searched for feasibility
    // before scoring starts, as a percentage of the cluster size.
    percentageOfNodesToScore int32
    // nextStartNodeIndex is where the next scheduling cycle starts iterating over
    // the node list, spreading the search across all nodes over successive cycles.
    nextStartNodeIndex int
}
It's easy to see that the actual scheduling is performed by SchedulePod, a field of type func(ctx context.Context, fwk framework.Framework, state *framework.CycleState, pod *v1.Pod) (ScheduleResult, error). Making it a function-typed field rather than a method also makes it easy to substitute in tests; by default it is set to the schedulePod method, whose implementation we look at next:
// pkg/scheduler/schedule_one.go
// schedulePod tries to schedule the given pod to one of the nodes in the node list.
// If it succeeds, it will return the name of the node.
// If it fails, it will return a FitError with reasons.
func (sched *Scheduler) schedulePod(ctx context.Context, fwk framework.Framework, state *framework.CycleState, pod *v1.Pod) (result ScheduleResult, err error) {
    trace := utiltrace.New("Scheduling", utiltrace.Field{Key: "namespace", Value: pod.Namespace}, utiltrace.Field{Key: "name", Value: pod.Name})
    defer trace.LogIfLong(100 * time.Millisecond)

    // Update the snapshot of node info from the scheduler cache
    if err := sched.Cache.UpdateSnapshot(sched.nodeInfoSnapshot); err != nil {
        return result, err
    }
    trace.Step("Snapshotting scheduler cache and node infos done")

    // Bail out early if the snapshot contains no nodes
    if sched.nodeInfoSnapshot.NumNodes() == 0 {
        return result, ErrNoNodesAvailable
    }

    // Filtering (predicates): find the nodes that satisfy the basic constraints
    feasibleNodes, diagnosis, err := sched.findNodesThatFitPod(ctx, fwk, state, pod)
    if err != nil {
        return result, err
    }
    trace.Step("Computing predicates done")

    // No feasible node was found
    if len(feasibleNodes) == 0 {
        return result, &framework.FitError{
            Pod:         pod,
            NumAllNodes: sched.nodeInfoSnapshot.NumNodes(),
            Diagnosis:   diagnosis,
        }
    }

    // When only one node after predicate, just use it.
    if len(feasibleNodes) == 1 {
        return ScheduleResult{
            SuggestedHost:  feasibleNodes[0].Name,
            EvaluatedNodes: 1 + len(diagnosis.NodeToStatusMap),
            FeasibleNodes:  1,
        }, nil
    }

    // With more than one feasible node, run the scoring (priorities) phase to score each node
    priorityList, err := prioritizeNodes(ctx, sched.Extenders, fwk, state, pod, feasibleNodes)
    if err != nil {
        return result, err
    }

    // Pick the node with the highest score as the final node
    host, err := selectHost(priorityList)
    trace.Step("Prioritizing done")

    return ScheduleResult{
        SuggestedHost:  host,
        EvaluatedNodes: len(feasibleNodes) + len(diagnosis.NodeToStatusMap),
        FeasibleNodes:  len(feasibleNodes),
    }, err
}
The core scheduling flow is straightforward (a minimal driver sketch follows the list):
1. Update the scheduler cache and take a snapshot of node information.
2. Run the filtering (predicates) phase to find a set of feasible candidate nodes.
3. If no feasible node is found, return a FitError.
4. If exactly one node is found, return that node directly.
5. If multiple nodes are found, run the scoring (priorities) phase to score each node, and return the highest-scoring one as the scheduling target.
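Here is a minimal sketch of how a caller might drive these pieces, in the spirit of the real driver Scheduler.scheduleOne in pkg/scheduler/schedule_one.go (which additionally handles binding, preemption and the other framework extension points). runOne is a hypothetical helper, and imports are omitted as in the other snippets:
// A hedged sketch: one scheduling iteration over the Scheduler fields above.
func runOne(ctx context.Context, sched *Scheduler, fwk framework.Framework) {
    // NextPod blocks until a pod is ready to be scheduled
    podInfo := sched.NextPod()
    pod := podInfo.Pod
    state := framework.NewCycleState()
    result, err := sched.SchedulePod(ctx, fwk, state, pod)
    if err != nil {
        // A FitError means no node survived filtering; Error requeues the pod
        sched.Error(podInfo, err)
        return
    }
    // On success, result.SuggestedHost is the node the pod will be bound to
    fmt.Printf("pod %s/%s -> node %s\n", pod.Namespace, pod.Name, result.SuggestedHost)
}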
Filtering (Predicates)
The filtering phase calls sched.findNodesThatFitPod() to obtain the set of feasible candidate nodes. Its implementation is shown below:
// pkg/scheduler/schedule_one.go
// Filters the nodes to find the ones that fit the pod based on the framework
// filter plugins and filter extenders.
func (sched *Scheduler) findNodesThatFitPod(ctx context.Context, fwk framework.Framework, state *framework.CycleState, pod *v1.Pod) ([]*v1.Node, framework.Diagnosis, error) {
    diagnosis := framework.Diagnosis{
        NodeToStatusMap:      make(framework.NodeToStatusMap),
        UnschedulablePlugins: sets.NewString(),
    }

    // Run "prefilter" plugins.
    preRes, s := fwk.RunPreFilterPlugins(ctx, state, pod)
    // Get the list of all nodes from the snapshot
    allNodes, err := sched.nodeInfoSnapshot.NodeInfos().List()
    if err != nil {
        return nil, diagnosis, err
    }
    if !s.IsSuccess() {
        if !s.IsUnschedulable() {
            return nil, diagnosis, s.AsError()
        }
        // All nodes will have the same status. Some non trivial refactoring is
        // needed to avoid this copy.
        for _, n := range allNodes {
            diagnosis.NodeToStatusMap[n.Node().Name] = s
        }
        // Status satisfying IsUnschedulable() gets injected into diagnosis.UnschedulablePlugins.
        if s.FailedPlugin() != "" {
            diagnosis.UnschedulablePlugins.Insert(s.FailedPlugin())
        }
        return nil, diagnosis, nil
    }

    // "NominatedNodeName" can potentially be set in a previous scheduling cycle as a result of preemption.
    // This node is likely the only candidate that will fit the pod, and hence we try it first before iterating over all nodes.
    if len(pod.Status.NominatedNodeName) > 0 {
        feasibleNodes, err := sched.evaluateNominatedNode(ctx, pod, fwk, state, diagnosis)
        if err != nil {
            klog.ErrorS(err, "Evaluation failed on nominated node", "pod", klog.KObj(pod), "node", pod.Status.NominatedNodeName)
        }
        // Nominated node passes all the filters, scheduler is good to assign this node to the pod.
        if len(feasibleNodes) != 0 {
            return feasibleNodes, diagnosis, nil
        }
    }

    nodes := allNodes
    if !preRes.AllNodes() {
        nodes = make([]*framework.NodeInfo, 0, len(preRes.NodeNames))
        for n := range preRes.NodeNames {
            nInfo, err := sched.nodeInfoSnapshot.NodeInfos().Get(n)
            if err != nil {
                return nil, diagnosis, err
            }
            nodes = append(nodes, nInfo)
        }
    }

    // Find the nodes that pass all Filter plugins
    feasibleNodes, err := sched.findNodesThatPassFilters(ctx, fwk, state, pod, diagnosis, nodes)
    if err != nil {
        return nil, diagnosis, err
    }

    // Filter the remaining nodes through the Extenders
    feasibleNodes, err = findNodesThatPassExtenders(sched.Extenders, pod, feasibleNodes, diagnosis.NodeToStatusMap)
    if err != nil {
        return nil, diagnosis, err
    }
    return feasibleNodes, diagnosis, nil
}
findNodesThatFitPod first runs the prefilter plugins, then all the filter plugins, and finally, if any Extenders are configured, their Filter functions. Note also the NominatedNodeName fast path: if a previous cycle nominated a node for this pod through preemption, that node is tried first, before iterating over all nodes. We won't dwell on the Extender mechanism here; the focus remains on the scheduling framework. fwk.RunPreFilterPlugins() executes the PreFilter function of every registered prefilter plugin, and all of them must succeed for the phase to succeed:
// pkg/scheduler/framework/runtime/framework.go
// RunPreFilterPlugins runs the set of configured PreFilter plugins. It returns
// *Status and its code is set to non-success if any of the plugins returns
// anything but Success. If a non-success status is returned, then the scheduling
// cycle is aborted.
func (f *frameworkImpl) RunPreFilterPlugins(ctx context.Context, state *framework.CycleState, pod *v1.Pod) (_ *framework.PreFilterResult, status *framework.Status) {
    startTime := time.Now()
    defer func() {
        metrics.FrameworkExtensionPointDuration.WithLabelValues(preFilter, status.Code().String(), f.profileName).Observe(metrics.SinceInSeconds(startTime))
    }()
    var result *framework.PreFilterResult
    var pluginsWithNodes []string
    // Loop over all configured prefilter plugins
    for _, pl := range f.preFilterPlugins {
        r, s := f.runPreFilterPlugin(ctx, pl, state, pod)
        if !s.IsSuccess() {
            s.SetFailedPlugin(pl.Name())
            if s.IsUnschedulable() {
                return nil, s
            }
            return nil, framework.AsStatus(fmt.Errorf("running PreFilter plugin %q: %w", pl.Name(), s.AsError())).WithFailedPlugin(pl.Name())
        }
        if !r.AllNodes() {
            pluginsWithNodes = append(pluginsWithNodes, pl.Name())
        }
        result = result.Merge(r)
        if !result.AllNodes() && len(result.NodeNames) == 0 {
            msg := fmt.Sprintf("node(s) didn't satisfy plugin(s) %v simultaneously", pluginsWithNodes)
            if len(pluginsWithNodes) == 1 {
                msg = fmt.Sprintf("node(s) didn't satisfy plugin %v", pluginsWithNodes[0])
            }
            return nil, framework.NewStatus(framework.Unschedulable, msg)
        }
    }
    return result, nil
}

func (f *frameworkImpl) runPreFilterPlugin(ctx context.Context, pl framework.PreFilterPlugin, state *framework.CycleState, pod *v1.Pod) (*framework.PreFilterResult, *framework.Status) {
    if !state.ShouldRecordPluginMetrics() {
        return pl.PreFilter(ctx, state, pod)
    }
    startTime := time.Now()
    // Call the plugin's PreFilter function
    result, status := pl.PreFilter(ctx, state, pod)
    f.metricsRecorder.observePluginDurationAsync(preFilter, pl.Name(), status, metrics.SinceInSeconds(startTime))
    return result, status
}
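To make the plugin side of this concrete, here is a hedged sketch of a custom PreFilter plugin written against the framework.PreFilterPlugin interface used above (PreFilter returning (*framework.PreFilterResult, *framework.Status)). The plugin name and its rule are invented for illustration, and imports are omitted as in the other snippets:
// nodeNamePreFilter is a hypothetical plugin that narrows the search to the
// node named in pod.Spec.NodeName, if one is set.
type nodeNamePreFilter struct{}

func (pl *nodeNamePreFilter) Name() string { return "NodeNamePreFilter" }

func (pl *nodeNamePreFilter) PreFilter(ctx context.Context, state *framework.CycleState, pod *v1.Pod) (*framework.PreFilterResult, *framework.Status) {
    if pod.Spec.NodeName == "" {
        // A nil PreFilterResult means "all nodes"; a nil Status counts as Success
        return nil, nil
    }
    // Returning a node set is what RunPreFilterPlugins merges via result.Merge(r),
    // restricting the Filter phase to these nodes only
    return &framework.PreFilterResult{NodeNames: sets.NewString(pod.Spec.NodeName)}, nil
}

// PreFilterExtensions is required by the interface; nil means the plugin has
// no AddPod/RemovePod callbacks for incremental updates
func (pl *nodeNamePreFilter) PreFilterExtensions() framework.PreFilterExtensions {
    return nil
}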
Scoring (Priorities)
The filtering phase above yields the list of nodes that satisfy the scheduling constraints. The prioritizeNodes function is then called to score each of them, and finally selectHost picks the highest-scoring node as the scheduling target:
// pkg/scheduler/schedule_one.go
// With more than one feasible node, run the scoring phase to score each node
priorityList, err := prioritizeNodes(ctx, sched.Extenders, fwk, state, pod, feasibleNodes)
if err != nil {
    return result, err
}
// Pick the node with the highest score as the final node
host, err := selectHost(priorityList)
The function that scores each node is implemented as follows:
// pkg/scheduler/schedule_one.go
// prioritizeNodes prioritizes the nodes by running the score plugins,
// which return a score for each node from the call to RunScorePlugins().
// The scores from each plugin are added together to make the score for that node, then
// any extenders are run as well.
// All scores are finally combined (added) to get the total weighted scores of all nodes
func prioritizeNodes(
    ctx context.Context,
    extenders []framework.Extender,
    fwk framework.Framework,
    state *framework.CycleState,
    pod *v1.Pod,
    nodes []*v1.Node,
) (framework.NodeScoreList, error) {
    // If no priority configs are provided, then all nodes will have a score of one.
    // This is required to generate the priority list in the required format
    if len(extenders) == 0 && !fwk.HasScorePlugins() {
        result := make(framework.NodeScoreList, 0, len(nodes))
        for i := range nodes {
            result = append(result, framework.NodeScore{
                Name:  nodes[i].Name,
                Score: 1,
            })
        }
        return result, nil
    }

    // Run PreScore plugins.
    preScoreStatus := fwk.RunPreScorePlugins(ctx, state, pod, nodes)
    if !preScoreStatus.IsSuccess() {
        return nil, preScoreStatus.AsError()
    }

    // Run the Score plugins.
    scoresMap, scoreStatus := fwk.RunScorePlugins(ctx, state, pod, nodes)
    if !scoreStatus.IsSuccess() {
        return nil, scoreStatus.AsError()
    }

    // Additional details logged at level 10 if enabled.
    klogV := klog.V(10)
    if klogV.Enabled() {
        for plugin, nodeScoreList := range scoresMap {
            for _, nodeScore := range nodeScoreList {
                klogV.InfoS("Plugin scored node for pod", "pod", klog.KObj(pod), "plugin", plugin, "node", nodeScore.Name, "score", nodeScore.Score)
            }
        }
    }

    // Summarize all scores.
    result := make(framework.NodeScoreList, 0, len(nodes))
    for i := range nodes {
        result = append(result, framework.NodeScore{Name: nodes[i].Name, Score: 0})
        for j := range scoresMap {
            result[i].Score += scoresMap[j][i].Score
        }
    }

    if len(extenders) != 0 && nodes != nil {
        var mu sync.Mutex
        var wg sync.WaitGroup
        combinedScores := make(map[string]int64, len(nodes))
        for i := range extenders {
            if !extenders[i].IsInterested(pod) {
                continue
            }
            wg.Add(1)
            go func(extIndex int) {
                metrics.SchedulerGoroutines.WithLabelValues(metrics.PrioritizingExtender).Inc()
                defer func() {
                    metrics.SchedulerGoroutines.WithLabelValues(metrics.PrioritizingExtender).Dec()
                    wg.Done()
                }()
                prioritizedList, weight, err := extenders[extIndex].Prioritize(pod, nodes)
                if err != nil {
                    // Prioritization errors from extender can be ignored, let k8s/other extenders determine the priorities
                    klog.V(5).InfoS("Failed to run extender's priority function. No score given by this extender.", "error", err, "pod", klog.KObj(pod), "extender", extenders[extIndex].Name())
                    return
                }
                mu.Lock()
                for i := range *prioritizedList {
                    host, score := (*prioritizedList)[i].Host, (*prioritizedList)[i].Score
                    if klogV.Enabled() {
                        klogV.InfoS("Extender scored node for pod", "pod", klog.KObj(pod), "extender", extenders[extIndex].Name(), "node", host, "score", score)
                    }
                    combinedScores[host] += score * weight
                }
                mu.Unlock()
            }(i)
        }
        // wait for all go routines to finish
        wg.Wait()
        for i := range result {
            // MaxExtenderPriority may diverge from the max priority used in the scheduler and defined by MaxNodeScore,
            // therefore we need to scale the score returned by extenders to the score range used by the scheduler.
            // E.g. with MaxNodeScore=100 and MaxExtenderPriority=10, extender scores are multiplied by 10.
            result[i].Score += combinedScores[result[i].Name] * (framework.MaxNodeScore / extenderv1.MaxExtenderPriority)
        }
    }

    if klogV.Enabled() {
        for i := range result {
            klogV.InfoS("Calculated node's final score for pod", "pod", klog.KObj(pod), "node", result[i].Name, "score", result[i].Score)
        }
    }
    return result, nil
}
As before, RunPreScorePlugins is called first to execute all the PreScore plugins, then RunScorePlugins executes all the Score plugins, and finally the per-plugin scores are summed per node to produce each node's final score. For example, if plugin A (weight 1) scores a node 50 and plugin B (weight 2) scores it 30, the node's final score is 50*1 + 30*2 = 110.
// pkg/scheduler/framework/runtime/framework.go
// RunScorePlugins runs the set of configured scoring plugins. It returns a list that
// stores for each scoring plugin name the corresponding NodeScoreList(s).
// It also returns *Status, which is set to non-success if any of the plugins returns
// a non-success status.
func (f *frameworkImpl) RunScorePlugins(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodes []*v1.Node) (ps framework.PluginToNodeScores, status *framework.Status) {
    startTime := time.Now()
    defer func() {
        metrics.FrameworkExtensionPointDuration.WithLabelValues(score, status.Code().String(), f.profileName).Observe(metrics.SinceInSeconds(startTime))
    }()
    // Initialize the plugin-to-node-scores map
    pluginToNodeScores := make(framework.PluginToNodeScores, len(f.scorePlugins))
    for _, pl := range f.scorePlugins {
        pluginToNodeScores[pl.Name()] = make(framework.NodeScoreList, len(nodes))
    }
    ctx, cancel := context.WithCancel(ctx)
    errCh := parallelize.NewErrorChannel()

    // Run Score method for each node in parallel.
    f.Parallelizer().Until(ctx, len(nodes), func(index int) {
        for _, pl := range f.scorePlugins {
            nodeName := nodes[index].Name
            // Call the plugin's Score function
            s, status := f.runScorePlugin(ctx, pl, state, pod, nodeName)
            if !status.IsSuccess() {
                err := fmt.Errorf("plugin %q failed with: %w", pl.Name(), status.AsError())
                errCh.SendErrorWithCancel(err, cancel)
                return
            }
            // Record this plugin's score for the node
            pluginToNodeScores[pl.Name()][index] = framework.NodeScore{
                Name:  nodeName,
                Score: s,
            }
        }
    })
    if err := errCh.ReceiveError(); err != nil {
        return nil, framework.AsStatus(fmt.Errorf("running Score plugins: %w", err))
    }

    // Run NormalizeScore method for each ScorePlugin in parallel.
    f.Parallelizer().Until(ctx, len(f.scorePlugins), func(index int) {
        pl := f.scorePlugins[index]
        // All node scores produced by this plugin
        nodeScoreList := pluginToNodeScores[pl.Name()]
        if pl.ScoreExtensions() == nil {
            return
        }
        // Call the plugin's NormalizeScore function
        status := f.runScoreExtension(ctx, pl, state, pod, nodeScoreList)
        if !status.IsSuccess() {
            err := fmt.Errorf("plugin %q failed with: %w", pl.Name(), status.AsError())
            errCh.SendErrorWithCancel(err, cancel)
            return
        }
    })
    if err := errCh.ReceiveError(); err != nil {
        return nil, framework.AsStatus(fmt.Errorf("running Normalize on Score plugins: %w", err))
    }

    // Apply score defaultWeights for each ScorePlugin in parallel.
    f.Parallelizer().Until(ctx, len(f.scorePlugins), func(index int) {
        pl := f.scorePlugins[index]
        // Score plugins' weight has been checked when they are initialized.
        weight := f.scorePluginWeight[pl.Name()]
        nodeScoreList := pluginToNodeScores[pl.Name()]
        for i, nodeScore := range nodeScoreList {
            // return error if score plugin returns invalid score.
            if nodeScore.Score > framework.MaxNodeScore || nodeScore.Score < framework.MinNodeScore {
                err := fmt.Errorf("plugin %q returns an invalid score %v, it should in the range of [%v, %v] after normalizing", pl.Name(), nodeScore.Score, framework.MinNodeScore, framework.MaxNodeScore)
                errCh.SendErrorWithCancel(err, cancel)
                return
            }
            nodeScoreList[i].Score = nodeScore.Score * int64(weight)
        }
    })
    if err := errCh.ReceiveError(); err != nil {
        return nil, framework.AsStatus(fmt.Errorf("applying score defaultWeights on Score plugins: %w", err))
    }

    return pluginToNodeScores, nil
}
RunPreScorePlugins simply loops over all registered PreScore plugins and calls their PreScore functions, so the interesting part is RunScorePlugins. It first runs every registered plugin's Score method for each node in parallel, then runs each plugin's NormalizeScore method in parallel, and since plugins carry different weights, the final and crucial step multiplies each plugin's scores by its configured weight to obtain the final values.
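As a counterpart to the PreFilter sketch earlier, here is a hedged sketch of a custom Score plugin with a NormalizeScore extension. Only the interface shape follows the framework code above; the plugin name and its image-count heuristic are invented for illustration (imports omitted as in the other snippets):
// imageCountScore is a hypothetical plugin that favors nodes that already
// hold more container images.
type imageCountScore struct {
    handle framework.Handle
}

func (pl *imageCountScore) Name() string { return "ImageCountScore" }

// Score returns a raw value that may exceed MaxNodeScore; NormalizeScore
// below rescales it into the valid range before weights are applied.
func (pl *imageCountScore) Score(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) (int64, *framework.Status) {
    nodeInfo, err := pl.handle.SnapshotSharedLister().NodeInfos().Get(nodeName)
    if err != nil {
        return 0, framework.AsStatus(err)
    }
    return int64(len(nodeInfo.Node().Status.Images)), nil
}

// ScoreExtensions advertises the NormalizeScore callback; returning nil here
// would make RunScorePlugins skip the normalize step for this plugin.
func (pl *imageCountScore) ScoreExtensions() framework.ScoreExtensions {
    return pl
}

// NormalizeScore rescales the raw counts into [MinNodeScore, MaxNodeScore].
func (pl *imageCountScore) NormalizeScore(ctx context.Context, state *framework.CycleState, pod *v1.Pod, scores framework.NodeScoreList) *framework.Status {
    var max int64 = 1
    for _, s := range scores {
        if s.Score > max {
            max = s.Score
        }
    }
    for i := range scores {
        scores[i].Score = scores[i].Score * framework.MaxNodeScore / max
    }
    return nil
}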
Finally, selectHost is called to obtain the node with the highest score:
// pkg/scheduler/schedule_one.go
// selectHost takes a prioritized list of nodes and then picks one
// in a reservoir sampling manner from the nodes that had the highest score.
func selectHost(nodeScoreList framework.NodeScoreList) (string, error) {
    if len(nodeScoreList) == 0 {
        return "", fmt.Errorf("empty priorityList")
    }
    // Start with the first node as the current pick
    maxScore := nodeScoreList[0].Score
    selected := nodeScoreList[0].Name
    cntOfMaxScore := 1
    // Then compare against the remaining nodes
    for _, ns := range nodeScoreList[1:] {
        if ns.Score > maxScore {
            // A strictly higher score becomes the new pick
            maxScore = ns.Score
            selected = ns.Name
            cntOfMaxScore = 1
        } else if ns.Score == maxScore {
            // On a tie, increment cntOfMaxScore
            cntOfMaxScore++
            if rand.Intn(cntOfMaxScore) == 0 {
                // Replace the candidate with probability of 1/cntOfMaxScore;
                // since the scores are equal, any of the tied nodes is fine
                selected = ns.Name
            }
        }
    }
    return selected, nil
}
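The tie-break is a size-one reservoir sample: after scanning k equally-scored nodes, each has been kept with probability 1/k, so ties are broken uniformly without first collecting all the tied candidates. Here is a small standalone program (not scheduler code) that re-implements the same loop and checks the uniformity empirically:
package main

import (
    "fmt"
    "math/rand"
)

// pick mirrors selectHost's tie-break over plain name/score slices.
func pick(names []string, scores []int64) string {
    maxScore, selected, cnt := scores[0], names[0], 1
    for i := 1; i < len(names); i++ {
        if scores[i] > maxScore {
            maxScore, selected, cnt = scores[i], names[i], 1
        } else if scores[i] == maxScore {
            cnt++
            if rand.Intn(cnt) == 0 { // keep each tied candidate with probability 1/cnt
                selected = names[i]
            }
        }
    }
    return selected
}

func main() {
    names := []string{"node-a", "node-b", "node-c"}
    scores := []int64{100, 100, 100} // a three-way tie
    counts := map[string]int{}
    for i := 0; i < 30000; i++ {
        counts[pick(names, scores)]++
    }
    fmt.Println(counts) // each node wins roughly 10000 times
}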