Sample-Controller
SC (sample-controller) is the official example for custom resource development, meant to show developers how to build a simple CRD-based controller. The example demonstrates the following basic operations:
- How to register a custom resource type named `Foo`
- How to create/update/get/list instances of the `Foo` resource
- How to set up a controller on the resource's create/update/delete events
The example mainly involves two dependencies: kubernetes/client-go, responsible for communicating with and watching the Kubernetes API server, and kubernetes/code-generator, the code generator for the resource types.
Project layout
.
├── CONTRIBUTING.md
├── Godeps
├── LICENSE
├── OWNERS
├── README.md
├── SECURITY_CONTACTS
├── artifacts           // example yaml; mainly crd.yaml and example-foo.yaml
├── code-of-conduct.md
├── controller.go       // the controller: the main logic operating on the resources
├── controller_test.go
├── docs
├── go.mod
├── go.sum
├── hack                // code-generator helper scripts
├── main.go             // parse flags and initialize
└── pkg                 // resource type definitions and generated code
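For reference, the example manifest for a `Foo` instance under `artifacts` looks roughly like this (a sketch based on the field names used throughout this walkthrough; the exact file in the repo may differ slightly):

```yaml
apiVersion: samplecontroller.k8s.io/v1alpha1
kind: Foo
metadata:
  name: example-foo
spec:
  deploymentName: example-foo
  replicas: 1
```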
The code to focus on is in `main.go` and `controller.go`.
Code walkthrough
main.go
Parse flags and initialize:
func main() {
    klog.InitFlags(nil)
    flag.Parse()
    ...
}
func init() {
    flag.StringVar(&kubeconfig, "kubeconfig", "", "Path to a kubeconfig. Only required if out-of-cluster.")
    flag.StringVar(&masterURL, "master", "", "The address of the Kubernetes API server. Overrides any value in kubeconfig. Only required if out-of-cluster.")
}
klog is initialized first; see k8s.io/klog for details. `init()` registers two command-line flags, `kubeconfig` and `master`, both used to connect to the Kubernetes API server.
func main() {
    ...
    cfg, err := clientcmd.BuildConfigFromFlags(masterURL, kubeconfig)
    kubeClient, err := kubernetes.NewForConfig(cfg)
    exampleClient, err := clientset.NewForConfig(cfg)
    ...
}
`kubeconfig` or `master` is used to build a `*rest.Config` struct, which is essentially the client configuration. Two clients are then created from that config: `kubeClient` is the standard client library from `client-go`, while `exampleClient` is the client generated by code-generator. The difference between them: `exampleClient` can only operate on the custom resource's API group, while `kubeClient` can operate on everything except that custom group.
func main() {
    ...
    kubeInformerFactory := kubeinformers.NewSharedInformerFactory(kubeClient, time.Second*30)
    exampleInformerFactory := informers.NewSharedInformerFactory(exampleClient, time.Second*30)
    controller := NewController(kubeClient, exampleClient,
        kubeInformerFactory.Apps().V1().Deployments(),
        exampleInformerFactory.Samplecontroller().V1alpha1().Foos())
    ...
}
The informer is a core component: it watches and lists Kubernetes resources and keeps a local copy, reducing the load on the apiserver. The first two lines above create a shared `InformerFactory` for each client, with a resync interval of 30 seconds; the factories are used to produce `Informer` objects. `NewController` is handed two informers: the built-in `DeploymentInformer` and the generated `FooInformer` for the custom resource.
func main() {
    ...
    stopCh := signals.SetupSignalHandler() // stop signal
    kubeInformerFactory.Start(stopCh)
    exampleInformerFactory.Start(stopCh)
    if err = controller.Run(2, stopCh); err != nil {
        klog.Fatalf("Error running controller: %s", err.Error())
    }
    ...
}
Start all registered informers and run the controller.
controller.go
The controller logic.
The `Controller` struct:
type Controller struct {
    kubeclientset     kubernetes.Interface            // standard Kubernetes client
    sampleclientset   clientset.Interface             // client for the custom API group
    deploymentsLister appslisters.DeploymentLister    // Deployment lister
    deploymentsSynced cache.InformerSynced            // Deployment cache sync state
    foosLister        listers.FooLister               // Foo lister
    foosSynced        cache.InformerSynced            // Foo cache sync state
    workqueue         workqueue.RateLimitingInterface // work queue; each item is handled by one worker at a time
    recorder          record.EventRecorder            // event recorder
}
The `NewController` function:
utilruntime.Must(samplescheme.AddToScheme(scheme.Scheme)) // register the custom scheme
// create the event broadcaster
eventBroadcaster := record.NewBroadcaster()
// log events locally
eventBroadcaster.StartLogging(klog.Infof)
// report events to the apiserver
eventBroadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{Interface: kubeclientset.CoreV1().Events("")})
// create the EventRecorder
recorder := eventBroadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: controllerAgentName})
The `record` package reports, handles, and prints events; the reported events can be inspected with `kubectl get events`.
controller := &Controller{
    kubeclientset:     kubeclientset,
    sampleclientset:   sampleclientset,
    deploymentsLister: deploymentInformer.Lister(),
    deploymentsSynced: deploymentInformer.Informer().HasSynced,
    foosLister:        fooInformer.Lister(),
    foosSynced:        fooInformer.Informer().HasSynced,
    workqueue:         workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), "Foos"),
    recorder:          recorder,
}
`workqueue` is a queue with a rate limiter. It supports delayed re-adds, and the re-queue delay grows with the number of failures for an item.
// event handlers for Foo (the custom resource)
fooInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
    AddFunc: controller.enqueueFoo,
    UpdateFunc: func(old, new interface{}) {
        controller.enqueueFoo(new)
    },
})
// event handlers for Deployment (the built-in resource)
deploymentInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
    AddFunc: controller.handleObject,
    UpdateFunc: func(old, new interface{}) {
        newDepl := new.(*appsv1.Deployment)
        oldDepl := old.(*appsv1.Deployment)
        if newDepl.ResourceVersion == oldDepl.ResourceVersion {
            return
        }
        controller.handleObject(new)
    },
    DeleteFunc: controller.handleObject,
})
Handlers are bound to the resources' create/update/delete events; these are the entry points of the logic.
func (c *Controller) Run(threadiness int, stopCh <-chan struct{}) error {
    defer utilruntime.HandleCrash()
    defer c.workqueue.ShutDown()
    // wait for the caches to sync
    if ok := cache.WaitForCacheSync(stopCh, c.deploymentsSynced, c.foosSynced); !ok {
        return fmt.Errorf("failed to wait for caches to sync")
    }
    // start threadiness goroutines that keep running runWorker,
    // pausing one second between runs
    for i := 0; i < threadiness; i++ {
        go wait.Until(c.runWorker, time.Second, stopCh)
    }
    // block until the stop signal
    <-stopCh
    return nil
}
`cache.WaitForCacheSync` waits until the resources already on the server have been listed into the local caches, invoking the bound event handlers along the way; the sync is done once `c.deploymentsSynced` and `c.foosSynced` return `true`. After that, `c.runWorker` runs.
// runWorker keeps calling processNextWorkItem
func (c *Controller) runWorker() {
    for c.processNextWorkItem() {
    }
}
func (c *Controller) processNextWorkItem() bool {
    ...
    err := func(obj interface{}) error {
        ...
        // the main work happens in syncHandler
        if err := c.syncHandler(key); err != nil {
            // on error, re-add the key via the rate-limited (delayed) queue
            c.workqueue.AddRateLimited(key)
            return fmt.Errorf("error syncing '%s': %s, requeuing", key, err.Error())
        }
        ...
    }(obj)
    ...
}
func (c *Controller) syncHandler(key string) error {
    ...
    // the lister serves the server-side state from the local cache,
    // so no extra round trip to the apiserver is needed
    foo, err := c.foosLister.Foos(namespace).Get(name)
    ...
    // look up the deployment; if it does not exist, create it on the server
    deploymentName := foo.Spec.DeploymentName
    deployment, err := c.deploymentsLister.Deployments(foo.Namespace).Get(deploymentName)
    if errors.IsNotFound(err) {
        deployment, err = c.kubeclientset.AppsV1().Deployments(foo.Namespace).Create(context.TODO(), newDeployment(foo), metav1.CreateOptions{})
    }
    ...
    // if the deployment is not controlled by this foo, return an error
    if !metav1.IsControlledBy(deployment, foo) {
        msg := fmt.Sprintf(MessageResourceExists, deployment.Name)
        c.recorder.Event(foo, corev1.EventTypeWarning, ErrResourceExists, msg)
        return fmt.Errorf(msg)
    }
    // if foo's replicas field differs from the deployment's replicas,
    // update the deployment's replica count
    if foo.Spec.Replicas != nil && *foo.Spec.Replicas != *deployment.Spec.Replicas {
        klog.V(4).Infof("Foo %s replicas: %d, deployment replicas: %d", name, *foo.Spec.Replicas, *deployment.Spec.Replicas)
        deployment, err = c.kubeclientset.AppsV1().Deployments(foo.Namespace).Update(context.TODO(), newDeployment(foo), metav1.UpdateOptions{})
    }
    if err != nil {
        return err
    }
    // sync the number of available replicas back into foo's status
    err = c.updateFooStatus(foo, deployment)
    if err != nil {
        return err
    }
    ...
}
Here the `deployment` is driven from the `Foo` object; this is where the CRD's logic is actually implemented.
func (c *Controller) enqueueFoo(obj interface{}) {
    var key string
    var err error
    if key, err = cache.MetaNamespaceKeyFunc(obj); err != nil {
        utilruntime.HandleError(err)
        return
    }
    c.workqueue.Add(key)
}
The work-queue key is produced by `cache.MetaNamespaceKeyFunc`, which in the normal case returns `namespace/name`.
func (c *Controller) handleObject(obj interface{}) {
    var object metav1.Object
    var ok bool
    if object, ok = obj.(metav1.Object); !ok {
        tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
        object, ok = tombstone.Obj.(metav1.Object)
    }
    if ownerRef := metav1.GetControllerOf(object); ownerRef != nil {
        if ownerRef.Kind != "Foo" {
            return
        }
        foo, err := c.foosLister.Foos(object.GetNamespace()).Get(ownerRef.Name)
        c.enqueueFoo(foo)
        return
    }
}
With the code trimmed down, you can see that this handles out-of-band changes to the `deployment`, e.g. `kubectl delete deploy xxxx`: the owning object is re-queued for processing, and only owners of kind `Foo` are handled.
// build the Deployment object owned by foo
func newDeployment(foo *samplev1alpha1.Foo) *appsv1.Deployment {
    labels := map[string]string{
        "app":        "nginx",
        "controller": foo.Name,
    }
    return &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{
            Name:      foo.Spec.DeploymentName,
            Namespace: foo.Namespace,
            OwnerReferences: []metav1.OwnerReference{
                *metav1.NewControllerRef(foo, samplev1alpha1.SchemeGroupVersion.WithKind("Foo")),
            },
        },
        Spec: appsv1.DeploymentSpec{
            Replicas: foo.Spec.Replicas,
            Selector: &metav1.LabelSelector{
                MatchLabels: labels,
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{
                    Labels: labels,
                },
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{
                        {
                            Name:  "nginx",
                            Image: "nginx:latest",
                        },
                    },
                },
            },
        },
    }
}
Summary
The overall flow is not very complex and the functionality is fairly simple; the rough flow is shown in the diagram below.
