Kubernetes Operator (Part 2): Build Your Own Operator with kubebuilder


# A quick hands-on: build your own Operator with kubebuilder

book.kubebuilder.io/quick-start…

## Install kubebuilder

```shell
# Go itself must of course be installed beforehand
$ curl -L -o kubebuilder https://go.kubebuilder.io/dl/latest/$(go env GOOS)/$(go env GOARCH)
$ chmod +x kubebuilder && mv kubebuilder /usr/local/bin/
```

## Controller quick start

### kubebuilder init

```shell
$ kubebuilder init --domain webservice-operator.163.com --repo g.hz.netease.com/hzwengzhiwei/operator-demo
# Next: define a resource with:
# $ kubebuilder create api
```

### kubebuilder create api

We create the CRD and the Controller in one go:

```shell
# kind is what you pass to kubectl get $kind, e.g. pod
$ kubebuilder create api --group app --version v1 --kind Webservice --resource --controller
# Next: implement your new API and generate the manifests (e.g. CRDs, CRs) with:
# $ make manifests
```

This creates and modifies the following files:

```text
        modified:   PROJECT
        new file:   api/v1/webservice_types.go # defines the CRD
        new file:   api/v1/groupversion_info.go
        new file:   api/v1/zz_generated.deepcopy.go
        new file:   config/crd/kustomization.yaml
        new file:   config/crd/kustomizeconfig.yaml
        new file:   config/crd/patches/cainjection_in_apiservices.yaml
        new file:   config/crd/patches/webhook_in_apiservices.yaml
        new file:   config/rbac/apiservice_editor_role.yaml
        new file:   config/rbac/apiservice_viewer_role.yaml
        new file:   config/samples/app_v1_apiservice.yaml
        new file:   controllers/webservice_controller.go # business logic that controls the CRD
        new file:   controllers/suite_test.go
        modified:   go.mod
        modified:   go.sum
        modified:   main.go
```
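
For reference, the `PROJECT` file now records the new API. A sketch of what kubebuilder writes there, using the values from the flags above (exact fields vary by kubebuilder version):

```yaml
domain: webservice-operator.163.com
layout:
- go.kubebuilder.io/v3
projectName: operator-demo
repo: g.hz.netease.com/hzwengzhiwei/operator-demo
resources:
- api:
    crdVersion: v1
    namespaced: true
  controller: true
  domain: webservice-operator.163.com
  group: app
  kind: Webservice
  version: v1
version: "3"
```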

### types.go

api/v1/webservice_types.go is where the CRD is defined. Let's decide on the fields we need:

```go
type Webservice struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	// These two structs are all we need to modify
	Spec   WebserviceSpec   `json:"spec,omitempty"`   // desired state
	Status WebserviceStatus `json:"status,omitempty"` // observed state
}

type WebserviceSpec struct {
	Name  string `json:"name,omitempty"`
	Image string `json:"image,omitempty"`
	// deployment replicas
	Replicas      int32 `json:"replicas,omitempty"`
	ContainerPort int32 `json:"containerPort,omitempty"`

	IngressClassName string `json:"ingressClassName,omitempty"`
	IngressHost      string `json:"ingressHost,omitempty"`
	IngressPath      string `json:"ingressPath,omitempty"`
	IngressPathType  string `json:"ingressPathType,omitempty"`
}

type WebserviceStatus struct {
	DeployStatus DeployStatus `json:"deployHealth,omitempty"`
}

type DeployStatus struct {
	ReadyReplicas int32 `json:"readyReplicas,omitempty"`
}
```

### Build and deploy the CRD to the current cluster

```shell
$ make install
/Users/hzwengzhiwei/Documents/gitdir/operator-demo/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
test -s /Users/xxx/Documents/gitdir/operator-demo/bin/kustomize || { curl -Ss "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash -s -- 3.8.7 /Users/hzwengzhiwei/Documents/gitdir/operator-demo/bin; }
Version v3.8.7 does not exist for darwin/arm64, trying darwin/amd64 instead. # kustomize gets installed here
# The CRD is now deployed to the cluster
$ kubectl get crd webservices.app.webservice-operator.163.com
NAME                                          CREATED AT
webservices.app.webservice-operator.163.com   2023-11-01T16:17:47Z
```
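
Under the hood, `make install` applies the CRD that controller-gen generated into config/crd/bases. A sketch of its key fields (illustrative only; the real file also carries the full OpenAPI v3 schema derived from webservice_types.go):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: webservices.app.webservice-operator.163.com
spec:
  group: app.webservice-operator.163.com
  names:
    kind: Webservice
    listKind: WebserviceList
    plural: webservices
    singular: webservice
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    # schema: OpenAPI v3 validation generated from the Go types
```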

### controller.go

controllers/webservice_controller.go

#### Testing Reconcile

Whenever a Kubernetes API object the controller watches changes, the controller calls the Reconcile function to handle the change. Let's verify that:

```go
// A non-nil error marks this reconcile as failed; the request is requeued
// and retried with exponential backoff
func (r *WebserviceReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	logger := log.FromContext(ctx)
	logger.Info("Reconciling Webservice...")

	return ctrl.Result{}, nil
}
```

#### Run and test locally

```shell
$ make run
```

Create config/samples/app_v1_webservice.yaml:

```yaml
apiVersion: app.webservice-operator.163.com/v1
kind: Webservice
metadata:
  labels:
    app.kubernetes.io/name: webservice
    app.kubernetes.io/instance: webservice-sample
    app.kubernetes.io/part-of: operator-demo
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: operator-demo
  name: webservice-sample
spec:
  name: webservice-sample
  image: nginx:1.14.2
  replicas: 1
  containerPort: 80  # matches the json tag in WebserviceSpec
  ingressClassName: nginx
  ingressHost: webservice-sample.163.com
  ingressPath: /
  ingressPathType: Prefix
```

Then create the instance; the `make run` process we started earlier will print our test log line:

```shell
$ kubectl apply -f config/samples/app_v1_webservice.yaml
```

It's worth spelling out how Reconcile's return values are handled:

  • return ctrl.Result{}, err: returning an error marks the run as failed; the request is requeued and retried with backoff
  • return ctrl.Result{Requeue: true}, nil: no error, but requeue the request immediately
  • return ctrl.Result{}, nil: done; stop reconciling until the next change
  • return ctrl.Result{RequeueAfter: nextRun.Sub(r.Now())}, nil: requeue after the given delay, e.g. return ctrl.Result{RequeueAfter: 3 * time.Second}, nil

#### Writing a real Reconcile

Now that we're familiar with how Reconcile works, we just need to fill in our own logic.

First, let's sort out cluster permissions. RBAC permissions are configured via RBAC markers, which are used to generate and update config/rbac/role.yaml. Here we operate on three kinds of resources: deployments, services, and ingresses.

```go
//+kubebuilder:rbac:groups=app.webservice-operator.163.com,resources=webservices,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=app.webservice-operator.163.com,resources=webservices/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups="",resources=services,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=networking.k8s.io,resources=ingresses,verbs=get;list;watch;create;update;patch;delete
```
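
After the next `make manifests`, controller-gen turns these markers into rules in config/rbac/role.yaml. A sketch of what the ingress marker above should produce (trimmed to one rule for illustration):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: manager-role
rules:
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
```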

We can use kubectl api-resources to determine the groups:

```shell
$ kubectl api-resources | grep ingresses
ingresses                         ing           networking.k8s.io/v1                   true         Ingress
```

Now let's write the logic that creates the deployment, service, and ingress:

```go
func (r *WebserviceReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	logger := log.FromContext(ctx)

	logger.Info(fmt.Sprintf("***Reconciling: %s/%s", req.Namespace, req.Name))
	// Fetch the Webservice object; r.Get fills webservice with what it finds
	webservice := &appv1.Webservice{}
	err := r.Get(ctx, req.NamespacedName, webservice)
	if err != nil {
		// Only a NotFound error means the object was deleted; anything else
		// is a real failure (apierrors is "k8s.io/apimachinery/pkg/api/errors")
		if !apierrors.IsNotFound(err) {
			return ctrl.Result{}, err
		}
		// The object is gone: clean up the ingress, service and deployment
		logger.Info(fmt.Sprintf("***%s/%s Deleted ..", req.Namespace, req.Name))
		err = r.cleanupResources(ctx, req, webservice)
		if err != nil {
			logger.Error(err, "***Failed to cleanup resources", "Webservice.Namespace", req.Namespace, "Webservice.Name", req.Name)
			return ctrl.Result{}, err
		}
		return ctrl.Result{}, nil
	}

	// Fetch the deployment
	deployment := &appsv1.Deployment{}
	err = r.Get(ctx, req.NamespacedName, deployment)
	if apierrors.IsNotFound(err) {
		// Create the deployment if it does not exist yet
		err = r.createDeployment(ctx, webservice)
		if err != nil {
			logger.Error(err, "***Failed to create Deployment", "Deployment.Namespace", webservice.Namespace, "Deployment.Name", webservice.Name)
			return ctrl.Result{}, err
		}
	} else if err != nil {
		return ctrl.Result{}, err
	} else {
		// Check whether the deployment's replicas, image or containerPort have
		// drifted from the webservice spec (compare Spec.Replicas, not
		// Status.ReadyReplicas, which lags during a rollout; assumes the
		// single-container shape created by createDeployment)
		if deployment.Spec.Replicas == nil || *deployment.Spec.Replicas != webservice.Spec.Replicas ||
			deployment.Spec.Template.Spec.Containers[0].Image != webservice.Spec.Image ||
			deployment.Spec.Template.Spec.Containers[0].Ports[0].ContainerPort != webservice.Spec.ContainerPort {
			// If they differ, update the deployment
			deployment.Spec.Replicas = &webservice.Spec.Replicas
			deployment.Spec.Template.Spec.Containers[0].Image = webservice.Spec.Image
			deployment.Spec.Template.Spec.Containers[0].Ports[0].ContainerPort = webservice.Spec.ContainerPort
			err = r.Update(ctx, deployment)
			if err != nil {
				logger.Error(err, "***Failed to update Deployment", "Deployment.Namespace", webservice.Namespace, "Deployment.Name", webservice.Name)
				return ctrl.Result{}, err
			}
		}
	}

	// Update the status
	err = r.updateStatus(ctx, webservice, deployment)
	if err != nil {
		logger.Error(err, "***Failed to update Webservice status", "Webservice.Namespace", webservice.Namespace, "Webservice.Name", webservice.Name)
		return ctrl.Result{}, err
	}

	// Fetch the service
	service := &v1.Service{}
	err = r.Get(ctx, req.NamespacedName, service)
	if apierrors.IsNotFound(err) {
		// Create the service if it does not exist yet
		err = r.createService(ctx, webservice)
		if err != nil {
			logger.Error(err, "***Failed to create Service", "Service.Namespace", webservice.Namespace, "Service.Name", webservice.Name)
			return ctrl.Result{}, err
		}
	} else if err != nil {
		return ctrl.Result{}, err
	} else {
		// Check whether the service port still matches the webservice containerPort
		if service.Spec.Ports[0].Port != webservice.Spec.ContainerPort {
			// If not, update the service
			service.Spec.Ports[0].Port = webservice.Spec.ContainerPort
			err = r.Update(ctx, service)
			if err != nil {
				logger.Error(err, "***Failed to update Service", "Service.Namespace", webservice.Namespace, "Service.Name", webservice.Name)
				return ctrl.Result{}, err
			}
		}
	}

	// Fetch the ingress
	ingress := &networkingv1.Ingress{}
	err = r.Get(ctx, req.NamespacedName, ingress)
	if apierrors.IsNotFound(err) {
		// Create the ingress if it does not exist yet
		err = r.createIngress(ctx, webservice)
		if err != nil {
			logger.Error(err, "***Failed to create Ingress", "Ingress.Namespace", webservice.Namespace, "Ingress.Name", webservice.Name)
			return ctrl.Result{}, err
		}
	} else if err != nil {
		return ctrl.Result{}, err
	} else {
		// Check whether the ingressClassName, host and path still match the
		// webservice spec (compare the pointed-to value, not the pointer itself)
		if ingress.Spec.IngressClassName == nil || *ingress.Spec.IngressClassName != webservice.Spec.IngressClassName ||
			ingress.Spec.Rules[0].Host != webservice.Spec.IngressHost ||
			ingress.Spec.Rules[0].IngressRuleValue.HTTP.Paths[0].Path != webservice.Spec.IngressPath {
			// If not, update the ingress
			ingress.Spec.IngressClassName = &webservice.Spec.IngressClassName
			ingress.Spec.Rules[0].Host = webservice.Spec.IngressHost
			ingress.Spec.Rules[0].IngressRuleValue.HTTP.Paths[0].Path = webservice.Spec.IngressPath
			err = r.Update(ctx, ingress)
			if err != nil {
				logger.Error(err, "***Failed to update Ingress", "Ingress.Namespace", webservice.Namespace, "Ingress.Name", webservice.Name)
				return ctrl.Result{}, err
			}
		}
	}

	return ctrl.Result{}, nil
}
```
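
The helpers r.createDeployment, r.createService, r.updateStatus and r.cleanupResources are not shown here. As a sketch, the objects createDeployment and createService build would correspond to manifests like the following (assumptions: both reuse the Webservice name and namespace, and a hypothetical `app: <name>` label ties the Service selector to the pods):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webservice-sample
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webservice-sample
  template:
    metadata:
      labels:
        app: webservice-sample
    spec:
      containers:
      - name: webservice-sample
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webservice-sample
spec:
  selector:
    app: webservice-sample
  ports:
  - port: 80
    targetPort: 80
```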

Creating k8s resources is a bit of a hurdle for anyone who has never touched client-go, but after writing a few you'll find they all follow the same pattern; and with a powerful Copilot helping out, I just write good comments and tab-complete the rest.

For example, to create an ingress we can use an existing ingress in the cluster as a reference:

```yaml
---
apiVersion: networking.k8s.io/v1 # maps to "k8s.io/api/networking/v1"
kind: Ingress
# ……
spec:
  ingressClassName: nginx
  rules:
  - host: xxx
    http:
      paths:
      - backend:
          service:
            name: yyy
            port:
              name: api-port
        path: /zzz
        pathType: Prefix
```

So our createIngress looks like this:

```go
func (r *WebserviceReconciler) createIngress(ctx context.Context, webservice *appv1.Webservice) error {
	var pathType networkingv1.PathType
	if webservice.Spec.IngressPathType == "Prefix" {
		pathType = networkingv1.PathTypePrefix
	} else {
		pathType = networkingv1.PathTypeExact
	}

	ingress := &networkingv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{
			Name:      webservice.Name,
			Namespace: webservice.Namespace,
		},
		Spec: networkingv1.IngressSpec{
			IngressClassName: &webservice.Spec.IngressClassName,
			Rules: []networkingv1.IngressRule{
				{
					Host: webservice.Spec.IngressHost,
					IngressRuleValue: networkingv1.IngressRuleValue{
						HTTP: &networkingv1.HTTPIngressRuleValue{
							Paths: []networkingv1.HTTPIngressPath{
								{
									Path:     webservice.Spec.IngressPath,
									PathType: &pathType,
									Backend: networkingv1.IngressBackend{
										Service: &networkingv1.IngressServiceBackend{
											Name: webservice.Name,
											Port: networkingv1.ServiceBackendPort{
												Number: webservice.Spec.ContainerPort,
											},
										},
									},
								},
							},
						},
					},
				},
			},
		},
	}
	return r.Create(ctx, ingress)
}
```

Let's create a webservice instance, then change replicas to verify the deployment behaves as expected:

```shell
$ kubectl apply -f config/samples/app_v1_webservice.yaml
$ kubectl patch webservice webservice-sample -p '{"spec":{"replicas":3}}' --type=merge
```

## Build the image

Once the tests pass, package the controller into an image:

```shell
# For cross-platform builds remember to specify PLATFORMS
# $ make docker-buildx docker-push IMG=xxx/webservices:v1 PLATFORMS=linux/amd64
```

docker-buildx is slow and a bit flaky, and we rarely need it, so let's just tweak the Makefile to build the image directly:

```makefile
.PHONY: docker-build
docker-build: test ## Build docker image with the manager.
	docker build --platform linux/amd64 -t ${IMG} .
```

```shell
$ make docker-build docker-push IMG=xxx/webservices:v1
```
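
docker-build uses the Dockerfile that kubebuilder scaffolded at `kubebuilder init` time. Roughly, it is a multi-stage build like the sketch below (illustrative; the exact Go version and file layout depend on your kubebuilder version):

```dockerfile
# Stage 1: compile the manager binary
FROM golang:1.20 AS builder
WORKDIR /workspace
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -o manager main.go

# Stage 2: copy it into a minimal distroless image
FROM gcr.io/distroless/static:nonroot
WORKDIR /
COPY --from=builder /workspace/manager .
USER 65532:65532
ENTRYPOINT ["/manager"]
```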

Deploy it to the server; make deploy automatically creates the RBAC, the CRD and the controller:

```shell
$ make deploy IMG=xxx.com/webservices:v1
```

In the cluster everything lands in the `<project-name>-system` namespace, e.g. operator-demo-system.