[k8s Series 14] Deploying nginx-ingress


Deploying nginx-ingress

Now that we have covered how it works, how do we actually deploy it? Let's look at the deployment architecture:

First, users reach nginx through two different domain names, and both domains point to the same nginx instance. Inside nginx, two server blocks act as reverse proxies: server1.com is proxied to the server1 backend and server2.com is proxied to the server2 backend. Note that the backends are referenced by Endpoints rather than fixed IP addresses, for the reasons discussed earlier. Next, let's deploy this nginx-ingress.

The nginx-ingress image is hosted overseas, so download it in advance and import it into Docker directly.

There are three files:

ingress.tar: the ingress controller image archive

mandatory.yaml: the ingress-controller resource manifest (Namespace, ConfigMaps, RBAC, Deployment)

service-nodeport.yaml: the NodePort Service that exposes the controller

Step 1: Load the image

  • Upload the ingress.tar package to the master node


  • Then copy ingress.tar to the node1 and node2 nodes


  • Import ingress.tar into Docker on master, node1, and node2:
docker load -i ingress.tar


  • Check that the import succeeded with docker images (a combined sketch of these steps follows this list)

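Putting these steps together, here is a minimal sketch, assuming the worker nodes are reachable over SSH as node1 and node2 and that ingress.tar sits in the current directory (adjust hostnames and paths to your environment):

# on the master: copy the image archive to both worker nodes (hostnames assumed)
scp ingress.tar root@node1:/root/
scp ingress.tar root@node2:/root/

# on master, node1 and node2: import the archive into the local Docker image store
docker load -i ingress.tar

# confirm the controller image is now present
docker images | grep nginx-ingress-controller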

Step 2: Build the ingress resource manifest

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  minReadySeconds: 0
  revisionHistoryLimit: 10
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: docker.io/bitnami/nginx-ingress-controller:latest
          imagePullPolicy: IfNotPresent
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10

---

Note for anyone using the quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0 image: that version is obsolete and can no longer be used; replace it with docker.io/bitnami/nginx-ingress-controller:latest.

There is a lot in this manifest; let's apply it first and come back to what it means later.

kubectl apply -f mandatory.yaml

We can then see that an nginx-ingress pod has been created, in the ingress-nginx namespace.
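You can confirm this with a quick check (the pod name suffix will differ per cluster):

kubectl get pods -n ingress-nginx -o wide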


Note one problem, though: READY shows 0/1, which means something is wrong. At this point, check the pod's logs. The issues I ran into are recorded below.
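To see what is going wrong, a short sketch relying on the app.kubernetes.io/name label defined in mandatory.yaml above:

# describe the pod for events, then dump the controller logs
kubectl describe pod -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx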

Error 1: Failed to list *v1beta1.Ingress: the server could not find the requested resource


This happens because my Kubernetes version is relatively new (v1.23.4) and no longer serves v1beta1, so it does not match the older aliyuncs/nginx-ingress:0.28.0 image; switch to the newer bitnami/nginx-ingress-controller instead.

apiVersion: extensions/v1beta1
kind: Deployment

Newer Kubernetes releases no longer ship the extensions/v1beta1 API group at all: Deployments now live in apps/v1 and Ingress in networking.k8s.io/v1. The error above is raised because the controller image itself still calls the v1beta1 API, so we have to replace the image.

Search Docker Hub for the image rather than pulling from quay.io, which is hosted overseas and requires a proxy to reach:

docker search nginx-ingress


The goal is to find nginx-ingress-controller; pick the first result with the most stars, bitnami/nginx-ingress-controller, pull that image, and then modify mandatory.yaml accordingly (a sketch of this follows).

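A sketch of the swap, assuming you pull the image and patch mandatory.yaml in place with sed (the exact image line in your copy may differ, so check the pattern first):

# pull the replacement controller image (run on every node that may schedule the pod)
docker pull bitnami/nginx-ingress-controller:latest

# point the Deployment at the new image; pattern assumed, verify against your file
sed -i 's#image: .*nginx-ingress-controller.*#image: docker.io/bitnami/nginx-ingress-controller:latest#' mandatory.yaml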

Then re-apply the resource manifest:

kubectl apply -f mandatory.yaml


We can see that nginx-ingress starts successfully this time.

Error 2: selfLink was empty, can't make reference

Hitting this error means the object has no selfLink. Reference: blog.csdn.net/w2909526/ar…

selfLink exists in Kubernetes clusters before v1.20 and is removed from v1.20 onwards; to restore it, add the flag --feature-gates=RemoveSelfLink=false to /etc/kubernetes/manifests/kube-apiserver.yaml:

spec:
  containers:
  - command:
    - kube-apiserver
    - --feature-gates=RemoveSelfLink=false

After this change, a cluster deployed with kubeadm reloads the pod automatically. The kubeadm-installed apiserver runs as a Static Pod, so edits to its manifest take effect immediately: the kubelet watches the file, and once /etc/kubernetes/manifests/kube-apiserver.yaml is modified, it terminates the existing kube-apiserver-{nodename} Pod and creates a replacement Pod with the new parameters.

Note: if you have multiple Kubernetes master nodes, you need to modify this file on every master node and keep the parameters consistent across them.

If the api-server fails to start, run the following again:

kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml
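To confirm the flag actually took effect after the static pod restarts, one quick check on the master node (a sketch, not the only way):

ps -ef | grep '[k]ube-apiserver' | grep -o 'feature-gates=[^ ]*'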

Error 3: ingress-nginx error: open /etc/ingress-controller/ssl/default-fake-certificate.pem: permission denied


The error reads: ssl.go:389] unexpected error storing fake SSL Cert: could not create PEM certificate file /etc/ingress-controller/ssl/default-fake-certificate.pem: open /etc/ingress-controller/ssl/default-fake-certificate.pem: permission denied

The cause is missing permissions inside Docker's data directory; granting permissions on the Docker directory fixes it:

 chmod -R 777 /var/lib/docker

Problem solved.
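For the permission fix to take effect, the controller pod may also need to restart; one simple way, assuming the labels from mandatory.yaml above, is to delete it and let the Deployment recreate it:

kubectl delete pod -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx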

Step 3: Build the NodePort Service manifest

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

Run the command:

kubectl apply -f service-nodeport.yaml

Once created, this Service also lives in the ingress-nginx namespace.


This Service exposes ports on the nodes: in my cluster, port 80 is exposed as NodePort 30201 and port 443 as NodePort 32110.
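The NodePort values are assigned by the cluster, so check your own with:

kubectl get svc -n ingress-nginx ingress-nginx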

Verify that the setup works:

Open ip:30201 in a browser; if an error page (400 here) appears, the request has reached the ingress controller. Since no application backend is configured yet, an error response is expected.

10.211.55.200:30201
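The same check from a terminal, assuming the node IP and NodePort above:

curl -i http://10.211.55.200:30201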


If you see this, everything up to this point has succeeded!

Step 4: Configure Ingress HTTP proxy access

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      name: nginx
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: wangyanglinux/myapp:v1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector: 
    name: nginx
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-test
spec:
  rules:
    - host: www1.hongfu.com
      http:
        paths:
        - path: /
          backend:
           # service is the proxy target for this path: it names the backing Service and its port
           service:
             name: nginx-svc
             port:
               number: 80
          pathType: Prefix
            

Let's go through what this Ingress resource file means:

  • We'll skip the name and other metadata.
  • The rule handles HTTP requests for the host www1.hongfu.com; every request under the root path is forwarded.
  • The Service the Ingress forwards to is nginx-svc, on port 80.
  • pathType must be specified: Prefix matches by path prefix, while Exact requires an exact match.

Key points to note about the Ingress resource file:

  • Watch the YAML indentation in the backend block: in this file, service is nested one space deeper than backend, and its children must be indented consistently; inconsistent nesting makes kubectl apply fail with a parse error.
  • The Ingress schema under networking.k8s.io/v1 differs from the older API group; inspect the details with explain:
kubectl explain Ingress.spec.rules.http.paths
kubectl explain Ingress.spec.rules.http.paths.backend.service


To access the Ingress now, you must use the domain name; if you request by IP, no host rule matches and you get 400 again.

Configure the domain-to-IP mapping on the host machine:

vi /etc/hosts
# add the following entry:
10.211.55.200 www1.hongfu.com

Now requests from the host machine reach the application pods:

curl www1.hongfu.com:30201


Then check which pod serves each request to see whether load balancing is happening:

curl www1.hongfu.com:30201/hostname.html


Each request returns a different hostname, so the configuration works.
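To make the round-robin easier to see in one go, hit hostname.html a few times in a loop (a small sketch, assuming the hosts entry configured above):

for i in 1 2 3 4; do curl -s www1.hongfu.com:30201/hostname.html; done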

Step 5: Configure another Ingress HTTP proxy

As we said, nginx is a load balancer and can also route traffic: www1.lxl.com goes to svc1 and www2.lxl.com goes to svc2, and each svc can in turn load-balance internally. We have just implemented one of them; how do we add the other?

1. Modify the resource manifest

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-v2
spec:
  selector:
    matchLabels:
      name: nginx-v2
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx-v2
    spec:
      containers:
        - name: nginx-v2
          image: wangyanglinux/myapp:v2
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-v2
spec:
  selector: 
    name: nginx-v2
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-test-v2
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: www2.hongfu.com
      http:
        paths:
        - path: /
          backend:
           # service is the proxy target for this path: it names the backing Service and its port
           service:
             name: nginx-svc-v2
             port:
               number: 80
          pathType: Prefix
            


  • All the related names get a -v2 suffix
  • The following configuration is added:
annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /

Without this annotation, every subsequent request returns 404, no matter what you try.

2. Apply the resource manifest

kubectl apply -f ingress.yaml.2

3. Update the hosts file

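Assuming the same node IP as before, the new domain just needs its own entry in /etc/hosts:

echo "10.211.55.200 www2.hongfu.com" >> /etc/hosts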

4. Access from the host machine

curl www2.hongfu.com:30201


The request succeeds.
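As in Step 4, you can also hit hostname.html to confirm the two v2 replicas take turns serving requests (assuming the same NodePort):

curl www2.hongfu.com:30201/hostname.html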