APISIX (Part 1): Deployment


1. Deployment

Apache APISIX is an open-source, high-performance API gateway that supports dynamic routing, load balancing, traffic control, and plugin extensions. Built on NGINX and Lua, it offers excellent scalability and flexibility and can handle large-scale API traffic. APISIX provides rich monitoring and management capabilities, is well suited to microservice architectures and cloud-native environments, and helps developers simplify API management and optimization.

github.com/apache/apis… apisix.apache.org/zh/docs/api…

Install using the officially recommended Helm chart, or using the chart from GitHub. To install APISIX via Helm (if the chart repository has not been added yet, `helm repo add apisix https://charts.apiseven.com && helm repo update` registers the official repo first):

➜  ~ helm install apisix apisix/apisix --create-namespace  --namespace apisix
NAME: apisix
LAST DEPLOYED: Tue Mar  5 18:31:31 2024
NAMESPACE: apisix
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace apisix -o jsonpath="{.spec.ports[0].nodePort}" services apisix-gateway)
  export NODE_IP=$(kubectl get nodes --namespace apisix -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
➜  ~

Checking version information

  • APISIX Helm chart versions:
    • apisix 2.6.0
    • apisix-dashboard 0.8.2
  • APISIX version: 3.8.0
  • etcd cluster version: 3.5.0
# apisix helm chart version
➜  apisix git:(0da1bd0) ✗ helm list
NAME          NAMESPACE        REVISION        UPDATED                                    STATUS          CHART               APP VERSION
apisix        apisix           1               2024-03-07 13:35:26.20196 +0800 CST        deployed        apisix-2.6.0        3.8.0

# apisix version
➜  apisix git:(0da1bd0) ✗ k exec apisix-68d87594b5-gbp4h -it -- bash
Defaulted container "apisix" out of: apisix, wait-etcd (init)
apisix@apisix-68d87594b5-gbp4h:/usr/local/apisix$ apisix version
/usr/local/openresty//luajit/bin/luajit ./apisix/cli/apisix.lua version
3.8.0
apisix@apisix-68d87594b5-gbp4h:/usr/local/apisix$

#etcd version
➜  apisix git:(0da1bd0) ✗ curl http://127.0.0.1:2379/version
{"etcdserver":"3.5.7","etcdcluster":"3.5.0"}%
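The version endpoint returns plain JSON; a small Python sketch (using the response shape shown above) can verify the etcd server version programmatically:

```python
import json

# Response body from `curl http://127.0.0.1:2379/version` (shape as above).
raw = '{"etcdserver":"3.5.7","etcdcluster":"3.5.0"}'

info = json.loads(raw)
server = tuple(int(x) for x in info["etcdserver"].split("."))

# Fail fast if the cluster is older than the 3.5 line this setup expects.
assert server >= (3, 5, 0), f"unexpected etcd version: {info['etcdserver']}"
print(info["etcdserver"], info["etcdcluster"])  # → 3.5.7 3.5.0
```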

After `helm install` completes, the following issues need to be handled manually:

  • Image pull failure
  • PVCs stuck in Pending

These two issues are handled separately below.

1.1 Handling deployment issues

1.1.1 Image pull failure

Pull the images locally, push them to the Harbor registry, and then replace the images in the Deployment.

➜  apisix git:(0da1bd0) ✗ docker image ls | grep apisix
apache/apisix                                   3.8.0-debian          614799026bce   2 weeks ago     320MB
reg.apx.com/library/apisix                  3.8.0-debian          614799026bce   2 weeks ago     320MB
➜  apisix git:(0da1bd0) ✗ docker image ls | grep etcd
bitnami/etcd                                    3.5.7-debian-11-r14   73feeccc2b47   12 months ago   150MB
reg.apx.com/library/etcd                    3.5.7-debian-11-r14   73feeccc2b47   12 months ago   150MB

1.1.2 PVC pending

After the Helm install completes, all pods sit in the Pending phase. Based on the dependency chain, apisix-etcd is blocked because its PVCs are Pending, so the PVC Pending problem must be resolved first.

➜  Downloads k get pvc,sts,deploy,pod,svc
NAME                                       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/data-apisix-etcd-0   Pending                                      nfs-client     66m
persistentvolumeclaim/data-apisix-etcd-1   Pending                                      nfs-client     66m
persistentvolumeclaim/data-apisix-etcd-2   Pending                                      nfs-client     66m

NAME                           READY   AGE
statefulset.apps/apisix-etcd   0/3     21h

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/apisix   0/1     0            0           21h

NAME                          READY   STATUS     RESTARTS   AGE
pod/apisix-68d87594b5-7p6pg   0/1     Init:0/1   0          5h15m
pod/apisix-etcd-0             0/1     Pending    0          66m
pod/apisix-etcd-1             0/1     Pending    0          66m
pod/apisix-etcd-2             0/1     Pending    0          66m

NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/apisix-admin           ClusterIP   10.109.190.60   <none>        9180/TCP            21h
service/apisix-etcd            ClusterIP   10.99.49.181    <none>        2379/TCP,2380/TCP   21h
service/apisix-etcd-headless   ClusterIP   None            <none>        2379/TCP,2380/TCP   21h
service/apisix-gateway         NodePort    10.102.60.239   <none>        80:30491/TCP        21h
➜  Downloads

Use `kubectl describe` on the apisix-etcd pods and their PVCs to inspect the event information and locate the cause of the Pending state.

Describe pod

Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  65m   default-scheduler  0/12 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/12 nodes are available: 12 Preemption is not helpful for scheduling..

Describe pvc

➜  Downloads k describe pvc data-apisix-etcd-0
Name:          data-apisix-etcd-0
Namespace:     apisix
StorageClass:  nfs-client
Status:        Pending
Volume:
Labels:        app.kubernetes.io/instance=apisix
               app.kubernetes.io/name=etcd
Annotations:   volume.beta.kubernetes.io/storage-provisioner: cluster.local/nfs-client-nfs-client-provisioner
               volume.kubernetes.io/storage-provisioner: cluster.local/nfs-client-nfs-client-provisioner
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       apisix-etcd-0
Events:
  Type    Reason                Age                   From                         Message
  ----    ------                ----                  ----                         -------
  Normal  ExternalProvisioning  107s (x264 over 66m)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "cluster.local/nfs-client-nfs-client-provisioner" or manually created by system administrator

The PVC describe output shows the PVC is Pending because PV provisioning is blocked. The cluster uses nfs-client as its external storage provisioner, and other services in the cluster use it as well, so check the nfs-client provisioner pod logs:

k logs -f --tail=500 nfs-client-nfs-client-provisioner-5f77b74777-9xpsq -n default

I0306 08:15:16.392219       1 controller.go:987] provision "apisix/data-apisix-etcd-0" class "nfs-client": started
I0306 08:15:16.392509       1 controller.go:987] provision "apisix/data-apisix-etcd-1" class "nfs-client": started
I0306 08:15:16.392219       1 controller.go:987] provision "apisix/data-apisix-etcd-2" class "nfs-client": started
E0306 08:15:16.400103       1 controller.go:1004] provision "apisix/data-apisix-etcd-0" class "nfs-client": unexpected error getting claim reference: selfLink was empty, can't make reference
E0306 08:15:16.401448       1 controller.go:1004] provision "apisix/data-apisix-etcd-1" class "nfs-client": unexpected error getting claim reference: selfLink was empty, can't make reference
E0306 08:15:16.401562       1 controller.go:1004] provision "apisix/data-apisix-etcd-2" class "nfs-client": unexpected error getting claim reference: selfLink was empty, can't make reference

Possible causes based on the logs:

  • The cluster's nfs-client provisioner is failing, so PVs are not created automatically
  • A permissions problem with the ServiceAccount in the apisix namespace prevents the PVCs from being provisioned

(The `selfLink was empty` error is typical of older nfs-client-provisioner releases running on Kubernetes 1.20+, where the `selfLink` field was removed.)

The following adjustments address these problems:

  • Grant additional permissions to the ServiceAccount
  • Manually create the PVs required by the PVCs (before creating the PVs, make sure the corresponding directories exist on the NFS server)
  • Redeploy the apisix-etcd StatefulSet
  • Adjust the labels on the apisix-etcd PV, StatefulSet, and Service

These adjustments are consolidated into the resource files below; apply them in the order apisix-rbac, apisix-pv, apisix-etcd, apisix-etcd-svc.

apisix-rbac.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: apisix
  name: apisix-default-sa
rules:
- apiGroups: [""] # Core API Groups
  resources: ["*"] # All resources within the core group
  verbs: ["*"] # All verbs for these resources
  
- apiGroups: ["apps", "batch", "extensions", "networking.k8s.io", "policy", "rbac.authorization.k8s.io"] # Some common API groups
  resources: ["*"]
  verbs: ["*"]

- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses", "volumeattachments", "csinodes"]
  verbs: ["*"]

# For PersistentVolume and PersistentVolumeClaim operations
- apiGroups: [""] # PersistentVolumes are in the core API group
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch"] # Note: Creating or deleting PersistentVolumes is typically managed by the cluster administrator, not by a namespace-specific role

# However, full control of PersistentVolumeClaims can be given in the namespace
- apiGroups: [""] # PersistentVolumeClaims are in the core API group
  resources: ["persistentvolumeclaims"]
  verbs: ["*"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: apisix
  name: apisix-default-sa
subjects:
- kind: ServiceAccount
  name: default
  namespace: apisix
roleRef:
  kind: Role
  name: apisix-default-sa
  apiGroup: rbac.authorization.k8s.io



 #The following ClusterRole is required when APISIX integrates Kubernetes service discovery
 #It grants the APISIX ServiceAccount read access to the service-discovery resources in the cluster
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: apisix-k8s-discovery
rules:
- apiGroups: [""]
  resources: ["endpoints","endpointslices"]
  verbs: ["get", "list", "watch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: apisix-default-account-endpoints-binding
subjects:
- kind: ServiceAccount
  name: default
  namespace: apisix
roleRef:
  kind: ClusterRole
  name: apisix-k8s-discovery
  apiGroup: rbac.authorization.k8s.io  

apisix-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-apisix-etcd-0 # PV name; adjust to fit your environment
  labels:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: etcd
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 8Gi # 存储容量
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: data-apisix-etcd-0 # must match the PVC name
    namespace: apisix
  nfs:
    path: /data/nfs_data/apisix/0
    server: 192.168.101.55
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-client
  volumeMode: Filesystem

---
# Second PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-apisix-etcd-1
  labels:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: etcd
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 8Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: data-apisix-etcd-1
    namespace: apisix
  nfs:
    path: /data/nfs_data/apisix/1
    server: 192.168.101.55
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-client
  volumeMode: Filesystem

---
# Third PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-apisix-etcd-2
  labels:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: etcd  
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 8Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: data-apisix-etcd-2
    namespace: apisix
  nfs:
    path: /data/nfs_data/apisix/2
    server: 192.168.101.55
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-client
  volumeMode: Filesystem
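The three PV manifests above differ only in their replica index. A minimal Python sketch that generates them from one template (the NFS server address and base path are taken from the manifests above; adjust them for your environment):

```python
# Generate one PV manifest per etcd replica; only the index varies.
NFS_SERVER = "192.168.101.55"          # NFS server from the manifests above
NFS_BASE = "/data/nfs_data/apisix"     # base directory; /0, /1, /2 must exist

TEMPLATE = """apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-apisix-etcd-{i}
  labels:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: etcd
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 8Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: data-apisix-etcd-{i}
    namespace: apisix
  nfs:
    path: {base}/{i}
    server: {server}
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-client
  volumeMode: Filesystem
"""

manifests = [TEMPLATE.format(i=i, base=NFS_BASE, server=NFS_SERVER)
             for i in range(3)]
print("---\n".join(manifests))
```

Piping the output to `kubectl apply -f -` would create all three PVs at once.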

apisix-etcd.yaml

# Apply apisix-rbac first to grant the SA permissions
# Then create the PVs manually via apisix-pv
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: etcd
  name: apisix-etcd
  namespace: apisix
spec:
  podManagementPolicy: Parallel
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: apisix
      app.kubernetes.io/name: etcd
  serviceName: apisix-etcd-headless
  template:
    metadata:
      annotations:
        checksum/token-secret: 2c80697c2db647500dc3c7c4f3812c6115edd6394aac0c6f99c5387fc83bd228
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: apisix
        app.kubernetes.io/name: etcd
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/instance: apisix
                  app.kubernetes.io/name: etcd
              topologyKey: kubernetes.io/hostname
            weight: 1
      containers:
      - env:
        - name: BITNAMI_DEBUG
          value: "false"
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: MY_STS_NAME
          value: apisix-etcd
        - name: ETCDCTL_API
          value: "3"
        - name: ETCD_ON_K8S
          value: "yes"
        - name: ETCD_START_FROM_SNAPSHOT
          value: "no"
        - name: ETCD_DISASTER_RECOVERY
          value: "no"
        - name: ETCD_NAME
          value: $(MY_POD_NAME)
        - name: ETCD_DATA_DIR
          value: /bitnami/etcd/data
        - name: ETCD_LOG_LEVEL
          value: info
        - name: ALLOW_NONE_AUTHENTICATION
          value: "yes"
        - name: ETCD_AUTH_TOKEN
          value: jwt,priv-key=/opt/bitnami/etcd/certs/token/jwt-token.pem,sign-method=RS256,ttl=10m
        - name: ETCD_ADVERTISE_CLIENT_URLS
          value: http://$(MY_POD_NAME).apisix-etcd-headless.apisix.svc.cluster.local:2379,http://apisix-etcd.apisix.svc.cluster.local:2379
        - name: ETCD_LISTEN_CLIENT_URLS
          value: http://0.0.0.0:2379
        - name: ETCD_INITIAL_ADVERTISE_PEER_URLS
          value: http://$(MY_POD_NAME).apisix-etcd-headless.apisix.svc.cluster.local:2380
        - name: ETCD_LISTEN_PEER_URLS
          value: http://0.0.0.0:2380
        - name: ETCD_INITIAL_CLUSTER_TOKEN
          value: etcd-cluster-k8s
        - name: ETCD_INITIAL_CLUSTER_STATE
          value: new
        - name: ETCD_INITIAL_CLUSTER
          value: apisix-etcd-0=http://apisix-etcd-0.apisix-etcd-headless.apisix.svc.cluster.local:2380,apisix-etcd-1=http://apisix-etcd-1.apisix-etcd-headless.apisix.svc.cluster.local:2380,apisix-etcd-2=http://apisix-etcd-2.apisix-etcd-headless.apisix.svc.cluster.local:2380
        - name: ETCD_CLUSTER_DOMAIN
          value: apisix-etcd-headless.apisix.svc.cluster.local
        image: reg.apx.com/library/etcd:3.5.7-debian-11-r14
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /opt/bitnami/scripts/etcd/prestop.sh # TODO: replace with the proper script later
        livenessProbe:
          exec:
            command:
            - cat
            - /opt/bitnami/scripts/etcd/healthcheck.sh # TODO: replace with a real health check later
          failureThreshold: 5
          initialDelaySeconds: 60
          periodSeconds: 30
          successThreshold: 1
          timeoutSeconds: 5
        name: etcd
        ports:
        - containerPort: 2379
          name: client
          protocol: TCP
        - containerPort: 2380
          name: peer
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - cat
            - /opt/bitnami/scripts/etcd/healthcheck.sh # TODO: replace with a real health check later
          failureThreshold: 5
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          runAsNonRoot: true
          runAsUser: 1001
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /bitnami/etcd
          name: data
        - mountPath: /opt/bitnami/etcd/certs/token/
          name: etcd-jwt-token
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 1001
      serviceAccount: default
      serviceAccountName: default
      terminationGracePeriodSeconds: 30
      volumes:
      - name: etcd-jwt-token
        secret:
          defaultMode: 256
          secretName: apisix-etcd-jwt-token
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:   
  - metadata:
      name: data
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: nfs-client  
      resources:
        requests:
          storage: 8Gi    

apisix-etcd-svc.yaml

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2024-03-07T05:35:32Z"
  labels:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: etcd
  name: apisix-etcd
  namespace: apisix
spec:
  clusterIP: 10.102.60.214
  clusterIPs:
  - 10.102.60.214
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: client
    port: 2379
    protocol: TCP
    targetPort: client
  - name: peer
    port: 2380
    protocol: TCP
    targetPort: peer
  selector:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: etcd
  sessionAffinity: None
  type: ClusterIP


---

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2024-03-07T05:35:32Z"
  labels:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: etcd
  name: apisix-etcd-headless
  namespace: apisix
spec:
  clusterIP: None
  clusterIPs:
  - None
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: client
    port: 2379
    protocol: TCP
    targetPort: client
  - name: peer
    port: 2380
    protocol: TCP
    targetPort: peer
  publishNotReadyAddresses: true
  selector:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: etcd
  sessionAffinity: None
  type: ClusterIP

After these adjustments, checking again shows all services running normally. If stale data left in the PV directories causes the etcd cluster to report an older version, clean up the corresponding directories on the NFS server and rerun the recovery steps.

➜  apisix git:(0da1bd0) ✗ k get pvc,pod,deploy,sts,svc,endpoints
NAME                                       STATUS   VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/data-apisix-etcd-0   Bound    data-apisix-etcd-0   8Gi        RWO            nfs-client     15h
persistentvolumeclaim/data-apisix-etcd-1   Bound    data-apisix-etcd-1   8Gi        RWO            nfs-client     15h
persistentvolumeclaim/data-apisix-etcd-2   Bound    data-apisix-etcd-2   8Gi        RWO            nfs-client     15h

NAME                          READY   STATUS    RESTARTS      AGE
pod/apisix-68d87594b5-gbp4h   1/1     Running   0             15h
pod/apisix-etcd-0             1/1     Running   1 (15h ago)   15h
pod/apisix-etcd-1             1/1     Running   1 (15h ago)   15h
pod/apisix-etcd-2             1/1     Running   0             15h

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/apisix   1/1     1            1           20h

NAME                           READY   AGE
statefulset.apps/apisix-etcd   3/3     15h

NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
service/apisix-admin           ClusterIP   10.104.59.46    <none>        9180/TCP            20h
service/apisix-etcd            ClusterIP   10.102.60.214   <none>        2379/TCP,2380/TCP   20h
service/apisix-etcd-headless   ClusterIP   None            <none>        2379/TCP,2380/TCP   20h
service/apisix-gateway         NodePort    10.103.204.62   <none>        80:30290/TCP        20h

NAME                             ENDPOINTS                                                         AGE
endpoints/apisix-admin           10.244.11.4:9180                                                  20h
endpoints/apisix-etcd            10.244.11.3:2380,10.244.5.73:2380,10.244.6.174:2380 + 3 more...   20h
endpoints/apisix-etcd-headless   10.244.11.3:2380,10.244.5.73:2380,10.244.6.174:2380 + 3 more...   20h
endpoints/apisix-gateway         10.244.11.4:9080                                                  20h
➜  apisix git:(0da1bd0) ✗
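A health check like the one above can be automated by parsing `kubectl get pod -o json`. A sketch under the assumption of that output shape (the embedded sample is illustrative, not real cluster output):

```python
import json

# Minimal slice of `kubectl get pod -o json` output (illustrative sample).
SAMPLE = json.dumps({
    "items": [
        {"metadata": {"name": "apisix-etcd-0"}, "status": {"phase": "Running"}},
        {"metadata": {"name": "apisix-etcd-1"}, "status": {"phase": "Running"}},
        {"metadata": {"name": "apisix-etcd-2"}, "status": {"phase": "Pending"}},
    ]
})

def not_running(doc):
    """Return names of pods whose phase is not Running."""
    return [p["metadata"]["name"]
            for p in json.loads(doc)["items"]
            if p["status"]["phase"] != "Running"]

print(not_running(SAMPLE))  # → ['apisix-etcd-2']
```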

1.2 Configuration adjustments

Once APISIX itself is running, a few configuration adjustments are still needed to support normal development; see the resources below.

If you only want to get familiar with APISIX through a hello-world example, section 1.2 can be skipped.

1.2.1 Updating the APISIX ConfigMap

The file below is the complete updated ConfigMap, with additions for service discovery, Prometheus, and Admin API access from external IPs.

apiVersion: v1
data:
  config.yaml: |-
    #
    # Licensed to the Apache Software Foundation (ASF) under one or more
    # contributor license agreements.  See the NOTICE file distributed with
    # this work for additional information regarding copyright ownership.
    # The ASF licenses this file to You under the Apache License, Version 2.0
    # (the "License"); you may not use this file except in compliance with
    # the License.  You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    #
    apisix:    # universal configurations
      node_listen:    # APISIX listening port
        - 9080
      enable_heartbeat: true
      enable_admin: true
      enable_admin_cors: true
      enable_debug: false

      enable_dev_mode: false                       # Sets nginx worker_processes to 1 if set to true
      enable_reuseport: true                       # Enable nginx SO_REUSEPORT switch if set to true.
      enable_ipv6: true # Enable nginx IPv6 resolver
      enable_server_tokens: true # Whether the APISIX version number should be shown in Server header

      # proxy_protocol:                   # Proxy Protocol configuration
      #   listen_http_port: 9181          # The port with proxy protocol for http, it differs from node_listen and admin_listen.
      #                                   # This port can only receive http request with proxy protocol, but node_listen & admin_listen
      #                                   # can only receive http request. If you enable proxy protocol, you must use this port to
      #                                   # receive http request with proxy protocol
      #   listen_https_port: 9182         # The port with proxy protocol for https
      #   enable_tcp_pp: true             # Enable the proxy protocol for tcp proxy, it works for stream_proxy.tcp option
      #   enable_tcp_pp_to_upstream: true # Enables the proxy protocol to the upstream server

      proxy_cache:                         # Proxy Caching configuration
        cache_ttl: 10s                     # The default caching time if the upstream does not specify the cache time
        zones:                             # The parameters of a cache
        - name: disk_cache_one             # The name of the cache, administrator can be specify
                                           # which cache to use by name in the admin api
          memory_size: 50m                 # The size of shared memory, it's used to store the cache index
          disk_size: 1G                    # The size of disk, it's used to store the cache data
          disk_path: "/tmp/disk_cache_one" # The path to store the cache data
          cache_levels: "1:2"              # The hierarchy levels of a cache
      #  - name: disk_cache_two
      #    memory_size: 50m
      #    disk_size: 1G
      #    disk_path: "/tmp/disk_cache_two"
      #    cache_levels: "1:2"

      router:
        http: radixtree_host_uri  # radixtree_uri: match route by uri(base on radixtree)
                                    # radixtree_host_uri: match route by host + uri(base on radixtree)
                                    # radixtree_uri_with_parameter: match route by uri with parameters
        ssl: 'radixtree_sni'        # radixtree_sni: match route by SNI(base on radixtree)

      proxy_mode: http
      # dns_resolver:
      #
      #   - 127.0.0.1
      #
      #   - 172.20.0.10
      #
      #   - 114.114.114.114
      #
      #   - 223.5.5.5
      #
      #   - 1.1.1.1
      #
      #   - 8.8.8.8
      #
      dns_resolver_valid: 30
      resolver_timeout: 5
      ssl:
        enable: true # enable HTTPS/SSL
        listen:
          - port: 9443
            enable_http2: true
        ssl_protocols: "TLSv1.2 TLSv1.3"
        ssl_ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA"

    nginx_config:    # config for rendering the template to generate nginx.conf
      error_log: "/dev/stderr"
      error_log_level: "warn"    # warn,error
      worker_processes: "auto"
      enable_cpu_affinity: true
      worker_rlimit_nofile: 20480  # the number of files a worker process can open, should be larger than worker_connections
      event:
        worker_connections: 10620
      http:
        enable_access_log: true
        access_log: "/dev/stdout"
        access_log_format: '$remote_addr - $remote_user [$time_local] $http_host "$request" $status $body_bytes_sent $request_time "$http_referer" "$http_user_agent" $upstream_addr $upstream_status $upstream_response_time "$upstream_scheme://$upstream_host$upstream_uri"'
        access_log_format_escape: default
        keepalive_timeout: "60s"
        client_header_timeout: 60s     # timeout for reading client request header, then 408 (Request Time-out) error is returned to the client
        client_body_timeout: 60s       # timeout for reading client request body, then 408 (Request Time-out) error is returned to the client
        send_timeout: 10s              # timeout for transmitting a response to the client.then the connection is closed
        underscores_in_headers: "on"   # default enables the use of underscores in client request header fields
        real_ip_header: "X-Real-IP"    # http://nginx.org/en/docs/http/ngx_http_realip_module.html#real_ip_header
        real_ip_from:                  # http://nginx.org/en/docs/http/ngx_http_realip_module.html#set_real_ip_from
          - 127.0.0.1
          - 'unix:'


    discovery:                      # Service Discovery
     kubernetes:                     # Kubernetes service discovery
       service:
         schema: https                     # apiserver schema, options [http, https], default https
         host: "kubernetes.default.svc.cluster.local" #${KUBERNETES_SERVICE_HOST}  # apiserver host, options [ipv4, ipv6, domain, environment variable], default ${KUBERNETES_SERVICE_HOST}
         port: "443" #${KUBERNETES_SERVICE_PORT}  # apiserver port, options [port number, environment variable], default ${KUBERNETES_SERVICE_PORT}
       client:
         # serviceaccount token or path of serviceaccount token_file
         token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
       namespace_selector:
         match: 
         - apisix
       # label_selector: |-
       #   app=cmp,
       # shared_size: 1m #default 1m
       
    ext-plugin:
      cmd: ["/usr/local/apisix/configs/go-runner", "run"]       

    plugin_attr:          # Plugin attributes            # export opentelemetry variables to nginx variables
      prometheus:                               # Plugin: prometheus
        export_uri: /apisix/prometheus/metrics  # Set the URI for the Prometheus metrics endpoint.
        metric_prefix: apisix_                  # Set the prefix for Prometheus metrics generated by APISIX.
        enable_export_server: true              # Enable the Prometheus export server.
        export_addr:                            # Set the address for the Prometheus export server.
          ip: 0.0.0.0                         # Set the IP.
          port: 9091        

    deployment:
      role: traditional
      role_traditional:
        config_provider: etcd
      admin:
        allow_admin:    # http://nginx.org/en/docs/http/ngx_http_access_module.html#allow
          - 127.0.0.1/24
          - 192.168.101.1/24
        #   - "::/64"
        admin_listen:
          ip: 0.0.0.0
          port: 9180
        # Default token when use API to call for Admin API.
        # *NOTE*: Highly recommended to modify this value to protect APISIX's Admin API.
        # Disabling this configuration item means that the Admin API does not
        # require any authentication.
        admin_key:
          # admin: can everything for configuration data
          - name: "admin"
            key: edd1c9f034335f136f87ad84b625c8f1
            role: admin
          # viewer: only can view configuration data
          - name: "viewer"
            key: 4054f7cf07e344346cd3f287985e76a2
            role: viewer
      etcd:
        host:                          # it's possible to define multiple etcd hosts addresses of the same etcd cluster.
          - "http://apisix-etcd.apisix.svc.cluster.local:2379"
        prefix: "/apisix"    # configuration prefix in etcd
        timeout: 30    # 30 seconds
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: apisix
    meta.helm.sh/release-namespace: apisix
  creationTimestamp: "2024-03-07T05:35:32Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: apisix
  namespace: apisix

1.2.2 Adding an APISIX Prometheus Service

To make integration with an external Prometheus easier via ip:port, add a Service that exposes the APISIX Prometheus metrics endpoint:

curl http://192.168.101.23:30291/apisix/prometheus/metrics

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: apisix
    meta.helm.sh/release-namespace: apisix
  creationTimestamp: "2024-03-07T05:35:32Z"
  labels:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: apisix
    app.kubernetes.io/service: apisix-prometheus
    app.kubernetes.io/version: 3.8.0
    helm.sh/chart: apisix-2.6.0
  name: apisix-prometheus
  namespace: apisix
spec:
  ports:
  - name: apisix-prometheus
    nodePort: 30291
    port: 80
    protocol: TCP
    targetPort: 9091
  selector:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: apisix
  type: NodePort
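With this Service in place, the NodePort scrape above returns Prometheus text format; since `metric_prefix` is `apisix_`, every APISIX series starts with that prefix. A hedged parsing sketch (the sample payload is illustrative, not actual gateway output):

```python
# Keep only apisix_* series from a Prometheus text-format scrape.
# A real payload would come from http://<node-ip>:30291/apisix/prometheus/metrics.
SAMPLE = """\
# HELP apisix_etcd_reachable Config server etcd reachable from APISIX, 0 is unreachable
# TYPE apisix_etcd_reachable gauge
apisix_etcd_reachable 1
# TYPE apisix_nginx_http_current_connections gauge
apisix_nginx_http_current_connections{state="active"} 1
"""

def parse_metrics(text, prefix="apisix_"):
    """Return {series: value} for non-comment lines matching prefix."""
    out = {}
    for line in text.splitlines():
        if line.startswith("#") or not line.strip():
            continue  # skip HELP/TYPE comments and blank lines
        series, _, value = line.rpartition(" ")
        if series.startswith(prefix):
            out[series] = float(value)
    return out

print(parse_metrics(SAMPLE))
```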

1.2.3 Updating the Deployment to mount custom plugins into APISIX

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "2"
    meta.helm.sh/release-name: apisix
    meta.helm.sh/release-namespace: apisix
  creationTimestamp: "2024-03-07T05:35:32Z"
  generation: 9
  labels:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: apisix
    app.kubernetes.io/version: 3.8.0
    helm.sh/chart: apisix-2.6.0
  name: apisix
  namespace: apisix
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: apisix
      app.kubernetes.io/name: apisix
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        checksum/config: 8ff3f1d7102abb3264b1b593afaf29ace53bfa2556ec768e59378471fcb8c3e7
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: apisix
        app.kubernetes.io/name: apisix
    spec:
      containers:
      - image: reg.apx.com/library/apisix:3.8.0-debian
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - sleep 30
        name: apisix
        ports:
        - containerPort: 9080
          name: http
          protocol: TCP
        - containerPort: 9443
          name: tls
          protocol: TCP
        - containerPort: 9180
          name: admin
          protocol: TCP
        readinessProbe:
          failureThreshold: 6
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: 9080
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /usr/local/apisix/conf/config.yaml
          name: apisix-config
          subPath: config.yaml
        - mountPath: /usr/local/apisix/configs
          name: ext-plugin  
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - sh
        - -c
        - until nc -z apisix-etcd.apisix.svc.cluster.local 2379; do echo waiting for
          etcd `date`; sleep 2; done;
        image: busybox:1.28
        imagePullPolicy: IfNotPresent
        name: wait-etcd
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: default
      serviceAccountName: default
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: apisix
        name: apisix-config
      - name: ext-plugin
        persistentVolumeClaim:
          claimName: apisix-ext-plugin-pvc        
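The wait-etcd init container above is nothing more than a TCP readiness poll: keep trying to connect until the port answers. The same idea in Python, using a local listener to stand in for etcd's 2379 port (a sketch only, since this snippet doesn't assume a running cluster):

```python
import socket
import time

def wait_for_tcp(host: str, port: int, timeout: float = 10.0, interval: float = 0.1) -> bool:
    """Equivalent of the wait-etcd init container: poll until a TCP connect succeeds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(interval)
    return False

# Demo: bind a local listener to stand in for etcd, then wait on it.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
_, port = srv.getsockname()
ready = wait_for_tcp("127.0.0.1", port, timeout=2)
srv.close()
print(ready)
```

In the Deployment this loop runs as `nc -z` inside busybox; the point is the same: APISIX does not start until etcd accepts connections.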

Add the PV and PVC referenced by the Deployment.

Copy the plugin binary and the configuration file it needs to the NFS server ahead of time:

[root@nfs-server apisix_plugin]# pwd
/data/nfs_data/apisix_plugin

[root@nfs-server apisix_plugin]# ls
config.dev.yaml  go-runner
[root@nfs-server apisix_plugin]#
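For context, the runner mounted at /usr/local/apisix/configs is typically wired into APISIX through the `ext-plugin` section of its config.yaml. A hedged sketch (the run arguments depend on your go-runner build; adjust to match your setup):

```yaml
# Illustrative fragment of APISIX's config.yaml (not the full file):
ext-plugin:
  # Path matches the mountPath of the ext-plugin volume in the Deployment above.
  cmd: ["/usr/local/apisix/configs/go-runner", "run"]
```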
# PV and PVC used by APISIX to mount the external plugin
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: apisix-ext-plugin-pvc
  namespace: apisix
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: nfs-client
  volumeMode: Filesystem
  volumeName: apisix-ext-plugin-pv

---

apiVersion: v1
kind: PersistentVolume
metadata:
  name: apisix-ext-plugin-pv
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 2Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: apisix-ext-plugin-pvc
    namespace: apisix
  nfs:
    path: /data/nfs_data/apisix_plugin
    server: 192.168.101.55
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-client
  volumeMode: Filesystem
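Static binding like this requires the PV and PVC to agree on storage class, access modes, and capacity, with the PV's claimRef naming the PVC. A minimal Python sketch of those matching rules (illustrative only, not the real Kubernetes binder logic):

```python
def can_bind(pv: dict, pvc: dict) -> bool:
    """Rough approximation of static PV/PVC binding checks."""
    # Storage classes must match.
    if pv["storageClassName"] != pvc["storageClassName"]:
        return False
    # Every access mode the claim requests must be offered by the volume.
    if not set(pvc["accessModes"]) <= set(pv["accessModes"]):
        return False
    # The volume must be at least as large as the request (Gi-only comparison here).
    if int(pv["capacity"].rstrip("Gi")) < int(pvc["request"].rstrip("Gi")):
        return False
    # With a pre-set claimRef, it must name exactly this claim.
    ref = pv.get("claimRef")
    if ref and (ref["name"], ref["namespace"]) != (pvc["name"], pvc["namespace"]):
        return False
    return True

pv = {"storageClassName": "nfs-client", "accessModes": ["ReadWriteMany"],
      "capacity": "2Gi",
      "claimRef": {"name": "apisix-ext-plugin-pvc", "namespace": "apisix"}}
pvc = {"storageClassName": "nfs-client", "accessModes": ["ReadWriteMany"],
       "request": "2Gi", "name": "apisix-ext-plugin-pvc", "namespace": "apisix"}
print(can_bind(pv, pvc))
```

If the PVC stays Pending after applying the manifests, these are the fields to compare first.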

2. Basic functionality demo

2.1 Route configuration

We'll use the public service httpbin.org/ip as an example to demonstrate APISIX routing.

➜  ~ curl https://httpbin.org/ip
{
  "origin": "222.90.142.120"
}
➜  ~

Call the apisix-admin API to add a route pointing to httpbin.org; once the route is added, we verify it by accessing the apisix-gateway service.

Before the route is added, a request to the /ip path fails:

curl -i http://192.168.101.23:30290/ip
HTTP/1.1 404 Not Found
Date: Fri, 08 Mar 2024 02:48:03 GMT
Content-Type: text/plain; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
Server: APISIX/3.8.0

{"error_msg":"404 Route Not Found"}

Add the /ip route:

➜  ~ curl -i "http://127.0.0.1:9180/apisix/admin/routes" -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "id": "getting-started-ip",
  "uri": "/ip",
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "httpbin.org:80": 1
    }
  }
}'
HTTP/1.1 201 Created
Date: Fri, 08 Mar 2024 02:43:17 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: keep-alive
Server: APISIX/3.8.0
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
Access-Control-Expose-Headers: *
Access-Control-Max-Age: 3600
X-API-VERSION: v3

{
  "value": {
    "status": 1,
    "upstream": {
      "scheme": "http",
      "type": "roundrobin",
      "pass_host": "pass",
      "nodes": {
        "httpbin.org:80": 1
      },
      "hash_on": "vars"
    },
    "uri": "/ip",
    "id": "getting-started-ip",
    "priority": 0,
    "update_time": 1709865797,
    "create_time": 1709865797
  },
  "key": "/apisix/routes/getting-started-ip"
}
➜  ~
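The same route can be created programmatically. A sketch using only Python's standard library; the admin address and API key are the defaults from this deployment and will differ in yours (the call itself is left commented out so the snippet stays self-contained):

```python
import json
import urllib.request

ADMIN = "http://127.0.0.1:9180"                # Admin API address (assumption)
API_KEY = "edd1c9f034335f136f87ad84b625c8f1"   # default key; change it in production

def build_route_request(route_id: str, uri: str, nodes: dict) -> urllib.request.Request:
    """Build the PUT request that creates or updates a route via the Admin API."""
    payload = {
        "id": route_id,
        "uri": uri,
        "upstream": {"type": "roundrobin", "nodes": nodes},
    }
    return urllib.request.Request(
        f"{ADMIN}/apisix/admin/routes",
        data=json.dumps(payload).encode(),
        headers={"X-API-KEY": API_KEY, "Content-Type": "application/json"},
        method="PUT",
    )

req = build_route_request("getting-started-ip", "/ip", {"httpbin.org:80": 1})
# urllib.request.urlopen(req) would send it; here we only inspect the request.
print(req.method, req.full_url)
```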

Accessing /ip through the apisix gateway again, the request is now forwarded to the httpbin service configured in the upstream, showing that the route has taken effect.

➜  ~ curl -i http://192.168.101.23:30290/ip
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 47
Connection: keep-alive
Date: Fri, 08 Mar 2024 02:47:19 GMT
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
Server: APISIX/3.8.0

{
  "origin": "192.168.101.23, 43.254.54.39"
}
➜  ~

2.2 Load balancing

Having configured a route to a single service, we now attach two different services to one route, so that APISIX load-balances incoming requests across the upstreams.

Create a route for the /headers path; requests will be forwarded to the two upstream services httpbin.org and mock.api7.ai, each of which returns the request headers. Requests may occasionally fail for network reasons.

curl -i "http://127.0.0.1:9180/apisix/admin/routes" -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "id": "getting-started-headers",
  "uri": "/headers",
  "upstream" : {
    "type": "roundrobin",
    "nodes": {
      "httpbin.org:443": 1,
      "mock.api7.ai:443": 1
    },
    "pass_host": "node",
    "scheme": "https"
}}'
HTTP/1.1 201 Created
Date: Fri, 08 Mar 2024 06:48:53 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: keep-alive
Server: APISIX/3.8.0
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
Access-Control-Expose-Headers: *
Access-Control-Max-Age: 3600
X-API-VERSION: v3

{
  "value": {
    "status": 1,
    "upstream": {
      "scheme": "https",
      "type": "roundrobin",
      "pass_host": "node",
      "nodes": {
        "httpbin.org:443": 1,
        "mock.api7.ai:443": 1
      },
      "hash_on": "vars"
    },
    "uri": "/headers",
    "id": "getting-started-headers",
    "priority": 0,
    "update_time": 1709880533,
    "create_time": 1709880533
  },
  "key": "/apisix/routes/getting-started-headers"
}

Access the apisix-gateway service repeatedly to observe the load balancing:

➜  ~ curl -i http://192.168.101.23:30290/headers
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 652
Connection: keep-alive
Date: Fri, 08 Mar 2024 06:49:03 GMT
Host: mock.api7.ai
X-Real-IP: 43.254.54.39
X-Forwarded-Proto: https
CF-Ray: 8610e215ab94fabe-SJC
Accept-Encoding: gzip, br
Accept: */*
User-Agent: curl/7.88.1
CF-Connecting-IP: 43.254.54.39
CF-IPCountry: CN
CF-Visitor: {"scheme":"https"}
X-Application-Owner: API7.ai
X-Forwarded-For: 192.168.101.23
X-Forwarded-Host: 192.168.101.23
X-Forwarded-Port: 9080
Report-To: {"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v3?s=ooZHryJQuJiov4YHSZtj2Pc7vHwoS8RShdd12N57qhGu6DXnHtAgKExmDGbh6JwWEhomPIDsMGu%2B7A%2Brn9ZojD4NIxN9zLf76n0F4xMSTvfaZQkO6EsxHAX6FpE8pgA%3D"}],"group":"cf-nel","max_age":604800}
NEL: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
alt-svc: h3=":443"; ma=86400
Server: APISIX/3.8.0

{
  "headers": {
    "accept": "*/*",
    "accept-encoding": "gzip, br",
    "cf-connecting-ip": "43.254.54.39",
    "cf-ipcountry": "CN",
    "cf-ray": "8610e215ab94fabe",
    "cf-visitor": "{\"scheme\":\"https\"}",
    "connection": "Keep-Alive",
    "content-type": "application/json",
    "host": "mock.api7.ai",
    "user-agent": "curl/7.88.1",
    "x-application-owner": "API7.ai",
    "x-forwarded-for": "192.168.101.23",
    "x-forwarded-host": "192.168.101.23",
    "x-forwarded-port": "9080",
    "x-forwarded-proto": "https",
    "x-real-ip": "43.254.54.39",
    "X-Application-Owner": "API7.ai",
    "Content-Type": "application/json"
  }
}%
➜  ~ curl -i http://192.168.101.23:30290/headers
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 652
Connection: keep-alive
Date: Fri, 08 Mar 2024 06:49:09 GMT
Host: mock.api7.ai
X-Real-IP: 43.254.54.39
X-Forwarded-Proto: https
CF-Ray: 8610e239dc5d6438-SJC
Accept-Encoding: gzip, br
Accept: */*
User-Agent: curl/7.88.1
CF-Connecting-IP: 43.254.54.39
CF-IPCountry: CN
CF-Visitor: {"scheme":"https"}
X-Application-Owner: API7.ai
X-Forwarded-For: 192.168.101.23
X-Forwarded-Host: 192.168.101.23
X-Forwarded-Port: 9080
Report-To: {"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v3?s=C%2BNMgFaueng9qnjDvur1CFIzu0ukoWh3DBvyncdjbLnH%2F0Gh3KRAqEOxacsHyfBRf8icZoC016uScXu2LS1pA8HOYXs4KHKKwjsSyC4OCTCgLMalfUBZimhnyocdIyo%3D"}],"group":"cf-nel","max_age":604800}
NEL: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
alt-svc: h3=":443"; ma=86400
Server: APISIX/3.8.0

{
  "headers": {
    "accept": "*/*",
    "accept-encoding": "gzip, br",
    "cf-connecting-ip": "43.254.54.39",
    "cf-ipcountry": "CN",
    "cf-ray": "8610e239dc5d6438",
    "cf-visitor": "{\"scheme\":\"https\"}",
    "connection": "Keep-Alive",
    "content-type": "application/json",
    "host": "mock.api7.ai",
    "user-agent": "curl/7.88.1",
    "x-application-owner": "API7.ai",
    "x-forwarded-for": "192.168.101.23",
    "x-forwarded-host": "192.168.101.23",
    "x-forwarded-port": "9080",
    "x-forwarded-proto": "https",
    "x-real-ip": "43.254.54.39",
    "X-Application-Owner": "API7.ai",
    "Content-Type": "application/json"
  }
}%
➜  ~ curl -i http://192.168.101.23:30290/headers
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 216
Connection: keep-alive
Date: Fri, 08 Mar 2024 06:49:24 GMT
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
Server: APISIX/3.8.0

{
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.88.1",
    "X-Amzn-Trace-Id": "Root=1-65eab4f4-6a4b817e59a56d1e395bdde1",
    "X-Forwarded-Host": "192.168.101.23"
  }
}
➜  ~ curl -i http://192.168.101.23:30290/headers
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 652
Connection: keep-alive
Date: Fri, 08 Mar 2024 06:49:28 GMT
Host: mock.api7.ai
X-Real-IP: 43.254.54.39
X-Forwarded-Proto: https
CF-Ray: 8610e2b40f6467af-SJC
Accept-Encoding: gzip, br
Accept: */*
User-Agent: curl/7.88.1
CF-Connecting-IP: 43.254.54.39
CF-IPCountry: CN
CF-Visitor: {"scheme":"https"}
X-Application-Owner: API7.ai
X-Forwarded-For: 192.168.101.23
X-Forwarded-Host: 192.168.101.23
X-Forwarded-Port: 9080
Report-To: {"endpoints":[{"url":"https://a.nel.cloudflare.com/report/v3?s=MEnhNqgDadE8gs1q6T7tAyS1mCknbUqUCfG9VwyQJUAFUhZL5%2F5yOS5SZ63d7tU4PvaO4EPPtB%2B92Zna2qPiNippm5beuH1l%2FxKamQHeHIQiectN58Rn3X3cLkjOX90%3D"}],"group":"cf-nel","max_age":604800}
NEL: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
alt-svc: h3=":443"; ma=86400
Server: APISIX/3.8.0

{
  "headers": {
    "accept": "*/*",
    "accept-encoding": "gzip, br",
    "cf-connecting-ip": "43.254.54.39",
    "cf-ipcountry": "CN",
    "cf-ray": "8610e2b40f6467af",
    "cf-visitor": "{\"scheme\":\"https\"}",
    "connection": "Keep-Alive",
    "content-type": "application/json",
    "host": "mock.api7.ai",
    "user-agent": "curl/7.88.1",
    "x-application-owner": "API7.ai",
    "x-forwarded-for": "192.168.101.23",
    "x-forwarded-host": "192.168.101.23",
    "x-forwarded-port": "9080",
    "x-forwarded-proto": "https",
    "x-real-ip": "43.254.54.39",
    "X-Application-Owner": "API7.ai",
    "Content-Type": "application/json"
  }
}%
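The alternation between the two hosts above is what a weighted round-robin scheduler produces. A simplified sketch of the idea (APISIX's real balancer additionally handles retries, health checks, and dynamic node changes):

```python
import itertools

def roundrobin_picker(nodes: dict):
    """Yield upstream nodes in proportion to their weights, round-robin style."""
    # Expand each node by its weight, then cycle through the expanded list.
    expanded = [host for host, weight in sorted(nodes.items()) for _ in range(weight)]
    return itertools.cycle(expanded)

# Same node set as the /headers route above, both with weight 1.
nodes = {"httpbin.org:443": 1, "mock.api7.ai:443": 1}
picker = roundrobin_picker(nodes)
picks = [next(picker) for _ in range(4)]
print(picks)  # alternates between the two upstreams
```

Raising a node's weight in the route config simply gives it more slots per cycle.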

3. Installing apisix-dashboard

APISIX also provides a graphical interface for managing the service and its configuration, simplifying day-to-day operations.

Install the apisix-dashboard visualization tool via Helm and manage the APISIX service through the client:

helm install apisix-dashboard apisix/apisix-dashboard --namespace apisix

Check the helm chart and service version information for apisix-dashboard:

➜  apisix git:(0da1bd0) ✗ helm list
NAME                    NAMESPACE        REVISION        UPDATED                                     STATUS          CHART                         APP VERSION
apisix                  apisix           1               2024-03-07 13:35:26.20196 +0800 CST         deployed        apisix-2.6.0                  3.8.0
apisix-dashboard        apisix           1               2024-03-08 14:13:28.980674 +0800 CST        deployed        apisix-dashboard-0.8.2        3.0.0

Open the dashboard in a browser to view the routes created earlier.

4. Summary

At this point you should have an initial understanding of APISIX. This article covered:

  1. Quickly deploying the apisix and apisix-dashboard services
  2. Configuring routes and upstreams in APISIX and routing requests to the desired backend services

Note that, as a quick start, this article only shows how to get the features above working. For real-world use, and especially in production, study the detailed configuration options carefully to avoid problems such as service inconsistency caused by misconfiguration.

Appendix

Related links

apisix.apache.org/docs/apisix…
apisix.apache.org/docs/apisix…