[Playing with Service Mesh] A First Look at Envoy

Prerequisites

You need a Kubernetes environment; minikube is recommended for learning. To avoid connectivity problems with the official registries, the command below uses the Aliyun image mirror.

minikube start --kubernetes-version=v1.23.8 --image-mirror-country='cn' --image-repository='registry.cn-hangzhou.aliyuncs.com/google_containers'

Configure an alias by appending the following to ~/.bashrc, then run source ~/.bashrc for it to take effect.

alias k="minikube kubectl --"
source <(kubectl completion bash)
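
A quick smoke test confirms that the alias works and the cluster is up:

k get node
minikube status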

Deploy the HTTP service to be proxied

The YAML manifest (simple.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simple
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "80"
      labels:
        app: simple
    spec:
      containers:
        - name: simple
          imagePullPolicy: Always
          image: cncamp/httpserver:v1.0-metrics
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: simple
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: simple

Deploy it:

kubectl apply -f simple.yaml

Check that the deployment succeeded:

xxx@xxx-virtual-machine:~/istio/101-master/module12/istio/4.sidecar$ k get svc -o wide | grep simple
simple       ClusterIP   10.98.155.230   <none>        80/TCP    3h37m   app=simple
xxx@xxx-virtual-machine:~/istio/101-master/module12/istio/4.sidecar$ k get deploy -o wide | grep simple
simple   1/1     1            1           3h37m   simple       cncamp/httpserver:v1.0-metrics   app=simple
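
Before putting Envoy in front of it, it is worth confirming that the backend answers inside the cluster. A minimal sketch, assuming the curlimages/curl image can be pulled in your environment (the /hello endpoint is the same one used at the end of this article):

# run a throwaway pod and curl the simple Service by name
k run curl-test --image=curlimages/curl --rm -it --restart=Never -- curl -s http://simple/hello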

Create the Envoy configuration file

The configuration (envoy.yaml):

admin:
  address:
    socket_address: { address: 127.0.0.1, port_value: 9901 }

static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 10000 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: ingress_http
                codec_type: AUTO
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route: { cluster: some_service }
                http_filters:
                  - name: envoy.filters.http.router
                    typed_config: 
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
    - name: some_service
      connect_timeout: 0.25s
      type: LOGICAL_DNS
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: some_service
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: simple
                      port_value: 80
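
Two details worth noting: with type LOGICAL_DNS, Envoy resolves the hostname simple through the cluster DNS, which returns the ClusterIP of the Service created earlier; and Envoy refuses to start on a malformed config. A syntax mistake can be caught early by validating the file locally, for example with the same image used in the Deployment below (a sketch assuming Docker is available on your host):

# parse and validate envoy.yaml without actually starting the proxy
docker run --rm -v $(pwd)/envoy.yaml:/etc/envoy/envoy.yaml envoyproxy/envoy-dev --mode validate -c /etc/envoy/envoy.yaml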

Manage it with a ConfigMap

kubectl create configmap envoy-config --from-file=envoy.yaml

Check that it was created:

xxx@xxx-virtual-machine:~/istio/101-master/module12/istio/4.sidecar$ k get cm -o wide | grep envoy
envoy-config         1      117m
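
To make sure the ConfigMap actually holds the full file (and not, say, an empty key), you can dump it:

k get cm envoy-config -o yaml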

Create the Envoy Deployment

The YAML manifest, which also creates a NodePort Service:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: envoy
  name: envoy
spec:
  replicas: 1
  selector:
    matchLabels:
      run: envoy
  template:
    metadata:
      labels:
        run: envoy
    spec:
      containers:
        - image: envoyproxy/envoy-dev
          name: envoy
          volumeMounts:
            - name: envoy-config
              mountPath: "/etc/envoy"
              readOnly: true
      volumes:
        - name: envoy-config
          configMap:
            name: envoy-config

---
apiVersion: v1
kind: Service
metadata:
  name: envoy-svc
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 10000
      nodePort: 30080
  selector:
    run: envoy
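
No command override is needed in the container spec: the official Envoy images start the proxy with -c /etc/envoy/envoy.yaml by default, which is exactly where the ConfigMap is mounted. Apply the manifest (assuming it is saved as envoy-deploy.yaml; the file name is arbitrary):

kubectl apply -f envoy-deploy.yaml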

Check that the deployment succeeded:

xxx@xxx-virtual-machine:~/istio/101-master/module12/envoy$ k get svc -o wide | grep envoy
envoy-svc    NodePort    10.99.140.121   <none>        80:30080/TCP   25m     run=envoy
xxx@xxx-virtual-machine:~/istio/101-master/module12/envoy$ k get ep -o wide | grep envoy
envoy-svc    172.17.0.4:10000    25m
xxx@xxx-virtual-machine:~/istio/101-master/module12/envoy$ k get deploy -o wide | grep envoy
envoy     1/1     1            1           4h8m    envoy        envoyproxy/envoy-dev             run=envoy
xxx@xxx-virtual-machine:~/istio/101-master/module12/envoy$ k get pod -o wide | grep envoy
envoy-6958c489d9-j7bg6    1/1     Running   0          149m    172.17.0.4   minikube   <none>           <none>
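
If the pod does not reach Running, the Envoy logs are the first place to look; a configuration parsing error is reported immediately at startup:

k logs deploy/envoy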

Send requests through the Envoy proxy

Since the Envoy Service is exposed as a NodePort, you can hit port 30080 on any Kubernetes node directly from outside the cluster.

Kubernetes Services are implemented with iptables by default (kube-proxy in iptables mode); NodePort essentially just adds a few rules to each node's iptables:

root@minikube:/home/docker# iptables-save -t nat | grep 30080
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/envoy-svc:http" -m tcp --dport 30080 -j KUBE-SVC-BGSVDZOZXFQ3YAXM
-A KUBE-SVC-BGSVDZOZXFQ3YAXM -p tcp -m comment --comment "default/envoy-svc:http" -m tcp --dport 30080 -j KUBE-MARK-MASQ

Look up the node IP:

xxx@xxx-virtual-machine:~/istio/101-master/module12/envoy$ k get node -o wide
NAME       STATUS   ROLES                  AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
minikube   Ready    control-plane,master   5h44m   v1.23.8   192.168.49.2   <none>        Ubuntu 20.04.4 LTS   5.15.0-56-generic   docker://20.10.17

Send the request:

xxx@xxx-virtual-machine:~/istio/101-master/module12/envoy$ curl http://192.168.49.2:30080/hello
hello [stranger]
===================Details of the http request header:============
User-Agent=[curl/7.81.0]
Accept=[*/*]
X-Forwarded-Proto=[http]
X-Request-Id=[e89360a4-6f1d-4754-b4f5-65721a44e355]
X-Envoy-Expected-Rq-Timeout-Ms=[15000]
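
The X-Request-Id and X-Envoy-Expected-Rq-Timeout-Ms headers are injected by Envoy, which confirms the request really went through the proxy. You can also check the admin interface: it binds to 127.0.0.1 inside the pod, but kubectl port-forward tunnels over the pod's loopback, so it is still reachable:

k port-forward deploy/envoy 9901:9901
# in another terminal: upstream counters such as cluster.some_service.upstream_rq_total should grow with each request
curl -s http://localhost:9901/stats | grep some_service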