k8s Study Notes - P32 - Service in Practice


Tutorial: 尚硅谷Kubernetes教程(K8s入门到精通), bilibili

Notes taken from video chapter: Chapter 4, p32


Topic

Service in practice

Notes

ClusterIP

For a ClusterIP Service, each node matches traffic sent to the ClusterIP and its port and hands it to kube-proxy's forwarding logic. kube-proxy implements load balancing internally: it knows the addresses and ports of the pods behind the Service and forwards the traffic to one of them. (Strictly, traffic passing through the kube-proxy process itself describes the old userspace mode; in the iptables and ipvs modes used today, kube-proxy only writes kernel rules and the kernel forwards packets directly, which is what the ipvsadm output below shows.)

To implement this behaviour (architecture diagram from the video omitted), the following components work together:

  • apiserver: the user issues a create-Service command through kubectl; the apiserver receives the request and persists the object in etcd
  • kube-proxy: every Kubernetes node runs a kube-proxy process that watches for Service and pod changes and writes them into the node's local forwarding rules, which can be inspected as shown below
  • iptables: NAT rules steer traffic aimed at the virtual IP to the endpoints
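To see this chain on a live cluster, you can inspect the Endpoints object the control plane maintains for a Service and the kernel rules kube-proxy derives from it. A minimal sketch (my-svc is the Service created in the practice below; this cluster runs kube-proxy in ipvs mode, as the ipvsadm output later confirms):

kubectl get endpoints my-svc         # the pod IP:port list the Service currently targets
ipvsadm -Ln                          # ipvs mode: one virtual server per Service port
iptables -t nat -L KUBE-SERVICES -n  # iptables mode (other clusters): the equivalent NAT rules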
Practice
  • Manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:              # must match the labels in the pod template below
    matchLabels:
      app: myapp
      release: stable
  template:
    metadata:
      labels:
        app: myapp
        release: stable
        env: test        # extra label, not used by the Deployment selector
    spec:
      containers:
        - name: myapp
          image: wangyanglinux/myapp:v2
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80   # the port the Service's targetPort will point at
  • apply:
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# kubectl apply -f p32_svc_deployment.yaml
deployment.apps/myapp-deploy created
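Optionally, confirm the rollout finished and that the replicas carry the labels the Service will select on (standard kubectl commands; output omitted here):

kubectl rollout status deployment/myapp-deploy
kubectl get pods -l app=myapp,release=stable -o wide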
  • Manage the Deployment through a Service
    • selector: matches app: myapp and release: stable
    • ports:
      • port: 500, the port the Service itself exposes
      • targetPort: 80, the pod's container port
apiVersion: v1
kind: Service
metadata:
  name: my-svc
  namespace: default
spec:
  type: ClusterIP
  selector:            # forwards to pods carrying both of these labels
    app: myapp
    release: stable
  ports:
    - name: http
      port: 500        # port the Service exposes on its ClusterIP
      targetPort: 80   # container port traffic is forwarded to
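Before curling the Service, it can be worth checking that the selector actually matched something; an empty ENDPOINTS column is the classic symptom of a label mismatch:

kubectl get endpoints my-svc
# expect the three pod IPs on port 80 in the ENDPOINTS column, e.g. 10.244.x.x:80,...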
  • Inspect the Service
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE     SELECTOR
kubernetes   ClusterIP   10.1.0.1     <none>        443/TCP   3d3h    <none>
my-svc       ClusterIP   10.1.5.238   <none>        500/TCP   4m14s   app=myapp,release=stable
  • Test access
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# curl 10.1.5.238:500
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
  • Edit the Service manifest and change the app selector to a label that no pod has, e.g. app_not_exist, then re-apply
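The edited part of the manifest would look like this (a sketch; app_not_exist is just the placeholder label from above):

spec:
  type: ClusterIP
  selector:
    app: app_not_exist   # no pod carries this label, so the Service matches nothing
    release: stable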
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# kubectl apply -f p32_svc_demo.yaml
service/my-svc configured
  • Test connectivity after the change: the Service no longer responds. Checking the ipvs rules shows the forwarding entries for this Service are gone (the virtual server 10.1.5.238:500 below has no backends).
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# curl 10.1.5.238:500
curl: (7) Failed to connect to 10.1.5.238 port 500: Connection refused
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.0.1:443 rr
  -> 172.16.13.128:6443           Masq    1      0          0
TCP  10.1.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0
  -> 10.244.0.3:53                Masq    1      0          0
TCP  10.1.0.10:9153 rr
  -> 10.244.0.2:9153              Masq    1      0          0
  -> 10.244.0.3:9153              Masq    1      0          0
TCP  10.1.5.238:500 rr
UDP  10.1.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0
  -> 10.244.0.3:53                Masq    1      0          0
  • After reverting the selector, access succeeds again and the pod backends reappear in the ipvs table.
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# kubectl apply -f p32_svc_demo.yaml
service/my-svc configured
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# curl 10.1.5.238:500
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.0.1:443 rr
  -> 172.16.13.128:6443           Masq    1      0          0
TCP  10.1.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0
  -> 10.244.0.3:53                Masq    1      0          0
TCP  10.1.0.10:9153 rr
  -> 10.244.0.2:9153              Masq    1      0          0
  -> 10.244.0.3:9153              Masq    1      0          0
TCP  10.1.5.238:500 rr
  -> 10.244.1.138:80              Masq    1      0          0
  -> 10.244.1.139:80              Masq    1      0          0
  -> 10.244.3.159:80              Masq    1      0          1
UDP  10.1.0.10:53 rr
  -> 10.244.0.2:53                Masq    1      0          0
  -> 10.244.0.3:53                Masq    1      0          0


root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP             NODE                 NOMINATED NODE   READINESS GATES
myapp-deploy-5cd54797f8-647jg   1/1     Running   0          3m44s   10.244.1.139   jjh-k8s-demo-node1   <none>           <none>
myapp-deploy-5cd54797f8-c2wvd   1/1     Running   0          3m44s   10.244.1.138   jjh-k8s-demo-node1   <none>           <none>
myapp-deploy-5cd54797f8-nrrpw   1/1     Running   0          3m44s   10.244.3.159   jjh-k8s-node-2       <none>           <none>
  • Testing which pod answers each request shows the rr (round-robin) load-balancing policy at work; a sketch for changing the scheduler follows this transcript.
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# curl 10.1.5.238:500/hostname.html
myapp-deploy-5cd54797f8-647jg
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# curl 10.1.5.238:500/hostname.html
myapp-deploy-5cd54797f8-c2wvd
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# curl 10.1.5.238:500/hostname.html
myapp-deploy-5cd54797f8-nrrpw
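rr is the default ipvs scheduler. To use a different policy, kube-proxy's ipvs scheduler can be set in its configuration; a sketch, assuming a kubeadm cluster where this lives in the kube-proxy ConfigMap in kube-system and kube-proxy is restarted afterwards:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "wrr"   # e.g. weighted round-robin instead of the default rr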

Headless Service

This is a special kind of ClusterIP Service. Sometimes you neither need nor want load balancing and a separate Service IP. In that case you can create a headless Service by setting spec.clusterIP to "None". No cluster IP is allocated for such Services, kube-proxy does not handle them, and the platform performs no load balancing or routing for them.

  • Manifest
apiVersion: v1
kind: Service
metadata:
  name: myapp-headless
  namespace: default
spec:
  selector:
    app: myapp
  clusterIP: "None"    # headless: DNS resolves straight to the pod IPs
  ports:
    - port: 80
      targetPort: 80
  • Deploy
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# kubectl apply -f p32_svc_headless_demo.yaml
service/myapp-headless created
  • Query DNS for the headless Service with dig, pointing it at a CoreDNS pod IP. The A records resolve directly to the backing pods:
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# kubectl get pods -n kube-system -o wide
NAME                                          READY   STATUS    RESTARTS   AGE   IP              NODE                  NOMINATED NODE   READINESS GATES
coredns-65c54cc984-smdtc                      1/1     Running   0          12d   10.244.0.2      jjh-k8s-demo-master   <none>           <none>
coredns-65c54cc984-tcbfb                      1/1     Running   0          12d   10.244.0.3      jjh-k8s-demo-master   <none>           <none>
etcd-jjh-k8s-demo-master                      1/1     Running   0          12d   172.16.13.128   jjh-k8s-demo-master   <none>           <none>
kube-apiserver-jjh-k8s-demo-master            1/1     Running   0          12d   172.16.13.128   jjh-k8s-demo-master   <none>           <none>
kube-controller-manager-jjh-k8s-demo-master   1/1     Running   0          12d   172.16.13.128   jjh-k8s-demo-master   <none>           <none>
kube-flannel-ds-6m5mp                         1/1     Running   0          12d   172.16.13.128   jjh-k8s-demo-master   <none>           <none>
kube-flannel-ds-lmrw9                         1/1     Running   1          12d   172.16.13.127   jjh-k8s-demo-node1    <none>           <none>
kube-flannel-ds-nbjm7                         1/1     Running   2          32h   172.16.13.254   jjh-k8s-node-2        <none>           <none>
kube-proxy-cwp2v                              1/1     Running   1          9d    172.16.13.127   jjh-k8s-demo-node1    <none>           <none>
kube-proxy-n56nk                              1/1     Running   3          32h   172.16.13.254   jjh-k8s-node-2        <none>           <none>
kube-proxy-nmslm                              1/1     Running   0          9d    172.16.13.128   jjh-k8s-demo-master   <none>           <none>
kube-scheduler-jjh-k8s-demo-master            1/1     Running   0          12d   172.16.13.128   jjh-k8s-demo-master   <none>           <none>
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# dig -t A  myapp-headless.default.svc.cluster.local. @10.244.0.2

; <<>> DiG 9.16.1-Ubuntu <<>> -t A myapp-headless.default.svc.cluster.local. @10.244.0.2
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27790
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 49bf867f5f0813be (echoed)
;; QUESTION SECTION:
;myapp-headless.default.svc.cluster.local. IN A

;; ANSWER SECTION:
myapp-headless.default.svc.cluster.local. 30 IN	A 10.244.1.139
myapp-headless.default.svc.cluster.local. 30 IN	A 10.244.3.159
myapp-headless.default.svc.cluster.local. 30 IN	A 10.244.1.138

;; Query time: 4 msec
;; SERVER: 10.244.0.2#53(10.244.0.2)
;; WHEN: Tue Mar 22 15:05:39 UTC 2022
;; MSG SIZE  rcvd: 249

root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP             NODE                 NOMINATED NODE   READINESS GATES
myapp-deploy-5cd54797f8-647jg   1/1     Running   0          18m   10.244.1.139   jjh-k8s-demo-node1   <none>           <none>
myapp-deploy-5cd54797f8-c2wvd   1/1     Running   0          18m   10.244.1.138   jjh-k8s-demo-node1   <none>           <none>
myapp-deploy-5cd54797f8-nrrpw   1/1     Running   0          18m   10.244.3.159   jjh-k8s-node-2       <none>           <none>
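The same lookup can also go through the cluster DNS Service IP (10.1.0.10 in this cluster, per the ipvsadm output above) instead of a CoreDNS pod IP, or be run from inside the cluster with a throwaway pod (busybox:1.28 is the image the Kubernetes docs suggest for nslookup debugging):

dig -t A myapp-headless.default.svc.cluster.local. @10.1.0.10
kubectl run dns-test -it --rm --restart=Never --image=busybox:1.28 -- \
  nslookup myapp-headless.default.svc.cluster.local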

NodePort

Opens a port on every node; traffic hitting that port is forwarded on to the Service's pods.

  • Manifest
apiVersion: v1
kind: Service
metadata:
  name: mysvc-node-port
  namespace: default
spec:
  type: NodePort
  selector:
    app: myapp
    release: stable
  ports:
    - name: http
      port: 80
      targetPort: 80
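No nodePort is set here, so Kubernetes allocates one from the default 30000-32767 range. To pin a fixed port instead, add a nodePort field (a sketch; 30080 is an arbitrary in-range choice):

  ports:
    - name: http
      port: 80          # still reachable inside the cluster via the ClusterIP
      targetPort: 80    # container port
      nodePort: 30080   # fixed node port; must fall inside the configured range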
  • Create the Service; the node port allocated for external access is 32477.
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# kubectl apply -f p32_svc_node_port_demo.yaml
service/mysvc-node-port created
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# kubectl get svc
NAME              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes        ClusterIP   10.1.0.1      <none>        443/TCP        3d4h
my-svc            ClusterIP   10.1.5.238    <none>        500/TCP        34m
myapp-headless    ClusterIP   None          <none>        80/TCP         11m
mysvc-node-port   NodePort    10.1.50.255   <none>        80:32477/TCP   51s
  • Try to access it. Since I am running on cloud VMs, I also had to open this port in the console's security group; after that, access from a browser succeeds (screenshots omitted). A command-line check is sketched after the netstat output below.

  • Every node listens on that port; checking with netstat, each node shows similar output:
netstat -anpt |grep :32477
tcp        2      0 0.0.0.0:32477           0.0.0.0:*               LISTEN      1439834/kube-proxy
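From the command line, the same test works against any node's IP, not only a node hosting a pod (a sketch using this cluster's internal node IPs from the pod listing above; from outside the cloud network you would use the nodes' public IPs instead):

curl http://172.16.13.127:32477/hostname.html   # jjh-k8s-demo-node1
curl http://172.16.13.254:32477/hostname.html   # jjh-k8s-node-2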

LoadBalancer

NodePort works by opening a port on each node and steering traffic arriving there, via the rules kube-proxy maintains, on to the matching pods. A LoadBalancer Service builds on this: the cloud provider provisions an external load balancer in front of the node ports, which is a paid service. A minimal manifest sketch follows.
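A minimal manifest sketch (mysvc-lb is a made-up name; on a cluster without a cloud controller manager, like this one, the EXTERNAL-IP would stay <pending>):

apiVersion: v1
kind: Service
metadata:
  name: mysvc-lb
  namespace: default
spec:
  type: LoadBalancer   # asks the cloud provider to provision an external LB
  selector:
    app: myapp
    release: stable
  ports:
    - name: http
      port: 80
      targetPort: 80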

ExternalName

  • Manifest
apiVersion: v1
kind: Service
metadata:
  name: mysvc-external-name-to-sohu
  namespace: default
spec:
  type: ExternalName
  externalName: www.sohu.com   # cluster DNS answers with a CNAME to this name
  • Check the result with dig
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# dig -t A  mysvc-external-name-to-sohu.default.svc.cluster.local. @10.244.0.2

; <<>> DiG 9.16.1-Ubuntu <<>> -t A mysvc-external-name-to-sohu.default.svc.cluster.local. @10.244.0.2
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 61061
;; flags: qr aa rd; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: a2b8ece21d67fd70 (echoed)
;; QUESTION SECTION:
;mysvc-external-name-to-sohu.default.svc.cluster.local. IN A

;; ANSWER SECTION:
mysvc-external-name-to-sohu.default.svc.cluster.local. 30 IN CNAME www.sohu.com.
www.sohu.com.		30	IN	CNAME	gs.a.sohu.com.
gs.a.sohu.com.		30	IN	CNAME	fbx.a.sohu.com.
fbx.a.sohu.com.		30	IN	A	123.126.104.68

;; Query time: 40 msec
;; SERVER: 10.244.0.2#53(10.244.0.2)
;; WHEN: Wed Mar 23 03:27:59 UTC 2022
;; MSG SIZE  rcvd: 283
  • Exec into a pod and test from there. Both nslookup and ping succeed, and resolving SVC_NAME.NAMESPACE.svc.cluster.local gives the same result as querying www.sohu.com directly. (The stray word "ping" in the second nslookup below is a typo in the recorded session; the name still resolves.)
root@jjh-k8s-demo-master:~/k8s_yaml/bzhan_shangguigu# kubectl exec myapp-deploy-5cd54797f8-c2wvd -it -- /bin/sh
 # nslookup www.sohu.com
nslookup: can't resolve '(null)': Name does not resolve

Name:      www.sohu.com
Address 1: 123.126.104.68
Address 2: 2408:80f0:4100:4007::4
Address 3: 2408:80f0:4100:4007::5

# nslookup ping mysvc-external-name-to-sohu.default.svc.cluster.local
Server:    123.126.104.68
Address 1: 123.126.104.68

nslookup: can't resolve 'ping': Name does not resolve

 # ping mysvc-external-name-to-sohu.default.svc.cluster.local.
PING mysvc-external-name-to-sohu.default.svc.cluster.local. (123.126.104.68): 56 data bytes
64 bytes from 123.126.104.68: seq=0 ttl=50 time=4.265 ms
64 bytes from 123.126.104.68: seq=1 ttl=50 time=4.129 ms

Note

When the hostname SVC_NAME.NAMESPACE.svc.cluster.local is looked up, the cluster's DNS service returns the corresponding CNAME record. Accessing such a Service works the same way as for any other Service; the only difference is that the redirection happens at the DNS layer, and no proxying or forwarding takes place.
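Because no cluster IP is allocated and no forwarding rules are written, the Service object only stores the CNAME target, which kubectl shows in the EXTERNAL-IP column (a quick check; the exact output shape may vary by version):

kubectl get svc mysvc-external-name-to-sohu
# TYPE is ExternalName, no CLUSTER-IP is assigned, EXTERNAL-IP shows www.sohu.com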