Use Karmada to distribute resources across clusters, and rely on Submariner's service-discovery rules to achieve locality-aware (local-cluster-first) load balancing for an application.
Experiment Environment
1. Karmada has two member clusters joined: k8s-node4 and k8s-node5. Below, k8s-node4 is referred to as cluster A and k8s-node5 as cluster B.
[root@localhost ~]# k get cluster
NAME        VERSION   MODE   READY   AGE
k8s-node4   v1.23.7   Push   True    2d20h
k8s-node5   v1.23.7   Push   True    2d20h
2. Both k8s-node4 and k8s-node5 have the Submariner network add-on installed for cross-cluster communication.
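Before testing, it can be useful to confirm the Submariner dataplane is healthy. A sketch, assuming `subctl` is installed and the kubeconfig path is a placeholder for your member cluster's kubeconfig:

```shell
# Check inter-cluster tunnel status from one member cluster;
# the peer cluster's gateway should report "connected".
subctl show connections --kubeconfig /path/to/k8s-node4.kubeconfig
```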
Approach
On the Karmada control plane:
- Create an Nginx Deployment with 1 replica, distributed to clusters A and B using the Duplicated replica-scheduling mode.
- Create a ClusterIP Service exposing the Nginx application, distributed to clusters A and B.
- Create a ServiceExport with the same name as the Service, distributed to clusters A and B.
- On cluster A, send requests to the Nginx application. Traffic will always be served by the Nginx instance in cluster A (the local cluster).
Deployment Steps
Deploy Nginx
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: nginx-test
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: daocloud.io/atsctoo/nginx-unprivileged:stable-alpine
        name: nginx
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
  namespace: nginx-test
spec:
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
    name: nginx
    namespace: nginx-test
  placement:
    clusterAffinity:
      clusterNames:
      - k8s-node4
      - k8s-node5
    replicaScheduling:
      replicaSchedulingType: Duplicated
```
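After applying the manifests on the Karmada control plane, you can check that the Deployment was duplicated to both member clusters. A sketch; the manifest filename and kubeconfig paths are placeholders:

```shell
# Apply against the karmada-apiserver context:
kubectl create namespace nginx-test
kubectl apply -f nginx-deployment.yaml

# Each member cluster should run its own replica:
kubectl --kubeconfig /path/to/k8s-node4.kubeconfig -n nginx-test get deploy nginx
kubectl --kubeconfig /path/to/k8s-node5.kubeconfig -n nginx-test get deploy nginx
```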
Deploy the Service
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: nginx-test
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: nginx
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-service-propagation
  namespace: nginx-test
spec:
  placement:
    clusterAffinity:
      clusterNames:
      - k8s-node4
      - k8s-node5
  resourceSelectors:
  - apiVersion: v1
    kind: Service
    name: nginx
    namespace: nginx-test
```
Deploy the ServiceExport
```yaml
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: nginx
  namespace: nginx-test
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-serviceexport-propagation
  namespace: nginx-test
spec:
  placement:
    clusterAffinity:
      clusterNames:
      - k8s-node4
      - k8s-node5
  resourceSelectors:
  - apiVersion: multicluster.x-k8s.io/v1alpha1
    kind: ServiceExport
    name: nginx
    namespace: nginx-test
```
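Once the ServiceExport reaches the member clusters, the Submariner Lighthouse agent should create a corresponding ServiceImport. A sketch for verifying this on a member cluster; the kubeconfig path is a placeholder:

```shell
# ServiceImport objects are created by the Lighthouse agent,
# not by the user; the exported nginx Service should appear here.
kubectl --kubeconfig /path/to/k8s-node5.kubeconfig get serviceimports -A
```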
Verification
- On cluster B, run the nettest image:
$ kubectl run -n nginx-test tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash
- In the nettest shell, issue a couple of requests (the HTML response bodies are omitted here):
bash-5.0# date
Thu Jul 7 07:42:07 UTC 2022
bash-5.0# curl nginx.nginx-test.svc.clusterset.local:8080
bash-5.0# curl nginx.nginx-test.svc.clusterset.local:8080
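You can also query DNS directly from inside the nettest pod to see which ClusterIP the clusterset name resolves to (assuming `nslookup` is available in the image):

```shell
# Inside the tmp-shell pod on cluster B; since cluster B has a
# local export, this should resolve to cluster B's ClusterIP.
nslookup nginx.nginx-test.svc.clusterset.local
```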
- Check the Nginx logs on cluster B to confirm that the traffic landed on the local (cluster B) instance:
[root@k8s-node5 ~]# kubectl logs nginx-7494bf958f-dpp2t -n nginx-test
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf differs from the packaged version
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/07/07 07:20:26 [notice] 1#1: using the "epoll" event method
2022/07/07 07:20:26 [notice] 1#1: nginx/1.22.0
2022/07/07 07:20:26 [notice] 1#1: built by gcc 11.2.1 20220219 (Alpine 11.2.1_git20220219)
2022/07/07 07:20:26 [notice] 1#1: OS: Linux 3.10.0-1127.el7.x86_64
2022/07/07 07:20:26 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/07/07 07:20:26 [notice] 1#1: start worker processes
2022/07/07 07:20:26 [notice] 1#1: start worker process 31
2022/07/07 07:20:26 [notice] 1#1: start worker process 32
2022/07/07 07:20:26 [notice] 1#1: start worker process 33
2022/07/07 07:20:26 [notice] 1#1: start worker process 34
10.150.108.64 - - [07/Jul/2022:07:42:11 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.69.1" "-"
10.150.108.64 - - [07/Jul/2022:07:42:14 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.69.1" "-"
Quoting the Official Documentation
Service Discovery for Services Deployed to Multiple Clusters
Submariner follows this logic for service discovery across the cluster set:
- If an exported Service is not available in the local cluster, Lighthouse DNS returns the IP address of the ClusterIP Service from one of the remote clusters on which the Service was exported. If it is an SRV query, an SRV record with port and domain name corresponding to the ClusterIP will be returned.
- If an exported Service is available in the local cluster, Lighthouse DNS always returns the IP address of the local ClusterIP Service.
- If multiple clusters export a Service with the same name and from the same namespace, Lighthouse DNS load-balances between the clusters in a round-robin fashion. Note that Lighthouse returns IPs from connected clusters only. Clusters in disconnected state are ignored.
- Applications can always access a Service from a specific cluster by prefixing the DNS query with the cluster-id as follows: `<cluster-id>.<svc-name>.<namespace>.svc.clusterset.local`.
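The resolution rules above can be sketched as a small model in Python. This is a simplified illustration, not Submariner's actual implementation; the cluster names and ClusterIPs are made up:

```python
from itertools import cycle

class LighthouseModel:
    """Toy model of Lighthouse DNS resolution for one exported Service.

    exports maps cluster-id -> ClusterIP of the exported Service in that
    cluster; connected is the set of clusters reachable from local_cluster.
    """

    def __init__(self, exports, connected, local_cluster):
        self.exports = exports
        self.local = local_cluster
        # Round-robin only over connected remote exporters (rule 3):
        # disconnected clusters are ignored.
        self._rr = cycle(sorted(c for c in exports
                                if c in connected and c != local_cluster))

    def resolve(self):
        # Rule 2: a local export always wins.
        if self.local in self.exports:
            return self.exports[self.local]
        # Rules 1 and 3: round-robin over connected remote exporters.
        return self.exports[next(self._rr)]

# Cluster B ("k8s-node5") exports the Service locally,
# so resolution is always local -> locality-aware load balancing.
lh = LighthouseModel(
    exports={"k8s-node4": "10.96.0.10", "k8s-node5": "10.97.0.20"},
    connected={"k8s-node4", "k8s-node5"},
    local_cluster="k8s-node5",
)
print(lh.resolve())  # always cluster B's ClusterIP

# A cluster with no local export round-robins over the remote ones.
lh2 = LighthouseModel(
    exports={"k8s-node4": "10.96.0.10", "k8s-node5": "10.97.0.20"},
    connected={"k8s-node4", "k8s-node5"},
    local_cluster="k8s-node6",
)
print(lh2.resolve(), lh2.resolve())  # alternates between the two ClusterIPs
```

This mirrors why, in the verification step, curl from cluster B always hit cluster B's Nginx: the local export short-circuits the round-robin.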