KIND
KIND (Kubernetes in Docker) is a tool for running local Kubernetes clusters. KIND uses Docker containers to simulate Kubernetes cluster nodes, including multi-node clusters, which makes it a free, fast, and lightweight way to build and test Kubernetes applications.
What is Ingress
Ingress is an API object that manages external access to the services in a cluster, typically over HTTP. Ingress can provide load balancing, SSL termination, and name-based virtual hosting. The Ingress type is defined as follows:
```go
// https://github1s.com/kubernetes/kubernetes/blob/master/pkg/apis/networking/types.go
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// Ingress is a collection of rules that allow inbound connections to reach the
// endpoints defined by a backend. An Ingress can be configured to give services
// externally-reachable urls, load balance traffic, terminate SSL, offer name
// based virtual hosting etc.
type Ingress struct {
	metav1.TypeMeta
	// Standard object's metadata.
	// More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
	// +optional
	metav1.ObjectMeta
	// spec is the desired state of the Ingress.
	// More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
	// +optional
	Spec IngressSpec
	// status is the current state of the Ingress.
	// More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
	// +optional
	Status IngressStatus
}
```
Ingress controller
An Ingress controller is a controller that watches Ingress resources. For an Ingress resource to work, the cluster must have an Ingress controller running. Unlike other types of controllers, which run as part of the kube-controller-manager binary, Ingress controllers are not started automatically with a cluster. You can pick the Ingress controller implementation that best fits your cluster from the list in the Kubernetes documentation.
The Kubernetes project currently supports and maintains the AWS, GCE, and NGINX Ingress controllers. Typical third-party implementations include the APISIX and Kong Ingress controllers.
This article walks through installing an Ingress controller as a way to introduce the whole process of installing and using KIND, including:
- Creating a cluster
- Deploying the Kong Ingress controller
Creating a cluster
We apply the following settings when creating the cluster:
- Use KIND's extraPortMapping config option to forward ports from the host to the Ingress controller running on a node. This lets the local host reach the ingress controller by sending requests to ports 80/443.
- Use node-labels in the kubeadm InitConfiguration to set a custom node label for the Ingress controller's nodeSelector to match. The label pins the Ingress controller to nodes that carry it.
Create the cluster from a config file

```shell
cat <<EOF | sudo kind create cluster --name test --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
EOF
```
Update the kubeconfig

```shell
sudo kind get kubeconfig --name test > ~/.kube/config

kubectl cluster-info
# Kubernetes control plane is running at https://127.0.0.1:46219
# CoreDNS is running at https://127.0.0.1:46219/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
#
# To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
Install the Kong Ingress Controller

```shell
kubectl apply -f https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/v2.12.0/deploy/single/all-in-one-dbless.yaml
```

We then patch the deployment so that hostPorts forward requests on 80/443 to the Ingress controller, add tolerations, and schedule the Ingress controller onto the node with the custom label.
```json
{
  "spec": {
    "replicas": 1,
    "template": {
      "spec": {
        "containers": [
          {
            "name": "proxy",
            "ports": [
              { "containerPort": 8000, "hostPort": 80, "name": "proxy-tcp", "protocol": "TCP" },
              { "containerPort": 8443, "hostPort": 443, "name": "proxy-ssl", "protocol": "TCP" }
            ]
          }
        ],
        "nodeSelector": { "ingress-ready": "true" },
        "tolerations": [
          { "key": "node-role.kubernetes.io/control-plane", "operator": "Equal", "effect": "NoSchedule" },
          { "key": "node-role.kubernetes.io/master", "operator": "Equal", "effect": "NoSchedule" }
        ]
      }
    }
  }
}
```
Apply the patch to the deployment

```shell
kubectl patch deployment -n kong proxy-kong -p '{"spec":{"replicas":1,"template":{"spec":{"containers":[{"name":"proxy","ports":[{"containerPort":8000,"hostPort":80,"name":"proxy-tcp","protocol":"TCP"},{"containerPort":8443,"hostPort":443,"name":"proxy-ssl","protocol":"TCP"}]}],"nodeSelector":{"ingress-ready":"true"},"tolerations":[{"key":"node-role.kubernetes.io/control-plane","operator":"Equal","effect":"NoSchedule"},{"key":"node-role.kubernetes.io/master","operator":"Equal","effect":"NoSchedule"}]}}}}'
```
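The one-line JSON passed to `kubectl patch` is just the compact serialization of the patch shown above. A small Python sketch that builds and serializes it (handy when a hand-edited one-liner picks up mistakes such as `8e3` instead of `8000`):

```python
import json

# The strategic-merge patch from above, expressed as a Python dict.
patch = {
    "spec": {
        "replicas": 1,
        "template": {
            "spec": {
                "containers": [{
                    "name": "proxy",
                    "ports": [
                        {"containerPort": 8000, "hostPort": 80, "name": "proxy-tcp", "protocol": "TCP"},
                        {"containerPort": 8443, "hostPort": 443, "name": "proxy-ssl", "protocol": "TCP"},
                    ],
                }],
                "nodeSelector": {"ingress-ready": "true"},
                "tolerations": [
                    {"key": "node-role.kubernetes.io/control-plane", "operator": "Equal", "effect": "NoSchedule"},
                    {"key": "node-role.kubernetes.io/master", "operator": "Equal", "effect": "NoSchedule"},
                ],
            }
        },
    }
}

# Compact one-liner suitable for `kubectl patch ... -p '<json>'`.
print(json.dumps(patch, separators=(",", ":")))
```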
Change the service type to NodePort:

```shell
kubectl patch service -n kong kong-proxy -p '{"spec":{"type":"NodePort"}}'
```
Set the Ingress's ingressClassName to kong so that the Kong ingress controller can manage it (this applies to the example-ingress created later in this article):

```shell
kubectl patch ingress example-ingress -p '{"spec":{"ingressClassName":"kong"}}'
```
Using Ingress
The example below creates two echo services and adds them to the Ingress path routing; traffic matching a path prefix is forwarded to the corresponding service. Create a Pod named foo-app
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: foo-app
  labels:
    app: foo
spec:
  containers:
  - command:
    - /agnhost
    - netexec
    - --http-port
    - "8080"
    image: e2eteam/agnhost:2.26
    name: foo-app
```
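The pods run agnhost netexec, an HTTP test utility that, among other things, answers `/hostname` with the pod's hostname. A rough, simplified Python sketch of that behavior (not the real agnhost implementation, just an illustration of what the demo app does):

```python
import http.server
import socket
import threading
import urllib.request

class NetexecHandler(http.server.BaseHTTPRequestHandler):
    """Simplified sketch of agnhost netexec's HTTP behavior."""
    def do_GET(self):
        if self.path == "/hostname":
            # netexec reports the hostname here; inside a pod that is the pod name
            body = socket.gethostname().encode()
        else:
            # other paths return diagnostic output (netexec's root path prints a timestamp)
            body = b"NOW: ..."
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to an ephemeral port so the sketch runs anywhere
server = http.server.HTTPServer(("127.0.0.1", 0), NetexecHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

resp = urllib.request.urlopen(f"http://127.0.0.1:{port}/hostname").read().decode()
print(resp)  # the machine's hostname, just as a pod would report its own name
server.shutdown()
```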
Create a Service named foo-service
```yaml
---
kind: Service
apiVersion: v1
metadata:
  name: foo-service
spec:
  selector:
    app: foo
  ports:
  # Default port used by the image
  - port: 8080
---
```
Create a Pod named bar-app
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: bar-app
  labels:
    app: bar
spec:
  containers:
  - command:
    - /agnhost
    - netexec
    - --http-port
    - "8080"
    image: e2eteam/agnhost:2.26
    name: bar-app
```
Create a Service named bar-service
```yaml
---
kind: Service
apiVersion: v1
metadata:
  name: bar-service
spec:
  selector:
    app: bar
  ports:
  # Default port used by the image
  - port: 8080
```
Create the Ingress, with a routing path for each of foo-service and bar-service
```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - pathType: Prefix
        path: /foo(/|$)(.*)
        backend:
          service:
            name: foo-service
            port:
              number: 8080
      - pathType: Prefix
        path: /bar(/|$)(.*)
        backend:
          service:
            name: bar-service
            port:
              number: 8080
```
- The Ingress matches requests with the prefixes /foo and /bar through two routes.
- Requests prefixed with /foo are forwarded to foo-service, and foo-service forwards them on port 8080 to the foo-app pod.
- Requests prefixed with /bar are forwarded to bar-service, and bar-service forwards them on port 8080 to the bar-app pod.
The overall traffic flow looks like this:

```
user request with prefix /foo --> example-ingress --> foo-service --> foo-app
user request with prefix /bar --> example-ingress --> bar-service --> bar-app
```
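The `nginx.ingress.kubernetes.io/rewrite-target: /$2` annotation rewrites the matched path to the second capture group of the path regex, so `/foo/hostname` reaches the backend as `/hostname`. (Note that this annotation is specific to the NGINX ingress controller; Kong uses its own annotations such as `konghq.com/strip-path`.) A small Python sketch of the rewrite semantics:

```python
import re

# The Ingress path regex for the /foo route; /bar works the same way.
foo_path = re.compile(r"/foo(/|$)(.*)")

def rewrite(request_path: str) -> str:
    """Apply rewrite-target /$2: keep only the second capture group."""
    m = foo_path.fullmatch(request_path)
    if not m:
        return request_path   # no match: the route does not apply
    return "/" + m.group(2)   # rewrite-target: /$2

print(rewrite("/foo/hostname"))  # -> /hostname
print(rewrite("/foo"))           # -> /
print(rewrite("/bar/hostname"))  # unchanged: handled by the /bar route instead
```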
Apply the ingress routing configuration

```shell
kubectl apply -f ingress-sample.yaml
```
Access the pods through the Ingress
Test by sending requests with curl:

```shell
# should output "foo-app"
curl localhost/foo/hostname
# NOW: 2024-03-19 02:39:59.736679574 +0000 UTC m=+3073.537368749

# should output "bar-app"
curl localhost/bar/hostname
# NOW: 2024-03-19 02:39:37.509405736 +0000 UTC m=+3051.259002000
```