1. NodePort
Running on a cloud server.
Prepare three YAML files (based on a post by Java4ye: juejin.cn/post/717537…):
- ns.yaml (creates the Namespace)
apiVersion: v1
kind: Namespace
metadata:
  name: test-ns
- deployment.yaml (creates the Deployment)
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-dep
  name: my-dep
  namespace: test-ns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-dep
  template:
    metadata:
      labels:
        app: my-dep
    spec:
      containers:
      - image: nginx
        name: nginx
- svc.yaml (creates the Service)
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep
  namespace: test-ns
spec:
  selector:
    app: my-dep
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
  type: NodePort
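Since `nodePort` is not set above, the API server auto-assigns one from the 30000-32767 range. If a fixed port is wanted, it can be pinned in the spec — a sketch of that variant, assuming the chosen port is free on every node:

```yaml
# Hypothetical variant of the svc.yaml spec section pinning the node port
# instead of letting the API server auto-assign it.
spec:
  selector:
    app: my-dep
  ports:
  - port: 8000        # ClusterIP port inside the cluster
    targetPort: 80    # container port
    nodePort: 32127   # fixed; must lie in the 30000-32767 range
    protocol: TCP
  type: NodePort
```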
Apply the files in order:
[root@VM-12-5-centos ~]# kubectl apply -f ns.yaml
namespace/test-ns created
[root@VM-12-5-centos ~]# kubectl apply -f deployment.yaml
deployment.apps/my-dep created
[root@VM-12-5-centos ~]# kubectl apply -f svc.yaml
service/my-dep created
Check the Pods and the Service (svc):
[root@VM-12-5-centos ~]# kubectl get pods -n test-ns -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-dep-55464574c7-9xxcp 1/1 Running 0 4m5s 172.17.0.4 minikube-m03 <none> <none>
my-dep-55464574c7-mfdxp 1/1 Running 0 4m5s 172.17.0.2 minikube-m03 <none> <none>
my-dep-55464574c7-tsxbt 1/1 Running 0 4m5s 172.17.0.5 minikube-m03 <none> <none>
[root@VM-12-5-centos ~]# kubectl get svc -n test-ns -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
my-dep NodePort 10.102.76.52 <none> 8000:32127/TCP 8m5s app=my-dep
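In the PORT(S) column, `8000:32127/TCP` pairs the Service port with the auto-assigned NodePort; the second number is what is reachable on each node. A small sketch of extracting it from that column value (the sample string is taken from the output above):

```shell
# Extract the NodePort from a kubectl PORT(S) value like "8000:32127/TCP".
portspec="8000:32127/TCP"
nodeport="${portspec#*:}"      # drop the service port -> "32127/TCP"
nodeport="${nodeport%%/*}"     # drop the protocol     -> "32127"
echo "$nodeport"
# The service is then reachable at http://<node-ip>:<nodeport>,
# e.g. curl "http://$(minikube ip):$nodeport" on the host.
```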
Access the svc from inside minikube:
If access fails with a network error like the one shown, check the kube-proxy logs:
[root@VM-12-5-centos ~]# kubectl get pod -n kube-system
[root@VM-12-5-centos ~]# kubectl logs kube-proxy-vzrr9 -n kube-system
Because this is a multi-node cluster and ipvs forwarding was not configured, the Service could not reach the Pods.
To address the performance problems of the iptables mode in large clusters, Kubernetes v1.8 introduced the ipvs mode. ipvs consists of two parts: the user-space management tool ipvsadm and the ipvs module that runs in the kernel. Since ipvsadm is not installed here, the kube-proxy configuration is modified directly instead.
As an aside, ipvs supports three forwarding modes: tunnel (ipip), direct routing, and NAT; kube-proxy works in NAT mode. With the ipvsadm tool, the equivalent setup looks like this:
# Create the virtual server
$ ipvsadm -A -t 10.102.76.52:8000 -s rr
# Add the real servers
$ ipvsadm -a -t 10.102.76.52:8000 -r 172.17.0.4:80 -m
$ ipvsadm -a -t 10.102.76.52:8000 -r 172.17.0.2:80 -m
$ ipvsadm -a -t 10.102.76.52:8000 -r 172.17.0.5:80 -m
# Add the NAT rule
$ iptables -t nat -A POSTROUTING -m ipvs --vaddr 10.102.76.52 --vport 8000 -j MASQUERADE
# Bind the VIP to a dummy interface
$ ip link add ipvs0 type dummy
$ ip addr add 10.102.76.52/32 dev ipvs0
# Enable conntrack
$ echo 1 > /proc/sys/net/ipv4/vs/conntrack
With the commands above, the host load-balances access across the three backend Pods. kube-proxy obtains all Service and Endpoint information from kube-apiserver and creates the corresponding ipvs virtual and real servers on every node.
Back to this setup: enable ipvs directly through the kube-proxy ConfigMap by setting mode to "ipvs".
[root@VM-12-10-centos ~]# kubectl edit cm kube-proxy -n kube-system
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  strictARP: false
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
[root@VM-12-5-centos ~]# cat /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
Save, exit, make the script executable, and run it:
[root@VM-12-5-centos ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
If the following error appears, it is because newer CentOS kernels replaced nf_conntrack_ipv4 with nf_conntrack, so that module can no longer be loaded.
Edit the script again:
[root@VM-12-5-centos ~]# vi /etc/sysconfig/modules/ipvs.modules
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
Save and exit, then reload the modules and verify:
[root@VM-12-5-centos ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
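The module rename tracks the kernel version: nf_conntrack_ipv4 was folded into nf_conntrack in kernel 4.19. A sketch of picking the right module name from the running kernel (the 4.19 cutoff is the assumption here):

```shell
# Pick nf_conntrack on kernels >= 4.19, nf_conntrack_ipv4 before that.
kernel="$(uname -r)"                 # e.g. "5.15.0-58-generic"
major="${kernel%%.*}"
rest="${kernel#*.}"
minor="${rest%%[!0-9]*}"             # keep only the minor number
if [ "$major" -gt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -ge 19 ]; }; then
  conntrack_mod=nf_conntrack
else
  conntrack_mod=nf_conntrack_ipv4
fi
echo "$conntrack_mod"                # the name to use with modprobe
```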
Delete the existing kube-proxy Pods so they are recreated with the new configuration:
[root@VM-12-5-centos ~]# kubectl get pod -n kube-system | grep kube-proxy |awk '{system("kubectl delete pod "$1" -n kube-system")}'
Check the automatically recreated kube-proxy Pods:
[root@VM-12-5-centos ~]# kubectl get pod -n kube-system
Check kubectl logs again: ipvs is now in effect. The earlier errors may still appear in old log lines, but they no longer affect use.
Re-test with minikube service; access now works.
Access the Pods from inside minikube; this also works:
[root@VM-12-5-centos ~]# minikube ssh
Delete the test Deployment, Pods, and Service. Deleting a Pod alone just makes the Deployment recreate it, so delete the Deployment first.
# List Deployments
[root@VM-12-5-centos ~]# kubectl get deployment -n test-ns
# Delete all Deployments in the namespace
[root@VM-12-5-centos ~]# kubectl delete deployment --all -n test-ns
# Delete a single Deployment
[root@VM-12-5-centos ~]# kubectl delete deployment DEPLOYMENT_NAME -n NAMESPACE_NAME
# List Pods
[root@VM-12-5-centos ~]# kubectl get pod -n test-ns -o wide
# Delete all Pods in the namespace
[root@VM-12-5-centos ~]# kubectl delete pod --all -n test-ns
# Delete a single Pod
[root@VM-12-5-centos ~]# kubectl delete pod POD_NAME -n NAMESPACE_NAME
# List Services
[root@VM-12-5-centos ~]# kubectl get svc -n test-ns -o wide
# With the Pods gone, the Service has nothing to route to; remove it as well
[root@VM-12-5-centos ~]# kubectl delete svc SERVICE_NAME -n NAMESPACE_NAME
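Since every test object here lives in test-ns, the per-object deletes above can also be collapsed: deleting the namespace removes the Deployment, Pods, and Service in one step. A sketch that only prints the command, with a guard against wiping system namespaces:

```shell
# One-shot cleanup: deleting a namespace cascades to everything inside it.
ns="test-ns"
case "$ns" in
  default|kube-system|kube-public|kube-node-lease)
    echo "refusing to delete system namespace: $ns" >&2
    exit 1 ;;
esac
cleanup_cmd="kubectl delete namespace $ns"
echo "$cleanup_cmd"   # run this instead of deleting objects one by one
```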
2. Ingress
Enable the minikube ingress addons:
minikube addons enable ingress
minikube addons enable ingress-dns
Create ns-ingress.yaml with two Pods, two Services, and an Ingress:
kind: Pod
apiVersion: v1
metadata:
  name: foo-app
  labels:
    app: foo
spec:
  containers:
  - name: foo-app
    image: 'kicbase/echo-server:1.0'
---
kind: Service
apiVersion: v1
metadata:
  name: foo-service
spec:
  selector:
    app: foo
  ports:
  - port: 8080
---
kind: Pod
apiVersion: v1
metadata:
  name: bar-app
  labels:
    app: bar
spec:
  containers:
  - name: bar-app
    image: 'kicbase/echo-server:1.0'
---
kind: Service
apiVersion: v1
metadata:
  name: bar-service
spec:
  selector:
    app: bar
  ports:
  - port: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - http:
      paths:
      - pathType: Prefix
        path: /foo
        backend:
          service:
            name: foo-service
            port:
              number: 8080
      - pathType: Prefix
        path: /bar
        backend:
          service:
            name: bar-service
            port:
              number: 8080
---
The manifests above do not manage the Pods with a Deployment; each Service fronts a single Pod. The Services keep the default ClusterIP type, so the only way in from outside is through the Ingress.
# Install: kubectl apply -f nginx-ingress.yaml
# Uninstall: kubectl delete -f nginx-ingress.yaml
The installation may fail with the following error:
Workaround:
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
Reinstalling then succeeds:
[root@VM-12-5-centos ~]# kubectl apply -f ns-ingress.yaml
pod/foo-app created
service/foo-service created
pod/bar-app created
service/bar-service created
ingress.networking.k8s.io/example-ingress created
Check the result:
[root@VM-12-5-centos ~]# kubectl get ingress -A
View the routing rules (it takes a few minutes for the Address field to be populated):
[root@VM-12-5-centos ~]# kubectl describe ingress example-ingress
Test the setup:
Access through the Ingress.
Access the Pods internally via minikube ssh:
[root@VM-12-5-centos ~]# minikube ssh
Adding a host-based rule:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: test-ns
spec:
  rules:
  - host: foss.test
    http:
      paths:
      - pathType: Prefix
        path: /foo
        backend:
          service:
            name: foo-service
            port:
              number: 80
      - pathType: Prefix
        path: /bar
        backend:
          service:
            name: bar-service
            port:
              number: 80
This requires adding the following line to /etc/hosts:
[root@VM-12-5-centos ~]# vi /etc/hosts
192.168.49.2 foss.test
# Reload the network configuration (edits to /etc/hosts normally take effect immediately)
[root@VM-12-5-centos ~]# nmcli c reload
Once configured, visiting foss.test reaches the backend Service directly, with no need to use the raw address.
At this point only foss.test works; with the host rule in place, plain IP access no longer matches.
3. Ingress access with Deployment-managed Pods
- deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-dep
  template:
    metadata:
      labels:
        app: my-dep
    spec:
      containers:
      - image: nginx
        name: nginx
- svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-dep
  name: my-dep
spec:
  selector:
    app: my-dep
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 80
  type: NodePort
Here the Service must be published as NodePort; the default ClusterIP type alone is not enough. A single Service fronts all of the Deployment's Pods.
- ns-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: foss.test
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: my-dep
            port:
              number: 8080
Apply the YAML files:
[root@VM-12-5-centos ~]# kubectl apply -f deployment.yaml
[root@VM-12-5-centos ~]# kubectl apply -f svc.yaml
[root@VM-12-5-centos ~]# kubectl apply -f ns-ingress.yaml
[root@VM-12-5-centos ~]# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
example-ingress nginx foss.test 192.168.49.2 80 10m
[root@VM-12-5-centos ~]# curl foss.test
Once the Ingress is created, the ingress-nginx controller regenerates its nginx.conf with server blocks for foss.test and foss2.test and the corresponding routing rules. To inspect an nginx configuration, exec into the pod:
[root@VM-12-5-centos ~]# kubectl exec -it my-dep-5b7868d854-2mq62 -- /bin/bash
# Inside the container, the nginx configuration lives under /etc/nginx; the main file is nginx.conf.
[root@VM-12-5-centos ~]# cat /etc/nginx/nginx.conf
4. Exposing minikube services to the public internet
Services can be proxied with Nginx, HAProxy, or Traefik: for example, listen on a port and forward based on the request path, with the proxy deployed via Docker on the host.
First generate a basic-auth credential with the httpd image:
[root@VM-12-5-centos ~]# docker run --rm -it --entrypoint /usr/local/apache2/bin/htpasswd httpd:2.4-alpine -nb test test
test:$apr1$ELrgzNdy$LHCzXeGJp.ltDkEcXukI21
Then write docker-compose.yml:
[root@VM-12-5-centos ~]# vi docker-compose.yml
version: '3'
services:
  traefik:
    # The official v2 Traefik docker image
    image: "traefik:v2.9"
    container_name: "traefik"
    restart: always
    # Enables the web UI and tells Traefik to listen to docker
    command:
      - --api=true
      - --api.insecure=true
      - --api.dashboard=true
      - --providers.docker=true
      - --providers.docker.endpoint=unix:///var/run/docker.sock
      - --providers.docker.watch=true
      - --providers.file.directory=/etc/traefik
      # - --providers.file.filename=/etc/traefik/config.yml
      - --log.filePath=/var/log/traefik/traefik.log
      - --log.format=json
      - --log.level=DEBUG
      - --accesslog=true
      - --accesslog.filepath=/var/log/traefik/access.log
      - --accesslog.format=json
      - --entrypoints.http.address=:80
    ports:
      # The HTTP port
      - "8000:80"
      - "443:443"
      # The Web UI (enabled by --api.insecure=true)
      - "8080:8080"
    volumes:
      # So that Traefik can listen to the Docker events
      - /var/run/docker.sock:/var/run/docker.sock
      - /home/traefik/config:/etc/traefik
      - /home/traefik/dynamic:/data/traefik/conif
      # - /home/traefik/config/traefik-config.yml:/etc/traefik/config.yml
      - /home/traefik/logs/traefik:/var/log/traefik
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
    networks:
      - minikube
    extra_hosts:
      - "foss.test:192.168.49.2"
      - "foss2.test:192.168.49.2"
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=minikube"
      - "traefik.http.middlewares.my-dep-stripPrefixRegex.stripprefixregex.regex=/my-dep\\d*"
      - "traefik.http.middlewares.my-basic-auth.basicauth.users=test:$$apr1$$ELrgzNdy$$LHCzXeGJp.ltDkEcXukI21"
      - "traefik.http.middlewares.myHeader1.headers.customrequestheaders.host=foss.test"
      - "traefik.http.middlewares.myHeader2.headers.customrequestheaders.host=foss2.test"
      - "traefik.http.services.my-dep.loadbalancer.servers.url=http://foss.test"
      - "traefik.http.services.my-dep2.loadbalancer.servers.url=http://foss2.test"
      - "traefik.http.routers.router1.middlewares=my-basic-auth, my-dep-stripPrefixRegex, myHeader1"
      - "traefik.http.routers.router1.service=my-dep"
      - "traefik.http.routers.router1.rule=PathPrefix(`/my-dep`) || PathPrefix(`/my-dep{a:/*$$}`)"
      - "traefik.http.routers.router1.entrypoints=http"
      - "traefik.http.routers.router2.middlewares=my-basic-auth, my-dep-stripPrefixRegex, myHeader2"
      - "traefik.http.routers.router2.service=my-dep2"
      - "traefik.http.routers.router2.rule=PathPrefix(`/my-dep2`) || PathPrefix(`/my-dep2{a:/*$$}`)"
      - "traefik.http.routers.router2.entrypoints=http"
  whoami:
    image: "containous/whoami"
    container_name: "simple-service"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`traefik.pbeta.cn`) && Path(`/whoami`)"
    networks:
      - minikube
networks:
  minikube:
    external: true
In docker-compose.yml, every literal "$" has to be escaped as "$$".
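Compose performs its own `$VAR` interpolation, which is why each literal `$` in the htpasswd hash must be doubled before it goes into a label. A sketch of doing the doubling mechanically (the sample hash is the one generated earlier):

```shell
# Double every "$" so docker-compose passes the hash through literally.
hash='$apr1$ELrgzNdy$LHCzXeGJp.ltDkEcXukI21'
escaped=$(printf '%s' "$hash" | sed 's/\$/$$/g')
echo "$escaped"   # -> $$apr1$$ELrgzNdy$$LHCzXeGJp.ltDkEcXukI21
```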
[root@VM-12-5-centos ~]# docker-compose -f docker-compose.yml down
[root@VM-12-5-centos ~]# docker-compose -f docker-compose.yml up -d
Test:
[root@VM-12-5-centos ~]# curl -vvv http://localhost:80/my-dep