Kubernetes (K8s), currently the most popular container-orchestration tool, has gradually become the go-to choice of enterprise operations teams, and K8s-based container deployment schemes keep multiplying. I recently found time to learn it, and the verdict, in two words: it's great.
As a developer at a small company, though, one annoyance surfaces quickly. K8s's NodePort services expose ports only in the default range 30000-32767, so external access means tacking a port number onto the domain or IP, which is ugly. The LoadBalancer alternative depends on a cloud provider's load balancer, which costs real money. So the goal became: reach my services by domain name alone, with no port.
What causes the problem?
Because K8s opens only ports 30000-32767 by default, an externally exposed service never sits on a standard HTTP(S) port like 80 or 443, so requests must carry an explicit port number after the domain or IP. For a programmer with even a mild case of perfectionism, that demands a fix.
Approach
Whatever occupies ports 80 and 443 is a web server, so the natural idea is to put a reverse proxy there and route traffic to the services through it.
Analysis
Two solutions follow from that idea:
1. Use a standalone NGINX server as the reverse proxy
Pros: 1. Simple and fast; knowing how to write an nginx configuration file is enough
2. More general: a standalone nginx is effectively your own load balancer, which keeps later expansion and migration easy
Cons: 1. It sits outside K8s's control, so if anything goes wrong you must monitor and maintain it yourself, which complicates management
2. A standalone deployment means a separate set of configuration to maintain, which gets tedious as it grows
2. Deploy ingress-nginx on K8s as the reverse proxy
Pros: 1. ingress-nginx runs on K8s itself, so K8s can monitor the service directly; the machinery is well integrated
2. A single set of YAML files handles multi-domain deployment; there is no separate nginx configuration to write
Cons: For ingress-nginx to claim ports 80 and 443 it must bind to specific hosts, so the service is tied to those hosts' ports. If the host running ingress-nginx goes down, the pod cannot simply migrate, and you lose Kubernetes's rescheduling safety net.
Implementation
Option 1: the nginx side is straightforward, just a reverse proxy; nginx can be installed natively or run under Docker.
I used Docker.
Create directories on the host for nginx's files:
# mkdir -p /root/docker/nginx/conf.d
# mkdir -p /root/docker/nginx/html
# mkdir -p /root/docker/nginx/logs
Run the docker command:
# docker run -d -v /root/docker/nginx/conf.d/:/etc/nginx/conf.d/ -v /root/docker/nginx/html/:/usr/share/nginx/html -v /root/docker/nginx/logs/:/var/log/nginx -p 80:80 -p 443:443 --name=nginx --restart=always --privileged=true nginx
Check nginx's status with docker ps:
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3ae5fb0347e8 nginx "/docker-entrypoint.…" 3 weeks ago Up 7 days 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp nginx
If you see output like the above, the nginx setup is done.
One note on the Docker deployment: the container's /etc/nginx/nginx.conf already includes every *.conf file under conf.d inside its http block, so the files you drop into conf.d must not add their own http{} wrapper; that is why none of the configuration below is wrapped in http{}.
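For reference, this is the relevant part of the stock nginx.conf shipped in the official image (abbreviated; the exact file may vary slightly by image version):

```nginx
http {
    # ...standard settings elided...

    # Every *.conf dropped into conf.d is pulled into the http context,
    # so those files may only contain http-level directives such as
    # upstream{} and server{}, never another http{} wrapper.
    include /etc/nginx/conf.d/*.conf;
}
```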
Here is an example configuration file by way of introduction.
Under /root/docker/nginx/conf.d/, create a file named console.conf:
## Map for the WebSocket headers used in the location blocks below; without
## it the $connection_upgrade variable referenced there is undefined and
## nginx refuses to start
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
## Define the upstream (backend) servers for the reverse proxy
upstream console {
server ip1:port1;
server ip2:port2;
}
## Port-80 server block. I added an HTTPS redirect; if you don't want it, delete the rewrite line and remove the leading # from the location block below
server {
listen 80;
server_name your.domain.com; # replace with your actual domain
rewrite ^(.*)$ https://$host$1; # redirect every HTTP request to HTTPS
#location / {
# proxy_pass http://console;
# proxy_redirect off;
# proxy_set_header Host $host;
# proxy_set_header X-Real-IP $remote_addr;
# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# proxy_set_header X-Forwarded-Proto $scheme;
# proxy_set_header Upgrade $http_upgrade;
# proxy_set_header Connection $connection_upgrade;
# proxy_set_header X-Request-ID $request_id;
# proxy_next_upstream error timeout;
# proxy_http_version 1.1;
#}
}
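As an aside, on current nginx versions the same HTTP-to-HTTPS redirect is more idiomatically written with return, which skips the regex engine entirely; either form works (a sketch, with a placeholder domain):

```nginx
server {
    listen 80;
    server_name your.domain.com;          # placeholder: use your actual domain
    return 301 https://$host$request_uri; # permanent redirect, cheaper than rewrite
}
```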
## A note on the HTTPS part: if you use certificates, you can create a cert directory under the host's conf.d to hold them, but the paths in the configuration must be the container's paths, not the host's
## Example: my certificates sit on the host under /root/docker/nginx/conf.d/cert
## and the volume mapping into the container is -v /root/docker/nginx/conf.d/:/etc/nginx/conf.d/
## so ssl_certificate and ssl_certificate_key must be written as
## ssl_certificate /etc/nginx/conf.d/cert/xxx.pem;
## ssl_certificate_key /etc/nginx/conf.d/cert/xxx.key;
server {
listen 443 ssl;
ssl_certificate /etc/nginx/conf.d/cert/xxx.pem; # container path to the certificate (pem) file
ssl_certificate_key /etc/nginx/conf.d/cert/xxx.key; # container path to the private key file
ssl_session_timeout 5m;
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4;
#cipher suites to offer
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; #accepted TLS versions (TLSv1 and TLSv1.1 are deprecated; add TLSv1.3 if your nginx build supports it)
ssl_prefer_server_ciphers on;
gzip on;
gzip_min_length 1k;
gzip_buffers 4 16k;
#gzip_http_version 1.0;
gzip_comp_level 2;
gzip_types text/plain application/javascript application/x-javascript text/javascript application/json text/css application/xml application/x-httpd-php image/jpeg image/gif image/png;
gzip_vary off;
gzip_disable "MSIE [1-6]\.";
server_name your.domain.com; # replace with your actual domain, matching the certificate
location / {
proxy_pass http://console;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Request-ID $request_id;
proxy_next_upstream error timeout;
proxy_http_version 1.1;
}
}
One more note: the HTTPS certificate must match server_name for validation to succeed. The HTTPS block above is generic; substitute your own paths and domain and it should work as-is.
With this in place, nginx's reverse proxy reaches our containerized services, and they can be accessed directly by domain name.
Option 2: reverse proxying through ingress-nginx
1. First, install ingress-nginx on K8s:
# curl -O https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/cloud/deploy.yaml
The command above downloads the deployment manifest.
2. Next, edit the file with vim (or any editor).
## Find the Deployment section and change its kind to DaemonSet
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: DaemonSet # changed from Deployment so a controller pod runs on every matching node
metadata:
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
revisionHistoryLimit: 10
minReadySeconds: 0
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
spec:
dnsPolicy: ClusterFirst
hostNetwork: true # add hostNetwork: true under spec (insert it if absent) to bind the host's network; make sure 80 and 443 are free on the node first
containers:
- name: controller
image: k8s.gcr.io/ingress-nginx/controller:v1.1.0@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a # k8s.gcr.io is Google-hosted; keep this image only if you can reach it
# image: registry.aliyuncs.com/google_containers/nginx-ingress-controller:v1.1.0 # otherwise use this mirror instead
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
args:
- /nginx-ingress-controller
- --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
- --election-id=ingress-controller-leader
- --controller-class=k8s.io/ingress-nginx
- --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
runAsUser: 101
allowPrivilegeEscalation: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
- name: webhook
containerPort: 8443
protocol: TCP
volumeMounts:
- name: webhook-cert
mountPath: /usr/local/certificates/
readOnly: true
resources:
requests:
cpu: 100m
memory: 90Mi
nodeSelector:
kubernetes.io/os: linux
serviceAccountName: ingress-nginx
terminationGracePeriodSeconds: 300
volumes:
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
The two images below should be swapped for mirrors in the same way; skip this if you can pull from k8s.gcr.io directly.
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ingress-nginx-admission-create
namespace: ingress-nginx
annotations:
helm.sh/hook: pre-install,pre-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
template:
metadata:
name: ingress-nginx-admission-create
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
containers:
- name: create
image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
# image: registry.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.0
imagePullPolicy: IfNotPresent
args:
- create
- --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
- --namespace=$(POD_NAMESPACE)
- --secret-name=ingress-nginx-admission
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
securityContext:
allowPrivilegeEscalation: false
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
nodeSelector:
kubernetes.io/os: linux
securityContext:
runAsNonRoot: true
runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ingress-nginx-admission-patch
namespace: ingress-nginx
annotations:
helm.sh/hook: post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
template:
metadata:
name: ingress-nginx-admission-patch
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
containers:
- name: patch
image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
# image: registry.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.0
imagePullPolicy: IfNotPresent
args:
- patch
- --webhook-name=ingress-nginx-admission
- --namespace=$(POD_NAMESPACE)
- --patch-mutating=false
- --secret-name=ingress-nginx-admission
- --patch-failure-policy=Fail
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
securityContext:
allowPrivilegeEscalation: false
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
nodeSelector:
kubernetes.io/os: linux
securityContext:
runAsNonRoot: true
runAsUser: 2000
Once the edits are done, run the install command and wait for it to complete:
# kubectl apply -f deploy.yaml
How do you know the installation is complete?
## Run:
# kubectl get pod -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-r2fdf 0/1 Completed 0 3h9m
ingress-nginx-admission-patch-pw84m 0/1 Completed 1 3h9m
ingress-nginx-controller-6dt7g 1/1 Running 0 3h9m
## When the ingress-nginx-controller-xxxx pod shows STATUS Running and READY 1/1, the controller is healthy.
At this point, if you hit port 80 on the target machine
and get a 404 Not Found, everything is working (no Ingress rules exist yet).
Next, create a tomcat Deployment in the default namespace: tomcat-deployment.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
name: tomcat
namespace: default
labels:
app: tomcat
annotations:
deployment.kubernetes.io/revision: '1'
kubesphere.io/creator: admin
spec:
replicas: 1
selector:
matchLabels:
app: tomcat
template:
metadata:
creationTimestamp: null
labels:
app: tomcat
spec:
volumes:
- name: host-time
hostPath:
path: /etc/localtime
type: ''
containers:
- name: container-b0zpfy
image: 'tomcat:jre8'
ports:
- name: tcp-8080
containerPort: 8080
protocol: TCP
resources: {}
volumeMounts:
- name: host-time
readOnly: true
mountPath: /etc/localtime
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
serviceAccountName: default
serviceAccount: default
securityContext: {}
affinity: {}
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
revisionHistoryLimit: 10
progressDeadlineSeconds: 600
Create a Service for this tomcat: tomcat-svc.yaml
kind: Service
apiVersion: v1
metadata:
name: tomcat
namespace: default
labels:
app: tomcat
annotations:
kubesphere.io/creator: admin
spec:
ports:
- name: http-8080
protocol: TCP
port: 8080
targetPort: 8080
selector:
app: tomcat
type: ClusterIP
sessionAffinity: None
Create a tomcat-ingress.yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-app-ingress-tomcat
spec:
rules:
- host: www.tomcat1.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: tomcat
port:
number: 8080
ingressClassName: nginx
Apply the three files above in order and everything comes up:
# kubectl apply -f tomcat-deployment.yaml
# kubectl apply -f tomcat-svc.yaml
# kubectl apply -f tomcat-ingress.yaml
# kubectl get pods -n default
NAME READY STATUS RESTARTS AGE
tomcat-54d6dc4796-llrzl 1/1 Running 0 3h7m
# kubectl get svc -n default
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
tomcat ClusterIP 10.233.22.156 <none> 8080/TCP 5m59s
# kubectl get ing -n default
NAME CLASS HOSTS ADDRESS PORTS AGE
nginx-app-ingress-tomcat nginx www.tomcat1.com 80, 443 2m47s
Running the three commands above confirms the services have started.
Once everything is up, the Ingress HOSTS column shows www.tomcat1.com. Since I don't own the domain, I mapped it in my local hosts file instead.
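For illustration (the node IP below is a made-up placeholder), the hosts entry is a single line mapping the Ingress host to a node's IP; this sketch writes to a temp file so it is safe to run as-is, whereas on a real machine the target is /etc/hosts:

```shell
# Stand-in for /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
HOSTS_FILE=$(mktemp)
# Map the Ingress host to the node running the ingress-nginx controller
echo "192.168.1.100 www.tomcat1.com" >> "$HOSTS_FILE"
grep "www.tomcat1.com" "$HOSTS_FILE"
```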
Then open the domain in a browser.
The page that appears is a 404, but styled differently from nginx's: it is Tomcat's 404, which proves the domain now reaches the Tomcat service directly.
That completes the demonstration of both options.
Extensions
1. Adding an HTTPS certificate (a real, CA-signed one) to ingress-nginx: place the certificate files on the server and generate a secret from them on the command line.
## -n takes the namespace, which must match the Ingress's namespace (default in this walkthrough); www-tomcat-com is the secret name and can be anything; --from-file=tls.crt= and --from-file=tls.key= take the paths to the certificate and key files
# kubectl -n default create secret generic www-tomcat-com --from-file=tls.crt=server.crt --from-file=tls.key=server.key
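For reference, the same secret can also be expressed as a manifest using the dedicated kubernetes.io/tls type (what kubectl create secret tls would produce); the data values below are placeholders for the base64-encoded file contents:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: www-tomcat-com
  namespace: default   # must match the namespace of the Ingress that references it
type: kubernetes.io/tls
data:
  tls.crt: <base64 of server.crt>   # e.g. output of: base64 -w0 server.crt
  tls.key: <base64 of server.key>   # e.g. output of: base64 -w0 server.key
```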
Then add a tls section to the tomcat-ingress.yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-app-ingress
spec:
rules:
- host: www.tomcat.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: tomcat
port:
number: 8080
tls:
- hosts:
- www.tomcat.com # must match the certificate's domain
secretName: www-tomcat-com # name of the secret created above
ingressClassName: nginx
After redeploying, the service is reachable over HTTPS; I'll skip the screenshots.
For completeness, here is how to generate a self-signed certificate:
echo "generating a self-signed CA certificate"
openssl req -x509 -sha256 -newkey rsa:4096 -keyout ca.key -out ca.crt -days 3560 -nodes -subj '/CN=My Cert Authority'
echo "generating a server certificate signed by the CA above"
openssl req -new -newkey rsa:4096 -keyout server.key -out server.csr -nodes -subj '/CN=www.tomcat.com' # put your own domain after /CN=
openssl x509 -req -sha256 -days 3650 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt
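To sanity-check the result, the same steps can be replayed in a scratch directory and the chain verified with openssl verify (rsa:2048 here only to make key generation faster):

```shell
dir=$(mktemp -d) && cd "$dir"
# Self-signed CA
openssl req -x509 -sha256 -newkey rsa:2048 -keyout ca.key -out ca.crt \
  -days 365 -nodes -subj '/CN=My Cert Authority'
# Server key + CSR, then sign the CSR with the CA
openssl req -new -newkey rsa:2048 -keyout server.key -out server.csr \
  -nodes -subj '/CN=www.tomcat.com'
openssl x509 -req -sha256 -days 365 -in server.csr -CA ca.crt \
  -CAkey ca.key -set_serial 01 -out server.crt
# Confirm the server certificate chains to the CA
openssl verify -CAfile ca.crt server.crt   # prints: server.crt: OK
```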
2. While experimenting I tried to create an Ingress for a service in a different namespace and hit this failure:
Error from server (InternalError): error when creating "tomcat-ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": the server rejected our request for an unknown reason
The workaround for this error is to delete the ingress-nginx admission webhook:
## Run the following command
# kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
After that, creation succeeds. The usual root cause is the API server failing to reach the admission webhook (its service or port 8443 being blocked) rather than a pure permissions issue; note that deleting the webhook also removes the validation it provides.
Summary
The above covers two ways to get rid of the port number when accessing services on K8s.
I personally lean toward the first. The second is more elegant, but the first is the conventional approach and fits the vast majority of scenarios; with a decent ops setup it is easy to keep running. And if money is no object, a LoadBalancer is of course the best option and spares you the ops headaches (and some hair :)).
Comments and corrections are welcome; if this helped, a like would be much appreciated~