Node overview
| Node | Hostname | IP address |
|---|---|---|
| Control plane (master) | node1 | 172.16.21.131 |
| Worker node | node2 | 172.16.21.132 |
| Worker node | node3 | 172.16.21.133 |
| Worker node | node4 | 172.16.21.134 |
How to deploy the k8s cluster itself is covered in a separate write-up.
Deploying Ceph storage with Rook
Ceph is a unified distributed storage system that provides block, file, and object storage. Rook is an open-source cloud-native storage orchestrator that provides the platform, framework, and support for running Ceph storage natively in cloud-native environments.
# First, clone the rook repository
git clone https://github.com/rook/rook.git
cd rook
# Switch to v1.10.13; the latest version had problems in testing, so use this older release
git checkout v1.10.13
# Enter the examples directory
cd deploy/examples
# Edit deploy/examples/operator.yaml as shown below; registry.k8s.io is unreachable from mainland China, so point the CSI images at the Alibaba Cloud mirror
@@ -107,12 +107,12 @@ data:
# The default version of CSI supported by Rook will be started. To change the version
# of the CSI driver to something other than what is officially supported, change
# these images to the desired release of the CSI driver.
- # ROOK_CSI_CEPH_IMAGE: "quay.io/cephcsi/cephcsi:v3.7.2"
- # ROOK_CSI_REGISTRAR_IMAGE: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.7.0"
- # ROOK_CSI_RESIZER_IMAGE: "registry.k8s.io/sig-storage/csi-resizer:v1.7.0"
- # ROOK_CSI_PROVISIONER_IMAGE: "registry.k8s.io/sig-storage/csi-provisioner:v3.4.0"
- # ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.k8s.io/sig-storage/csi-snapshotter:v6.2.1"
- # ROOK_CSI_ATTACHER_IMAGE: "registry.k8s.io/sig-storage/csi-attacher:v4.1.0"
+ ROOK_CSI_CEPH_IMAGE: "quay.io/cephcsi/cephcsi:v3.7.2"
+ ROOK_CSI_REGISTRAR_IMAGE: "registry.aliyuncs.com/google_containers/csi-node-driver-registrar:v2.7.0"
+ ROOK_CSI_RESIZER_IMAGE: "registry.aliyuncs.com/google_containers/csi-resizer:v1.7.0"
+ ROOK_CSI_PROVISIONER_IMAGE: "registry.aliyuncs.com/google_containers/csi-provisioner:v3.4.0"
+ ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.aliyuncs.com/google_containers/csi-snapshotter:v6.2.1"
+ ROOK_CSI_ATTACHER_IMAGE: "registry.aliyuncs.com/google_containers/csi-attacher:v4.1.0"
Deploy the Ceph cluster
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f cluster.yaml
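The cluster CR is only reconciled once the operator pod is up; if nothing seems to happen after the second command, check the operator first (a sketch, assuming the app=rook-ceph-operator label the example manifests apply):
# confirm the operator is Running; the cluster pods follow from there
kubectl -n rook-ceph get pods -l app=rook-ceph-operator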
Deploy the toolbox
kubectl create -f toolbox.yaml
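The toolbox pod provides a shell with the ceph CLI preconfigured; you can wait for its Deployment (named rook-ceph-tools in toolbox.yaml) to become ready:
kubectl -n rook-ceph rollout status deploy/rook-ceph-tools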
Deploy the dashboard, which lets you manage the Ceph cluster from a web page (you can skip this step if you are comfortable with the CLI)
kubectl apply -f dashboard-external-http.yaml
Check the Ceph deployment result
01:51:07 root@node1 ~ → kubectl get pods -n rook-ceph
NAME                                              READY   STATUS      RESTARTS       AGE
csi-cephfsplugin-2lbmn                            2/2     Running     10 (13h ago)   7d9h
csi-cephfsplugin-b5jtk                            2/2     Running     10 (13h ago)   7d9h
csi-cephfsplugin-lhjj7                            2/2     Running     10 (13h ago)   7d9h
csi-cephfsplugin-provisioner-66b68f78b4-4842r     5/5     Running     10 (13h ago)   6d8h
csi-cephfsplugin-provisioner-66b68f78b4-tqhk7     5/5     Running     10 (13h ago)   6d11h
csi-rbdplugin-4skqc                               2/2     Running     10 (13h ago)   7d9h
csi-rbdplugin-l5572                               2/2     Running     10 (13h ago)   7d9h
csi-rbdplugin-n54f2                               2/2     Running     10 (13h ago)   7d9h
csi-rbdplugin-provisioner-6c46f8d495-fwngq        5/5     Running     10 (13h ago)   6d9h
csi-rbdplugin-provisioner-6c46f8d495-rx8sf        5/5     Running     10 (13h ago)   6d8h
rook-ceph-crashcollector-node2-6cc646cddd-fbzkv   1/1     Running     2 (13h ago)    6d9h
rook-ceph-crashcollector-node3-575dc8867b-444r9   1/1     Running     1 (13h ago)    6d8h
rook-ceph-crashcollector-node4-86499c878-49tr4    0/1     Evicted     0              6d9h
rook-ceph-crashcollector-node4-86499c878-bkrc9    1/1     Running     1 (13h ago)    6d8h
rook-ceph-crashcollector-node4-86499c878-bpfbd    0/1     Evicted     0              6d9h
rook-ceph-crashcollector-node4-86499c878-rfbvs    0/1     Init:1/2    0              6d9h
rook-ceph-crashcollector-node4-86499c878-rl9xf    0/1     Evicted     0              6d9h
rook-ceph-crashcollector-node4-86499c878-shpjx    0/1     Evicted     0              6d9h
rook-ceph-crashcollector-node4-86499c878-tp6zb    0/1     Evicted     0              6d9h
rook-ceph-crashcollector-node4-86499c878-vrwtj    0/1     Evicted     0              6d9h
rook-ceph-crashcollector-node4-86499c878-whmln    0/1     Evicted     0              6d9h
rook-ceph-crashcollector-node4-86499c878-zb79d    0/1     Evicted     0              6d9h
rook-ceph-mgr-a-5bc574ff77-sgp46                  3/3     Running     7 (13h ago)    6d11h
rook-ceph-mgr-b-7495c6669b-mvfwd                  3/3     Running     6 (13h ago)    6d9h
rook-ceph-mon-a-59548898fb-9wr8h                  2/2     Running     4 (13h ago)    6d9h
rook-ceph-mon-b-659bcc7b49-kmspf                  2/2     Running     4 (13h ago)    6d11h
rook-ceph-mon-c-5758989cd5-k8q8q                  2/2     Running     2 (13h ago)    6d8h
rook-ceph-operator-6cf8b4f6-7qm5s                 1/1     Running     2 (13h ago)    6d8h
rook-ceph-osd-0-795696bb84-nq4qw                  2/2     Running     3 (13h ago)    6d9h
rook-ceph-osd-1-796d9955c8-s5jdl                  2/2     Running     4 (13h ago)    6d11h
rook-ceph-osd-2-f4f94f57-vqm47                    2/2     Running     2 (13h ago)    6d8h
rook-ceph-osd-prepare-node2-hbvb4                 0/1     Completed   0              12h
rook-ceph-osd-prepare-node3-htdxp                 0/1     Completed   0              12h
rook-ceph-osd-prepare-node4-t82mk                 0/1     Completed   0              12h
rook-ceph-tools-7c4b8bb9b5-fvkc7                  1/1     Running     2 (13h ago)    6d8h
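Note the string of Evicted crashcollector pods on node4: these are inert leftovers from earlier resource pressure on that node. They can be cleaned up in one go, for example:
# Evicted pods remain in phase Failed, so a field selector catches them all
kubectl -n rook-ceph delete pods --field-selector=status.phase=Failed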
Check Ceph status from inside the toolbox pod
# Exec into the toolbox pod
kubectl exec -it rook-ceph-tools-7c4b8bb9b5-fvkc7 -n rook-ceph -- sh
sh-4.4$ ceph status
  cluster:
    id:     4e878f6b-dec7-4c32-8e1d-9b157b5fc444
    health: HEALTH_WARN
            1 slow ops, oldest one blocked for 47005 sec, mon.c has slow ops

  services:
    mon: 3 daemons, quorum a,b,c (age 5h)
    mgr: b(active, since 13h), standbys: a
    osd: 3 osds: 3 up (since 13h), 3 in (since 7d)

  data:
    pools:   2 pools, 33 pgs
    objects: 433 objects, 1.0 GiB
    usage:   4.0 GiB used, 56 GiB / 60 GiB avail   # note the overall storage usage here
    pgs:     33 active+clean

  io:
    client:   23 KiB/s wr, 0 op/s rd, 1 op/s wr

sh-4.4$
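The HEALTH_WARN above is a slow-ops message pinned to mon.c. If it lingers after the underlying slowness is gone, restarting that mon pod usually clears it; a sketch, assuming the ceph_daemon_id label Rook puts on its daemon pods:
kubectl -n rook-ceph delete pod -l app=rook-ceph-mon,ceph_daemon_id=c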
Access the cluster through the web UI
Look up the NodePort of the Ceph web UI
[root@master]# kubectl get svc -n rook-ceph -owide
Retrieve the admin user's password
[root@master]# kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
You can then log in to the dashboard with that password.
Create a StorageClass
Change into the directory below, then apply the StorageClass manifest
[root@master rbd]# pwd
/root/rook/deploy/examples/csi/rbd
[root@master rbd]# kubectl apply -f storageclass.yaml
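The example manifest creates a CephBlockPool plus a StorageClass named rook-ceph-block, which the mongo StatefulSet below references; a quick check:
kubectl get storageclass rook-ceph-block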
Create the mongo StatefulSet
apiVersion: v1
kind: Namespace # create the mongo namespace
metadata:
  name: mongo
---
apiVersion: v1
kind: ServiceAccount # create a ServiceAccount named mongo in the mongo namespace
metadata:
  name: mongo
  namespace: mongo
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding # bind the mongo ServiceAccount created above to the cluster-admin ClusterRole
metadata:
  name: mongo
subjects:
  - kind: ServiceAccount
    name: mongo
    namespace: mongo
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Service # create a Service named mongo in the mongo namespace
metadata:
  name: mongo
  namespace: mongo
  labels:
    name: mongo
spec:
  ports:
    - port: 27017       # the Service port
      targetPort: 27017 # the backing pods' port
  clusterIP: None       # headless Service
  selector:
    role: mongo
    #"statefulset.kubernetes.io/pod-name": "mongo-0" # select only the replica-set primary, since only the primary accepts writes (this can break: if that pod dies another member becomes primary and writes stop working)
---
apiVersion: apps/v1
kind: StatefulSet # use a StatefulSet for the mongo cluster: it gives the pods stable network identities and storage
metadata:
  name: mongo # the StatefulSet is named mongo
  namespace: mongo
spec:
  serviceName: mongo # the headless Service created above, which governs the pods' network identity
  replicas: 3 # 3 pods
  selector: # selector matches the pods this StatefulSet manages
    matchLabels:
      role: mongo
      environment: staging
  template:
    metadata:
      labels: # labels applied to the created pods
        role: mongo
        environment: staging
        replicaset: MainRepSet
    spec:
      affinity:
        podAntiAffinity: # pod anti-affinity spreads the replicas across nodes
          preferredDuringSchedulingIgnoredDuringExecution: # soft policy
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: replicaset
                      operator: In
                      values:
                        - MainRepSet
                topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      serviceAccountName: mongo # the ServiceAccount created above
      containers:
        - name: mongo
          image: mongo:4.0
          command:
            - mongod
            - "--wiredTigerCacheSizeGB" # cap the WiredTiger cache at 0.25 GB
            - "0.25"
            - "--bind_ip" # listen on all interfaces
            - "0.0.0.0"
            - "--replSet" # replica set name: MainRepSet
            - MainRepSet
            - "--noprealloc" # disable data file preallocation (preallocation often affects performance)
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-data # mount the data volume; the name matches the volumeClaimTemplates entry below
              mountPath: /data/db
          resources:
            requests: # container resource requests
              cpu: 1 # CPU is measured in "cpu" units: 1 CPU is one physical or virtual core, depending on whether the node is a physical host or a VM
              memory: 2Gi # requested memory
        - name: mongo-sidecar
          image: cvallance/mongo-k8s-sidecar
          env:
            - name: MONGO_SIDECAR_POD_LABELS # comma-separated key=value list; must match the pod template labels
              value: "role=mongo,environment=staging"
            - name: KUBE_NAMESPACE # namespace to search for pods; if unset, all namespaces are searched
              value: "mongo"
            - name: KUBERNETES_MONGO_SERVICE_NAME # points at the headless Service created above
              value: "mongo"
  volumeClaimTemplates:
    - metadata:
        name: mongo-data
      spec:
        accessModes: [ "ReadWriteOnce" ] # mountable read-write by a single node only
        storageClassName: rook-ceph-block # an available StorageClass
        resources:
          requests:
            storage: 2Gi
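Save the whole manifest and apply it (mongo.yaml is just an example filename):
kubectl apply -f mongo.yaml
# watch the pods come up in order: mongo-0, then mongo-1, then mongo-2
kubectl -n mongo get pods -w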
Check the status of the mongo replica set
# Exec into one of the pods to check the replica set status
05:28:31 root@node1 ~ → kubectl exec -it mongo-0 -n mongo -- bash
Defaulted container "mongo" out of: mongo, mongo-sidecar
root@mongo-0:/# mongo
MongoDB shell version v4.0.28
connecting to: mongodb://127.0.0.1:27017/?gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("4e70e75d-2321-4087-9b76-22fea5218017") }
MongoDB server version: 4.0.28
Server has startup warnings:
2023-04-18T04:43:41.656+0000 I STORAGE [initandlisten]
2023-04-18T04:43:41.656+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2023-04-18T04:43:41.656+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2023-04-18T04:43:42.374+0000 I CONTROL [initandlisten]
2023-04-18T04:43:42.374+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2023-04-18T04:43:42.374+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2023-04-18T04:43:42.374+0000 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2023-04-18T04:43:42.374+0000 I CONTROL [initandlisten]
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
MainRepSet:PRIMARY> rs.status()
{
"set" : "MainRepSet",
"date" : ISODate("2023-04-18T05:29:16.195Z"),
"myState" : 1,
"term" : NumberLong(1),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1681795755, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1681795755, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1681795755, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1681795755, 1),
"t" : NumberLong(1)
}
},
"lastStableCheckpointTimestamp" : Timestamp(1681795725, 1),
"electionCandidateMetrics" : {
"lastElectionReason" : "electionTimeout",
"lastElectionDate" : ISODate("2023-04-18T04:43:45.089Z"),
"electionTerm" : NumberLong(1),
"lastCommittedOpTimeAtElection" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"lastSeenOpTimeAtElection" : {
"ts" : Timestamp(1681793025, 1),
"t" : NumberLong(-1)
},
"numVotesNeeded" : 1,
"priorityAtElection" : 1,
"electionTimeoutMillis" : NumberLong(10000),
"newTermStartDate" : ISODate("2023-04-18T04:43:45.095Z"),
"wMajorityWriteAvailabilityDate" : ISODate("2023-04-18T04:43:45.197Z")
},
"members" : [
{
"_id" : 0,
"name" : "mongo-0.mongo.mongo.svc.cluster.local:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 2735,
"optime" : {
"ts" : Timestamp(1681795755, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2023-04-18T05:29:15Z"),
"syncingTo" : "",
"syncSourceHost" : "",
"syncSourceId" : -1,
"infoMessage" : "",
"electionTime" : Timestamp(1681793025, 2),
"electionDate" : ISODate("2023-04-18T04:43:45Z"),
"configVersion" : 5,
"self" : true,
"lastHeartbeatMessage" : ""
},
{
"_id" : 1,
"name" : "mongo-1.mongo.mongo.svc.cluster.local:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 2715,
"optime" : {
"ts" : Timestamp(1681795745, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1681795745, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2023-04-18T05:29:05Z"),
"optimeDurableDate" : ISODate("2023-04-18T05:29:05Z"),
"lastHeartbeat" : ISODate("2023-04-18T05:29:14.677Z"),
"lastHeartbeatRecv" : ISODate("2023-04-18T05:29:15.982Z"),
"pingMs" : NumberLong(1),
"lastHeartbeatMessage" : "",
"syncingTo" : "mongo-0.mongo.mongo.svc.cluster.local:27017",
"syncSourceHost" : "mongo-0.mongo.mongo.svc.cluster.local:27017",
"syncSourceId" : 0,
"infoMessage" : "",
"configVersion" : 5
},
{
"_id" : 2,
"name" : "mongo-2.mongo.mongo.svc.cluster.local:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 2700,
"optime" : {
"ts" : Timestamp(1681795745, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1681795745, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2023-04-18T05:29:05Z"),
"optimeDurableDate" : ISODate("2023-04-18T05:29:05Z"),
"lastHeartbeat" : ISODate("2023-04-18T05:29:15.253Z"),
"lastHeartbeatRecv" : ISODate("2023-04-18T05:29:15.465Z"),
"pingMs" : NumberLong(2),
"lastHeartbeatMessage" : "",
"syncingTo" : "mongo-1.mongo.mongo.svc.cluster.local:27017",
"syncSourceHost" : "mongo-1.mongo.mongo.svc.cluster.local:27017",
"syncSourceId" : 1,
"infoMessage" : "",
"configVersion" : 5
}
],
"ok" : 1,
"operationTime" : Timestamp(1681795755, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1681795755, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
MainRepSet:PRIMARY>
Connect to the mongo primary and create a user
MainRepSet:PRIMARY> use admin
switched to db admin
MainRepSet:PRIMARY> db.createUser({user: "admin", pwd: "123456", roles:[{role:"root", db:"admin"}]})
Successfully added user: {
"user" : "admin",
"roles" : [
{
"role" : "root",
"db" : "admin"
}
]
}
For setting up replica-set users automatically, see the parameters documented for mongo-k8s-sidecar; they can be supplied when the pods are created.
Ingress NGINX
What is Ingress
Ingress exposes HTTP and HTTPS routes from outside the cluster to Services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
Create an Ingress controller
You must have an Ingress controller to satisfy an Ingress; creating an Ingress resource by itself has no effect.
The manifest below makes heavy use of RBAC authorization.
Ingress NGINX YAML manifest
apiVersion: v1
kind: Namespace # create the ingress-nginx namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration # referenced as an argument when nginx-ingress-controller starts below
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services # Ingress itself does not cover TCP or UDP services, so the controller is started with --tcp-services-configmap and --udp-services-configmap pointing at these ConfigMaps
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  27017: "mongo/mongo:27017" # expose port 27017 of the mongo Service in the mongo namespace on TCP port 27017
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services # likewise referenced by the controller below
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount # create a ServiceAccount named nginx-ingress-serviceaccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole # create a ClusterRole named nginx-ingress-clusterrole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - "" # apiGroups names the API groups containing the resources; "" is the core API group, "*" means all groups
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs: # this ClusterRole gets list and watch on configmaps, endpoints, nodes, pods, and secrets
      - list
      - watch
  - apiGroups:
      - ""
    resources: # get on nodes
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources: # get, list, and watch on services
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources: # create and patch on events
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role # create a namespaced Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role # bind the Role created above to the nginx-ingress-serviceaccount ServiceAccount
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole # bind the ClusterRole created above to the nginx-ingress-serviceaccount ServiceAccount
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: apps/v1
kind: DaemonSet # a DaemonSet ensures all (or selected) nodes run a copy of the pod; pods are added for nodes joining the cluster and reclaimed for nodes leaving it
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  # replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      hostNetwork: true
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services # makes the controller proxy the TCP services defined above
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
---
apiVersion: v1
kind: LimitRange # by default, containers on a Kubernetes cluster run with unbounded compute resources
metadata:
  name: ingress-nginx
  namespace: ingress-nginx # a LimitRange enforces its constraints within the namespace it lives in
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
    - min: # min sets the lower bound
        memory: 90Mi
        cpu: 100m
      type: Container # the constraint applies to containers
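Save everything above into a single file and apply it (ingress-nginx.yaml is an example filename):
kubectl apply -f ingress-nginx.yaml
# expect one controller pod per worker node, since this is a DaemonSet
kubectl -n ingress-nginx get pods -o wide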
Create the Service used by Ingress NGINX
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80 # only declare the ports you actually need; unused ones, like this http port here, can be removed
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443 # only declare the ports you actually need
      targetPort: 443
      protocol: TCP
    - name: proxied-tcp-27017
      port: 30000 # Service port 30000, which gets mapped to a port on the host
      targetPort: 27017
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
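Apply it the same way (again, an example filename):
kubectl apply -f ingress-nginx-service.yaml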
Check the deployed Service
03:20:58 root@node1 → kubectl get service -n ingress-nginx
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                                       AGE
ingress-nginx   NodePort   10.110.100.30   <none>        80:30128/TCP,443:30362/TCP,30000:30661/TCP   4h18m
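With the tcp-services mapping in place, mongo should now be reachable from outside the cluster through any node's IP at the NodePort mapped to port 30000 (30661 in the output above); because the controller runs with hostNetwork, port 27017 on each controller node should also work. For example (credentials from the createUser step; note the connection may land on any replica-set member):
mongosh "mongodb://admin:123456@172.16.21.131:30661/admin"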
Traffic forwarding diagram
Expose the mongo members with NodePort Services
The Ingress NGINX approach above selects the mongo Service, which balances across all members, so an external client never knows whether it reached the primary or a secondary; for mongo that is a poor fit. A better option is to create one Service per mongo pod and expose each of them, so a client can address every member individually (the mongosh sessions below connect this way, shown as directConnection=true). Another Ingress NGINX layer could also be put in front of these per-pod Services.
After the mongo pods are up, create the Services below and skip the Ingress.
# mongo-services.yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo-0-service
  namespace: mongo
spec:
  type: NodePort
  selector:
    statefulset.kubernetes.io/pod-name: mongo-0
  ports:
    # By default, `targetPort` is set to the same value as `port` for convenience.
    - port: 31000
      targetPort: 27017
      # Optional field
      # By default, the Kubernetes control plane allocates a NodePort from a range (default: 30000-32767)
      nodePort: 31000
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-1-service
  namespace: mongo
spec:
  type: NodePort
  selector:
    statefulset.kubernetes.io/pod-name: mongo-1
  ports:
    # By default, `targetPort` is set to the same value as `port` for convenience.
    - port: 31001
      targetPort: 27017
      # Optional field
      # By default, the Kubernetes control plane allocates a NodePort from a range (default: 30000-32767)
      nodePort: 31001
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-2-service
  namespace: mongo
spec:
  type: NodePort
  selector:
    statefulset.kubernetes.io/pod-name: mongo-2
  ports:
    # By default, `targetPort` is set to the same value as `port` for convenience.
    - port: 31002
      targetPort: 27017
      # Optional field
      # By default, the Kubernetes control plane allocates a NodePort from a range (default: 30000-32767)
      nodePort: 31002
Apply the YAML above
kubectl apply -f mongo-services.yaml
Check the result
05:08:19 root@node1 ~ → kubectl get all -n mongo
NAME          READY   STATUS    RESTARTS   AGE
pod/mongo-0   2/2     Running   0          24m
pod/mongo-1   2/2     Running   0          24m
pod/mongo-2   2/2     Running   0          24m

NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
service/mongo             ClusterIP   None             <none>        27017/TCP         24m
service/mongo-0-service   NodePort    10.105.48.205    <none>        31000:31000/TCP   24m
service/mongo-1-service   NodePort    10.101.222.218   <none>        31001:31001/TCP   24m
service/mongo-2-service   NodePort    10.97.197.251    <none>        31002:31002/TCP   24m

NAME                     READY   AGE
statefulset.apps/mongo   3/3     24m
Access the exposed members from a client
# Connect with mongosh
mongosh "mongodb://admin:123456@172.16.21.131:31000/admin"
Current Mongosh Log ID: 643e29f306ed07c1798329a6
Connecting to: mongodb://<credentials>@172.16.21.131:31000/admin?directConnection=true&appName=mongosh+1.8.0
Using MongoDB: 4.0.28
Using Mongosh: 1.8.0
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
------
The server generated these startup warnings when booting
2023-04-18T04:43:41.656+0000:
2023-04-18T04:43:41.656+0000: ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2023-04-18T04:43:41.656+0000: ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2023-04-18T04:43:42.374+0000:
2023-04-18T04:43:42.374+0000: ** WARNING: Access control is not enabled for the database.
2023-04-18T04:43:42.374+0000: ** Read and write access to data and configuration is unrestricted.
2023-04-18T04:43:42.374+0000: ** WARNING: You are running this process as the root user, which is not recommended.
2023-04-18T04:43:42.374+0000:
------
------
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
------
MainRepSet [direct: primary] admin>
# Connect with mongosh
mongosh mongodb://admin:123456@172.16.21.131:31001/admin
Current Mongosh Log ID: 643e304a7ce1db366f9a270d
Connecting to: mongodb://<credentials>@172.16.21.131:31001/admin?directConnection=true&appName=mongosh+1.8.0
Using MongoDB: 4.0.28
Using Mongosh: 1.8.0
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
------
The server generated these startup warnings when booting
2023-04-18T05:44:50.345+0000:
2023-04-18T05:44:50.345+0000: ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2023-04-18T05:44:50.345+0000: ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2023-04-18T05:44:50.658+0000:
2023-04-18T05:44:50.658+0000: ** WARNING: Access control is not enabled for the database.
2023-04-18T05:44:50.658+0000: ** Read and write access to data and configuration is unrestricted.
2023-04-18T05:44:50.658+0000: ** WARNING: You are running this process as the root user, which is not recommended.
2023-04-18T05:44:50.658+0000:
------
------
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
------
MainRepSet [direct: secondary] admin>
# Connect with mongosh
mongosh mongodb://admin:123456@172.16.21.131:31002/admin
Current Mongosh Log ID: 643e309d764c2449c11bccf9
Connecting to: mongodb://<credentials>@172.16.21.131:31002/admin?directConnection=true&appName=mongosh+1.8.0
Using MongoDB: 4.0.28
Using Mongosh: 1.8.0
For mongosh info see: https://docs.mongodb.com/mongodb-shell/
------
The server generated these startup warnings when booting
2023-04-18T05:45:04.555+0000:
2023-04-18T05:45:04.555+0000: ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2023-04-18T05:45:04.555+0000: ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2023-04-18T05:45:04.850+0000:
2023-04-18T05:45:04.850+0000: ** WARNING: Access control is not enabled for the database.
2023-04-18T05:45:04.850+0000: ** Read and write access to data and configuration is unrestricted.
2023-04-18T05:45:04.850+0000: ** WARNING: You are running this process as the root user, which is not recommended.
2023-04-18T05:45:04.850+0000:
------
------
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
------
MainRepSet [direct: secondary] admin>
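Note that the last two sessions landed on secondaries, which reject reads by default; inside mongosh you can allow secondary reads for the current session, e.g.:
MainRepSet [direct: secondary] admin> db.getMongo().setReadPref("secondaryPreferred")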