Chapter 13: wordpress-demo
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
namespace: blog
labels:
app: mysql
spec:
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: mysql:5.7
args:
- --default_authentication_plugin=mysql_native_password
- --character-set-server=utf8mb4
- --collation-server=utf8mb4_unicode_ci
ports:
- containerPort: 3306
name: dbport
env:
- name: MYSQL_ROOT_PASSWORD
value: "123456"
- name: MYSQL_DATABASE
value: wordpress
- name: MYSQL_USER
value: wordpress
- name: MYSQL_PASSWORD
value: wordpress
volumeMounts:
- name: db
mountPath: /var/lib/mysql
volumes:
- name: db
hostPath:
path: /data/mysql
nodeSelector:
disktype: SSD
---
apiVersion: v1
kind: Service
metadata:
name: mysql
namespace: blog
spec:
selector:
app: mysql
ports:
- name: mysqlport
protocol: TCP
port: 3306
targetPort: dbport
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: wordpress
namespace: blog
labels:
app: wordpress
spec:
replicas: 2
selector:
matchLabels:
app: wordpress
minReadySeconds: 5
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
template:
metadata:
labels:
app: wordpress
spec:
volumes:
- name: data
nfs:
server: 10.0.0.11
path: /data/nfs-volume/blog
#initContainers:
#- name: init-db
# image: busybox
# command: ['sh', '-c', 'until nslookup mysql; do echo waiting for mysql service; sleep 2; done;']
containers:
- name: wordpress
image: wordpress
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
name: wdport
env:
- name: WORDPRESS_DB_HOST
value: mysql:3306
- name: WORDPRESS_DB_USER
value: wordpress
- name: WORDPRESS_DB_PASSWORD
value: wordpress
readinessProbe:
tcpSocket:
port: 80
#initialDelaySeconds: 5
#periodSeconds: 10
resources:
limits:
cpu: 200m
memory: 256Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: data
mountPath: /var/www/html/wp-content
---
apiVersion: v1
kind: Service
metadata:
name: wordpress
namespace: blog
spec:
selector:
app: wordpress
ports:
- port: 80
name: wordpressport
protocol: TCP
targetPort: wdport
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: blog-web-app
namespace: blog
annotations:
kubernetes.io/ingress.class: "traefik"
spec:
rules:
- host: traefik.wordpress.com
http:
paths:
- path: /
backend:
serviceName: wordpress
servicePort: 80
Chapter 14: Data Persistence
1. Volume Introduction
A Volume is a shared directory inside a Pod that can be accessed by multiple containers. A Kubernetes Volume has the same lifecycle as its Pod, but is independent of the lifecycle of any individual container. Kubernetes supports many Volume types, and a single Pod can use any number of Volumes at the same time. Volume types include:
Kubernetes-specific resource objects:
ConfigMap: application configuration
Secret: sensitive data
ServiceAccountToken: token data
Local storage types:
EmptyDir: temporary storage, created when the Pod is scheduled and managed by Kubernetes automatically; the data is removed when the Pod is deleted.
HostPath: mounts a directory from the host into the Pod; can be used to persist data on the node.
Persistent storage (PV) and shared network storage:
CephFS: open-source shared storage system
GlusterFS: open-source shared storage system
NFS: open-source shared storage; mounts an exported directory
PersistentVolumeClaim (PVC): a request for persistent storage
2. EmptyDir Experiment
cat >emptyDir.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
name: busybox-empty
spec:
containers:
- name: busybox-pod
image: busybox
volumeMounts:
- mountPath: /data/busybox/
name: cache-volume
command: ["/bin/sh","-c","while true;do echo $(date) >> /data/busybox/index.html;sleep 3;done"]
volumes:
- name: cache-volume
emptyDir: {}
EOF
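To verify the emptyDir volume, a minimal check (assuming the manifest above was saved as emptyDir.yaml) is to apply it and read the file back from inside the container:
kubectl apply -f emptyDir.yaml
kubectl get pod busybox-empty
kubectl exec busybox-empty -- tail /data/busybox/index.html    #the timestamps written by the loop should appear here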
3. hostPath Experiment
3.1 type field values
DirectoryOrCreate: create the directory if it does not exist
Directory: the directory must already exist
FileOrCreate: create the file if it does not exist
File: the file must already exist
3.2 Manifest for a hostPath volume
apiVersion: v1
kind: Pod
metadata:
name: busybox-nodename
spec:
nodeName: node2
containers:
- name: busybox-pod
image: busybox
volumeMounts:
- mountPath: /data/pod/
name: hostpath-volume
command: ["/bin/sh","-c","while true;do echo $(date) >> /data/pod/index.html;sleep 3;done"]
volumes:
- name: hostpath-volume
hostPath:
path: /data/node/
type: DirectoryOrCreate
4. NFS Volumes
4.1 Notes on NFS volumes
We can also mount an NFS export into a Pod using the node's own NFS client. The prerequisites are that the NFS server is already installed and configured, and that the NFS client software is installed on the nodes.
4.2 Set up the NFS server
yum install nfs-utils -y
cat > /etc/exports << 'EOF'
/data/nfs-volume/blog *(rw,sync,no_root_squash)
EOF
mkdir -p /data/nfs-volume/blog
systemctl restart nfs
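A quick check that the export is visible (assuming the NFS server is the 10.0.0.10 machine used later in this chapter):
showmount -e 10.0.0.10    #should list /data/nfs-volume/blog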
4.3 Manifest for an NFS-backed Pod
apiVersion: v1
kind: Pod
metadata:
name: liveness-pod
spec:
nodeName: node2
volumes:
- name: nfs-data
nfs:
server: 10.0.0.10
path: /data/nfs-volume/
containers:
- name: liveness
image: nginx
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 80
volumeMounts:
- name: nfs-data
mountPath: /usr/share/nginx/html/
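A sketch of how to exercise this Pod (the file name nfs-pod.yaml is only an example name for the manifest above):
kubectl apply -f nfs-pod.yaml
echo "hello nfs" > /data/nfs-volume/index.html    #run on the NFS server
kubectl get pod liveness-pod -o wide              #note the Pod IP
curl http://<POD_IP>/index.html                   #should return "hello nfs"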
5. Scheduling a Pod onto a Specific Node
5.1 Method 1: specify the node name directly
apiVersion: v1
kind: Pod
metadata:
name: busybox-nodename
spec:
nodeName: node2
containers:
- name: busybox-pod
image: busybox
volumeMounts:
- mountPath: /data/pod/
name: hostpath-volume
command: ["/bin/sh","-c","while true;do echo $(date) >> /data/pod/index.html;sleep 3;done"]
volumes:
- name: hostpath-volume
hostPath:
path: /data/node/
type: DirectoryOrCreate
5.2 Method 2: select the node by label
Label the node
kubectl label nodes node3 disktype=SSD
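Confirm the label was applied:
kubectl get nodes node3 --show-labels | grep disktype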
Manifest
apiVersion: v1
kind: Pod
metadata:
name: busybox-nodename
spec:
nodeSelector:
disktype: SSD
containers:
- name: busybox-pod
image: busybox
volumeMounts:
- mountPath: /data/pod/
name: hostpath-volume
command: ["/bin/sh","-c","while true;do echo $(date) >> /data/pod/index.html;sleep 3;done"]
volumes:
- name: hostpath-volume
hostPath:
path: /data/node/
type: DirectoryOrCreate
6. MySQL Deployment with Persistent Storage
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-dp
namespace: default
spec:
selector:
matchLabels:
app: mysql
replicas: 1
template:
metadata:
name: mysql-pod
namespace: default
labels:
app: mysql
spec:
containers:
- name: mysql-pod
image: mysql:5.7
ports:
- name: mysql-port
containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: "123456"
volumeMounts:
- mountPath: /var/lib/mysql
name: mysql-volume
volumes:
- name: mysql-volume
hostPath:
path: /data/mysql
type: DirectoryOrCreate
nodeSelector:
disktype: SSD
Chapter 15: Persistent Storage with PV and PVC
1. PV and PVC Introduction
A PV (PersistentVolume) is an abstraction over the underlying shared network storage; it defines the storage as a "resource". PVs are created and configured by administrators, and a PV can only represent shared storage. A PVC (PersistentVolumeClaim) is a user's "request" for storage. Just as a Pod consumes Node resources, a PVC "consumes" PV resources, and it can request a specific amount of storage and specific access modes.
Note: a PV and a PVC are currently bound one-to-one, which means a PV can be bound by only one PVC.
2. PV and PVC Lifecycle
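In outline: a PV is Available until a PVC binds it, Bound while claimed, Released after the PVC is deleted but before the volume is reclaimed, and Failed if reclamation fails. The current phase is visible in the STATUS column:
kubectl get pv
kubectl get pvc    #shows whether each claim is Pending or Bound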
3. Experiment: Create the NFS and MySQL PV and PVC
3.1 Install NFS on the master node
yum install nfs-utils -y
mkdir /data/nfs_volume/mysql -p
vim /etc/exports
/data/nfs_volume 10.0.0.0/24(rw,async,no_root_squash,no_all_squash)
systemctl start rpcbind
systemctl start nfs
showmount -e 127.0.0.1
3.2 Install NFS on all worker nodes
yum install nfs-utils.x86_64 -y
showmount -e 10.0.0.10
3.3 Write and create the nfs-pv resource
cat >nfs-pv.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv01
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
storageClassName: nfs
nfs:
path: /data/nfs_volume/mysql
server: 10.0.0.11
EOF
kubectl create -f nfs-pv.yaml
kubectl get persistentvolume
3.4 Create the mysql-pvc resource
cat >mysql-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: nfs
EOF
kubectl create -f mysql-pvc.yaml
kubectl get pvc
3.5 Create the mysql deployment
cat >mysql-dp.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
spec:
replicas: 1
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: mysql:5.7
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: "123456"
volumeMounts:
- name: mysql-pvc
mountPath: /var/lib/mysql
- name: mysql-log
mountPath: /var/log/mysql
volumes:
- name: mysql-pvc
persistentVolumeClaim:
claimName: mysql-pvc
- name: mysql-log
hostPath:
path: /var/log/mysql
nodeSelector:
disktype: SSD
EOF
kubectl create -f mysql-dp.yaml
kubectl get pod -o wide
3.6 Test procedure (a command sketch follows this list)
1. Create the nfs-pv.
2. Create the mysql-pvc.
3. Create the mysql deployment and mount the mysql-pvc.
4. Log in to the mysql Pod and create a database.
5. Delete the Pod; because the Deployment specifies a replica count, a new Pod is created automatically.
6. Log in to the new Pod and check whether the database created earlier is still there.
7. If it is still visible, the data is persisted.
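A minimal command sketch for this test (the Pod name is looked up at run time; persist_test is just an example database name; the root password comes from the Deployment in 3.5):
POD=$(kubectl get pod -l app=mysql -o jsonpath='{.items[0].metadata.name}')
kubectl exec $POD -- mysql -uroot -p123456 -e "CREATE DATABASE persist_test;"
kubectl delete pod $POD    #the Deployment recreates the Pod
POD=$(kubectl get pod -l app=mysql -o jsonpath='{.items[0].metadata.name}')
kubectl exec $POD -- mysql -uroot -p123456 -e "SHOW DATABASES;"    #persist_test should still be listed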
3.7 accessModes field values
ReadWriteOnce: read-write by a single node
ReadOnlyMany: read-only by many nodes
ReadWriteMany: read-write by many nodes
resources: the requested capacity, e.g. at least 5Gi
3.8 volumeName exact matching
capacity: limits the size of the storage
reclaim policy: what happens to a PV after it is released
retain: the data on the PV is kept after the claim is released
recycle: the data on the PV is scrubbed
delete: the PV itself is deleted once the PVC is released
Note: before a user can request storage for a Pod, a matching PV must already exist, so user demand is not satisfied automatically; in addition, earlier Kubernetes versions still allowed a bound PV to be deleted, which made the data unsafe.
Chapter 16: ConfigMap Resources
1. ConfigMap Introduction
1.1 Why use a ConfigMap?
To decouple configuration files from Pods.
1.2 How is configuration stored inside a ConfigMap?
As key-value pairs:
key: value
file name: contents of the configuration file
1.3 Configuration types supported by ConfigMap
Key-value pairs defined directly
Key-value pairs created from files
1.4 Ways to create a ConfigMap
Command line
Resource manifest
1.5 How ConfigMap data is passed into a Pod
Environment variables
Volume mounts
1.6 Restrictions on using a ConfigMap
1. The ConfigMap must be created before the Pod that references it.
2. ConfigMaps are namespaced; only Pods in the same namespace can reference them.
2. Creating a ConfigMap on the Command Line
2.1 Create a ConfigMap on the command line
kubectl create configmap --help
kubectl create configmap nginx-config --from-literal=nginx_port=80 --from-literal=server_name=nginx.cookzhang.com
kubectl get cm
kubectl describe cm nginx-config
Key parameters:
kubectl create configmap nginx-config \ creates a ConfigMap named nginx-config
--from-literal=nginx_port=80 \ defines a key nginx_port with the value 80
--from-literal=server_name=nginx.cookzhang.com defines a key server_name with the value nginx.cookzhang.com
2.2 Referencing the ConfigMap as Pod environment variables
kubectl explain pod.spec.containers.env.valueFrom.configMapKeyRef
cat >nginx-cm.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
name: nginx-cm
spec:
containers:
- name: nginx-pod
image: nginx:1.14.0
ports:
- name: http
containerPort: 80
env:
- name: NGINX_PORT
valueFrom:
configMapKeyRef:
name: nginx-config
key: nginx_port
- name: SERVER_NAME
valueFrom:
configMapKeyRef:
name: nginx-config
key: server_name
EOF
kubectl create -f nginx-cm.yaml
Key parameters:
valueFrom: reference a value from elsewhere
configMapKeyRef: reference a ConfigMap key
name: nginx-config the name of the referenced ConfigMap
key: nginx_port the specific key inside the ConfigMap
2.3 Verify the Pod picked up the variables
[root@node1 ~/confimap]# kubectl exec -it nginx-cm /bin/bash
root@nginx-cm:~# echo ${NGINX_PORT}
80
root@nginx-cm:~# echo ${SERVER_NAME}
nginx.cookzhang.com
root@nginx-cm:~# printenv |egrep "NGINX_PORT|SERVER_NAME"
NGINX_PORT=80
SERVER_NAME=nginx.cookzhang.com
Note:
With environment-variable injection, editing the ConfigMap does not take effect inside a running Pod,
because the variables are only resolved when the Pod is created; once the Pod exists, its environment no longer changes.
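To see this for yourself, edit the ConfigMap and re-read the variable from the running Pod; it still shows the old value until the Pod is recreated:
kubectl edit cm nginx-config    #change nginx_port to some other value
kubectl exec nginx-cm -- printenv NGINX_PORT    #still prints 80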
3. Creating a ConfigMap from a File
3.1 Create the configuration file:
cat >www.conf <<EOF
server {
listen 80;
server_name www.cookzy.com;
location / {
root /usr/share/nginx/html/www;
index index.html index.htm;
}
}
EOF
Create the ConfigMap resource:
kubectl create configmap nginx-www --from-file=www.conf=./www.conf
View the cm resource
kubectl get cm
kubectl describe cm nginx-www
3.2 Write a Pod that mounts the ConfigMap as a volume
cat >nginx-cm-volume.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
name: nginx-cm
spec:
containers:
- name: nginx-pod
image: nginx:1.14.0
ports:
- name: http
containerPort: 80
volumeMounts:
- name: nginx-www
mountPath: /etc/nginx/conf.d/
volumes:
- name: nginx-www
configMap:
name: nginx-www
items:
- key: www.conf
path: www.conf
EOF
Key parameters:
volumes: #define the volumes
- name: nginx-www #volume name
configMap: #reference a ConfigMap resource
name: nginx-www #the ConfigMap named nginx-www
items: #select specific keys from the ConfigMap
- key: www.conf #the key www.conf inside the nginx-www ConfigMap
path: www.conf #the file name used for the mounted key
3.3 Test the result
1. Check the file inside the container
kubectl exec -it nginx-cm /bin/bash
cat /etc/nginx/conf.d/www.conf
2. Edit the ConfigMap dynamically
kubectl edit cm nginx-www
3. Enter the container again and check whether the configuration updates automatically
cat /etc/nginx/conf.d/www.conf
nginx -T
4. Storing File Contents Directly in the ConfigMap Manifest
4.1 Create the ConfigMap manifest:
cat >nginx-configMap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-config
namespace: default
data:
www.conf: |
server {
listen 80;
server_name www.cookzy.com;
location / {
root /usr/share/nginx/html/www;
index index.html index.htm;
}
}
blog.conf: |
server {
listen 80;
server_name blog.cookzy.com;
location / {
root /usr/share/nginx/html/blog;
index index.html index.htm;
}
}
EOF
4.2 Apply and inspect the manifest:
kubectl create -f nginx-configMap.yaml
kubectl get cm
kubectl describe cm nginx-config
4.3 Create a Pod manifest that references the ConfigMap
cat >nginx-cm-volume-all.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
name: nginx-cm
spec:
containers:
- name: nginx-pod
image: nginx:1.14.0
ports:
- name: http
containerPort: 80
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/conf.d/
volumes:
- name: nginx-config
configMap:
name: nginx-config
items:
- key: www.conf
path: www.conf
- key: blog.conf
path: blog.conf
EOF
4.4 Apply and check:
kubectl create -f nginx-cm-volume-all.yaml
kubectl get pod
kubectl describe pod nginx-cm
Enter the container and check:
kubectl exec -it nginx-cm /bin/bash
ls /etc/nginx/conf.d/
cat /etc/nginx/conf.d/www.conf
4.5 Test whether editing the ConfigMap takes effect
kubectl edit cm nginx-config
kubectl exec -it nginx-cm /bin/bash
ls /etc/nginx/conf.d/
cat /etc/nginx/conf.d/www.conf
Chapter 17: Kubernetes Dashboard
1. Official project address
2. Download the manifest
wget raw.githubusercontent.com/kubernetes/…
3. Modify the manifest
39 spec:
40 type: NodePort
41 ports:
42 - port: 443
43 targetPort: 8443
44 nodePort: 30000
4. Apply the manifest
kubectl create -f recommended.yaml
5. Create an admin account and apply it
cat > dashboard-admin.yaml<<'EOF'
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kubernetes-dashboard
EOF
kubectl create -f dashboard-admin.yaml
6. Check the resources and get the token
kubectl get pod -n kubernetes-dashboard -o wide
kubectl get svc -n kubernetes-dashboard
kubectl get secret -n kubernetes-dashboard
#get the token
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
7. Access in a browser
https://10.0.0.11:30000 (if Chrome refuses to open it, try Firefox as a workaround)
8. Using an Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: kubernetes-dashboard-ingress
namespace: kubernetes-dashboard
annotations:
kubernetes.io/ingress.class: "nginx"
ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: "dashboard.k8s.com"
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: kubernetes-dashboard
port:
number: 443
Access
9. Troubleshooting
CPU and Memory are not displayed
{"level":"error","msg":"Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)","time":"2020-03-03T09:57:00Z"}
Skipping metric because of error: Metric label not set
Cause:
The metrics-server monitoring component is not installed.
Fix: load the metrics-server image on every node
and apply the components.yaml file.
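A minimal sketch of that fix, assuming the metrics-server image tarball and components.yaml used in Chapter 24:
docker load < k8s-gcr-io-metrics-server-amd64v0-3-6.tar    #on every node
kubectl apply -f components.yaml                           #on the master
kubectl top node                                           #should show CPU/Memory once metrics-server is running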
Chapter 18: Using Harbor as a Private Registry
1. Clean up a previous Harbor installation
docker ps -a|grep "goharbor"|awk '{print "docker stop "$1}'
docker ps -a|grep "goharbor"|awk '{print "docker rm "$1}'
docker images|grep "goharbor"|awk '{print "docker rmi "$1":"$2}'
2. Extract Harbor and edit its configuration
#docker-compose
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
docker-compose --version
cd /opt/
tar zxf harbor-offline-installer-v1.9.0-rc1.tgz
cd harbor/
vim harbor.yml
hostname: 10.0.0.10
port: 8888
harbor_admin_password: 123456
data_volume: /data/harbor
3. Run the installer and access Harbor
./install.sh
Open in a browser: http://10.0.0.10:8888
4. Create a private project named k8s
Done in the Harbor web UI
5. Configure Docker to trust the registry and restart it
Note!!! Run this on all three servers!!!
cat >/etc/docker/daemon.json<<EOF
{
"registry-mirrors": ["https://ig2l319y.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"insecure-registries" : ["http://10.0.0.10:8888"]
}
EOF
systemctl restart docker
Note!!! After Docker is restarted on node1, Harbor stops working and must be restarted:
cd /opt/harbor
docker-compose stop
docker-compose start
6. Log Docker in to Harbor
docker login 10.0.0.10:8888
7. Convert the Docker login credentials to base64 so Kubernetes can use them
This only needs to be done on one node
[root@master ~]# cat /root/.docker/config.json|base64
ewoJImF1dGhzIjogewoJCSIxMC4wLjAuMTE6ODg4OCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZN
VEl6TkRVMiIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tl
ci1DbGllbnQvMTguMDkuOSAobGludXgpIgoJfQp9
8. Write the Secret manifest
[root@master ~/demo]# cat harbor-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: harbor-secret
data:
.dockerconfigjson: ewoJImF1dGhzIjogewoJCSIxMC4wLjAuMTE6ODg4OCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZNVEl6TkRVMiIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTguMDkuOSAobGludXgpIgoJfQp9
type: kubernetes.io/dockerconfigjson
9. Apply the Secret resource
kubectl delete -f harbor-secret.yaml
kubectl apply -f harbor-secret.yaml
kubectl get secrets
kubectl describe secrets harbor-secret
10. Retag the images and push them to Harbor
docker tag kubeguide/tomcat-app:v1 10.0.0.10:8888/k8s/tomcat-app:v1
docker tag mysql:5.7 10.0.0.10:8888/k8s/mysql:5.7
docker push 10.0.0.10:8888/k8s/tomcat-app:v1
docker push 10.0.0.10:8888/k8s/mysql:5.7
11. Modify the demo manifests
mysql-dp.yaml
spec:
imagePullSecrets:
- name: harbor-secret
tomcat-dp.yaml
spec:
imagePullSecrets:
- name: harbor-secret
12. Apply the manifests and check
kubectl apply -f ./
kubectl get pod
Chapter 19: Role-Based Access Control (RBAC)
1. RBAC Introduction
Official documentation:
https://kubernetes.io/zh/docs/reference/access-authn-authz/rbac/
What is RBAC:
RBAC is role-based access control.
What RBAC does:
It provides complete coverage of permissions for both resource and non-resource endpoints in the cluster.
RBAC permissions are configured with just a few API objects; like any other API object they can be managed with kubectl or the API, and they can be adjusted at runtime without restarting the API Server.
2. API Resource Objects
Official documentation:
https://kubernetes.io/zh/docs/reference/using-api/#api-versioning
2.1 API overview
The REST API is the fundamental fabric of Kubernetes.
All operations, all communication between components, and all external user commands are REST API calls handled by the API server.
Consequently, Kubernetes treats everything as an API object, and each object has a corresponding definition in the API.
2.2 API versioning
The JSON and Protobuf serialization schemas follow the same guidelines for schema changes; the description below covers both formats.
API versioning and software versioning are only indirectly related; the API and Release Versioning proposal describes the relationship between them.
Different API versions indicate different levels of stability and support. The criteria for each level are described in the API Changes documentation.
Version levels:
Alpha:
- The version name contains alpha (for example, v1alpha1)
- The software may contain bugs; enabling a feature may expose bugs, and some features may be disabled by default
- Support for a feature may be dropped at any time without notice
- The API may change in incompatible ways in a later release without notice
- Recommended only for short-lived testing clusters, because of the increased risk of bugs and the lack of long-term support
Beta:
The version name contains beta (for example, v2beta3)
The software is well tested; enabling the feature is considered safe, and features are enabled by default
Details of the feature may change, but the feature itself will be supported long term
The schema and/or semantics of objects may change in incompatible ways in a subsequent beta or stable release. When that happens, migration instructions are provided. Schema changes may require deleting, editing, and re-creating API objects; the editing process may not be straightforward, and applications that rely on the feature may need downtime during migration
This software is not recommended for production use. Subsequent releases may introduce incompatible changes. If you have multiple clusters that can be upgraded independently, you may relax this restriction
Note: please try beta features and provide feedback; once a feature leaves beta, it is unlikely to change much further
Stable:
The version name is vX, where X is an integer
Stable feature versions appear in released software for many subsequent versions
2.3 API groups
API groups make it easier to extend the Kubernetes API. The group appears in the REST path and in the apiVersion field of serialized objects.
A few of the groups in Kubernetes:
- The core group uses the REST path /api/v1. The core group is not written as part of the apiVersion field, for example apiVersion: v1
- Named groups live under the REST path /apis/GROUP/VERSION and use apiVersion: GROUP/VERSION (for example, apiVersion: batch/v1). The full list of API groups is in the Kubernetes API reference
Official reference for all API groups:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.22/#-strongapi-groups-strong-
2.4 Viewing API group information
We can inspect the API groups with the following kubectl commands:
List all API groups:
kubectl get --raw /
View a specific group:
#by default the returned JSON is not pretty-printed
kubectl get --raw /api/v1
#jq can be installed to format the returned JSON
kubectl get --raw /api/v1|jq|grep '"name"'
#view the apps group
kubectl get --raw /apis/apps/v1|jq|grep '"name"'
3. RBAC API Resource Objects
Every resource object in Kubernetes can be operated on. Much like rows in a database, resource objects support CRUD: create, read, update, and delete.
For example, the following resources:
Pods
Secrets
Deployment
Replicasets
Statefulsets
Namespaces
The following verbs can be applied to the resources above (a quick check with kubectl auth can-i follows the list):
create
watch
list
delete
get
edit
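A quick way to test which of these verbs a given identity is allowed to use is kubectl auth can-i, for example (oldboy is the user created later in this chapter):
kubectl auth can-i list pods                                      #as the current user
kubectl auth can-i delete deployments --as oldboy -n kube-system  #as the user oldboy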
4. Basic RBAC Concepts
We now know that resources can be operated on, but a few more concepts are needed before we can create RBAC resources in Kubernetes.
Rule
A role is a collection of permissions (rules).
Role and ClusterRole
Role
The rules defined by a Role apply to a single namespace; a Role is tied to a namespace.
When creating a Role, you must specify the namespace it belongs to.
ClusterRole
In contrast, a ClusterRole is a cluster-scoped resource.
The two kinds have different names (Role and ClusterRole) because a Kubernetes object must be either namespaced or cluster-scoped, never both.
A ClusterRole has several uses. You can use it to:
1. define permissions on namespaced objects and have them granted within individual namespaces;
2. define permissions on namespaced objects and have them granted across all namespaces;
3. define permissions on cluster-scoped resources.
Summary:
If you want to define a role within a namespace, use a Role;
if you want to define a cluster-wide role, use a ClusterRole.
RoleBinding and ClusterRoleBinding
Subject
Kubernetes defines three kinds of subjects: User, Group, and ServiceAccount.
User Account and ServiceAccount
User Account:
A user account; users are managed by an external, independent service and are not part of the Kubernetes API.
ServiceAccount:
Also an account, but not for human users of the cluster; it is meant for processes running inside Pods and gives them the identity they need to call the cluster API.
RoleBinding and ClusterRoleBinding
A role binding grants the permissions defined in a role to a user or a set of users.
It contains a list of subjects (users, groups, or service accounts) and a reference to the role being granted.
A RoleBinding grants permissions within a specific namespace,
while a ClusterRoleBinding grants that access cluster-wide.
RBAC relationship diagram
5. RBAC Examples
5.0 Create a ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
name: oldboy
namespace: default
5.1 Create a Role
Create a Role in the default namespace that grants access to pods
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: pod-reader
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
Parameter explanation:
rules:
- apiGroups: [""] #"" indicates the core API group
resources: ["pods"] #the resource objects to operate on
verbs: ["get", "watch", "list"] #the verbs allowed on those resources
5.2 Create a RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: read-pods
namespace: default
subjects:
- kind: User
name: oldboy
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: pod-reader
apiGroup: rbac.authorization.k8s.io
Parameter explanation:
subjects:
- kind: User
name: oldboy #bind the user oldboy created above
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: pod-reader #reference the pod-reader Role
apiGroup: rbac.authorization.k8s.io
5.3 Create a ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
# "namespace" is omitted because ClusterRoles are not namespaced
name: secret-reader
rules:
- apiGroups: [""]
# at the HTTP level, the resource name for accessing Secret objects is "secrets"
resources: ["secrets"]
verbs: ["get", "watch", "list"]
5.4 Create a ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
# this ClusterRoleBinding allows anyone in the "manager" group to read secrets in any namespace
kind: ClusterRoleBinding
metadata:
name: read-secrets-global
subjects:
- kind: Group
name: manager # 'name' is case-sensitive
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: secret-reader
apiGroup: rbac.authorization.k8s.io
5.5 Create an ordinary user that can only access one namespace
Create the user credentials
openssl genrsa -out /etc/kubernetes/pki/oldboy.key 2048
openssl req -new -key /etc/kubernetes/pki/oldboy.key -out /etc/kubernetes/pki/oldboy.csr -subj "/CN=oldboy/O=oldboy"
openssl x509 -req -in /etc/kubernetes/pki/oldboy.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out /etc/kubernetes/pki/oldboy.crt -days 300
kubectl config set-credentials oldboy --client-certificate=/etc/kubernetes/pki/oldboy.crt --client-key=/etc/kubernetes/pki/oldboy.key
kubectl config set-context oldboy-context --cluster=kubernetes --namespace=kube-system --user=oldboy
kubectl get pods --context=oldboy-context
Create the Role
cat > oldboy-role.yaml << 'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: oldboy-role
namespace: kube-system
rules:
- apiGroups: ["", "apps"]
resources: ["deployments", "replicasets", "pods"]
verbs: ["*"]
EOF
kubectl apply -f oldboy-role.yaml
Create the RoleBinding
cat > oldboy-rolebinding.yml << 'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: oldboy-rolebinding
namespace: kube-system
subjects:
- kind: User
name: oldboy
apiGroup: ""
roleRef:
kind: Role
name: oldboy-role
apiGroup: rbac.authorization.k8s.io
EOF
kubectl apply -f oldboy-rolebinding.yml
Test access
[root@master ~]# kubectl get pods --context=oldboy-context
NAME READY STATUS RESTARTS AGE
coredns-6d56c8448f-jfhwc 1/1 Running 8 7d4h
coredns-6d56c8448f-vd77d 1/1 Running 8 7d4h
etcd-master 1/1 Running 8 7d4h
kube-apiserver-master 1/1 Running 8 7d4h
kube-controller-manager-master 1/1 Running 8 6d21h
kube-flannel-ds-kg9v7 1/1 Running 9 7d3h
kube-flannel-ds-rl6p5 1/1 Running 11 7d3h
kube-flannel-ds-x9j5t 1/1 Running 9 7d3h
kube-proxy-7xx8g 1/1 Running 9 7d3h
kube-proxy-87dsd 1/1 Running 9 7d4h
kube-proxy-krkt9 1/1 Running 8 7d3h
kube-scheduler-master 1/1 Running 8 6d21h
metrics-server-5dbcdf9f85-2tfnn 1/1 Running 0 82m
Access a different namespace
[root@master ~]# kubectl --context=oldboy-context get pods --namespace=default
Error from server (Forbidden): pods is forbidden: User "oldboy" cannot list resource "pods" in API group "" in the namespace "default"
5.6 A ServiceAccount that can only access one namespace
Create the ServiceAccount
cat > oldboy-sa.yml << 'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
name: oldboy-sa
namespace: default
EOF
Create the Role
cat > oldboy-sa-role.yml << 'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: oldboy-sa-role
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["get", "list", "watch", "delete"]
EOF
kubectl apply -f oldboy-sa-role.yml
Create the RoleBinding
cat > oldboy-sa-rolebinding.yml << 'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: read-pods
namespace: default
subjects:
- kind: ServiceAccount
name: oldboy-sa
namespace: default
roleRef:
kind: Role
name: oldboy-sa-role
apiGroup: rbac.authorization.k8s.io
EOF
kubectl apply -f oldboy-sa-rolebinding.yml
Get the ServiceAccount's token
kubectl -n default describe secret $(kubectl -n default get secret | grep oldboy-sa | awk '{print $1}')
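A rough sketch of using that token to verify the restricted access (the API server address below is an assumption; adjust it for your cluster):
TOKEN=$(kubectl -n default get secret $(kubectl -n default get secret | grep oldboy-sa | awk '{print $1}') -o jsonpath='{.data.token}' | base64 -d)
kubectl --server=https://10.0.0.10:6443 --token=$TOKEN --insecure-skip-tls-verify get pods -n default      #allowed
kubectl --server=https://10.0.0.10:6443 --token=$TOKEN --insecure-skip-tls-verify get pods -n kube-system  #should be forbidden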
Note!!!
On a cluster installed with kubeadm, the certificates are valid for only 1 year by default.
Chapter 20: tomcat-app Experiment
Project requirements
Images:
10.0.0.10:8888/k8s/tomcat-app:v1
10.0.0.10:8888/k8s/mysql:5.7
Ports:
tomcat-app 8080
mysql 3306
service:
mysql-svc
ingress:
tomcat.k8s.com
Pod controllers:
tomcat Deployment with 2 replicas
mysql Deployment with 1 replica
Environment variable tomcat uses to reach MySQL:
MYSQL_SERVICE_HOST
Requirements:
0. The namespace is tomcat
kubectl create namespace tomcat
1. tomcat connects to mysql using the Service name
2. tomcat is exposed through an Ingress at tomcat.k8s.com
3. mysql uses an NFS-backed PV and PVC for data storage
4. the tomcat configuration file is managed with a ConfigMap
/usr/local/tomcat/conf
kubectl create configmap conf --from-file=server.xml=./server.xml -n tomcat
5. Create a user account tomcat-admin with full permissions on all resources in the tomcat namespace, for node-side control
6. Create a ServiceAccount tomcat-sa with full permissions on all resources in the tomcat namespace, for viewing in the dashboard
7. All images are pulled from Harbor
Note: for a file mounted from a ConfigMap, first extract the file (server.xml) from the image, as sketched below.
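One way to extract server.xml from the image (tmp-tomcat is just a throwaway container name):
docker run -d --name tmp-tomcat 10.0.0.10:8888/k8s/tomcat-app:v1
docker cp tmp-tomcat:/usr/local/tomcat/conf/server.xml ./server.xml
docker rm -f tmp-tomcat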
configmap tomcat-cm.sh
#!/bin/bash
kubectl create configmap tomcat-conf --from-file=server.xml=./server.xml -n tomcat
namespace-secret.yaml
apiVersion: v1
kind: Namespace
metadata:
name: tomcat
---
apiVersion: v1
kind: Secret
metadata:
name: tomcat-secret
namespace: tomcat
data:
.dockerconfigjson: ewoJImF1dGhzIjogewoJCSIxMC4wLjAuMTA6ODg4OCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZNVEl6TkRVMiIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTkuMDMuMTUgKGxpbnV4KSIKCX0KfQ==
type: kubernetes.io/dockerconfigjson
mysql-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: mysql-pv
namespace: tomcat
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
storageClassName: nfs
nfs:
path: /data/nfs-volume/mysql
server: 10.0.0.10
mysql-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pvc
namespace: tomcat
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: nfs
mysql-dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-dp
namespace: tomcat
labels:
app: mysql
spec:
replicas: 1
selector:
matchLabels:
app: mysql-dp
template:
metadata:
name: mysql-dp
labels:
app: mysql-dp
spec:
volumes:
- name: mysql-pvc
persistentVolumeClaim:
claimName: mysql-pvc
imagePullSecrets:
- name: tomcat-secret
containers:
- name: mysql-dp
image: 10.0.0.10:8888/k8s/mysql:5.7
imagePullPolicy: IfNotPresent
ports:
- name: mysql-port
containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: "123456"
args:
- --character-set-server=utf8
- --collation-server=utf8_bin
volumeMounts:
- name: mysql-pvc
mountPath: /var/lib/mysql
mysql-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: mysql-svc
namespace: tomcat
spec:
selector:
app: mysql-dp
ports:
- name: mysql-port
port: 3306
protocol: TCP
targetPort: 3306
tomcat-dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: tomcat-dp
namespace: tomcat
labels:
app: tomcat
spec:
replicas: 1
selector:
matchLabels:
app: tomcat-dp
template:
metadata:
name: tomcat-dp
labels:
app: tomcat-dp
spec:
imagePullSecrets:
- name: tomcat-secret
containers:
- name: tomcat-dp
image: 10.0.0.10:8888/k8s/tomcat-app:v1
imagePullPolicy: IfNotPresent
ports:
- name: tomcat-port
containerPort: 8080
env:
- name: MYSQL_SERVICE_HOST
value: mysql-svc
volumeMounts:
- name: tomcat-conf
mountPath: /usr/local/tomcat/conf/server.xml
subPath: server.xml
volumes:
- name: tomcat-conf
configMap:
name: tomcat-conf
items:
- key: server.xml
path: server.xml
tomcat-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: tomcat-svc
namespace: tomcat
spec:
selector:
app: tomcat-dp
ports:
- protocol: TCP
port: 8080
targetPort: 8080
tomcat-ingress.yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: tomcat-ingress
namespace: tomcat
spec:
rules:
- host: tomcat.k8s.com
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: tomcat-svc
port:
number: 8080
Create the user: tomcat-admin.sh
#!/bin/bash
openssl genrsa -out /etc/kubernetes/pki/tomcat.key 2048
openssl req -new -key /etc/kubernetes/pki/tomcat.key -out /etc/kubernetes/pki/tomcat.csr -subj "/CN=tomcat/O=tomcat"
openssl x509 -req -in /etc/kubernetes/pki/tomcat.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out /etc/kubernetes/pki/tomcat.crt -days 300
kubectl config set-credentials tomcat --client-certificate=/etc/kubernetes/pki/tomcat.crt --client-key=/etc/kubernetes/pki/tomcat.key
kubectl config set-context tomcat-context --cluster=kubernetes --namespace=tomcat --user=tomcat
tomcat-sa.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: tomcat-role
namespace: tomcat
rules:
- apiGroups: ["", "apps"]
resources: ["*"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: tomcat-rolebinding
namespace: tomcat
subjects:
- kind: User
name: tomcat
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: tomcat-role
apiGroup: rbac.authorization.k8s.io
Chapter 21: Deploying Elasticsearch
Configure the NFS exports file
[root@master ~/ELK]# cat /etc/exports
/data/nfs-volume/ 10.0.0.0/24(rw,async,no_root_squash,no_all_squash)
Create the data directory and set its ownership (the Elasticsearch image runs as UID 1000)
mkdir /data/nfs-volume/es -p
chown -R 1000:1000 /data/nfs-volume/es
es-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: es-secret
namespace: elk
data:
.dockerconfigjson: ewoJImF1dGhzIjogewoJCSIxMC4wLjAuMTA6ODg4OCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZNVEl6TkRVMiIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTkuMDMuMTUgKGxpbnV4KSIKCX0KfQ==
type: kubernetes.io/dockerconfigjson
es-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: es-pv
namespace: elk
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
storageClassName: nfs
nfs:
path: /data/nfs-volume/es
server: 10.0.0.10
es-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: es-pvc
namespace: elk
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: nfs
es-dp.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: es-dp
namespace: elk
labels:
app: es
spec:
replicas: 1
selector:
matchLabels:
app: es-dp
template:
metadata:
name: es-dp
labels:
app: es-dp
spec:
volumes:
- name: es-pvc
persistentVolumeClaim:
claimName: es-pvc
imagePullSecrets:
- name: es-secret
containers:
- name: es-dp
image: 10.0.0.10:8888/k8s/elasticsearch:7.9.3
imagePullPolicy: IfNotPresent
ports:
- name: es-port
containerPort: 9200
env:
- name: discovery.type
value: "single-node"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
volumeMounts:
- name: es-pvc
mountPath: /usr/share/elasticsearch/data
resources:
requests:
memory: 500Mi
cpu: 0.5
limits:
memory: 1Gi
cpu: 1
readinessProbe:
httpGet:
path: /
port: 9200
initialDelaySeconds: 60
periodSeconds: 5
livenessProbe:
httpGet:
path: /
port: 9200
initialDelaySeconds: 60
periodSeconds: 5
es-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: es-svc
namespace: elk
spec:
selector:
app: es-dp
ports:
- name: es-port
port: 9200
protocol: TCP
targetPort: 9200
es-ingress.yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: es-ingress
namespace: elk
spec:
rules:
- host: es.k8s.com
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: es-svc
port:
number: 9200
Chapter 22: Deploying Kibana
kibana-dp.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana-dp
namespace: elk
labels:
app: kibana
spec:
replicas: 1
selector:
matchLabels:
app: kibana-dp
template:
metadata:
name: kibana-dp
labels:
app: kibana-dp
spec:
nodeName: node1
imagePullSecrets:
- name: es-secret
containers:
- name: kibana-dp
image: 10.0.0.10:8888/k8s/kibana:7.9.3
imagePullPolicy: IfNotPresent
ports:
- name: kibana-port
containerPort: 5601
env:
- name: ELASTICSEARCH_HOSTS
value: "http://es-svc:9200"
resources:
requests:
memory: 500Mi
cpu: 0.5
limits:
memory: 2Gi
cpu: 1
readinessProbe:
httpGet:
path: /
port: 5601
initialDelaySeconds: 180
periodSeconds: 5
#livenessProbe:
# httpGet:
# path: /
# port: 5601
# initialDelaySeconds: 180
# periodSeconds: 5
kibana-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: kibana-svc
namespace: elk
spec:
selector:
app: kibana-dp
ports:
- name: kibana-port
port: 5601
protocol: TCP
targetPort: 5601
kibana-ingress.yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: kibana-ingress
namespace: elk
spec:
rules:
- host: kibana.k8s.com
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: kibana-svc
port:
number: 5601
Chapter 23: Log Collection
Common files
default-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: default-secret
data:
.dockerconfigjson: ewoJImF1dGhzIjogewoJCSIxMC4wLjAuMTA6ODg4OCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZNVEl6TkRVMiIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTkuMDMuMTUgKGxpbnV4KSIKCX0KfQ==
type: kubernetes.io/dockerconfigjson
filebeat-cm.sh
#!/bin/bash
kubectl create configmap filebeat-conf --from-file=filebeat.yml=./filebeat.yml
filebeat.yml configuration file
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/tomcat/localhost_access_log.*.txt
json.keys_under_root: true
json.overwrite_keys: true
output.elasticsearch:
hosts: ["es-svc.elk:9200"]
index: "tomcat-access-%{[agent.version]}-%{+yyyy.MM}"
setup.ilm.enabled: false
setup.template.enabled: false
Log collection pattern 1
- Run one filebeat on each host, with every service mounting its own log directory, i.e. multiple directories per host.
Log collection pattern 2
- Run one filebeat on each host, with every service writing its logs into one shared directory.
- Drawback:
- logs have to be cleaned up manually on a schedule
- Advantage:
- the Deployment files do not need to be changed; they are written exactly as before
filebeat-dp.yaml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: filebeat-dp
labels:
app: filebeat
spec:
selector:
matchLabels:
app: filebeat-dp
template:
metadata:
name: filebeat-dp
labels:
app: filebeat-dp
spec:
imagePullSecrets:
- name: default-secret
containers:
- name: filebeat-dp
image: 10.0.0.10:8888/k8s/filebeat:7.9.3
imagePullPolicy: IfNotPresent
volumeMounts:
- name: filebeat-conf
mountPath: /usr/share/filebeat/filebeat.yml
subPath: filebeat.yml
- name: tomcat-log
mountPath: /var/log/tomcat/
volumes:
- name: filebeat-conf
configMap:
name: filebeat-conf
items:
- key: filebeat.yml
path: filebeat.yml
- name: tomcat-log
hostPath:
path: /var/log/tomcat/
type: DirectoryOrCreate
Log collection pattern 3 (sidecar)
- Each Pod runs its own filebeat; the Deployment must be modified so the Pod runs two containers.
- Drawback:
- the Deployment files must be modified
- Advantage:
- no need to delete log files periodically; they are destroyed automatically when the Pod is deleted
tomcat-filebeat-dp.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: tomcat-dp
namespace: tomcat
labels:
app: tomcat
spec:
replicas: 2
selector:
matchLabels:
app: tomcat-dp
template:
metadata:
name: tomcat-dp
labels:
app: tomcat-dp
spec:
imagePullSecrets:
- name: tomcat-secret
containers:
#tomcat container
- name: tomcat-dp
image: 10.0.0.10:8888/k8s/tomcat-app:v1
imagePullPolicy: IfNotPresent
ports:
- name: tomcat-port
containerPort: 8080
env:
- name: MYSQL_SERVICE_HOST
value: mysql-svc
volumeMounts:
- name: tomcat-conf
mountPath: /usr/local/tomcat/conf/server.xml
subPath: server.xml
#mount the shared emptyDir log directory
- name: tomcat-log
mountPath: /usr/local/tomcat/logs/
#filebeat container
- name: filebeat-dp
image: 10.0.0.10:8888/k8s/filebeat:7.9.3
imagePullPolicy: IfNotPresent
volumeMounts:
- name: filebeat-conf
mountPath: /usr/share/filebeat/filebeat.yml
subPath: filebeat.yml
#mount the same emptyDir log directory
- name: tomcat-log
mountPath: /var/log/tomcat/
volumes:
- name: tomcat-conf
configMap:
name: tomcat-conf
items:
- key: server.xml
path: server.xml
- name: filebeat-conf
configMap:
name: filebeat-conf
items:
- key: filebeat.yml
path: filebeat.yml
- name: tomcat-log
emptyDir: {}
Log collection pattern 4
Collect all of the Docker log files directly from the host; a sketch of a filebeat input for this pattern follows.
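A minimal filebeat.yml sketch for pattern 4, assuming filebeat runs as a DaemonSet with /var/lib/docker/containers mounted from the host (the index name is an arbitrary example, following the format used earlier in this chapter):
filebeat.inputs:
- type: container
  paths:
    - /var/lib/docker/containers/*/*.log
output.elasticsearch:
  hosts: ["es-svc.elk:9200"]
  index: "docker-logs-%{[agent.version]}-%{+yyyy.MM}"
setup.ilm.enabled: false
setup.template.enabled: false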
Chapter 24: Prometheus
1. Official site
https://github.com/prometheus/prometheus
2. Components needed to monitor Kubernetes
Use metrics-server to collect data for in-cluster consumers such as kubectl top, the HPA, and the scheduler
Use prometheus-operator to deploy Prometheus and store the monitoring data
Use kube-state-metrics to collect data about the cluster's resource objects
Use node_exporter to collect data from each node in the cluster
Use Prometheus to collect metrics from the apiserver, scheduler, controller-manager, and kubelet
Use Alertmanager for monitoring alerts
Use Grafana for data visualization
metrics-server focuses on implementing the resource metrics API: CPU, file descriptors, memory, request latency, and similar metrics.
kube-state-metrics focuses on workload-related metadata: Deployments, Pods, replica status, and so on.
3. Install Prometheus
Load the image
docker load < prom-prometheusv2_2_1.tar
Create the namespace
kubectl create namespace prom
Create the resources
cd prometheus
kubectl create -f ./
Check the resources
kubectl -n prom get all -o wide
View in a web browser
http://10.0.0.11:30090/targets
4. Install metrics-server
Load the images
docker load < k8s-gcr-io-addon-resizer1_8_6.tar
docker load < k8s-gcr-io-metrics-server-amd64v0-3-6.tar
Create the resources
kubectl create -f ./
Check
kubectl top node
kubectl top pod
5. Install node-exporter
Load the image
docker load < prom-node-exporterv0_15_2.tar
Create the resources
kubectl create -f ./
Check the resources
kubectl -n prom get pod -o wide
kubectl -n prom get svc
View in a browser
http://10.0.0.12:9100/metrics
http://10.0.0.13:9100/metrics
6. Install kube-state-metrics
Load the image
docker load < gcr-io-google_containers-kube-state-metrics-amd64v1-3-1.tar
Create the resources
kubectl create -f ./
Check
kubectl -n prom get pod
kubectl -n prom get svc
curl 10.1.232.109:8080/metrics
7. Install Grafana and k8s-prometheus-adapter
Load the images
docker load < directxman12-k8s-prometheus-adapter-amd64-latest.tar
docker load < k8s-gcr-io-heapster-grafana-amd64v5_0_4.tar
Modify the Grafana manifest
1 apiVersion: apps/v1
2 kind: Deployment
3 metadata:
4 name: monitoring-grafana
5 namespace: prom
6 spec:
7 selector:
8 matchLabels:
9 k8s-app: grafana
10 replicas: 1
11 template:
Create the resources
cd k8s-prometheus-adapter
kubectl create -f ./
Check the created resources
kubectl -n prom get pod -o wide
kubectl -n prom get svc
View in a browser
http://10.0.0.11:32725
Import a dashboard
https://grafana.com/grafana/dashboards/10000
Prometheus query expressions
sum by (name) (rate (container_cpu_usage_seconds_total{image!=""}[1m]))
container_cpu_usage_seconds_total{name =~ "^k8s_POD.*",namespace="default"}
Label matching operators:
=~ matches the regular expression
= exact match
!= not equal
!~ does not match the regular expression
Query:
sum (container_memory_working_set_bytes{image!="",name=~"^k8s_.*",kubernetes_io_hostname=~".*",namespace="default"}) by (pod)
Reading:
sum (metric_name{label1!="value1",label2!="value2"}) by (grouping_label)
After adding a namespace label, update the Grafana chart query:
sum (container_memory_working_set_bytes{image!="",name=~"^k8s_.*",kubernetes_io_hostname=~"^$Node$",namespace=~"^$Namespace$"}) by (pod)
Chapter xx: Declarative Management with kustomize
The management file is named kustomization.yaml
List the manifests to be applied, in order
Apply: kubectl apply -k ./
Delete: kubectl delete -k ./
If ConfigMap mounts are involved, it is better to create the namespace from the command line first
resources:
- tomcat-secret.yml
- mysql-pv.yml
- mysql-pvc.yaml
- mysql-dp.yml
- mysql-svc.yml
- tomcat-dp.yml
- tomcat-svc.yml
- tomcat_ingress.yml
- tomcat-sa.yml
- tomcat-admin.yml
Chapter xx: Solving Kernel Issues for Kubernetes 1.8 and Above
Online kernel upgrade:
1. Check the kernel version
uname -r
2. Import the ELRepo repository's public key
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
3. Install the ELRepo yum repository
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
4. Enable the elrepo repository and install the latest mainline kernel
yum --enablerepo=elrepo-kernel install kernel-ml -y
5. List the available kernels and set the kernel boot order
sudo awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
6. With multiple kernels installed, make the newest one the default with grub2-set-default 0 and regenerate the grub configuration
grub2-set-default 0 #the first kernel on the boot menu becomes the default
grub2-mkconfig -o /boot/grub2/grub.cfg #regenerate the kernel boot configuration
7. Reboot the system and verify
reboot
uname -r
8. Remove the old kernel
yum -y remove kernel kernel-tools
Offline kernel upgrade:
1. Download the kernel RPM packages from the official site: https://elrepo.org/linux/kernel/el7/x86_64/RPMS/
Download the latest version and upload it to the server
2. Install the packages
rpm -ivh kernel-lt-5.4.95-1.el7.elrepo.x86_64.rpm kernel-lt-devel-5.4.95-1.el7.elrepo.x86_64.rpm
3. List the available kernels and set the kernel boot order
sudo awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
4. Generate the grub configuration file
grub2-set-default 0 && grub2-mkconfig -o /boot/grub2/grub.cfg
5. Reboot the system and verify
reboot
uname -r
6. Remove the old kernel
yum -y remove kernel kernel-tools