Based on the official Kubernetes example.
Build and deploy a simple (non-production-ready), multi-tier web application using Kubernetes and Docker.
The example consists of the following components: a single-instance Redis to store guestbook entries, and multiple web frontend instances.
Steps:
- Start up a Redis leader.
- Start up two Redis followers.
- Start up the guestbook frontend.
- Expose and view the frontend Service.
- Clean up.
Start the Redis database cluster
Create the Redis leader application with a Deployment
# https://k8s.io/examples/application/guestbook/redis-leader-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-leader
  labels:
    app: redis
    role: leader
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
        role: leader
        tier: backend
    spec:
      containers:
      - name: leader
        image: "docker.io/redis:6.0.5"
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
kubectl apply -f redis-leader-deployment.yaml
kubectl get pods
kubectl logs -f deployment/redis-leader
Create the Redis leader Service
# https://k8s.io/examples/application/guestbook/redis-leader-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-leader
  labels:
    app: redis
    role: leader
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: leader
    tier: backend
kubectl apply -f redis-leader-service.yaml
kubectl get service
Create the Redis follower application with a Deployment
# https://k8s.io/examples/application/guestbook/redis-follower-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-follower
  labels:
    app: redis
    role: follower
    tier: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
        role: follower
        tier: backend
    spec:
      containers:
      - name: follower
        image: gcr.io/google_samples/gb-redis-follower:v2
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
kubectl apply -f redis-follower-deployment.yaml
kubectl get pods
Create the Redis follower Service
# https://k8s.io/examples/application/guestbook/redis-follower-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-follower
  labels:
    app: redis
    role: follower
    tier: backend
spec:
  ports:
    # the port that this service should serve on
  - port: 6379
  selector:
    app: redis
    role: follower
    tier: backend
kubectl apply -f redis-follower-service.yaml
kubectl get service
Guestbook deployment
Deploy the guestbook frontend with a Deployment
# https://k8s.io/examples/application/guestbook/frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v5
        env:
        - name: GET_HOSTS_FROM
          value: "dns"
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
kubectl apply -f frontend-deployment.yaml
kubectl get pods -l app=guestbook -l tier=frontend
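The frontend locates the Redis backends through the GET_HOSTS_FROM environment variable. With "dns" (as above) it resolves the Service names redis-leader and redis-follower via cluster DNS. The official example also describes an "env" mode that reads the Service environment variables kubelet injects into Pods instead; that mode only works if the Redis Services already existed when the frontend Pods started. A sketch of that alternative:

```yaml
        env:
        - name: GET_HOSTS_FROM
          # "env" makes the frontend read REDIS_LEADER_SERVICE_HOST etc.,
          # injected only for Services created before this Pod started
          value: "env"
```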
Deploy the guestbook frontend Service
If your local environment has no LoadBalancer, change the Service type to NodePort; otherwise the Service keeps the default ClusterIP type and you must use port forwarding to reach it.
# https://k8s.io/examples/application/guestbook/frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
    # the port that this service should serve on
  - port: 80
  selector:
    app: guestbook
    tier: frontend
kubectl apply -f frontend-service.yaml
kubectl get services
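For the NodePort route mentioned above, a minimal sketch of the modified frontend Service follows; the nodePort value 30080 is an arbitrary example from the default 30000-32767 range, not part of the official manifest:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080   # arbitrary example; must fall in the node-port range
  selector:
    app: guestbook
    tier: frontend
```

After re-applying this manifest, the frontend should be reachable at http://<node-ip>:30080 without port forwarding.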
If the configuration above was left unchanged (neither NodePort nor LoadBalancer), use port forwarding to access the service:
kubectl port-forward svc/frontend 8080:80
Load http://localhost:8080 in your browser to view the guestbook.
Scaling the guestbook application
kubectl scale deployment frontend --replicas=5
kubectl scale deployment frontend --replicas=2
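kubectl scale changes the replica count imperatively, so the running state drifts from frontend-deployment.yaml. The same result can be achieved declaratively by editing the manifest and re-applying it, which keeps the file authoritative. A sketch of the edit:

```yaml
# frontend-deployment.yaml (excerpt)
spec:
  replicas: 5   # was 3; re-apply with: kubectl apply -f frontend-deployment.yaml
```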
Cleaning up the guestbook environment
kubectl delete deployment -l app=redis
kubectl delete service -l app=redis
kubectl delete deployment frontend
kubectl delete service frontend