9. Getting Started with K8s: Publishing an Image and Deploying an Application

Let's build a simple application, roughly with this structure:
  • index.html (a simple HTML page)
  • Dockerfile (Docker image definition, built on nginx:alpine)
  • default.conf (nginx configuration)
  • deploy.yml (rules for deploying to k8s)
index.html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>k8s app</title>
</head>
<body>
    <h2>Hello</h2>
    <p>An application running in k8s</p>
</body>
</html>
Dockerfile
FROM nginx:alpine

LABEL maintainer="Heartide Developers <heywalkerman@gmail.com>" version="1.0" license="MIT"

# drop our server config into nginx's conf.d directory
ADD default.conf /etc/nginx/conf.d/

# the page itself, matching the root directive in default.conf
COPY index.html /var/html/index.html

EXPOSE 80
default.conf
server {
    listen       80;
    listen  [::]:80;
    server_name  _;
 
    location / {
        root   /var/html;
        index  index.html index.htm;
    }
 
    error_page 400 403 404 500 501 502 503 504 = @errorPage;
 
    location @errorPage {
        add_header Content-Type 'text/html;charset=utf-8';
        return 200 "# page error";
    }
}
Before publishing, run it locally once.
# run in the project directory
$ docker build -t app .
 
Sending build context to Docker daemon  4.096kB
Step 1/5 : FROM nginx:alpine
alpine: Pulling from library/nginx
df20fa9351a1: Pull complete
3db268b1fe8f: Pull complete
f682f0660e7a: Pull complete
7eb0e8838bc0: Pull complete
e8bf1226cc17: Pull complete
Digest: sha256:a97eb9ecc708c8aa715ccfb5e9338f5456e4b65575daf304f108301f3b497314
Status: Downloaded newer image for nginx:alpine
 ---> 6f715d38cfe0
Step 2/5 : LABEL maintainer="Heartide Developers <heywalkerman@gmail.com>" version="1.0" license="MIT"
 ---> Running in f59aabaaf304
Removing intermediate container f59aabaaf304
 ---> 012f9b6f16a5
Step 3/5 : ADD default.conf /etc/nginx/conf.d/
 ---> 411644b457c6
Step 4/5 : COPY index.html /var/html/index.html
 ---> a7bb6d4c5ec2
Step 5/5 : EXPOSE 80
 ---> Running in 808126e5f6dc
Removing intermediate container 808126e5f6dc
 ---> 9f5c64d4aace
Successfully built 9f5c64d4aace
Successfully tagged app:latest
 
 
# then run it
$ docker run -d --name=app -p 8980:80 app
 
a4661d97a4d1fdcaf4e9e0efe4405d05ed4d87948f6d19609a51698b8f8b4d5e
 
# open http://localhost:8980 in a browser; if the page renders correctly, the image is good
Create a repository in the Alibaba Cloud Container Registry

cr.console.aliyun.com/

I set the repository to "Public", which avoids authentication when Docker pulls the image. This is just a test, so I won't worry about it here; for a production environment, choose Private.
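If you do choose a private repository, the cluster needs pull credentials. A minimal sketch, assuming placeholder credentials and a secret name `aliyun-registry` (both are illustrative, not from the original article):

```yaml
# Create the pull secret once on the cluster:
#   kubectl create secret docker-registry aliyun-registry \
#     --docker-server=registry.cn-shenzhen.aliyuncs.com \
#     --docker-username=xxx --docker-password=xxx
# Then reference it in the Pod spec of the Deployment:
spec:
  imagePullSecrets:
    - name: aliyun-registry   # hypothetical secret name
  containers:
    - name: k8s-app
      image: registry.cn-shenzhen.aliyuncs.com/walkerman/k8s-app:latest
```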


Choose "local repository" here, since I'm going to push the image up manually in a moment and don't need CI/CD.


Before pushing to the repository, handle authentication.
# log in
$ docker login --username=xxx registry.cn-shenzhen.aliyuncs.com

# push the image we just built
# 9f5c64d4aace is the image ID; look it up with docker images
# :latest marks this as the newest version
# docker tag adds an additional name for the image (it does not rename; both names point to the same image ID)
$ docker tag 9f5c64d4aace registry.cn-shenzhen.aliyuncs.com/walkerman/k8s-app:latest
 
# check
$ docker images
REPOSITORY                                            TAG                 IMAGE ID            CREATED             SIZE
app                                                   latest              9f5c64d4aace        22 minutes ago      22.1MB
registry.cn-shenzhen.aliyuncs.com/walkerman/k8s-app   latest              9f5c64d4aace        22 minutes ago      22.1MB
nginx                                                 alpine              6f715d38cfe0        12 days ago         22.1MB
 
# then push
$ docker push registry.cn-shenzhen.aliyuncs.com/walkerman/k8s-app:latest
 
The push refers to repository [registry.cn-shenzhen.aliyuncs.com/walkerman/k8s-app]
6f8cee086e17: Pushed
f7a312835304: Pushed
6ad8d562c843: Pushed
425ee8569962: Pushed
5d9ee84be1ec: Pushed
6bcd003260b2: Pushed
50644c29ef5a: Pushed
latest: digest: sha256:2fa50d4004061f976da0f73043fd11f45efed715a59fff57a856b9854eb89ee5 size: 1774
 
# done
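The full image name used by `docker tag` and `docker push` follows a fixed anatomy. A quick sketch of how the parts combine, with the values from this article:

```shell
registry="registry.cn-shenzhen.aliyuncs.com"  # registry host
namespace="walkerman"                         # namespace created in the console
repo="k8s-app"                                # repository name
tag="latest"                                  # version tag
echo "${registry}/${namespace}/${repo}:${tag}"
# → registry.cn-shenzhen.aliyuncs.com/walkerman/k8s-app:latest
```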
Check the repository


Now that the image is ready, let's define the k8s deployment configuration, deploy.yml.

On the master node, create a file named deploy.yml with the following content:

apiVersion: apps/v1 # API version for the Deployment resource
kind: Deployment # resource type: Deployment
metadata: # name, labels, and so on
  name: k8s-app
  labels:
    app: k8s-app
spec:
  replicas: 1 # how many replicas to run; 1 here
  selector:
    matchLabels:
      app: k8s-app
  template:
    metadata:
      labels:
        app: k8s-app
    spec:
      containers:
      - name: k8s-app
        image: registry.cn-shenzhen.aliyuncs.com/walkerman/k8s-app:latest # the image we pushed earlier; I pinned the tag to latest
        ports:
          - containerPort: 80 # port the container exposes
        resources: # requested and limited resources: cpu and memory
          requests:
            cpu: "1000m"
            memory: "1Gi"
          limits:
            cpu: "2000m"
            memory: "2Gi"
 
---
 
# this part is the Service
apiVersion: v1
kind: Service # resource type: Service
metadata:
  name: k8s-app-service
spec:
  selector:
    app: k8s-app # must match the Pod template labels defined above
  type: NodePort # NodePort for now; we'll switch to Ingress later, so leave this unchanged until then
  ports:
    - name: web
      port: 80
      protocol: TCP
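A note on the cpu quantities above: the "m" suffix means millicores, so "1000m" is one full core and "2000m" is two, which is why `kubectl describe` later prints the request as `cpu: 1` and the limit as `cpu: 2`. A trivial sketch of the conversion:

```shell
# Kubernetes CPU quantities: "m" suffix = millicores; 1000m = 1 core
request_millicores=1000
limit_millicores=2000
echo "request: $((request_millicores / 1000)) core(s), limit: $((limit_millicores / 1000)) core(s)"
# → request: 1 core(s), limit: 2 core(s)
```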
Apply the configuration
$ kubectl apply -f deploy.yml
 
deployment.apps/k8s-app created
service/k8s-app-service created
 
# check how it went
# the image is tiny, so deployment is fast
# k8s-app-5c8757d6dd-th9zt: this Pod is already Running, in the default namespace, currently scheduled on the node kube-node-1, and flannel assigned it the IP 10.244.1.2
# and where does the image actually get pulled? On kube-node-1, the node running the Pod
 
$ kubectl get pods --all-namespaces -o wide       
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE     IP           NODE          NOMINATED NODE   READINESS GATES
default       k8s-app-5c8757d6dd-th9zt              1/1     Running   0          9m8s    10.244.1.2   kube-node-1   <none>           <none>
kube-system   coredns-66bff467f8-9qlqm              1/1     Running   0          6h2m    10.244.0.3   kube-master   <none>           <none>
kube-system   coredns-66bff467f8-l8ksl              1/1     Running   0          6h2m    10.244.0.2   kube-master   <none>           <none>
kube-system   etcd-kube-master                      1/1     Running   0          6h2m    10.0.0.3     kube-master   <none>           <none>
kube-system   kube-apiserver-kube-master            1/1     Running   0          6h2m    10.0.0.3     kube-master   <none>           <none>
kube-system   kube-controller-manager-kube-master   1/1     Running   0          6h2m    10.0.0.3     kube-master   <none>           <none>
kube-system   kube-flannel-ds-amd64-6g7j2           1/1     Running   0          3h47m   10.0.0.5     kube-node-2   <none>           <none>
kube-system   kube-flannel-ds-amd64-d5vpn           1/1     Running   0          3h53m   10.0.0.4     kube-node-1   <none>           <none>
kube-system   kube-flannel-ds-amd64-vcmx9           1/1     Running   0          4h20m   10.0.0.3     kube-master   <none>           <none>
kube-system   kube-proxy-l66h2                      1/1     Running   0          3h47m   10.0.0.5     kube-node-2   <none>           <none>
kube-system   kube-proxy-p4vmz                      1/1     Running   0          3h53m   10.0.0.4     kube-node-1   <none>           <none>
kube-system   kube-proxy-rg44d                      1/1     Running   0          6h2m    10.0.0.3     kube-master   <none>           <none>
kube-system   kube-scheduler-kube-master            1/1     Running   0          6h2m    10.0.0.3     kube-master   <none>           <none>
 
# look at the details
$ kubectl describe pods k8s-app-5c8757d6dd-th9zt
Name:         k8s-app-5c8757d6dd-th9zt
Namespace:    default
Priority:     0
Node:         kube-node-1/10.0.0.4
Start Time:   Wed, 26 Aug 2020 14:26:18 +0800
Labels:       app=k8s-app
              pod-template-hash=5c8757d6dd
Annotations:  <none>
Status:       Running
IP:           10.244.1.2
IPs:
  IP:           10.244.1.2
Controlled By:  ReplicaSet/k8s-app-5c8757d6dd
Containers:
  k8s-app:
    Container ID:   docker://d718fca68e732e056d77cf7244009d8f69ab2378aec5c98ac89b9c91cf272810
    Image:          registry.cn-shenzhen.aliyuncs.com/walkerman/k8s-app
    Image ID:       docker-pullable://registry.cn-shenzhen.aliyuncs.com/walkerman/k8s-app@sha256:2fa50d4004061f976da0f73043fd11f45efed715a59fff57a856b9854eb89ee5
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Wed, 26 Aug 2020 14:26:22 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  2Gi
    Requests:
      cpu:        1
      memory:     1Gi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8zvv5 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-8zvv5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-8zvv5
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From                  Message
  ----    ------     ----       ----                  -------
  Normal  Scheduled  8m7s       default-scheduler     Successfully assigned default/k8s-app-5c8757d6dd-th9zt to kube-node-1
  Normal  Pulling    <invalid>  kubelet, kube-node-1  Pulling image "registry.cn-shenzhen.aliyuncs.com/walkerman/k8s-app"
  Normal  Pulled     <invalid>  kubelet, kube-node-1  Successfully pulled image "registry.cn-shenzhen.aliyuncs.com/walkerman/k8s-app"
  Normal  Created    <invalid>  kubelet, kube-node-1  Created container k8s-app
  Normal  Started    <invalid>  kubelet, kube-node-1  Started container k8s-app
 
# now check the services
# a new Service named k8s-app-service was created, type NodePort; at first glance nothing looks wrong
$ kubectl get service -o wide
NAME              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE    SELECTOR
k8s-app-service   NodePort    10.105.32.4   <none>        80:32383/TCP   60s    app=k8s-app
kubernetes        ClusterIP   10.96.0.1     <none>        443/TCP        6h9m   <none>

Actually, there is a real problem. As noted above, this app is running on the node kube-node-1, so how do we reach the service?

At this point many people assume that since this is a cluster, you naturally access it through the master node. Not quite!

For now the service can only be reached from the node the app runs on. Look at the Service's attributes: k8s-app-service generated the port mapping 80:32383/TCP, so the only working address is http://kube-node-1:32383.
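You can read the assigned NodePort straight from the PORT(S) column; on a live cluster, `kubectl get service k8s-app-service -o jsonpath='{.spec.ports[0].nodePort}'` would print it directly. A sketch parsing the sample line from the output above:

```shell
# PORT(S) column format from `kubectl get service`: <service-port>:<node-port>/<protocol>
svc_line='k8s-app-service   NodePort    10.105.32.4   <none>        80:32383/TCP   60s'
# field 5 is PORT(S); take the part after ":" and before "/"
node_port=$(echo "$svc_line" | awk '{print $5}' | cut -d: -f2 | cut -d/ -f1)
echo "$node_port"
# → 32383
```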

This is not a flaw in k8s's design. Normally, a NodePort service is reachable through the IP of any node in the cluster.

The problem lies in the virtual machines' network interfaces!

Check the interface info
# in the first article we set up two NICs: one NAT, one host-only network
# the flannel network plugin picked the wrong NIC by default: it is bound to the NAT interface, enp0s3
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:71:07:fc brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute enp0s3
       valid_lft forever preferred_lft forever
    inet6 fe80::4563:d4f6:2f9a:da55/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:60:bf:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.3/24 brd 10.0.0.255 scope global noprefixroute enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::f3f2:bff0:8aa1:cd36/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:15:6d:18:38 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether ae:ec:e9:1c:0b:87 brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::acec:e9ff:fe1c:b87/64 scope link
       valid_lft forever preferred_lft forever
6: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 82:77:41:8f:13:52 brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.1/24 brd 10.244.0.255 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::8077:41ff:fe8f:1352/64 scope link
       valid_lft forever preferred_lft forever
7: veth029d217a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
    link/ether a2:7a:f2:8f:35:1d brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::a07a:f2ff:fe8f:351d/64 scope link
       valid_lft forever preferred_lft forever
8: veth940bf9d9@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
    link/ether 8e:fe:d3:6d:64:75 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::8cfe:d3ff:fe6d:6475/64 scope link
       valid_lft forever preferred_lft forever
Modify the configuration in the kube-flannel.yml file we defined earlier and re-apply it
# around line 191, add the flag that pins flannel to the right NIC
 
args:
  - --ip-masq
  - --iface=enp0s8
  - --kube-subnet-mgr
 
 
# re-apply the flannel config
$ kubectl apply -f kube-flannel.yml
 
# the flannel pods on each node are then updated one by one
$ kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                  READY   STATUS        RESTARTS   AGE     IP           NODE          NOMINATED NODE   READINESS GATES
default       k8s-app-5c8757d6dd-nnkpc              1/1     Running       0          33m     10.244.2.2   kube-node-2   <none>           <none>
kube-system   coredns-66bff467f8-9qlqm              1/1     Running       0          6h42m   10.244.0.3   kube-master   <none>           <none>
kube-system   coredns-66bff467f8-l8ksl              1/1     Running       0          6h42m   10.244.0.2   kube-master   <none>           <none>
kube-system   etcd-kube-master                      1/1     Running       0          6h42m   10.0.0.3     kube-master   <none>           <none>
kube-system   kube-apiserver-kube-master            1/1     Running       0          6h42m   10.0.0.3     kube-master   <none>           <none>
kube-system   kube-controller-manager-kube-master   1/1     Running       0          6h42m   10.0.0.3     kube-master   <none>           <none>
kube-system   kube-flannel-ds-amd64-djkvk           1/1     Running       0          34s     10.0.0.4     kube-node-1   <none>           <none>
kube-system   kube-flannel-ds-amd64-ndwfs           1/1     Running       0          77s     10.0.0.5     kube-node-2   <none>           <none>
kube-system   kube-flannel-ds-amd64-vcmx9           0/1     Terminating   0          4h59m   10.0.0.3     kube-master   <none>           <none>
kube-system   kube-proxy-l66h2                      1/1     Running       0          4h27m   10.0.0.5     kube-node-2   <none>           <none>
kube-system   kube-proxy-p4vmz                      1/1     Running       0          4h32m   10.0.0.4     kube-node-1   <none>           <none>
kube-system   kube-proxy-rg44d                      1/1     Running       0          6h42m   10.0.0.3     kube-master   <none>           <none>
kube-system   kube-scheduler-kube-master            1/1     Running       0          6h42m   10.0.0.3     kube-master   <none>           <none>

After the config update, every node in the cluster can reach the service through its IP. But this still doesn't count as a finished release, because the PORT here is dynamic: in 80:32383/TCP, port 80 is mapped to 32383, and that mapping is not fixed. To solve this, we need Ingress.

What we have now: http://<node-IP>:<port>

What we want: http://<load-balancer-IP>/
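As an interim measure before Ingress, the port can be pinned by setting nodePort explicitly on the Service; it must fall within the cluster's NodePort range, 30000-32767 by default. A sketch of the ports section (the value 30080 is just an example, not from the original article):

```yaml
  ports:
    - name: web
      port: 80
      protocol: TCP
      nodePort: 30080 # fixed node port; must be within 30000-32767 (example value)
```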
