Container Technology


Building a k8s cluster quickly and offline with Sealos

References

  1. Installation documentation
  2. Package download location

Installation notes

Prerequisites (CentOS 7)

  1. Disable the firewall
 # Check the firewall status
 systemctl status firewalld
 # Stop the firewall
 systemctl stop firewalld
 # Disable the firewall permanently
 systemctl disable firewalld
  2. Disable SELinux

    2.1 Check the current SELinux status

     getenforce
    
    2.2 Check it via the /etc/selinux/config file

     cat /etc/selinux/config | grep SELINUX
    
    2.3 Set SELinux to permissive mode

    # Takes effect immediately
    setenforce 0
    # Edit the config file so the setting survives a reboot
    sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
    
  3. Disable swap

    # Takes effect immediately
    swapoff -a
    # Keeps swap disabled after a reboot
    sed -i.bak -r 's/(.+ swap .+)/#\1/' /etc/fstab
    

    3.1 Verify the change

     free -h 
    

    Swap should show as 0.
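Both persistence edits (the SELinux sed in 2.3 and the fstab sed above) can be dry-run on throwaway copies before touching the real files. The file contents below are sample lines for illustration, not your actual configs:

```shell
# Dry-run the two persistence seds on throwaway copies.
cp_dir=$(mktemp -d)

# Sample /etc/selinux/config line
printf 'SELINUX=enforcing\n' > "$cp_dir/selinux-config"
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' "$cp_dir/selinux-config"
cat "$cp_dir/selinux-config"   # SELINUX=permissive

# Sample /etc/fstab swap entry; -i.bak keeps a backup copy
printf '/dev/mapper/centos-swap swap swap defaults 0 0\n' > "$cp_dir/fstab"
sed -i.bak -r 's/(.+ swap .+)/#\1/' "$cp_dir/fstab"
cat "$cp_dir/fstab"            # #/dev/mapper/centos-swap swap swap defaults 0 0
```

Once the output looks right, run the same seds against the real /etc/selinux/config and /etc/fstab.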

  4. Configure kernel parameters.

    cat <<EOF >  /etc/sysctl.d/kubernetes.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    net.ipv6.conf.all.disable_ipv6 = 1
    EOF
    

    4.1 Reload the settings above so they take effect

     # Reload the parameters
     sysctl -p /etc/sysctl.d/kubernetes.conf
    
  5. Configure the ipvs.modules file.

    5.1 For kernel versions below 4.19

    vi /etc/sysconfig/modules/ipvs.modules
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack_ipv4
    

    5.2 For 4.19 and above

     vi /etc/sysconfig/modules/ipvs.modules
     modprobe -- ip_vs
     modprobe -- ip_vs_rr
     modprobe -- ip_vs_wrr
     modprobe -- ip_vs_sh
     modprobe -- nf_conntrack
     modprobe -- rbd
    

    5.3 Make the file executable

    chmod 755 /etc/sysconfig/modules/ipvs.modules
    

    5.4 Confirm the modules are loaded

    lsmod | grep ip_vs
    

    PS: check the kernel version with

    uname -r | cut -d- -f1
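The module list above is meant to live in an executable script, so it needs a shebang as well as the modprobe lines. A non-interactive way to create it (kernel >= 4.19 module set; MODDIR is a staging directory for illustration — on a real host write to /etc/sysconfig/modules and run the script as root):

```shell
# Create the ipvs.modules script without an editor (kernel >= 4.19 set).
# MODDIR is a hypothetical staging path; use /etc/sysconfig/modules on a real host.
MODDIR=./modules-staging
mkdir -p "$MODDIR"
cat > "$MODDIR/ipvs.modules" <<'EOF'
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod 755 "$MODDIR/ipvs.modules"
```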
    

Installation

Downloading dependencies
  1. Download the sealos installer
# amd, i.e. the x86 architecture
sealos_5.0.0_linux_amd64.rpm
# arm
sealos_5.0.0_linux_arm64.rpm
  2. Download and package the k8s images
  • Pull the k8s base image
sealos pull registry.cn-shanghai.aliyuncs.com/labring/kubernetes:v1.28.0
sealos save -o kubernetes.tar registry.cn-shanghai.aliyuncs.com/labring/kubernetes:v1.28.0
  • Pull the network component
sealos pull registry.cn-shanghai.aliyuncs.com/labring/flannel:v0.26.0-amd64
sealos save -o flannel.tar registry.cn-shanghai.aliyuncs.com/labring/flannel:v0.26.0-amd64

  • Pull the helm component

sealos pull registry.cn-shanghai.aliyuncs.com/labring/helm:v3.11.1
sealos save -o helm.tar  registry.cn-shanghai.aliyuncs.com/labring/helm:v3.11.1
  3. Copy the downloaded and packaged k8s images to the target machine
  4. Load the images
sealos load -i kubernetes.tar
sealos load -i flannel.tar 
sealos load -i helm.tar
  5. Install single-node k8s
sealos run registry.cn-shanghai.aliyuncs.com/labring/kubernetes:v1.28.0 \
registry.cn-shanghai.aliyuncs.com/labring/helm:v3.11.1 \
registry.cn-shanghai.aliyuncs.com/labring/flannel:v0.26.0-amd64 --single


Docker mirrors still usable after Docker Hub was blocked

References

Mirror addresses

  1. Base images only (336 images in total)
https://atomhub.openatom.cn

Installing a k8s cluster on CentOS 7.5

Installation notes

Required offline rpm packages


Installing the kubeadm tool
  1. Upload the rpm packages above to the server
  2. Install them
 rpm -ivh *.rpm --force --nodeps

  3. Enable kubelet at boot
systemctl enable kubelet

Setting up a k8s worker node on CentOS 7

Prerequisites

  1. Set the hostname
  2. Disable the firewall
  3. Disable swap
  4. Install the Docker service
  5. Install the k8s services

Installation

  1. Set the hostname
## Check whether a hostname is already set
hostname
## Set the hostname
hostnamectl set-hostname k8s-work-01

  2. Disable the firewall
## Check the current firewall status
systemctl status firewalld
## Stop the firewall
sudo systemctl stop firewalld
  3. Disable swap
## Check current swap usage
free -h
## Disable swap now
swapoff -a
## Disable it permanently
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
  4. Install the Docker service
  5. Install the k8s services

Installing Harbor offline

Environment

  1. OS: CentOS 7.5
  2. Harbor v2.9.4

Prerequisites

  1. Docker 17.06.0+
  2. docker-compose

Installing Docker offline on CentOS 7

Download links

Offline rpm packages; offline tgz archive

Installation notes

Installing docker-compose offline on CentOS 7

Download link

Offline installation

Installation notes

Key installation steps
  1. Copy the downloaded docker-compose binary into /usr/local/bin/
     cp docker-compose-linux-x86_64 /usr/local/bin/
  2. Make the docker-compose file executable
sudo chmod 775 /usr/local/bin/docker-compose
  3. Verify the installation with the docker-compose version command
# On success, it prints the docker-compose version
Docker Compose version v2.27.0
Troubleshooting
  1. docker-compose reports "command not found" after installation
    • Symptom: after installation, a regular user can run docker-compose normally, but running it as root reports that the command does not exist
    • Cause: the executable was placed under /usr/local/bin/, while root's PATH only covers /usr/bin/, so root cannot find the command
    • Fix: create a symlink in root's executable directory
    ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
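The cause and the fix can be reproduced without touching the real system: a binary outside PATH is invisible until a symlink lands in a directory PATH does include. The paths below are a throwaway sandbox standing in for /usr/local/bin and /usr/bin:

```shell
# Sandbox reproduction of the PATH problem and the symlink fix.
demo=$(mktemp -d)
mkdir -p "$demo/local/bin" "$demo/bin"   # stand-ins for /usr/local/bin and /usr/bin
printf '#!/bin/sh\necho ok\n' > "$demo/local/bin/docker-compose"
chmod 755 "$demo/local/bin/docker-compose"

# With PATH limited to the "bin" directory, the binary is not found...
PATH="$demo/bin" command -v docker-compose || echo "docker-compose: command not found"

# ...until a symlink is created there, mirroring the ln -s fix above.
ln -s "$demo/local/bin/docker-compose" "$demo/bin/docker-compose"
PATH="$demo/bin" docker-compose    # prints: ok
```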
    
    

Hands-on

Deploying a service to a specific node in a k8s cluster

Prerequisites
  1. The k8s cluster has multiple master and worker nodes
  2. A service needs to be deployed onto a specific worker node
Approach
  1. Add a label to the target worker node
kubectl label nodes <node-name> <label-key>=<label-value>
  2. Specify a nodeSelector when deploying the resource
spec:
  nodeSelector:
    <label-key>: <label-value>
  containers:
    - name: my-service
      image: my-service-image
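The fragment above belongs inside a Deployment's pod template. A minimal complete sketch for context (the label key/value and the image are the placeholders from the steps above, not real names):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      nodeSelector:
        <label-key>: <label-value>   # must match the label added via kubectl label
      containers:
        - name: my-service
          image: my-service-image
```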

Deploying the k8s dashboard

Prerequisites

  1. OS: CentOS 7
  2. No internet access

Installation steps

Fetch the dashboard deployment yaml
https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
Save it locally as recommended.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.7.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.8
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
On a machine with internet access, pull the images the file requires
docker pull kubernetesui/dashboard:v2.7.0

docker pull kubernetesui/metrics-scraper:v1.0.8
Save the pulled images as tar archives
 docker save -o kubernetesui-dashboard.tar kubernetesui/dashboard:v2.7.0
 docker save -o kubernetesui-metrics-scraper.tar kubernetesui/metrics-scraper:v1.0.8

Upload the packaged images to the local Harbor
Edit the locally saved recommended.yaml file

Change the image references in it to the ones hosted in the local Harbor

image: kubernetesui/metrics-scraper:v1.0.8 => image: hub.pcitc.iot.com/kubernetesui/metrics-scraper:v1.0.8
image: kubernetesui/dashboard:v2.7.0 => image: hub.pcitc.iot.com/kubernetesui/dashboard:v2.7.0
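Both substitutions can be applied in one sed pass over the file. A sketch, demonstrated here on a two-line sample file standing in for the real recommended.yaml:

```shell
# Rewrite both dashboard images to the local Harbor host in one pass.
# recommended-demo.yaml is a stand-in for the real recommended.yaml.
cat > recommended-demo.yaml <<'EOF'
          image: kubernetesui/dashboard:v2.7.0
          image: kubernetesui/metrics-scraper:v1.0.8
EOF
sed -i 's#image: kubernetesui/#image: hub.pcitc.iot.com/kubernetesui/#' recommended-demo.yaml
cat recommended-demo.yaml
```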

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # add the type of node port      
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31260 # add the nodePort 31260
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: hub.pcitc.iot.com/kubernetesui/dashboard:v2.7.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: hub.pcitc.iot.com/kubernetesui/metrics-scraper:v1.0.8
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

For external access, set the Service type in the dashboard yaml to NodePort
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # add the type of node port      
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31260 # add the nodePort 31260
  selector:
    k8s-app: kubernetes-dashboard

Create the resources from recommended.yaml
 kubectl create -f recommended.yaml
Once the resources are created, use kubectl to find which node the dashboard pod runs on, then open the dashboard login page
# Access from a browser
https://<IP of the node running the dashboard>:31260/
Create a basic user resource and token with the following yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
  
---
apiVersion: v1
kind: Secret
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "admin-user"   
type: kubernetes.io/service-account-token  

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
Retrieve the generated token
kubectl -n kubernetes-dashboard describe secret admin-user

Log in to the dashboard with the token printed by the command above.

Building a k8s cluster with kind

Loading local images into a kind cluster
# Load a local image into the cluster
kind load docker-image my-custom-image --name cluster-name

# Load an image tar archive into the cluster
kind load image-archive /my-image-archive.tar

Building a local image registry

Installing Harbor locally

Host requirements

Software requirements

Docker Engine version 17.06.0+
Docker Compose version 1.18.0+
Download the installer (the online installer)

harbor-online-installer-v2.7.4.tgz

Extract the archive
  # Extract the archive
  tar xzvf harbor-online-installer-v2.7.4.tgz
  # This produces the harbor directory
After extraction, create harbor.yml from the harbor.yml.tmpl template
cp harbor.yml.tmpl harbor.yml
Edit the configuration file
  1. Change the data mount directory
# The default data volume
data_volume: /data

# Change it to the data directory created inside the extracted folder
data_volume: /Users/mimasige0/harbor/data

Important: in the Docker Desktop client, the mount directory above must be added under File sharing.

  2. Change the log location
# Before
# The directory on your host that stores logs
location: /var/log/harbor
# After
location: /Users/mimasige0/harbor/data/log/harbor
  3. To log in without HTTPS, comment out the related settings
# https related config
#https:
  # https port for harbor, default is 443
# port: 443
  # The path of cert and key files for nginx
#  certificate: /your/certificate/path
# private_key: /your/private/key/path

  4. Set the hostname & port to this machine
hostname: 10.238.130.164
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80

  5. Run install.sh as a non-root user to perform the installation. On success, the console prints a completion message.

  6. Access http://127.0.0.1:80 from a browser; username: admin, password: Harbor12345


  7. Stopping and restarting Harbor after startup
docker-compose down -v
docker-compose up -d

Troubleshooting
  1. Connecting to Harbor with the Docker client
 # log in to the Harbor host from the docker client
 docker login <harbor host IP>
 
Symptom:

Connection refused

Error response from daemon: Get "https://10.238.130.164/v2/": dialing 10.238.130.164:443 static system has no HTTPS proxy: connecting to 10.238.130.164:443: dial tcp 10.238.130.164:443: connect: connection refused
Cause:

Harbor is serving plain HTTP, while the docker client connects over HTTPS by default

Fix (the official solution)
  • Add the following to docker's daemon.json file
{
"insecure-registries" : ["<IP to be accessed, i.e. the host address of the Harbor service>"]
}
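With the Harbor host used earlier in these notes (10.238.130.164, serving plain HTTP), the entry would look like this; merge it into any existing daemon.json content rather than replacing the whole file:

```json
{
  "insecure-registries": ["10.238.130.164"]
}
```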

  • Restart the docker service after the change
 sudo killall Docker && open /Applications/Docker.app

  • Stop and restart Harbor
# Stop
docker-compose down -v
# Start
docker-compose up -d
After the configuration is correct, log in again
 docker login <IP>
 
 It reports that the login succeeded

Saving local images and importing them into Harbor
#1. On a machine with internet access, save the target image as a tar file with docker save
docker save -o xxx.tar image:tag
#2. Copy the saved image tar to the server where Harbor runs
#3. Load the tar into the local docker images with docker load -i xxx.tar
docker load -i xx.tar
#4. After loading, list the loaded images with docker images
#5. Re-tag the loaded image with its Harbor image path
docker tag myimage:latest <harbor host>/myproject/myimage:latest
#6. Push the image to the Harbor registry
docker push harbor.example.com/myproject/myimage:latest
#7. Referencing the Harbor image in a yaml
#7.harbor的镜像在yaml 中的应用
containers: 
  - name: myapp 
    image: harbor.example.com/myproject/myimage:latest

Building an image for a Spring Boot project with a Dockerfile

Dockerfile contents

# Start from a base Java runtime image
FROM adoptopenjdk:11-jre-hotspot
# Set the working directory
WORKDIR /app
# Copy the built Spring Boot JAR into the image
COPY target/group-application-aggregation-0.0.1-SNAPSHOT.jar app.jar
# Expose the application port (adjust to your configuration)
EXPOSE 8080
# Container start command: run the Spring Boot application
CMD ["java", "-Dspring.profiles.active=dev", "-jar", "app.jar"]

Build the Docker image

 docker build -t image-name:tag .

Run a container from the image

 # Use -p to map container port 8080 to host port 8080, making the app reachable from the host
 docker run -d -p 8080:8080 image-name:tag

View the container logs

 docker logs -f <container ID>

Deploying the application above with k8s

Installing an ingress controller in kind