Installing KubeSphere on Ubuntu 20.04 & Deploying Projects with a DevOps Pipeline (Installation)



As a programmer from Northeast China, deploying projects is the one elective that is really a required course. Let me walk you through the full process of installing KubeSphere and deploying with it.

Preface

  • I will write three articles documenting the full process of installing KubeSphere and deploying projects
  • Part 1 covers installing KubeSphere
  • Part 2 covers deploying Java code with KubeSphere
  • Part 3 covers deploying Vue code with KubeSphere

Environment Overview

First, the pitfalls I ran into.

Pitfall 1: DevOps runs out of memory
  • You must change the runtime configuration of KubeSphere's DevOps component
  • This one is nasty: when you run a pipeline it just reports failure without telling you why; the default 2 Gi allocation is simply not enough to run a pipeline
  • The container restarts every time you run a pipeline, so be warned
  • My configuration is below for reference (increase the memory further for larger projects)
  • Be sure to change this configuration before installing; changing it after installation does not take effect
    # original settings
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 1200m
    jenkinsJavaOpts_Xmx: 1600m
    jenkinsJavaOpts_MaxRAM: 2g
    # changed to
    jenkinsMemoryLim: 8Gi      # Jenkins memory limit
    jenkinsMemoryReq: 4Gi      # Jenkins memory request
    jenkinsVolumeSize: 20Gi    # Jenkins volume size
    jenkinsJavaOpts_Xms: 3g    # the next three are JVM startup options
    jenkinsJavaOpts_Xmx: 6g
    jenkinsJavaOpts_MaxRAM: 8g
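If a pipeline has already failed on you, it is worth confirming that memory really was the cause before touching anything else. A minimal check, assuming the default DevOps namespace `kubesphere-devops-system`; the pod lookup by `grep` is a generic sketch rather than an official label selector:

```shell
# A high RESTARTS count on the Jenkins pod is the first red flag
kubectl -n kubesphere-devops-system get pods

# Find the Jenkins pod and look at its last state;
# "OOMKilled" there means the JVM outgrew its memory limit
JENKINS_POD=$(kubectl -n kubesphere-devops-system get pods -o name | grep -i jenkins | head -n 1)
kubectl -n kubesphere-devops-system describe "$JENKINS_POD" | grep -A 3 "Last State"
```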

Pitfall 2
  • Harbor has no domain name, so docker cannot log in
  • Solution:
  • The k8s cluster uses the host machine's docker
  • So on every node where KubeSphere is installed, each host's docker must log in to Harbor (because you cannot know which node DevOps will run on)
  • I recommend finishing this step before installing KubeSphere
  • KubeSphere has a self-healing feature: when it detects a service that is not running, it starts it again (not a restart, but a brand-new container)
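Since every node's docker has to log in, a small loop saves repetition. A sketch assuming passwordless SSH to each node; the node IPs, registry address, and credentials below are the ones from this article and must be replaced with your own:

```shell
#!/usr/bin/env bash
# Log the docker daemon on every cluster node in to Harbor,
# because you cannot predict which node a DevOps pipeline lands on.
REGISTRY="10.104.209.80:18088"                      # Harbor address
NODES="10.104.209.71 10.104.209.72 10.104.209.73"   # replace with all your node IPs

for node in $NODES; do
  echo "logging in on $node ..."
  # --password-stdin keeps the password off the remote command line
  ssh "$node" "echo 'A81JP3BUYK' | sudo docker login -u admin --password-stdin $REGISTRY"
done
```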

Pitfall 3: k8s namespaces
  • A k8s namespace is what KubeSphere calls a "project"
  • See the KubeSphere documentation for a fuller explanation
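The mapping is easy to verify with plain kubectl: every project you create in the KubeSphere console shows up as an ordinary namespace (the project name `demo-project` below is hypothetical):

```shell
# KubeSphere projects are ordinary Kubernetes namespaces
kubectl get namespaces

# Workloads created inside the project live in the namespace of the same name
kubectl -n demo-project get pods
```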

Pitfall 4
  • I am on Ubuntu (the client heard that CentOS had reached end of life and insisted on Ubuntu)
  • The client initially provisioned Ubuntu 22.04; KubeSphere installed fine, but port 30880 was unreachable
  • So I asked the client's admin to switch the system to Ubuntu 20.04, after which port 30880 worked

Now for the main part: installing KubeSphere

Step 1: install the dependencies KubeSphere needs
  • Required dependencies: docker socat conntrack ebtables ipset ipvsadm curl chrony
  • Set up passwordless sudo here (if you have the root user, it is easiest to just use root)
    sudo vim /etc/sudoers
        # change the NOPASSWD lines to the following
        %admin ALL=(ALL:ALL) NOPASSWD:ALL
        %sudo   ALL=(ALL:ALL) NOPASSWD:ALL
    sudo su root
  • Uninstall any existing docker, then install docker-ce; after installing, configure the Harbor address and log in to Harbor
    sudo apt-get remove docker docker-engine docker.io containerd runc docker-ce
    sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
    echo \
      "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
      $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    sudo apt-get update
    sudo apt-get install docker-ce docker-ce-cli containerd.io 
    sudo service docker start
    echo '{ "insecure-registries":["10.104.209.80:18088"] }' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker
    sudo docker login 10.104.209.80:18088 -u admin -p A81JP3BUYK
  • Set kernel parameters, install the KubeSphere dependencies, and configure time synchronization
    swapoff -a
    sed -ri 's/.*swap.*/#&/' /etc/fstab
    cat > /etc/sysctl.d/k8s.conf << EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl --system # load the settings written above

    sudo apt-get install -y curl ipset
    sudo apt-get install -y ca-certificates gnupg lsb-release
    sudo apt-get install -y ipvsadm conntrack chrony socat
    sudo service chrony start
    sudo timedatectl set-timezone Asia/Shanghai

    # SELinux is normally absent on Ubuntu, so the next two lines are harmless no-ops there
    setenforce 0
    sudo sed -i 's/enforcing/disabled/' /etc/selinux/config
    echo "PubkeyAcceptedKeyTypes +ssh-rsa" >> /etc/ssh/sshd_config
  • Now install KubeSphere and Kubernetes; since this is a cloud environment, I went with a multi-node install
  • Be careful here: you must run export KKZONE=cn, or better yet add it to /etc/profile
  • This makes downloads use mirrors inside China; without it they go through servers abroad, which is painfully slow
  • I am using the kk binary the official docs currently recommend; it is the same one the official documentation downloads
  • A note on enabling KubeSphere components: it is simple, the generated kubesphere.yaml contains the options, just change false to true
    curl -fsLO https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v2.2.1/kubekey-v2.2.1-linux-amd64.tar.gz
    # once the download finishes
    tar -zxvf kubekey-v2.2.1-linux-amd64.tar.gz
    # then it works like a normal install
    # grant execute permission
    chmod +x kk
    # then
    kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.0 -f kubesphere.yaml
    # note: devops, logging, and the app store are disabled by default and must be enabled in the config
    # also, before running the install command below, anyone who enabled devops must heed the note
    # at the top of this article: the default devops memory is too small, running a pipeline will
    # restart the container and the deployment will fail
    kk create cluster -f kubesphere.yaml
# kubesphere.yaml configuration
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 12.104.209.103, internalAddress: 10.104.209.103, user: mzjtwxzx, password: "Aa1234546"}
  - {name: master2, address: 12.104.209.104, internalAddress: 192.168.0.3, user: mzjtwxzx, password: "Aa1234546"} # multi-master setup
  - {name: master3, address: 12.104.209.105, internalAddress: 192.168.0.4, user: mzjtwxzx, password: "Aa1234546"} # multi-master setup
  - {name: node1, address: 12.104.209.71, internalAddress: 10.104.209.71, user: mzjtwxzx, password: "Aa1234546"}
  - {name: node2, address: 12.104.209.72, internalAddress: 10.104.209.72, user: mzjtwxzx, password: "Aa1234546"}
  - {name: node3, address: 12.104.209.73, internalAddress: 10.104.209.73, user: mzjtwxzx, password: "Aa1234546"}
  - {name: node4, address: 12.104.209.74, internalAddress: 10.104.209.74, user: mzjtwxzx, password: "Aa1234546"}
  - {name: node5, address: 12.104.209.75, internalAddress: 10.104.209.75, user: mzjtwxzx, password: "Aa1234546"}
  - {name: node6, address: 12.104.209.76, internalAddress: 10.104.209.76, user: mzjtwxzx, password: "Aa1234546"}
  - {name: node7, address: 12.104.209.77, internalAddress: 10.104.209.77, user: mzjtwxzx, password: "Aa1234546"}
  - {name: node8, address: 12.104.209.78, internalAddress: 10.104.209.78, user: mzjtwxzx, password: "Aa1234546"}
  - {name: node9, address: 12.104.209.79, internalAddress: 10.104.209.79, user: mzjtwxzx, password: "Aa1234546"}
  - {name: node10, address: 12.104.209.89, internalAddress: 10.104.209.89, user: mzjtwxzx, password: "Aa1234546"}
  roleGroups:
    etcd:
    - master
    - master2 # multi-master setup
    - master3 # multi-master setup
    control-plane: 
    - master
    - master2 # multi-master setup
    - master3 # multi-master setup
    worker:
    - node1
    - node2
    - node3
    - node4
    - node5
    - node6
    - node7
    - node8
    - node9
    - node10
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    # internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: "12.104.209.*"
    port: 6443
  kubernetes:
    version: v1.22.12
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
    # default max pods per node is 110; you can lower it
    maxPods: 110
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []



---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.2
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: true
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: true
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: true
    # resources: {}
    jenkinsMemoryLim: 8Gi
    jenkinsMemoryReq: 4Gi
    jenkinsVolumeSize: 10Gi
    jenkinsJavaOpts_Xms: 3g
    jenkinsJavaOpts_Xmx: 6g
    jenkinsJavaOpts_MaxRAM: 8g
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: true
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: true
  servicemesh:
    enabled: true
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: true
        cni:
          enabled: true
  edgeruntime:
    enabled: true
    kubeedge:
      enabled: true
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600
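The false → true switch can also be flipped on a running cluster by editing the live ClusterConfiguration, which is the approach described in the KubeSphere docs for enabling pluggable components (note the caveat from the start of this article: the Jenkins memory settings only take effect when set before install):

```shell
# Edit the installer's live configuration; change enabled: false to true
# for the component you want, then let ks-installer reconcile it
kubectl -n kubesphere-system edit clusterconfiguration ks-installer
```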

  • The installation takes roughly 20 to 40 minutes, depending on how many nodes you have and on network speed
  • Seeing the message below means the installation succeeded
    #####################################################
    ###              Welcome to KubeSphere!           ###
    #####################################################

    Console: http://192.168.0.2:30880
    Account: admin
    Password: P@88w0rd

    NOTES:
      1. After you log into the console, please check the
         monitoring status of service components in
         "Cluster Management". If any service is not
         ready, please wait patiently until all components 
         are up and running.
      2. Please change the default password after login.

    #####################################################
    https://kubesphere.io             20xx-xx-xx xx:xx:xx
    #####################################################
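While the installer runs, you can follow its progress (and eventually see the banner above) with the log command from the official KubeSphere docs, which tails the ks-installer pod:

```shell
# Tail the installer pod's logs to watch installation progress
kubectl logs -n kubesphere-system \
  "$(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}')" -f
```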

That completes the KubeSphere installation.