One-Stop Installation of Kubernetes and KubeSphere with KubeKey


Preface

Many people learning Kubernetes for the first time need to set up an environment themselves. Installing Kubernetes by hand is tedious and has a low success rate, so we can use the KubeKey tool for a one-stop installation. Below is a brief walkthrough of the steps.

Hardware and Software Requirements

  • System requirements

    I used Ubuntu 20.04. Although Ubuntu 22.04 was already out at the time of writing, I don't recommend the newest release, as the installation may fail on it. The supported operating system configurations are as follows:

    Minimum requirements (per node):

    | Operating system | CPU | Memory | Disk |
    | --- | --- | --- | --- |
    | Ubuntu 16.04, 18.04, 20.04 | 2 cores | 4 GB | 40 GB |
    | Debian Buster, Stretch | 2 cores | 4 GB | 40 GB |
    | CentOS 7.x | 2 cores | 4 GB | 40 GB |
    | Red Hat Enterprise Linux 7 | 2 cores | 4 GB | 40 GB |
    | SUSE Linux Enterprise Server 15 / openSUSE Leap 15.2 | 2 cores | 4 GB | 40 GB |

    The CPU must be x86_64; Arm CPUs are not supported yet.

  • Node requirements

    • Every node must be reachable over SSH;

    • Every node's system time must be synchronized;

      Since I'm in mainland China, I can set the machine's time zone with the following command:

      sudo timedatectl set-timezone "Asia/Shanghai"
      

      If you want a different time zone, see this article: juejin.cn/post/684490…

    • Every node must have the following commands available: sudo, curl, openssl, tar;

    • Every node must be able to ping the others and have working network access; a quick check script follows the dependency installation below.

  • Install dependencies

    The Kubernetes installation depends on two packages, socat and conntrack. On Ubuntu we can install them with the following commands:

    sudo apt-get install socat
    sudo apt-get install conntrack
    # The next two packages are optional, but installing them is recommended
    sudo apt-get install ebtables
    sudo apt-get install ipset
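
    With the packages in place, the node requirements above can be verified in one pass. Here is a minimal check sketch, run on each node (the peer address is an example taken from the host list later in this article; substitute your own nodes):

    uname -m                          # must print x86_64
    timedatectl status                # look for "System clock synchronized: yes"
    for cmd in sudo curl openssl tar socat conntrack; do
      command -v "$cmd" >/dev/null || echo "missing: $cmd"
    done
    ping -c 1 192.168.56.144          # repeat for each of the other nodes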
    

Download KubeKey

Since we're in mainland China and, for well-known reasons, can't access GitHub reliably, we download KubeKey with the following commands:

export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.2 sh -

Once the download finishes, give kk execute permission:

chmod +x kk
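
The download can be sanity-checked by printing the tool's version:

./kk version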

Create the Configuration File

We can generate a sample configuration file with the following command:

./kk create config --with-kubernetes v1.21.5 --with-kubesphere v3.2.1 -f config-sample.yaml
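
If you're unsure which Kubernetes versions your kk release can deploy, it can list them (the --show-supported-k8s flag comes from the KubeKey README; double-check that your kk version has it):

./kk version --show-supported-k8s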

We can view the file's contents with cat config-sample.yaml. The generated file then needs a few changes to match our actual environment:

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  # Adjust the master, node1 and node2 entries to match your environment
  - {name: master, address: 192.168.56.143, internalAddress: 192.168.56.143, user: master, password: "123456"}
  - {name: node1, address: 192.168.56.144, internalAddress: 192.168.56.144, user: node1, password: "123456"}
  - {name: node2, address: 192.168.56.145, internalAddress: 192.168.56.145, user: node2, password: "123456"}
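  # The entries above use password auth for simplicity. KubeKey also supports
  # SSH key auth; a key-based entry would look like this (the path is an
  # assumption, point privateKeyPath at your actual key):
  # - {name: master, address: 192.168.56.143, internalAddress: 192.168.56.143, user: master, privateKeyPath: "~/.ssh/id_rsa"}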
  roleGroups:
    etcd:
    - master
    control-plane:
    - master
    # designate the master node
    master:
    - master
    # designate the worker nodes
    worker:
    - master
    - node1
    - node2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    # Kubernetes version
    version: v1.21.5
    clusterName: cluster.local
  network:
    # network plugin
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    plainHTTP: false
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []



---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    # KubeSphere version
    version: v3.2.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    containerruntime: docker
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    #   adapter:
    #     resources: {}
    # node_exporter:
    #   resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
  kubeedge:
    enabled: false
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress:
          - ""
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []

Create the Cluster

Next we create the Kubernetes cluster and the KubeSphere management platform from the edited configuration file:

./kk create cluster -f config-sample.yaml

After about 20 minutes the cluster should be up, and the terminal will show output like the following:

#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.56.143:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     the "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2023-04-06 12:48:35
#####################################################

We can now open the KubeSphere web console at NodeIP:30880 and log in with the default account and password (admin/P@88w0rd).
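
Before relying on the console, it's worth confirming from the master node that the cluster itself is healthy. A minimal check; the installer-log command follows the KubeSphere documentation, though the app=ks-install label may vary between versions:

kubectl get nodes        # all three nodes should report Ready
kubectl get pods -A      # wait until every pod is Running or Completed
# Tail the ks-installer logs while KubeSphere components come up:
kubectl logs -n kubesphere-system \
  $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f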

