Deploying Private Repositories for K8s (Harbor container registry + Nexus3 package sources)

Some readers asked which tools I use for the K8s image mirror and the apt mirror, so this post shows the basic configuration. There are plenty of installation tutorials online, so I won't cover setup itself. The main tools are Nexus3 (system/package sources) and Harbor (container registry).

1. Set up the Nexus3 sources (Ubuntu)

1.1 Create storage for the apt sources

image-20251111140912292

1.2 Create the repositories (Docker + apt + K8s)

image-20251111141018175

image-20251111141221381

image-20251111141356972

image-20251111141528349
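Once the three repositories exist, their endpoints follow Nexus's fixed `/repository/<name>/` URL pattern. The sketch below (assuming the 192.168.1.12:8081 address used in the host configuration later) prints one curl probe per repository; running the printed commands confirms each source answers before any hosts are pointed at it.

```shell
#!/bin/bash
# Build a probe command for each repository created above. The repository
# names and the Nexus address match the ones used in the host configuration.
NEXUS_BASE="http://192.168.1.12:8081/repository"
probe_cmd() {
    echo "curl -fsI ${NEXUS_BASE}/$1/"
}
for REPO in Ubuntu-Proxy Ubuntu-Docker Ubuntu-K8s; do
    probe_cmd "$REPO"
done
```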

2. Configure the system to use the private sources (Ubuntu)

2.1 Source initialization

2.1.1 Initialize the apt source

root@k8smaster232:~# cat /etc/apt/sources.list.d/ubuntu.sources
Types: deb
URIs: http://192.168.1.12:8081/repository/Ubuntu-Proxy/
Suites: noble noble-updates noble-backports
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg

Types: deb
URIs: http://192.168.1.12:8081/repository/Ubuntu-Proxy/
Suites: noble-security
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
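Both stanzas share everything except the suite list, so the deb822 entry can also be generated. A purely illustrative helper (the mirror URL is the Nexus proxy above; the `stanza` function name is mine):

```shell
#!/bin/bash
# Emit one deb822 stanza for the Nexus Ubuntu proxy; suites are passed as
# arguments, everything else is fixed.
MIRROR="http://192.168.1.12:8081/repository/Ubuntu-Proxy/"
stanza() {
    printf 'Types: deb\nURIs: %s\nSuites: %s\nComponents: main restricted universe multiverse\nSigned-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg\n' "$MIRROR" "$*"
}
stanza noble noble-updates noble-backports
```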

2.1.2 Initialize the Docker source

# The proxy's upstream is Aliyun, so import Aliyun's repository signing key
root@k8smaster232:~# curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Repository definition
root@k8smaster232:~# cat /etc/apt/sources.list.d/docker.list
deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] http://192.168.1.12:8081/repository/Ubuntu-Docker noble stable

2.1.3 Initialize the K8s source

# The proxy's upstream is Aliyun, so import Aliyun's repository signing key
root@k8smaster232:~# curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.34/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# Repository definition
root@master233:~# cat /etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] http://192.168.1.12:8081/repository/Ubuntu-K8s/v1.34/deb/ /

2.1.4 Confirm the sources are ready

root@k8smaster232:~# apt update
Hit:1 http://192.168.1.12:8081/repository/Ubuntu-Docker noble InRelease
Get:2 http://192.168.1.12:8081/repository/Ubuntu-K8s/v1.34/deb  InRelease [1,186 B]
Hit:3 http://192.168.1.12:8081/repository/Ubuntu-Proxy noble InRelease
Hit:4 http://192.168.1.12:8081/repository/Ubuntu-Proxy noble-updates InRelease
Hit:5 http://192.168.1.12:8081/repository/Ubuntu-Proxy noble-backports InRelease
Hit:6 http://192.168.1.12:8081/repository/Ubuntu-Proxy noble-security InRelease
Get:7 http://192.168.1.12:8081/repository/Ubuntu-K8s/v1.34/deb  Packages [4,405 B]

2.1.5 Install Docker and K8s

# Install Docker
root@k8smaster232:~# apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
....
Reading package lists... Done
.....
Get:1 http://192.168.1.12:8081/repository/Ubuntu-Docker noble/stable amd64 containerd.io amd64 2.1.5-1~ubuntu.24.04~noble [22.4 MB]
.......
# Install K8s
root@k8smaster232:~# apt install kubeadm=1.34.1-1.1 kubelet=1.34.1-1.1 kubectl=1.34.1-1.1
...
Reading package lists... Done
After this operation, 333 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://192.168.1.12:8081/repository/Ubuntu-Proxy noble/main amd64 conntrack amd64 1:1.4.8-1ubuntu1 [37.9 kB]
...

3. Create the private K8s image registry

3.1 Create the projects in Harbor

Create three projects here:

  • google_containers — the core K8s images
  • tigera — the Calico operator image
  • calico — the Calico network plugin images

image-20251111154121756

3.2 List the images required for the K8s installation

# List for the default version
root@k8smaster232:~# kubeadm config images list
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/coredns/coredns:v1.12.1
registry.k8s.io/pause:3.10.1
registry.k8s.io/etcd:3.6.4-0
# List for a specific version
root@k8smaster232:~# kubeadm config images list --kubernetes-version=v1.34.1
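The mirroring script in section 3.4 splits each of these references into a name and a tag with awk. That split can be sanity-checked in isolation; note how `registry.k8s.io/coredns/coredns` collapses to plain `coredns` under the private project (the `split_image` wrapper is mine, the awk logic is the script's):

```shell
#!/bin/bash
# Split an image reference into "name tag": keep only the last path
# component for the name, and everything after the colon for the tag.
# Assumes the registry host carries no port (no extra colon).
split_image() {
    local IMAGE="$1"
    local NAME VERSION
    NAME=$(echo "$IMAGE" | awk -F'/' '{print $NF}' | awk -F':' '{print $1}')
    VERSION=$(echo "$IMAGE" | awk -F':' '{print $2}')
    echo "${NAME} ${VERSION}"
}
split_image "registry.k8s.io/coredns/coredns:v1.12.1"   # → coredns v1.12.1
split_image "registry.k8s.io/etcd:3.6.4-0"              # → etcd 3.6.4-0
```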

3.3 Configure Docker for the private registry

root@k8smaster232:~# vi /etc/docker/daemon.json
# Add the Harbor host to the insecure registries (host[:port] only, no scheme)
{
  "insecure-registries": ["abc.dns.com"]
}
root@k8smaster232:~# systemctl restart docker

3.4 Pull the images and push them to Harbor (easier with a script)

root@k8smaster232:~# vi k8s_images_pull_push.sh
#!/bin/bash
# Harbor host and target project; the Registry API lives on the host,
# the project is only part of the image path
REGISTRY_HOST="abc.dns.com"
PROJECT="google_containers"
PRIVATE_REGISTRY="${REGISTRY_HOST}/${PROJECT}"

# Harbor credentials
HARBOR_USERNAME="admin"
HARBOR_PASSWORD="xxxxxx"

# K8s image list
IMAGES=(
    "registry.k8s.io/kube-apiserver:v1.34.1"
    "registry.k8s.io/kube-controller-manager:v1.34.1"
    "registry.k8s.io/kube-scheduler:v1.34.1"
    "registry.k8s.io/kube-proxy:v1.34.1"
    "registry.k8s.io/coredns/coredns:v1.12.1"
    "registry.k8s.io/pause:3.10.1"
    "registry.k8s.io/etcd:3.6.4-0"
)
# Aliyun mirror list (alternative upstream; not used by the loop below)
IMAGES2=(
    "registry.aliyuncs.com/google_containers/kube-apiserver:v1.34.1"
    "registry.aliyuncs.com/google_containers/kube-controller-manager:v1.34.1"
    "registry.aliyuncs.com/google_containers/kube-scheduler:v1.34.1"
    "registry.aliyuncs.com/google_containers/kube-proxy:v1.34.1"
    "registry.aliyuncs.com/google_containers/coredns/coredns:v1.12.1"
    "registry.aliyuncs.com/google_containers/pause:3.10.1"
    "registry.aliyuncs.com/google_containers/etcd:3.6.4-0"
)

# Log in to the private registry
echo "Logging in to Harbor..."
echo "$HARBOR_PASSWORD" | docker login "$REGISTRY_HOST" -u "$HARBOR_USERNAME" --password-stdin

# Iterate over the image list
for IMAGE in "${IMAGES[@]}"; do
    # Extract the image name and version
    NAME=$(echo "$IMAGE" | awk -F'/' '{print $NF}' | awk -F':' '{print $1}')
    VERSION=$(echo "$IMAGE" | awk -F':' '{print $2}')

    # Build the new image name
    NEW_IMAGE="${PRIVATE_REGISTRY}/${NAME}:${VERSION}"

    # Check whether the same tag already exists in Harbor
    echo "Checking if image $NEW_IMAGE exists in Harbor..."
    RESPONSE=$(curl -sk -u "$HARBOR_USERNAME:$HARBOR_PASSWORD" "https://${REGISTRY_HOST}/v2/${PROJECT}/${NAME}/tags/list")

    if echo "$RESPONSE" | grep -q "\"${VERSION}\""; then
        echo "Image $NEW_IMAGE already exists in Harbor. Skipping upload."
        continue
    fi

    # Pull the upstream image
    echo "Pulling image: $IMAGE"
    docker pull "$IMAGE"

    # Check whether the pull succeeded
    if [[ $? -ne 0 ]]; then
        echo "Failed to pull image: $IMAGE"
        continue
    fi

    # Re-tag for the private registry
    echo "Tagging image: $IMAGE as $NEW_IMAGE"
    docker tag "$IMAGE" "$NEW_IMAGE"

    # Push to the private registry
    echo "Pushing image: $NEW_IMAGE"
    docker push "$NEW_IMAGE"

    # Check whether the push succeeded
    if [[ $? -ne 0 ]]; then
        echo "Failed to push image: $NEW_IMAGE"
    fi
done

echo "All images have been processed."

root@k8smaster232:~# chmod +x k8s_images_pull_push.sh
root@k8smaster232:~# ./k8s_images_pull_push.sh
Logging in to Harbor...
Checking if image abc.dns.com/google_containers/etcd:3.6.4-0 exists in Harbor...
Pulling image: registry.k8s.io/etcd:3.6.4-0
3.6.4-0: Pulling from etcd
a62778643d56: Pull complete
b0652f640f8e: Pull complete
7c12895b777b: Pull complete
3214acf345c0: Pull complete
5664b15f108b: Pull complete
0bab15eea81d: Pull complete
4aa0ea1413d3: Pull complete
da7816fa955e: Pull complete
ddf74a63f7d8: Pull complete
38ba01b3f28c: Pull complete
02025ef0e84d: Pull complete
Digest: sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
Status: Downloaded newer image for registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/etcd:3.6.4-0
Tagging image: registry.k8s.io/etcd:3.6.4-0 as abc.dns.com/google_containers/etcd:3.6.4-0
Pushing image: abc.dns.com/google_containers/etcd:3.6.4-0
The push refers to repository [abc.dns.com/google_containers/etcd]
b0652f640f8e: Pushed
3214acf345c0: Mounted from google_containers/kube-scheduler
0bab15eea81d: Pushed
da7816fa955e: Mounted from google_containers/kube-scheduler
ddf74a63f7d8: Mounted from google_containers/kube-scheduler
02025ef0e84d: Pushed
a62778643d56: Pushed
7c12895b777b: Mounted from google_containers/kube-scheduler
5664b15f108b: Mounted from google_containers/kube-scheduler
4aa0ea1413d3: Mounted from google_containers/kube-scheduler
38ba01b3f28c: Pushed
4eff9a62d888: Mounted from google_containers/kube-scheduler
35d697fe2738: Mounted from google_containers/kube-scheduler
bfb59b82a9b6: Mounted from google_containers/kube-scheduler
3.6.4-0: digest: sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f size: 3044

i Info → Not all multiplatform-content is present and only the available single-platform image was pushed
         sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19 -> sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
All images have been processed.
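The duplicate check in the script matches the tag against the JSON that Harbor's `tags/list` endpoint returns. Wrapping the version in quotes makes the match exact; the pattern can be checked offline against a sample response (the JSON below is a hypothetical example of that endpoint's shape, not captured output):

```shell
#!/bin/bash
# Hypothetical sample of a Harbor /v2/<repo>/tags/list response; grepping for
# the tag wrapped in quotes keeps "3.6.4-0" from matching e.g. "3.6.4-01".
RESPONSE='{"name":"google_containers/etcd","tags":["3.5.9-0","3.6.4-0"]}'
VERSION="3.6.4-0"
if echo "$RESPONSE" | grep -q "\"${VERSION}\""; then
    echo "tag exists, skip upload"
else
    echo "tag missing, mirror it"
fi
```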

image-20251111155755072

4. Create the Calico network plugin mirror

4.1 Create a K8s cluster

For this demo I use a cluster I have already built; it is only needed to find out which images Calico requires. If you already know the list, skip this step and pull them directly.

4.1.1 Download the Calico manifests, adjust them, and see which images they load

# Download tigera-operator.yaml
root@k8smaster232:~# mkdir calico
root@k8smaster232:~# cd calico
root@k8smaster232:~/calico# wget https://raw.githubusercontent.com/projectcalico/calico/v3.31.0/manifests/tigera-operator.yaml
root@k8smaster232:~/calico# cat tigera-operator.yaml |grep "image"
imagePullSecrets:
      - imagesets.operator.tigera.io
      - imagesets
          image: quay.io/tigera/operator:v1.40.0 # this is one image to mirror
          imagePullPolicy: IfNotPresent
# Apply it
root@master233:~/calico# kubectl create -f  tigera-operator.yaml

# Download custom-resources.yaml
root@master233:~/calico# wget https://raw.githubusercontent.com/projectcalico/calico/v3.31.0/manifests/custom-resources.yaml

# Adjust the config, then apply it
root@k8smaster232:~/calico# cat custom-resources.yaml
...
  calicoNetwork:
    ipPools:
      - name: default-ipv4-ippool
        blockSize: 26
        cidr: 10.244.0.0/16 # change this to the pod CIDR you used when deploying K8s
        encapsulation: VXLANCrossSubnet
        natOutgoing: Enabled
        nodeSelector: all()
...
  
root@k8smaster232:~/calico# kubectl create -f custom-resources.yaml
# List the images the calico-system pods use
root@master233:~/calico# kubectl get pods -n calico-system -o jsonpath='{range .items[*]}{.spec.containers[*].image}{"\n"}{end}' |sort|uniq
quay.io/calico/apiserver:v3.31.0
quay.io/calico/csi:v3.31.0 quay.io/calico/node-driver-registrar:v3.31.0
quay.io/calico/goldmane:v3.31.0
quay.io/calico/kube-controllers:v3.31.0
quay.io/calico/node:v3.31.0
quay.io/calico/typha:v3.31.0
quay.io/calico/whisker:v3.31.0 quay.io/calico/whisker-backend:v3.31.0
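These entries drop straight into the IMAGES array of the script in 4.1.3. To preview the re-tagging without touching Docker, the mapping into the abc.dns.com/calico project (the Harbor project created above) can be printed first; the `tag_cmd` helper is mine, for illustration only:

```shell
#!/bin/bash
# Dry run: print the "docker tag" command each Calico image would get,
# using the last path component (e.g. node:v3.31.0) as the target name.
PRIVATE_REGISTRY="abc.dns.com/calico"
tag_cmd() {
    local IMAGE="$1"
    local NAME_TAG="${IMAGE##*/}"   # strip everything up to the last slash
    echo "docker tag ${IMAGE} ${PRIVATE_REGISTRY}/${NAME_TAG}"
}
for IMAGE in quay.io/calico/node:v3.31.0 quay.io/calico/typha:v3.31.0; do
    tag_cmd "$IMAGE"
done
```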

4.1.2 Push the images to the private registry

# Just adapt the script above for these; the quay.io/tigera/operator:v1.40.0 image I re-tag and push by hand first
root@k8smaster232:~# docker pull quay.io/tigera/operator:v1.40.0
root@k8smaster232:~# docker tag quay.io/tigera/operator:v1.40.0 abc.dns.com/tigera/operator:v1.40.0
root@k8smaster232:~# docker push abc.dns.com/tigera/operator:v1.40.0

image-20251111163824217

4.1.3 The script

root@k8smaster232:~# cat calico_images_pull_push.sh
#!/bin/bash
# Harbor host and target project; the Registry API lives on the host,
# the project is only part of the image path
REGISTRY_HOST="abc.dns.com"
PROJECT="calico"
PRIVATE_REGISTRY="${REGISTRY_HOST}/${PROJECT}"
# Harbor credentials
HARBOR_USERNAME="admin"
HARBOR_PASSWORD="xxx"

# Calico image list
IMAGES=(
    "quay.io/calico/apiserver:v3.31.0"
    "quay.io/calico/csi:v3.31.0"
    "quay.io/calico/node-driver-registrar:v3.31.0"
    "quay.io/calico/goldmane:v3.31.0"
    "quay.io/calico/kube-controllers:v3.31.0"
    "quay.io/calico/node:v3.31.0"
    "quay.io/calico/typha:v3.31.0"
    "quay.io/calico/whisker:v3.31.0"
    "quay.io/calico/whisker-backend:v3.31.0"
)

# Log in to the private registry
echo "Logging in to Harbor..."
echo "$HARBOR_PASSWORD" | docker login "$REGISTRY_HOST" -u "$HARBOR_USERNAME" --password-stdin

# Iterate over the image list
for IMAGE in "${IMAGES[@]}"; do
    # Extract the image name and version
    NAME=$(echo "$IMAGE" | awk -F'/' '{print $NF}' | awk -F':' '{print $1}')
    VERSION=$(echo "$IMAGE" | awk -F':' '{print $2}')

    # Build the new image name
    NEW_IMAGE="${PRIVATE_REGISTRY}/${NAME}:${VERSION}"

    # Check whether the same tag already exists in Harbor
    echo "Checking if image $NEW_IMAGE exists in Harbor..."
    RESPONSE=$(curl -sk -u "$HARBOR_USERNAME:$HARBOR_PASSWORD" "https://${REGISTRY_HOST}/v2/${PROJECT}/${NAME}/tags/list")

    if echo "$RESPONSE" | grep -q "\"${VERSION}\""; then
        echo "Image $NEW_IMAGE already exists in Harbor. Skipping upload."
        continue
    fi

    # Pull the upstream image
    echo "Pulling image: $IMAGE"
    docker pull "$IMAGE"

    # Check whether the pull succeeded
    if [[ $? -ne 0 ]]; then
        echo "Failed to pull image: $IMAGE"
        continue
    fi

    # Re-tag for the private registry
    echo "Tagging image: $IMAGE as $NEW_IMAGE"
    docker tag "$IMAGE" "$NEW_IMAGE"

    # Push to the private registry
    echo "Pushing image: $NEW_IMAGE"
    docker push "$NEW_IMAGE"

    # Check whether the push succeeded
    if [[ $? -ne 0 ]]; then
        echo "Failed to push image: $NEW_IMAGE"
    fi
done

echo "All images have been processed."

image-20251111163759628