Installing JiHu GitLab Runner on Kubernetes
1. Preliminary Notes
While installing gitlab-runner with a self-signed certificate enabled, every approach I tried, following various official documents, failed, which was awkward. The setup below therefore registers the runner over HTTP while GitLab itself is still accessed over HTTPS; the details of the workaround follow.
I will keep digging into the certificate issue when time permits.
2. Prerequisites
- Kubernetes cluster installation
  - 1 - Building a base Kubernetes environment on CentOS 7 (Part 1)
  - 2 - Building a base Kubernetes environment on CentOS 7 (Part 2)
  - 3 - Building a base Kubernetes environment on CentOS 7 (Part 3)
  - 1 - Installing JiHu GitLab on Kubernetes
  - 2 - Installing JiHu GitLab Runner on Kubernetes
- Check the environment
[root@master140 jihu-15.9.3]# kubectl -n gitlab get pod,svc
NAME READY STATUS RESTARTS AGE
pod/gitlab-gitaly-0 1/1 Running 0 149m
pod/gitlab-gitlab-exporter-84dc494465-pnkpg 1/1 Running 0 149m
pod/gitlab-gitlab-shell-68df76c86c-bbf2p 1/1 Running 0 149m
pod/gitlab-gitlab-shell-68df76c86c-z44w4 1/1 Running 0 149m
pod/gitlab-kas-796dcfddf6-9dxlz 1/1 Running 0 149m
pod/gitlab-kas-796dcfddf6-dk7ms 1/1 Running 0 149m
pod/gitlab-migrations-1-v9zgh 0/1 Completed 0 149m
pod/gitlab-minio-67ccd59c56-nzhtq 1/1 Running 0 149m
pod/gitlab-minio-create-buckets-1-2h4jj 0/1 Completed 0 149m
pod/gitlab-postgresql-0 2/2 Running 0 149m
pod/gitlab-redis-master-0 2/2 Running 0 149m
pod/gitlab-registry-6c69c7b68f-rwnnx 1/1 Running 0 149m
pod/gitlab-registry-6c69c7b68f-rzhjg 1/1 Running 0 149m
pod/gitlab-runner-6794799cb7-vrtwt 1/1 Running 0 34m
pod/gitlab-sidekiq-all-in-1-v2-769f56758-b8rwn 1/1 Running 0 149m
pod/gitlab-toolbox-5fd59d8bf9-zf7wx 1/1 Running 0 149m
pod/gitlab-webservice-default-5fcc77db88-rbzxf 2/2 Running 0 149m
pod/gitlab-webservice-default-5fcc77db88-vrsgl 2/2 Running 0 149m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/gitlab-gitaly ClusterIP None <none> 8075/TCP,9236/TCP 149m
service/gitlab-gitlab-exporter ClusterIP 10.96.1.193 <none> 9168/TCP 149m
service/gitlab-gitlab-shell ClusterIP 10.96.2.115 <none> 22/TCP 149m
service/gitlab-kas ClusterIP 10.96.0.12 <none> 8150/TCP,8153/TCP,8154/TCP,8151/TCP 149m
service/gitlab-minio-svc ClusterIP 10.96.3.247 <none> 9000/TCP 149m
service/gitlab-postgresql ClusterIP 10.96.0.47 <none> 5432/TCP 149m
service/gitlab-postgresql-headless ClusterIP None <none> 5432/TCP 149m
service/gitlab-postgresql-metrics ClusterIP 10.96.2.237 <none> 9187/TCP 149m
service/gitlab-redis-headless ClusterIP None <none> 6379/TCP 149m
service/gitlab-redis-master ClusterIP 10.96.0.23 <none> 6379/TCP 149m
service/gitlab-redis-metrics ClusterIP 10.96.0.140 <none> 9121/TCP 149m
service/gitlab-registry ClusterIP 10.96.0.183 <none> 5000/TCP 149m
service/gitlab-webservice-default NodePort 10.96.1.222 <none> 8080:32491/TCP,8181:32000/TCP,8083:32483/TCP 149m
[root@master140 jihu-15.9.3]#
- Prepare a Spring Boot project
  - Using tarot-sonar-server as the example
  - The example integrates the in-house tarot-framework, JaCoCo, unit tests, Sonar, and more
- Under src/main, create two directories, docker and charts; the resulting layout is:
src/main
├── charts
│   └── tarot-sonar-server
│       ├── Chart.yaml
│       ├── templates
│       │   ├── _helpers.tpl
│       │   ├── deployment.yaml
│       │   └── service.yaml
│       └── values.yaml
├── docker
│   └── Dockerfile
├── java
└── resources
- The Dockerfile:
FROM dev-bj.hatech.com.cn/library/openjdk:17-jdk-slim
MAINTAINER jiangincan
VOLUME /tmp
ADD tarot-sonar-server-0.0.1-SNAPSHOT.jar tarot-sonar-server-0.0.1.jar
EXPOSE 4000
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/tarot-sonar-server-0.0.1.jar"]
- The Helm chart files
- Chart.yaml
apiVersion: v2
name: tarot-sonar-server
description: "Test application for Sonar, installed with Helm"
type: application
# version of this chart
version: 0.0.1
# version of the application
appVersion: "0.0.1-SNAPSHOT"
- values.yaml
global:
  warehouse: dev-bj.hatech.com.cn
  libraryObject: library
  libraryXincan: xincan
  # storageClass.create: [false, true]
  #   false: the customer provides the NFS share and StorageClass; we create nothing
  #   true: we create the storage ourselves
  storageClass:
    create: false
    name: tarot-nfs-storage
    nfs:
      server: 10.1.90.139
      path: /nfs/data/cicd
  # serviceAccount.create: [false, true]
  #   false: the customer provides the ServiceAccount; put its name in serviceAccount.name
  #   true: we create the ServiceAccount ourselves
  # ------
  # serviceAccount.secret.create: [false, true]
  #   false: the customer provides the image-pull Secret; put its name in secret.name
  #   true: we create the Secret ourselves
  serviceAccount:
    create: false
    name: tarot-rbac
    secret:
      create: false
      name: tarot-rbac-secret
      username: hatech-jg
      password: Hatech1221!
      email: jiangxincan@hatech.com.cn

replicaCount: 1

app:
  type: server

image:
  repository: tarot-sonar-server
  pullPolicy: IfNotPresent
  tag: "0.0.1-SNAPSHOT"

nameOverride: "tarot-sonar-server"
fullnameOverride: "tarot-sonar-server"

service:
  type: NodePort
  client:
    port: 4000
    nodePort: 31220

resources:
  limits:
    cpu: 1000m
    memory: 2Gi
  requests:
    cpu: 500m
    memory: 1Gi
- templates/_helpers.tpl
{{/*
Expand the name of the chart.
*/}}
{{- define "tarot.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "tarot.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "tarot.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "tarot.labels" -}}
{{ include "tarot.selectorLabels" . }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "tarot.selectorLabels" -}}
tarot.kubernetes.io/company: tarot.cn
tarot.kubernetes.io/version: {{ .Chart.Version }}
tarot.kubernetes.io/product: tarot
tarot.kubernetes.io/type: {{ .Values.app.type }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "tarot.serviceAccountName" -}}
{{- if .Values.global.serviceAccount.create }}
{{- default (include "tarot.fullname" .) .Values.global.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.global.serviceAccount.name }}
{{- end }}
{{- end }}
{{/*
Create Secret, download image
*/}}
{{- define "tarot.imagePullSecret" }}
{{- with .Values.global.serviceAccount.secret }}
{{- printf "{\"auths\":{\"%s\":{\"username\":\"%s\",\"password\":\"%s\",\"email\":\"%s\",\"auth\":\"%s\"}}}" $.Values.global.warehouse .username .password .email (printf "%s:%s" .username .password | b64enc) | b64enc }}
{{- end }}
{{- end }}
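To see what this helper actually renders, here is a small shell sketch (outside Helm) that builds the same double-base64-encoded `.dockerconfigjson` payload from the sample credentials in values.yaml:

```shell
# Rebuild the tarot.imagePullSecret payload by hand (sketch; values copied
# from the sample values.yaml above).
warehouse=dev-bj.hatech.com.cn
username=hatech-jg
password='Hatech1221!'
email=jiangxincan@hatech.com.cn

# inner "auth" field: base64 of "user:password"
auth=$(printf '%s:%s' "$username" "$password" | base64 -w0)

# the whole auths document is base64-encoded again, which is what ends up
# in the Secret's .dockerconfigjson key
dockerconfigjson=$(printf '{"auths":{"%s":{"username":"%s","password":"%s","email":"%s","auth":"%s"}}}' \
  "$warehouse" "$username" "$password" "$email" "$auth" | base64 -w0)

# decoding it shows the JSON that kubelet reads when pulling images
printf '%s' "$dockerconfigjson" | base64 -d
```

This is the same document that `kubectl create secret docker-registry` would generate for those credentials.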
- deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "tarot.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "tarot.labels" . | nindent 4 }}
    xincan.kubernetes.io/app: {{ include "tarot.fullname" . }}
spec:
  replicas: 1
  selector:
    matchLabels:
      {{- include "tarot.selectorLabels" . | nindent 6 }}
      xincan.kubernetes.io/app: {{ include "tarot.fullname" . }}
  template:
    metadata:
      labels:
        {{- include "tarot.selectorLabels" . | nindent 8 }}
        xincan.kubernetes.io/app: {{ include "tarot.fullname" . }}
    spec:
      imagePullSecrets:
        - name: {{ .Values.global.serviceAccount.secret.name }}
      serviceAccountName: {{ include "tarot.serviceAccountName" . }}
      containers:
        - name: {{ include "tarot.fullname" . }}
          image: "{{ .Values.global.warehouse }}/{{ .Values.global.libraryXincan }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: dev,openapi
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          ports:
            - name: client
              containerPort: {{ .Values.service.client.port }}
          volumeMounts:
            - name: localtime
              mountPath: /etc/localtime
      volumes:
        - name: localtime
          hostPath:
            type: File
            path: /etc/localtime
- service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "tarot.fullname" . }}-service
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "tarot.labels" . | nindent 4 }}
    xincan.kubernetes.io/app: {{ include "tarot.fullname" . }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - name: client
      port: {{ .Values.service.client.port }}
      targetPort: {{ .Values.service.client.port }}
      protocol: TCP
      {{- if eq .Values.service.type "NodePort" }}
      nodePort: {{ .Values.service.client.nodePort }}
      {{- end }}
  selector:
    {{- include "tarot.selectorLabels" . | nindent 4 }}
    xincan.kubernetes.io/app: {{ include "tarot.fullname" . }}
- pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>cn.com.tarotframework</groupId>
        <artifactId>tarot-framework</artifactId>
        <version>0.0.1-SNAPSHOT</version>
    </parent>
    <artifactId>tarot-sonar-server</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>tarot-sonar-server</name>
    <description>Application that generates reports from Sonar monitoring</description>
    <properties>
        <knife4j-openapi3-jakarta-spring-boot-starter.version>4.0.0</knife4j-openapi3-jakarta-spring-boot-starter.version>
        <sonar-scanner-api.version>2.16.3.1081</sonar-scanner-api.version>
        <!-- sonar project key -->
        <sonar.projectKey>test</sonar.projectKey>
        <!-- sonar project name -->
        <sonar.projectName>test.sonar-server</sonar.projectName>
        <!-- upload the jacoco coverage results to sonar -->
        <sonar.java.coveragePlugin>jacoco</sonar.java.coveragePlugin>
        <!-- sonar host -->
        <sonar.host.url>http://10.1.10.135:9000</sonar.host.url>
        <!-- sonar token -->
        <sonar.login>9ccd0e45fbe4f0eed94ae42ab3fc154375b50703</sonar.login>
        <sonar.dynamicAnalysis>reuseReports</sonar.dynamicAnalysis>
        <!-- point all jacoco output at the same parent directory -->
        <sonar.jacoco.reportPaths>target/site/jacoco.exec</sonar.jacoco.reportPaths>
        <!-- language -->
        <sonar.language>java</sonar.language>
        <sonar-maven-plugin.version>3.9.1.2184</sonar-maven-plugin.version>
        <maven-surefire-plugin.version>3.0.0</maven-surefire-plugin.version>
        <jacoco-maven-plugin.version>0.8.8</jacoco-maven-plugin.version>
        <maven-resources-plugin.version>3.3.1</maven-resources-plugin.version>
    </properties>
    <dependencies>
        <!-- tarot-web starter -->
        <dependency>
            <groupId>cn.com.tarotframework.boot</groupId>
            <artifactId>tarot-web-spring-boot-starter</artifactId>
            <version>${project.parent.version}</version>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>2.0.2</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>com.github.xiaoymin</groupId>
            <artifactId>knife4j-openapi3-jakarta-spring-boot-starter</artifactId>
            <version>${knife4j-openapi3-jakarta-spring-boot-starter.version}</version>
        </dependency>
        <dependency>
            <groupId>org.sonarsource.scanner.api</groupId>
            <artifactId>sonar-scanner-api</artifactId>
            <version>${sonar-scanner-api.version}</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <executions>
                    <execution>
                        <goals>
                            <goal>repackage</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <!-- sonar scanner plugin -->
            <plugin>
                <groupId>org.sonarsource.scanner.maven</groupId>
                <artifactId>sonar-maven-plugin</artifactId>
                <version>${sonar-maven-plugin.version}</version>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>${maven-surefire-plugin.version}</version>
            </plugin>
            <!-- resources plugin: copies the Dockerfile and helm chart into target/ at package time -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-resources-plugin</artifactId>
                <version>${maven-resources-plugin.version}</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>copy-resources</goal>
                        </goals>
                        <configuration>
                            <outputDirectory>./target</outputDirectory>
                            <resources>
                                <resource>
                                    <directory>${basedir}/src/main/docker</directory>
                                </resource>
                                <resource>
                                    <directory>${basedir}/src/main/charts</directory>
                                </resource>
                            </resources>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <!-- jacoco plugin -->
            <plugin>
                <groupId>org.jacoco</groupId>
                <artifactId>jacoco-maven-plugin</artifactId>
                <version>${jacoco-maven-plugin.version}</version>
                <configuration>
                    <append>true</append>
                </configuration>
                <executions>
                    <execution>
                        <id>jacoco-initialize</id>
                        <goals>
                            <goal>prepare-agent</goal>
                        </goals>
                        <phase>test-compile</phase>
                    </execution>
                    <execution>
                        <id>jacoco-site</id>
                        <phase>verify</phase>
                        <goals>
                            <goal>report</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <!-- docker plugin -->
            <plugin>
                <groupId>com.spotify</groupId>
                <artifactId>docker-maven-plugin</artifactId>
                <version>${docker-maven-plugin.version}</version>
                <configuration>
                    <!-- docker daemon used to build the image -->
                    <dockerHost>${docker-host-url}</dockerHost>
                    <!-- image name and version [ip:port/name:tag] -->
                    <imageName>${docker-harbor-registry-url}/xincan/${project.name}:${project.version}</imageName>
                    <!-- harbor registry address -->
                    <registryUrl>${docker-harbor-registry-url}</registryUrl>
                    <!-- Dockerfile directory -->
                    <dockerDirectory>src/main/docker</dockerDirectory>
                    <!-- overwrite existing image tags -->
                    <forceTags>true</forceTags>
                    <imageTags>
                        <!-- image tag -->
                        <imageTag>${project.version}</imageTag>
                    </imageTags>
                    <!-- copy the jar into the docker build context -->
                    <resources>
                        <resource>
                            <targetPath>/</targetPath>
                            <directory>${project.build.directory}</directory>
                            <include>${project.build.finalName}.jar</include>
                        </resource>
                    </resources>
                    <!-- id of the server entry configured in maven settings.xml -->
                    <serverId>docker-harbor</serverId>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
3. CI/CD Preparation
- Build the pipeline image
  - We use maven-3.8.5 and openjdk-17
  - The matching image on dockerhub.com is maven:3.8.5-openjdk-17-slim
  - The Maven settings must be replaced with your own (or your company's), pointing at your Nexus or another reachable repository
  - Copy the prepared settings file into the container's Maven conf directory (settings.xml: see the appendix)
  - Finally, commit the container to a new image and push the image to Harbor
## Pull the image
[root@master140 ~]# docker pull maven:3.8.5-openjdk-17-slim
## Start a container from it
[root@master140 ~]# docker run --name maven -idt maven:3.8.5-openjdk-17-slim /bin/bash
## Copy the settings file in
[root@master140 ~]# docker cp settings.xml maven:/usr/share/maven/conf
## Commit the container to a new image
[root@master140 ~]# docker commit -a "jiangxincan" -m "Replace the maven settings file" -p maven dev-bj.hatech.com.cn/library/hatech-maven:3.8.5-openjdk-17-slim
## Push the image to harbor
[root@master140 ~]# docker push dev-bj.hatech.com.cn/library/hatech-maven:3.8.5-openjdk-17-slim
- Create the required resources on the target Kubernetes cluster
  - Create the namespace
  - Create the dynamically provisioned volume; this test uses NFS
  - Create the RBAC objects, Secret, ConfigMap, and so on
  - The resources look as follows; for details see: gitee.com/xincan/taro…
[root@master140 xincan]# tree tarot
tarot
├── charts
├── Chart.yaml
├── templates
│ ├── configmap.yaml
│ ├── _helpers.tpl
│ ├── provisioner.yaml
│ ├── rbac.yaml
│ ├── secret.yaml
│ └── storageclass.yaml
└── values.yaml
2 directories, 8 files
[root@master140 xincan]#
- Create the namespace
  - The namespace is tarot; the remaining resources and the project itself are installed into it
[root@master140 xincan]# kubectl create ns tarot
# Install the prerequisite resources with helm
[root@master140 xincan]# helm -n tarot install tarot tarot
- Installation result (partial resources)
[root@master140 xincan]# helm -n tarot ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
tarot-base tarot 1 2023-04-07 19:33:39.421058539 +0800 CST deployed tarot-0.0.1 0.0.1
tarot-sonar-server tarot 5 2023-04-10 08:06:39.590154115 +0000 UTC deployed tarot-sonar-server-0.0.1 0.0.1-SNAPSHOT
[root@master140 xincan]#
[root@master140 xincan]# kubectl -n tarot get sa,secrets,cm,pod,svc,ing
NAME SECRETS AGE
serviceaccount/default 1 3d5h
serviceaccount/tarot-rbac 1 2d21h
NAME TYPE DATA AGE
secret/default-token-xt25v kubernetes.io/service-account-token 3 3d5h
secret/sh.helm.release.v1.tarot-base.v1 helm.sh/release.v1 1 2d21h
secret/sh.helm.release.v1.tarot-sonar-server.v1 helm.sh/release.v1 1 2d20h
secret/sh.helm.release.v1.tarot-sonar-server.v2 helm.sh/release.v1 1 7h7m
secret/sh.helm.release.v1.tarot-sonar-server.v3 helm.sh/release.v1 1 68m
secret/sh.helm.release.v1.tarot-sonar-server.v4 helm.sh/release.v1 1 49m
secret/sh.helm.release.v1.tarot-sonar-server.v5 helm.sh/release.v1 1 42m
secret/tarot-rbac-secret kubernetes.io/dockerconfigjson 1 2d21h
secret/tarot-rbac-token-th5j5 kubernetes.io/service-account-token 3 2d21h
NAME DATA AGE
configmap/kube-root-ca.crt 1 3d5h
configmap/tarot-config 1 2d21h
NAME READY STATUS RESTARTS AGE
pod/tarot-nfs-provisioner-59bd4cfdfb-x6bzr 1/1 Running 0 2d21h
[root@master140 xincan]#
- Compose the .gitlab-ci.yml pipeline file and put it in the project root
  - HARBOR_HOSTS: IP address of the image registry
  - HARBOR_ADDRESS: domain name of the image registry
  - HARBOR_LOGIN_USERNAME: registry login username
  - HARBOR_LOGIN_PASSWORD: registry login password
  - HARBOR_REPOSITORY: registry project (repository) name
  - DOCKER_REMOTE_HOST: address and port of a Docker daemon with remote access enabled
  - MAVEN_OPTS: cache location for the third-party packages Maven downloads during pipeline runs; here they are cached under .m2/repository inside the project
  - MAVEN_IMAGE: the combined Maven/JDK image used by pipeline jobs; Maven 3.8.5, openjdk-17-slim
  - CONTAINER_IMAGE: the container image used to build Docker images in the pipeline, version 19.03.12-dind
  - HELM_IMAGE: the image used to run helm commands in the pipeline
  - KUBERNETES_NAMESPACE: the cluster namespace the project or module is deployed into
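GitLab expands `${HARBOR_ADDRESS}` inside the other variables before the jobs run. A plain-shell sketch of the resulting image names (values copied from the variables block below):

```shell
# Simulate GitLab CI's variable expansion for the image variables (sketch only;
# in the pipeline this is done by GitLab itself, not by a shell).
HARBOR_ADDRESS="dev-bj.hatech.com.cn"
MAVEN_IMAGE="${HARBOR_ADDRESS}/library/hatech-maven:3.8.5-openjdk-17-slim"
CONTAINER_IMAGE="${HARBOR_ADDRESS}/library/docker:19.03.12-dind"
HELM_IMAGE="${HARBOR_ADDRESS}/library/helm:3.11.2"

# these are the fully-qualified images each job actually pulls
printf '%s\n' "$MAVEN_IMAGE" "$CONTAINER_IMAGE" "$HELM_IMAGE"
```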
## Define environment variables
variables:
  HARBOR_HOSTS: "192.168.1.80"
  HARBOR_ADDRESS: "dev-bj.hatech.com.cn"
  HARBOR_LOGIN_USERNAME: "hatech-jg"
  HARBOR_LOGIN_PASSWORD: "Hatech1221!"
  HARBOR_REPOSITORY: "xincan"
  # DOCKER_REMOTE_HOST: "10.1.90.139:2375"
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"
  MAVEN_IMAGE: "${HARBOR_ADDRESS}/library/hatech-maven:3.8.5-openjdk-17-slim"
  CONTAINER_IMAGE: "${HARBOR_ADDRESS}/library/docker:19.03.12-dind"
  HELM_IMAGE: "${HARBOR_ADDRESS}/library/helm:3.11.2"
  KUBERNETES_NAMESPACE: "tarot"

## Global cache configuration
cache:
  key: tarot-server-cache
  paths:
    - .m2/repository
    - target/*.jar
    - target/Dockerfile
    - target/tarot-sonar-server
    - .version

## Global stage definitions
stages:
  - test
  - sonar
  - maven
  - docker
  - publish

pre-test-job:
  stage: test
  tags:
    - kubernetes
    - sonar
  only:
    - main
  script:
    - export
    - echo 'Print the current path and all files under it'
    - pwd
    - ls

sonar-job:
  stage: sonar
  image: ${MAVEN_IMAGE}
  tags:
    - kubernetes
    - sonar
  only:
    - main
  script:
    - echo "-- sonar code scan --"
    - mvn clean test verify sonar:sonar ${MAVEN_OPTS}
  artifacts:
    name: ${CI_PROJECT_NAME}-artifacts
    paths:
      - ./target/site/jacoco
  cache:
    key: tarot-server-cache
    paths:
      - .m2/repository

maven-package-job:
  stage: maven
  image: ${MAVEN_IMAGE}
  tags:
    - kubernetes
    - sonar
  only:
    - main
  dependencies:
    - pre-test-job
    - sonar-job
  before_script:
    - CI_PROJECT_VERSION=$(mvn --batch-mode --non-recursive help:evaluate -Dexpression=project.version | grep -v '\[' | tail -1)
    - echo "CI_PROJECT_VERSION=${CI_PROJECT_VERSION}" > .version
  script:
    - echo "Maven package"
    - mvn clean -DskipTests=true package ${MAVEN_OPTS}
  artifacts:
    name: ${CI_PROJECT_NAME}-artifacts
    paths:
      - ./target/Dockerfile
      - ./target/*.jar
      - ./target/*.jar.original
      - ./target/${CI_PROJECT_NAME}
      - ./.version
    expire_in: 1 week
  cache:
    key: tarot-server-cache
    paths:
      - .m2/repository
      - ./target/Dockerfile
      - ./target/*.jar
      - ./target/*.jar.original
      - ./target/${CI_PROJECT_NAME}
      - ./.version
  needs:
    - pre-test-job
    - sonar-job

maven-deploy-job:
  stage: maven
  image: ${MAVEN_IMAGE}
  tags:
    - kubernetes
    - sonar
  only:
    - main
  dependencies:
    - pre-test-job
    - sonar-job
  before_script:
    - CI_PROJECT_VERSION=$(mvn --batch-mode --non-recursive help:evaluate -Dexpression=project.version | grep -v '\[' | tail -1)
    - echo "CI_PROJECT_VERSION=${CI_PROJECT_VERSION}" > .version
  script:
    - echo "Deploy the jar to nexus"
    - source ./.version
    - mvn clean -DskipTests=true deploy ${MAVEN_OPTS}
  artifacts:
    name: ${CI_PROJECT_NAME}-artifacts
    paths:
      - ./target/Dockerfile
      - ./target/*.jar
      - ./target/*.jar.original
      - ./target/${CI_PROJECT_NAME}
      - ./.version
    expire_in: 1 week
  cache:
    key: tarot-server-cache
    paths:
      - .m2/repository
      - ./target/Dockerfile
      - ./target/*.jar
      - ./target/*.jar.original
      - ./target/${CI_PROJECT_NAME}
      - ./.version
  needs:
    - pre-test-job
    - sonar-job

docker-build-job:
  stage: docker
  image: ${CONTAINER_IMAGE}
  tags:
    - kubernetes
    - sonar
  only:
    - main
  dependencies:
    - maven-package-job
    - maven-deploy-job
  # variables:
  #   DOCKER_HOST: tcp://${DOCKER_REMOTE_HOST}
  #   DOCKER_TLS_CERTDIR: ""
  before_script:
    - source ./.version
  script:
    - cd ./target && docker build -t ${HARBOR_ADDRESS}/${HARBOR_REPOSITORY}/${CI_PROJECT_NAME}:${CI_PROJECT_VERSION} .
  artifacts:
    name: ${CI_PROJECT_NAME}-artifacts
    paths:
      - ./.version
  needs:
    - maven-package-job
    - maven-deploy-job

docker-push-job:
  stage: docker
  image: ${CONTAINER_IMAGE}
  tags:
    - kubernetes
    - sonar
  only:
    - main
  dependencies:
    - docker-build-job
  # variables:
  #   DOCKER_HOST: tcp://${DOCKER_REMOTE_HOST}
  #   DOCKER_TLS_CERTDIR: ""
  services:
    - name: ${CONTAINER_IMAGE}
      command:
        # multiple registry mirrors may be added
        - --registry-mirror=https://hub-mirror.c.163.com
        - --registry-mirror=https://registry.docker-cn.com
        # the company's private registry
        - --insecure-registry=${HARBOR_ADDRESS}
  before_script:
    - echo "${HARBOR_HOSTS} ${HARBOR_ADDRESS}" > /etc/hosts
    - source ./.version
    - docker login ${HARBOR_ADDRESS} -u ${HARBOR_LOGIN_USERNAME} -p ${HARBOR_LOGIN_PASSWORD}
  script:
    - docker push ${HARBOR_ADDRESS}/${HARBOR_REPOSITORY}/${CI_PROJECT_NAME}:${CI_PROJECT_VERSION}
  needs:
    - docker-build-job

helm-deploy-job:
  stage: publish
  image: ${HELM_IMAGE}
  tags:
    - kubernetes
    - sonar
  only:
    - main
  dependencies:
    - docker-push-job
  before_script:
    - source ./.version
  script:
    - echo "Deploy the application to kubernetes with helm"
    - cd target && helm -n ${KUBERNETES_NAMESPACE} upgrade --install ${CI_PROJECT_NAME} ${CI_PROJECT_NAME}
  needs:
    - docker-push-job
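The `before_script` / `source ./.version` pattern above is how the jobs hand the Maven project version to each other. A sketch with canned `mvn help:evaluate` output (canned so it runs without Maven; the real pipeline pipes the live mvn output through the same filter):

```shell
# Simulated output of `mvn --batch-mode --non-recursive help:evaluate
# -Dexpression=project.version`: log lines plus the bare version
mvn_output='[INFO] Scanning for projects...
[INFO] --- maven-help-plugin ---
0.0.1-SNAPSHOT'

# drop the [INFO]/log lines, keep the bare version (last remaining line)
CI_PROJECT_VERSION=$(printf '%s\n' "$mvn_output" | grep -v '\[' | tail -1)

# one job writes the version to a file...
echo "CI_PROJECT_VERSION=${CI_PROJECT_VERSION}" > .version

# ...and a later job (receiving .version as an artifact) restores it
unset CI_PROJECT_VERSION
source ./.version
echo "$CI_PROJECT_VERSION"
```

This is why `.version` appears in every job's artifacts and cache lists: it is the only channel carrying the version between otherwise independent jobs.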
- Commit and push the code ([Ctrl + K], [Ctrl + Shift + K])
4. Checking the Results
- The pipeline runs as shown in the figure below
- The cache can be seen in MinIO, and the artifacts can be seen on the corresponding stage in GitLab
- The application is finally deployed into the target cluster namespace
- Check the deployment
[root@master140 tarot-sonar-server]# kubectl -n tarot get pod
NAME READY STATUS RESTARTS AGE
tarot-nfs-provisioner-59bd4cfdfb-x6bzr 1/1 Running 0 2d21h
tarot-sonar-server-574455b77b-cf2tx 1/1 Running 0 7h33m
[root@master140 tarot-sonar-server]# kubectl -n tarot get pod,svc
NAME READY STATUS RESTARTS AGE
pod/tarot-nfs-provisioner-59bd4cfdfb-x6bzr 1/1 Running 0 2d21h
pod/tarot-sonar-server-574455b77b-cf2tx 1/1 Running 0 7h33m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/tarot-sonar-server-service NodePort 10.96.1.200 <none> 4000:31220/TCP 2d20h
[root@master140 tarot-sonar-server]#
- Access the web UI
5. Appendix
The Maven package output looks like this:
target
├── classes
├── generated-sources
├── generated-test-sources
├── maven-archiver
├── maven-status
├── surefire-reports
├── Dockerfile
├── tarot-sonar-server
│   ├── Chart.yaml
│   ├── templates
│   │   ├── _helpers.tpl
│   │   ├── deployment.yaml
│   │   └── service.yaml
│   └── values.yaml
└── test-classes
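Because maven-resources-plugin's copy-resources goal copies the *contents* of src/main/docker and src/main/charts into target/, the docker-build-job can simply run `cd ./target && docker build .`. A filesystem-only sketch of that copy step (hypothetical `demo/` paths; no Maven or Docker involved):

```shell
# Recreate the source layout (sketch)
mkdir -p demo/src/main/docker demo/src/main/charts/tarot-sonar-server demo/target
printf 'FROM openjdk:17-jdk-slim\n' > demo/src/main/docker/Dockerfile
printf 'apiVersion: v2\nname: tarot-sonar-server\n' > demo/src/main/charts/tarot-sonar-server/Chart.yaml

# copy-resources copies directory *contents*, not the directories themselves
cp -r demo/src/main/docker/. demo/target/
cp -r demo/src/main/charts/. demo/target/

# Dockerfile and tarot-sonar-server/ now sit directly under target/,
# matching the tree above and the paths cached by the pipeline
ls demo/target
```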