This document is part of the production-system K8S migration series (number 7 of the set) and covers CI/CD with Jenkins. Jenkins is used here in two modes: CI only, and combined CI/CD. CI only means Jenkins just builds container images and pushes them to Harbor, without deploying to k8s. Combined CI/CD means Jenkins handles the whole flow, from build through deployment.
Deploying Jenkins to K8S
When Jenkins is used for CI only, it can be deployed inside k8s, though this is not recommended. When CI and CD are combined, Jenkins becomes the deployment control server, so it is best to deploy Jenkins on the ops host.
Prepare the image
docker pull jenkins/jenkins:2.164.1
docker tag jenkins/jenkins:2.164.1 harbor.ylls.com/base/jenkins:2.164.1
docker push harbor.ylls.com/base/jenkins:2.164.1
Build the Jenkins base image
Dockerfile
FROM harbor.ylls.com/base/jenkins:2.164.1
USER root
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
RUN echo 'Asia/Shanghai' >/etc/timezone
ADD id_rsa /root/.ssh/id_rsa
ADD config.json /root/.docker/config.json
ADD get-docker.sh /get-docker.sh
RUN echo " StrictHostKeyChecking no" >> /etc/ssh/ssh_config
RUN /get-docker.sh
This Dockerfile does the following:
Set the container user to root
Set the timezone inside the container
Add the ssh private key (needed when pulling code over git; the matching public key should be configured in gitlab)
Add the config file for logging in to the self-hosted harbor registry
Set the ssh client's host-key check to no (the fingerprint yes/no prompt on first login)
Install a docker client that can connect to the host's docker engine, so jenkins can run docker build
config.json
{
"auths": {
"harbor.ylls.com": {
"auth": "************"
}
}
}
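The auth value in config.json is simply the base64 encoding of user:password. A sketch of generating it, assuming the admin/Harbor12345 demo credentials used later in this document (substitute your real ones):

```shell
# Generate the "auth" value for config.json: base64 of "user:password".
# admin / Harbor12345 are the demo credentials used elsewhere in this document.
printf '%s' 'admin:Harbor12345' | base64
# → YWRtaW46SGFyYm9yMTIzNDU=
```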
Download get-docker.sh
curl -fsSL get.docker.com -o get-docker.sh
chmod +x get-docker.sh
Build
docker build . -t harbor.ylls.com/infra/jenkins:2.164.1
docker push harbor.ylls.com/infra/jenkins:2.164.1
Install NFS
Run on every node
yum -y install nfs-utils
Run on the ops host
cat >/etc/exports <<EOF
/server/nfs 172.27.0.0/16(rw,no_root_squash)
EOF
mkdir /server/nfs -p
systemctl start nfs
systemctl enable nfs
Mount the NFS root /server/nfs on every node; all K8S PVs used from here on live under it, and this will not be repeated below.
Deployment preparation
Create the K8S namespace
kubectl create namespace infra
Create the secret
Because infra is a private project in harbor, create a docker-registry type secret named harbor in the infra namespace
kubectl create secret docker-registry harbor --docker-server=harbor.ylls.com --docker-username=admin --docker-password=Harbor12345 -n infra
YAML files
dp.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: jenkins
  namespace: infra
  labels:
    name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      name: jenkins
  template:
    metadata:
      labels:
        app: jenkins
        name: jenkins
    spec:
      volumes:
        - name: data
          nfs:
            server: 10.0.10.10
            path: /server/nfs/jenkins_home
        - name: docker
          hostPath:
            path: /run/docker.sock # mount the host's docker.sock into the pod so it can talk to the host's docker engine
            type: ''
      containers:
        - name: jenkins
          image: harbor.ylls.com/infra/jenkins:2.164.1
          imagePullPolicy: IfNotPresent # pull policy: Always, Never, or IfNotPresent
          ports:
            - containerPort: 8080
              protocol: TCP
          env:
            - name: JAVA_OPTS
              value: -Xmx512m -Xms512m -Dhudson.security.csrf.GlobalCrumbIssuerConfiguration.DISABLE_CSRF_PROTECTION=true
          resources:
            limits:
              cpu: 500m
              memory: 1Gi
            requests:
              cpu: 500m
              memory: 1Gi
          volumeMounts:
            - name: data
              mountPath: /var/jenkins_home
            - name: docker
              mountPath: /run/docker.sock
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      imagePullSecrets: # reference the secret harbor created earlier
        - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext:
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: jenkins
  namespace: infra
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  selector:
    app: jenkins
  type: ClusterIP
  sessionAffinity: None
ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: jenkins
  namespace: infra
spec:
  rules:
    - host: jenkins.ylls.com
      http:
        paths:
          - path: /
            backend:
              serviceName: jenkins
              servicePort: 80
Prepare the JRE base image
Dockerfile
FROM harbor.ylls.com/public/debian9_jre8:8u152
RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
echo 'Asia/Shanghai' >/etc/timezone
RUN echo "deb http://mirrors.aliyun.com/debian stretch main contrib non-free" > /etc/apt/sources.list && \
echo "deb-src http://mirrors.aliyun.com/debian stretch main contrib non-free" >> /etc/apt/sources.list && \
echo "deb http://mirrors.aliyun.com/debian stretch-updates main contrib non-free" >> /etc/apt/sources.list && \
echo "deb-src http://mirrors.aliyun.com/debian stretch-updates main contrib non-free" >> /etc/apt/sources.list && \
echo "deb http://mirrors.aliyun.com/debian-security stretch/updates main contrib non-free" >> /etc/apt/sources.list && \
echo "deb-src http://mirrors.aliyun.com/debian-security stretch/updates main contrib non-free" >> /etc/apt/sources.list && \
apt-get -y update && apt-get -y install procps && mkdir -p /server/logs /server/data /server/prom
ADD config.yml /server/prom/config.yml
ADD jmx_javaagent-0.3.1.jar /server/prom/
ENV JAVA_HOME /opt/java
WORKDIR /server/app
ADD entrypoint.sh /server/entrypoint.sh
CMD ["/server/entrypoint.sh"]
config.yml
---
rules:
- pattern: '.*'
jmx_javaagent-0.3.1.jar
wget https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.3.1/jmx_prometheus_javaagent-0.3.1.jar -O jmx_javaagent-0.3.1.jar
entrypoint.sh
#!/bin/sh
M_OPTS="-Duser.timezone=Asia/Shanghai -javaagent:/server/prom/jmx_javaagent-0.3.1.jar=$(hostname -i):${M_PORT:-"12346"}:/server/prom/config.yml"
C_OPTS=${C_OPTS}
JAR_NAME=${JAR_NAME}
exec java -jar ${M_OPTS} ${C_OPTS} ${JAR_NAME}
$(hostname -i) evaluates to the container's IP address; ${M_PORT:-"12346"} means M_PORT defaults to 12346 when unset (that is what the :-"12346" part does)
C_OPTS carries the connection info used with apollo and is defined in the k8s resource manifest
JAR_NAME is taken from a k8s environment variable and can also be defined in the resource manifest
exec keeps java -jar alive for the life of the container: java -jar ${M_OPTS} ${C_OPTS} ${JAR_NAME} runs in the foreground as pid 1, which nohup cannot achieve.
Note that exec must be the last command in the script: exec replaces the shell process with java, which is then managed directly by the operating system, so any commands after the exec line would never run
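The ${M_PORT:-"12346"} expansion can be checked in isolation; it is plain POSIX default-value substitution:

```shell
# ${VAR:-default} yields the default only when VAR is unset or empty.
unset M_PORT
echo "${M_PORT:-12346}"   # → 12346
M_PORT=9999
echo "${M_PORT:-12346}"   # → 9999
```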
Example of JAR_NAME and C_OPTS in a resource manifest
env:
  - name: JAR_NAME
    value: dubbo-samples-spring-boot-provider-1.0-SNAPSHOT.jar
  - name: C_OPTS
    value: -Denv=dev -Dapollo.meta=http://config.ylls.com
Note: if apollo is deployed inside the k8s cluster, the apollo.meta address can be the svc address, e.g. apollo-configService.infra:8080
This base image is published on dockerhub; feel free to use it
docker pull zxd0079/jre:8u152_debian9.2
Build pipeline for standalone CI
First add the following Parameters
Name : app_name
Type : String Parameter
Default Value :
Description : project name

Name : image_name
Type : String Parameter
Default Value :
Description : image name after a successful docker build, e.g. app/dubbo-demo-service

Name : git_repo
Type : String Parameter
Default Value :
Description : git URL

Name : git_ver
Type : String Parameter
Default Value :
Description : git branch name, tag name, or commit id. A commit id is best: tags can be tampered with, so to avoid taking the blame, the unique commit id is the safer choice

Name : add_tag
Type : String Parameter
Default Value :
Description : manually entered timestamp, e.g. 190117_1920, to stamp the final docker image

Name : mvn_dir
Type : String Parameter
Default Value : ./
Description : directory inside the project used for mvn packaging

Name : target_dir
Type : String Parameter
Default Value : ./target
Description : target directory after a successful build

Name : mvn_cmd
Type : String Parameter
Default Value : mvn clean package -Dmaven.test.skip=true
Description : maven packaging command

Name : base_image
Type : Choice Parameter
Default Value :
base/jre7:7u80
base/jre8:8u112
Description : selectable base images

Name : maven
Type : Choice Parameter
Default Value :
3.6.0-8u181
3.2.5-6u025
2.2.1-6u025
Description : selectable maven versions
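The git_ver and add_tag values can be produced with standard commands before triggering a build; a sketch (`git rev-parse` is shown commented because it must run inside the project clone):

```shell
# For git_ver: the current commit id (run inside the project clone):
# git rev-parse HEAD
# For add_tag: a timestamp in the 190117_1920 style:
date +%y%m%d_%H%M
```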
Pipeline Script
pipeline {
  agent any
  stages {
    stage('pull') { // get project code from repo
      steps {
        sh "git clone ${params.git_repo} ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.app_name}/${env.BUILD_NUMBER} && git checkout ${params.git_ver}"
      }
    }
    stage('build') { // exec mvn cmd
      steps {
        sh "cd ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.mvn_dir} && /var/jenkins_home/maven-${params.maven}/bin/${params.mvn_cmd}"
      }
    }
    stage('package') { // move jar file into project_dir
      steps {
        sh "cd ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.target_dir} && mkdir app && mv *.jar ./app"
      }
    }
    stage('image') { // build image and push to registry
      steps {
        writeFile file: "${params.app_name}/${env.BUILD_NUMBER}/Dockerfile", text: """FROM harbor.ylls.com/${params.base_image}
ADD ${params.target_dir}/app /server/app"""
        sh "cd ${params.app_name}/${env.BUILD_NUMBER} && docker build -t harbor.ylls.com/${params.image_name}:${params.git_ver}_${params.add_tag} . && docker push harbor.ylls.com/${params.image_name}:${params.git_ver}_${params.add_tag}"
      }
    }
  }
}
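The image stage tags the result as <image_name>:<git_ver>_<add_tag>. With hypothetical parameter values, the final reference assembles like this:

```shell
# Example values standing in for the job parameters:
image_name=app/dubbo-demo-service
git_ver=3a9f2c1            # hypothetical short commit id
add_tag=190117_1920        # manual timestamp
echo "harbor.ylls.com/${image_name}:${git_ver}_${add_tag}"
# → harbor.ylls.com/app/dubbo-demo-service:3a9f2c1_190117_1920
```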
Example YAML for a dubbo project
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-provider
  namespace: app
  labels:
    name: dubbo-provider
spec:
  replicas: 1
  selector:
    matchLabels:
      name: dubbo-provider
  template:
    metadata:
      labels:
        app: dubbo-provider
        name: dubbo-provider
      annotations:
        blackbox_port: "20880"
        blackbox_scheme: "tcp"
        prometheus_io_scrape: "true"
        prometheus_io_port: "12346"
        prometheus_io_path: "/"
    spec:
      containers:
        - name: dubbo-provider
          image: harbor.ylls.com/app/dubbo-provider:master_2210131619
          ports:
            - containerPort: 20880
              protocol: TCP
          env:
            - name: JAR_NAME
              value: dubbo-samples-spring-boot-provider-1.0-SNAPSHOT.jar
          imagePullPolicy: IfNotPresent
      imagePullSecrets:
        - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext:
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
Combined CI/CD
pipeline script
node {
  env.BUILD_DIR = "/server/build"
  env.MODULE = "demo"
  env.NAMESPACE = "app"
  env.HOST = "demo.ylls.com"
  stage ('Preparation') {
    git 'https://github.com/zxd0079/demo.git'
  }
  stage ('Maven package') {
    sh "mvn -pl ${MODULE} -am clean package"
  }
  stage ('Build Image') {
    sh "/server/tools/build-image-${MODULE}.sh"
  }
  stage ('Deploy') {
    sh "/server/tools/depoly-${MODULE}.sh"
  }
}
build-image.sh
#!/bin/bash
if [ "$BUILD_DIR" == "" ]; then
  echo "env 'BUILD_DIR' is not set"
  exit 1
fi
DOCKER_DIR=${BUILD_DIR}/${JOB_NAME}
if [ ! -d ${DOCKER_DIR} ]; then
  mkdir -p ${DOCKER_DIR}
fi
echo "docker workspace: ${DOCKER_DIR}"
JENKINS_DIR=${WORKSPACE}/${MODULE}
echo "jenkins workspace: ${JENKINS_DIR}"
TARGET_FILE=${JENKINS_DIR}/target/${MODULE}.jar
if [ ! -f ${TARGET_FILE} ]; then
  echo "target jar file not found ${TARGET_FILE}"
  exit 1
fi
DOCKER_FILE=/server/tools/template/dockerfiles/${MODULE}
if [ ! -d ${DOCKER_FILE} ]; then
  echo "template not found ${DOCKER_FILE}"
  exit 1
fi
cd ${DOCKER_DIR}
rm -rf ./* ./.dockerfile
mkdir .dockerfile
cp -r ${DOCKER_FILE}/* .dockerfile
cp ${TARGET_FILE} .dockerfile
cd .dockerfile
VER=$(date +%Y%m%d%H%M%S)
IMAGE=harbor.ylls.com/${MODULE}/${MODULE}:${VER}
docker build . -t ${IMAGE}
docker push ${IMAGE}
echo "${IMAGE}" > ${WORKSPACE}/IMAGE
echo "build image: ${IMAGE}"
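build-image.sh hands the freshly pushed image reference to the deploy step through a plain file in the Jenkins workspace. The mechanism in isolation (mktemp stands in for the real workspace):

```shell
# Writer side (build-image.sh): record the image reference.
WORKSPACE=$(mktemp -d)            # stand-in for the Jenkins workspace
IMAGE=harbor.ylls.com/demo/demo:20190117192000
echo "${IMAGE}" > ${WORKSPACE}/IMAGE
# Reader side (the deploy script): pick it up again.
image=$(cat ${WORKSPACE}/IMAGE)
echo "${image}"   # → harbor.ylls.com/demo/demo:20190117192000
```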
yaml template
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: {{name}}
  namespace: {{namespace}}
  labels:
    name: {{name}}
spec:
  replicas: 1
  selector:
    matchLabels:
      name: {{name}}
  template:
    metadata:
      labels:
        app: {{name}}
        name: {{name}}
    spec:
      containers:
        - name: {{name}}
          image: {{image}}
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 20880
              protocol: TCP
          env:
            - name: JAR_NAME
              value: dubbo-demo.jar
          imagePullPolicy: IfNotPresent
      imagePullSecrets:
        - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext:
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
---
kind: Service
apiVersion: v1
metadata:
  name: {{name}}
  namespace: {{namespace}}
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  selector:
    app: {{name}}
  clusterIP: None
  type: ClusterIP
  sessionAffinity: None
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: {{name}}
  namespace: {{namespace}}
spec:
  rules:
    - host: {{host}}
      http:
        paths:
          - path: /
            backend:
              serviceName: {{name}}
              servicePort: 8080
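The {{placeholder}} values in this template are filled in with sed by the deploy script. One detail matters: the image value contains slashes, so sed needs a delimiter other than /. A minimal sketch with an example value:

```shell
# Render a {{image}} placeholder; '#' is used as the sed delimiter
# because the image reference itself contains '/' characters.
image=harbor.ylls.com/demo/demo:20190117192000
tpl=$(mktemp)
printf 'image: {{image}}\n' > ${tpl}
sed -i "s#{{image}}#${image}#g" ${tpl}
cat ${tpl}   # → image: harbor.ylls.com/demo/demo:20190117192000
```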
depoly.sh
#!/bin/bash
deploy_dir=/server/deploy/${MODULE}
mkdir -p ${deploy_dir}
name=${MODULE}
image=$(cat ${WORKSPACE}/IMAGE)
echo "deploying ... name:${name} namespace:${NAMESPACE} image: ${image} host: ${HOST}"
yaml=/server/tools/template/yaml/${MODULE}.yaml
if [ ! -f ${yaml} ]; then
  echo "yaml not found ${yaml}"
  exit 1
fi
# work on a copy so the template itself keeps its placeholders
cp ${yaml} ${deploy_dir}/
yaml=${deploy_dir}/${MODULE}.yaml
sed -i "s/{{name}}/${name}/g" ${yaml}
sed -i "s/{{namespace}}/${NAMESPACE}/g" ${yaml}
sed -i "s#{{image}}#${image}#g" ${yaml}   # '#' delimiter: the image reference contains '/'
sed -i "s/{{host}}/${HOST}/g" ${yaml}
kubectl apply -f ${yaml}
cat ${yaml}
count=60
success=0
IFS=","   # array separator
sleep 10
while [ ${count} -gt 0 ]
do
  replicas=$(kubectl get deploy ${name} -n ${NAMESPACE} -o go-template='{{.status.replicas}},{{.status.updatedReplicas}},{{.status.readyReplicas}},{{.status.availableReplicas}}')
  echo "replicas: ${replicas}"
  arr=(${replicas})
  if [ "${arr[0]}" == "${arr[1]}" -a "${arr[1]}" == "${arr[2]}" -a "${arr[2]}" == "${arr[3]}" ]; then
    echo "health check success!"
    success=1
    break
  fi
  ((count--))
  sleep 2
done
if [ ${success} -ne 1 ]; then
  echo "health check failed!"
  exit 1
fi
rm -rf ${deploy_dir}
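The health check relies on IFS to split the comma-separated status counts into an array. The parsing works like this in isolation:

```shell
# Split "replicas,updatedReplicas,readyReplicas,availableReplicas" on commas.
replicas="1,1,1,1"
IFS=","            # word-splitting delimiter for the array assignment below
arr=(${replicas})
echo "${arr[2]}"   # → 1  (readyReplicas)
# The rollout counts as healthy when all four counts match:
[ "${arr[0]}" == "${arr[1]}" ] && [ "${arr[1]}" == "${arr[2]}" ] && [ "${arr[2]}" == "${arr[3]}" ] && echo "health check success!"
```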
Personal takeaway: in production, use Jenkins for CI only and do deployment directly from the command line or with an ops tool such as spinnaker or kubesphere. In test environments, use the combined CI/CD mode with proper permission management so testers and developers can trigger builds themselves. In my experience, testers and developers are familiar with jenkins and are not curious or trigger-happy with it, so they rarely affect the environment much. Ops tools, on the other hand, can attract a great deal of curiosity, and even without permissions people may manage to cause trouble, so it is best to lock those down at the source.