K8s Operations: Linux Environment Setup Manual (Single/Multi-Node)
I. Basic Environment
- OS: CentOS 7.9 / RHEL 8.x / Oracle Linux 8.x / Ubuntu 20.04 (CentOS is used in the examples below)
- Minimum hardware: 4-core CPU, 8 GB RAM, 50 GB disk
- Network: all nodes must be able to reach each other; disable swap, the firewall, and SELinux; set the hostname and /etc/hosts entries on every node (see the example below)
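A minimal sketch of the hostname/hosts setup; the node names and IP addresses are placeholders and must be replaced with the real ones.
# Set the hostname (repeat on each node with its own name)
hostnamectl set-hostname k8s-master01
# Make every node resolvable by name on every node
cat >> /etc/hosts <<EOF
192.168.1.10 k8s-master01
192.168.1.11 k8s-node01
192.168.1.12 k8s-node02
EOF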
II. System Initialization
# 1. Sync the time
yum install -y chrony
systemctl enable --now chronyd
# 2. Disable swap / SELinux / the firewall
swapoff -a && sed -i '/swap/d' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
systemctl disable --now firewalld
# 3. Enable IP forwarding and bridge netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
EOF
# Load br_netfilter now and on every boot, then apply the sysctl settings
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/k8s.conf
sysctl --system
III. Install the Container Runtime (containerd as an example)
# 1. Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
# 2. Install containerd (from the Docker CE repo)
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y containerd.io
# 3. Generate the default configuration
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
# 4. Switch to the systemd cgroup driver
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl enable --now containerd
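Because the control-plane images will be pulled from the Aliyun mirror (section V), the containerd sandbox (pause) image usually needs to point at the same mirror as well, otherwise nodes may fail to pull registry.k8s.io/pause. A sketch, assuming the pause:3.9 tag that kubeadm v1.28 expects (adjust if kubeadm reports a different tag):
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
systemctl restart containerd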
IV. Install kubeadm / kubelet / kubectl
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-\$basearch/
enabled=1
gpgcheck=0
EOF
yum install -y kubelet-1.28.3 kubeadm-1.28.3 kubectl-1.28.3
systemctl enable kubelet
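Optionally, the control-plane images can be pre-pulled from the mirror before running kubeadm init; this shortens the init step and surfaces registry problems early:
kubeadm config images pull \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
  --kubernetes-version v1.28.3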
V. Initialize the Control-Plane Node (Master)
kubeadm init \
--apiserver-advertise-address=<MASTER_IP> \
--pod-network-cidr=10.244.0.0/16 \
--kubernetes-version=v1.28.3 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
After initialization completes, run the commands shown in the output:
mkdir -p $HOME/.kube
cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
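At this point kubectl should work, but the node will report NotReady until the network plugin from the next section is installed:
kubectl get nodes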
VI. Install the Network Plugin (Calico as an example)
kubectl apply -f https://docs.projectcalico.org/v3.25/manifests/calico.yaml
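To confirm the rollout, watch the calico-node pods; each node should turn Ready once its calico-node pod is Running:
kubectl -n kube-system get pods -l k8s-app=calico-node -w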
VII. Join Worker Nodes
On each worker node, run the join command printed by kubeadm init on the master:
kubeadm join <MASTER_IP>:6443 --token <TOKEN> \
--discovery-token-ca-cert-hash sha256:<HASH>
If the token has expired, generate a new join command on the master:
kubeadm token create --print-join-command
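If only the discovery hash is missing, it can be recomputed from the cluster CA (standard kubeadm procedure):
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'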
VIII. Common Operations Commands
# Check cluster status
kubectl get nodes -o wide
kubectl get pods -A
# Export a kubeconfig for other operators
kubectl config view --flatten > kubeconfig-ops
# Back up etcd (run on a control-plane node; requires the etcdctl client)
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key snapshot save /opt/etcd-backup.db
# To add more control-plane nodes (HA), use kubeadm with an external load balancer (see the sketch below)
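A minimal sketch of the HA flow; <LB_IP>, <TOKEN>, <HASH>, and <CERT_KEY> are placeholders taken from the kubeadm output, and the load balancer must already forward TCP 6443 to all control-plane nodes:
# On the first control-plane node, initialize against the load-balancer endpoint
kubeadm init --control-plane-endpoint "<LB_IP>:6443" --upload-certs \
  --pod-network-cidr=10.244.0.0/16 \
  --kubernetes-version=v1.28.3 \
  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
# On each additional control-plane node, run the control-plane join command from the output
kubeadm join <LB_IP>:6443 --token <TOKEN> \
  --discovery-token-ca-cert-hash sha256:<HASH> \
  --control-plane --certificate-key <CERT_KEY>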
Java Application Deployment Manual (on the K8s Cluster)
I. Image Preparation
- Build the project
  In the CI/CD environment or locally: mvn clean package -DskipTests
- Write the Dockerfile (example)
  FROM openjdk:17-jdk-slim
  WORKDIR /app
  COPY target/demo.jar app.jar
  ENV JAVA_OPTS=""
  EXPOSE 8080
  ENTRYPOINT ["sh","-c","java $JAVA_OPTS -jar app.jar"]
- Build and push the image
  docker build -t registry.example.com/project/demo:1.0.0 .
  docker push registry.example.com/project/demo:1.0.0
II. Kubernetes Resource Manifests
1. Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: prod
2. ConfigMap / Secret (optional)
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
  namespace: prod
data:
  application.yml: |
    server:
      port: 8080
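The heading also mentions Secret; a minimal sketch for sensitive values (demo-secret and DB_PASSWORD are placeholder names). The Deployment can then reference it via env.valueFrom.secretKeyRef instead of placing the value in the ConfigMap:
kubectl create secret generic demo-secret \
  --from-literal=DB_PASSWORD='<PASSWORD>' -n prod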
3. Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  namespace: prod
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: registry.example.com/project/demo:1.0.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        env:
        - name: JAVA_OPTS
          value: "-Xms512m -Xmx1g"
        volumeMounts:
        - name: config
          mountPath: /app/config
      volumes:
      - name: config
        configMap:
          name: demo-config
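For production it is usually worth adding resource requests/limits and health probes to the container entry above; a sketch, assuming the application exposes a Spring Boot style health endpoint at /actuator/health (adjust the path and thresholds to the real service):
        resources:
          requests:
            cpu: 250m
            memory: 512Mi
          limits:
            cpu: "1"
            memory: 1Gi
        readinessProbe:
          httpGet:
            path: /actuator/health
            port: 8080
          initialDelaySeconds: 20
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /actuator/health
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 20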
4. Service and Ingress
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
  namespace: prod
spec:
  type: ClusterIP
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: prod
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-svc
            port:
              number: 80
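If the application must be served over HTTPS, a tls section can be added to the Ingress spec, assuming a TLS secret named demo-tls already exists in the prod namespace (created with kubectl create secret tls or issued by cert-manager):
  tls:
  - hosts:
    - demo.example.com
    secretName: demo-tls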
III. Deployment Workflow
- Point kubectl at the target cluster
  export KUBECONFIG=/path/to/kubeconfig
- Apply the resources
  kubectl apply -f namespace.yaml
  kubectl apply -f configmap.yaml
  kubectl apply -f deployment.yaml
  kubectl apply -f service.yaml
  kubectl apply -f ingress.yaml
- Check the status
  kubectl get pods -n prod
  kubectl describe pod <pod-name> -n prod
  kubectl logs <pod-name> -n prod
IV. Canary and Rolling Releases
- Update the image version (the rollout behavior can be tuned; see the sketch after this list)
  kubectl set image deployment/demo demo=registry.example.com/project/demo:1.0.1 -n prod
  kubectl rollout status deployment/demo -n prod
- Roll back
  kubectl rollout undo deployment/demo -n prod
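How aggressively Pods are replaced during a rolling update is controlled in the Deployment spec; a sketch that keeps full capacity during the update (the values are illustrative):
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0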
V. Additional Cluster Operations
| Operation | Command |
|---|---|
| Scale out | kubectl scale deployment/demo --replicas=5 -n prod |
| View resource usage | kubectl top nodes, kubectl top pods -n prod |
| Configure HPA | kubectl autoscale deployment demo --cpu-percent=70 --min=3 --max=10 -n prod |
| Debug inside a Pod | kubectl exec -it <pod> -n prod -- /bin/sh |
| Log collection | EFK/ELK or the Loki stack is recommended |
| Monitoring and alerting | Prometheus + Grafana + Alertmanager is recommended |
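Note that kubectl top and the HPA both depend on metrics-server being installed in the cluster. For version-controlled configuration, the autoscale command above can also be written declaratively; a sketch equivalent to that command:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo
  namespace: prod
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70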
VI. CI/CD Recommendations
- Trigger the pipeline from Git commits (Jenkins/GitLab CI):
  - Unit tests and static analysis → build the image → push it to the registry → update the Kubernetes YAML (or Helm chart) → kubectl apply (a minimal pipeline sketch follows this list).
- Manage configuration with Helm/Argo CD to make multi-environment deployment easier.
- Combine this with a GitOps workflow: application changes go through pull-request review and are deployed automatically.
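A minimal GitLab CI sketch of that pipeline. The job names, image tags, and registry URL are placeholders; registry login and cluster credentials are omitted and must be supplied through CI variables or a pre-configured kubeconfig on the runner:
stages:
  - test
  - build
  - deploy

unit-test:
  stage: test
  image: maven:3.9-eclipse-temurin-17
  script:
    - mvn clean verify

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker build -t registry.example.com/project/demo:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/project/demo:$CI_COMMIT_SHORT_SHA

deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:1.28
    entrypoint: [""]
  script:
    - kubectl set image deployment/demo demo=registry.example.com/project/demo:$CI_COMMIT_SHORT_SHA -n prod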
The steps above cover building a Kubernetes cluster from scratch and deploying a Java microservice on it. For production use, extend them with a highly available control plane, node monitoring, backup and restore, image registry access control, and network policies.