Single-Node K8S Deployment with an Nginx Test



Environment: CentOS 7.9

Kubernetes version: 1.14.1

Docker: 18.06

Storage: directory mapping on the physical host

Configure DNS

vi /etc/sysconfig/network-scripts/ifcfg-ens32 (the trailing ens32 is the NIC name)

DNS1=223.5.5.5
DNS2=8.8.8.8
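
To apply the change, restart networking and confirm the resolvers took effect (a minimal sketch; the NIC name is whatever your ifcfg file is named for):

systemctl restart network
cat /etc/resolv.conf   # should now list 223.5.5.5 and 8.8.8.8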

Set up yum repositories

Manual approach

[root@localhost ~]# cat /etc/centos-release
CentOS Linux release 7.9.2009 (Core)
[root@localhost ~]# cd /etc/yum.repos.d/
[root@localhost yum.repos.d]# ls
CentOS-Base.repo  CentOS-CR.repo  CentOS-Debuginfo.repo  CentOS-fasttrack.repo  CentOS-Media.repo  CentOS-Sources.repo  CentOS-Vault.repo  CentOS-x86_64-kernel.repo
[root@localhost yum.repos.d]# mkdir oldrepo
[root@localhost yum.repos.d]# mv *.repo ./oldrepo/

Manually download the two repo files mirrors.aliyun.com/repo/Centos-7.repo and mirrors.aliyun.com/repo/epel-7.repo, upload them to this directory, then run:

yum clean all

yum makecache
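
If the machine has direct internet access, the two repo files can also be fetched in place instead of uploaded; a sketch (the target filenames are my assumption):

curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo
yum clean all && yum makecache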

Automatic approach (Kubernetes repo)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
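
Before installing anything, you can sanity-check that the new repo resolves (a quick sketch):

yum repolist enabled | grep -i kubernetes   # the kubernetes repo should appear with a nonzero package count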

Prepare the environment

# Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# Reset iptables
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
# Disable swap
swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
# Disable SELinux (temporary; edit /etc/selinux/config to persist across reboots)
setenforce 0
# Stop dnsmasq (otherwise Docker containers may fail to resolve domain names)
systemctl stop dnsmasq && systemctl disable dnsmasq
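
A quick way to verify the settings above took effect (a sketch; all standard CentOS 7 commands):

systemctl is-active firewalld   # expect: inactive
getenforce                      # expect: Permissive (Disabled after a config edit and reboot)
free -h | grep -i swap          # the Swap row should show 0B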

Install utilities

yum install -y vim wget git net-tools htop

Install Docker

    wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
    yum -y install docker-ce-18.06.1.ce-3.el7
    systemctl enable docker && systemctl start docker
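
It is worth checking Docker's cgroup driver now, since kubeadm's preflight check below warns about it (sketch):

docker info | grep -i 'cgroup driver'   # prints "Cgroup Driver: cgroupfs" on a default install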

Install kubelet, kubeadm, and kubectl

yum list kubeadm --showduplicates | sort -r | grep 1.14

yum install -y kubeadm-1.14.1-0 kubelet-1.14.1-0 kubectl-1.14.1-0 --disableexcludes=kubernetes

Error


Fix: yum -y install yum-utils device-mapper-persistent-data lvm2

Then install directly: yum -y install kubelet-1.14.1

Check the version: kubectl version
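
The upstream kubeadm install guide also enables kubelet at this point so kubeadm can manage it; it will crash-loop until kubeadm init runs, which is expected:

systemctl enable kubelet && systemctl start kubelet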

Set up the master node

hostnamectl set-hostname master
vi /etc/hosts

Append master to the 127.0.0.1 line:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 master
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6


ifconfig 

Set the address below based on the output:

kubeadm init \
--apiserver-advertise-address=192.168.1.221 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.14.1 \
--service-cidr=10.68.0.0/16 \
--pod-network-cidr=172.22.0.0/16
  • --apiserver-advertise-address: the master node's IP
  • --image-repository: the registry to pull control-plane images from
  • --kubernetes-version: the Kubernetes version to install
  • --service-cidr: the Service network range, i.e. the IP block used for Service (load-balancer VIP) addresses
  • --pod-network-cidr: the Pod IP range

Wait for the images to be pulled.
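
The images can also be pre-pulled before init so failures show up earlier (a sketch using the same flags as above):

kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.14.1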

Errors

[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Port-10250]: Port 10250 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

First issue: make the kubelet cgroup driver match Docker's: sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /lib/systemd/system/kubelet.service.d/10-kubeadm.conf

Second issue (port 10250 already in use):

[root@localhost network-scripts]# netstat -tulp|grep 10250
tcp6       0      0 [::]:10250              [::]:*                  LISTEN      18471/kubelet 

Run kubeadm reset, then retry kubeadm init.

On success you will see:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join xx.xx.77.240:6443 --token uzsrxg.8qaxcuhvglbxxpxh \
    --discovery-token-ca-cert-hash sha256:92dda4b351ede2c5cf9a0303e86741538f5284exxxe6e8053aa98892b361975 

Follow the prompt and run the mkdir/cp/chown commands above as a regular user.


Install a CNI network plugin


mkdir -p /usr/local/calico

cd /usr/local/calico

wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml --no-check-certificate

wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml --no-check-certificate

In calico.yaml, set CALICO_IPV4POOL_CIDR to the Pod CIDR used in kubeadm init above, 172.22.0.0/16.
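
One way to do that edit in place, assuming the manifest's default CIDR is 192.168.0.0/16 (check the file first):

sed -i 's#192.168.0.0/16#172.22.0.0/16#g' calico.yaml
grep -A1 CALICO_IPV4POOL_CIDR calico.yaml   # verify the new value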

# Replace the images (see the sed sketch after this block)
- image: quay.io/calico/typha:v3.1.7 => registry.cn-hangzhou.aliyuncs.com/kubernetes-base/typha:v3.1.7

registry.cn-hangzhou.aliyuncs.com/kubernetes-base/node:v3.1.7
registry.cn-hangzhou.aliyuncs.com/kubernetes-base/cni:v3.1.7
# Deploy Calico
kubectl apply -f rbac-kdd.yaml
kubectl apply -f calico.yaml
# Check deployment status
kubectl get pods -n kube-system
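
A sketch of the image swap above, assuming every image in the manifest lives under quay.io/calico:

sed -i 's#quay.io/calico#registry.cn-hangzhou.aliyuncs.com/kubernetes-base#g' calico.yaml
grep 'image:' calico.yaml   # confirm all images now point at the mirror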

You can pull manually with docker pull registry.cn-hangzhou.aliyuncs.com/kubernetes-base/typha:v3.1.7, but that address also seems to be unreachable.

To remove what a manifest created, use kubectl delete -f <file-path>.

An alternative is flannel:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

Or a newer Calico manifest:

kubectl apply -f "http://docs.projectcalico.org/manifests/calico.yaml"

kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml

kubectl get nodes
# If the master's STATUS is Ready, it succeeded
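
Roughly the expected output (names, ages, and versions will differ):

NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   5m    v1.14.1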

If there are errors, check the logs:

journalctl -f

If the node status is NotReady and the logs show it is waiting for the CNI to be installed, inspect the Pod: kubectl describe pod calico-kube-controllers-5c6845fbcf-587bp -n kube-system

The error was Failed to create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container ...]. The fix is to edit the calico.yaml configuration as below (per the 博客园 post www.cnblogs.com/xujunkai/p/…):

# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
  value: "k8s,bgp"
# Add this entry
- name: IP_AUTODETECTION_METHOD
  value: "interface=eno26"
  # eno26 is the local NIC name; check yours with ifconfig

Allow Pods to run on the master node (skip this for a multi-node cluster)


kubectl taint nodes --all node-role.kubernetes.io/master-
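
To confirm the taint is gone (sketch):

kubectl describe node master | grep -i taints   # expect: Taints: <none>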

Add worker nodes

Take the join command printed by kubeadm init above and run it on each node:

kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
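
If the token has expired (they last 24 hours by default), a fresh join command can be generated on the master:

kubeadm token create --print-join-command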

Check status

kubectl get node,deploy,pod,svc -n kube-system

Deploy Nginx as a test

vi nginx-test.yaml

apiVersion: apps/v1	# depends on the cluster version; list supported versions with kubectl api-versions
kind: Deployment	# the resource type; here a Deployment
metadata:	        # metadata: basic attributes of the Deployment
  name: nginx-deployment	# name of the Deployment
  labels:	    # labels locate one or more resources; keys and values are user-defined, and multiple pairs are allowed
    app: nginx	# give this Deployment the label app=nginx
spec:	        # the desired state: how this Deployment should behave in the cluster
  replicas: 1	# run one instance of the application
  selector:	    # label selector, matched against the Pod labels below
    matchLabels: # select resources carrying the label app:nginx
      app: nginx
  template:	    # template for the Pods to create
    metadata:	# Pod metadata
      labels:	# Pod labels; the selector above matches Pods carrying app:nginx
        app: nginx
    spec:	    # desired Pod spec (what runs inside the Pod)
      containers:	# the containers; the same concept as a Docker container
      - name: nginx	# container name
        image: nginx:1.7.9	# create the container from image nginx:1.7.9, which serves on port 80 by default

Expose the port: vim nginx-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-service	# name of the Service
  labels:     	# the Service's own labels
    app: nginx	# give this Service the label app=nginx
spec:	    # the Service definition: which Pods it selects and how it is reached
  selector:	    # label selector
    app: nginx	# select Pods carrying the label app:nginx
  ports:
  - name: nginx-port	# name of the port
    protocol: TCP	    # protocol, TCP/UDP
    port: 80	        # other Pods in the cluster reach the Service on port 80
    nodePort: 32600   # the Service is reachable on port 32600 of any node
    targetPort: 80	# forward requests to port 80 of the matched Pods
  type: NodePort	# Service type: ClusterIP/NodePort/LoadBalancer

# Apply the manifests

kubectl apply -f nginx-test.yaml
kubectl apply -f nginx-service.yaml

# List Deployments
kubectl get deployments

# List Pods
kubectl get pods

# Describe a Pod (the name suffix will differ in your cluster)
kubectl describe pod nginx-deployment-759fccf44f-7n6hk

# Describe the Deployment
kubectl describe deployment nginx-deployment


kubectl get services -o wide
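
A quick smoke test from any node, assuming the master IP 192.168.1.221 used during init:

curl -I http://192.168.1.221:32600   # expect: HTTP/1.1 200 OK with Server: nginx/1.7.9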

If the Nginx Pod cannot be scheduled and you see Warning FailedScheduling 4s (x13 over 71s) default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate, run kubectl taint nodes --all node-role.kubernetes.io/master- as above.

Open <any node IP>:32600 in a browser.
