.NET Microservices in Practice (4): Building the Command Service


This article is chapter four of my study notes on a YouTube course series; the original is .NET Microservices – Full Course. It is split into seven parts: project initialization (project creation and package installation), the controller (for external calls into the service), an introduction to synchronous and asynchronous messaging, implementing synchronous messaging from the platform service to the command service (over HTTPS), deploying both services to K8S, configuring in-cluster networking, and configuring an API Gateway.

Project Initialization

Creating the Project

ASP.NET Core Web API + .NET 5.0

Adding NuGet Dependencies

  • AutoMapper.Extensions.Microsoft.DependencyInjection
  • Microsoft.EntityFrameworkCore 5.0.8
  • Microsoft.EntityFrameworkCore.Design 5.0.8
  • Microsoft.EntityFrameworkCore.InMemory 5.0.8

Controllers

Creating the Controller

Create a controller to simulate HTTP-based communication between the two services.

using Microsoft.AspNetCore.Mvc;
using System;

namespace CommandService.Controllers
{
    [Route("api/c/[controller]s")]
    [ApiController]
    public class PlatformController : ControllerBase
    {
        [HttpPost]
        public ActionResult TestInboundConnection()
        {
            string feedback = ">>>Inbound Post Command Service";
            Console.WriteLine($"{feedback}");

            return Ok(feedback);
        }
    }
}

Synchronous and Asynchronous Messaging

Before wiring up communication between the platform service and the command service, here is an introduction to the two styles of communication: synchronous and asynchronous messaging.

Synchronous Messages

Its characteristics include:

  • The whole exchange is a closed Request/Response loop
  • The requester has to wait for the response
  • Services that expose external interfaces usually must be reachable via synchronous messaging (e.g. via HTTP requests)
  • Services generally need to know who they are talking to
  • Two common forms: HTTP requests and gRPC

Distinguish this from API methods marked with the async keyword. For example:

[HttpGet]
[Route("{studentName}")]
public async Task<ActionResult<Student>> GetStudentByName([FromRoute] string studentName)
{
    // e.g. await an I/O-bound lookup here (the repository name is illustrative)
    var student = await _studentRepository.GetByNameAsync(studentName);
    return student is null ? NotFound() : Ok(student);
}
  • From a messaging standpoint, this method is still a synchronous message
  • The client (the requester) still has to wait for the server to process the request and respond
  • Its "asynchrony" lies in how threads are managed while the request is handled: when the CLR is serving multiple requests at once and this method has to wait, the current worker thread is released to serve other requests; once the awaited work completes, execution resumes (possibly on a different thread id) and the response is returned (see the sketch below)
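
A minimal sketch illustrating that thread hand-off (the controller and the Task.Delay stand-in are illustrative, not from the course):

using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class ThreadDemoController : ControllerBase
{
    [HttpGet]
    public async Task<ActionResult<string>> GetAsync()
    {
        int before = Thread.CurrentThread.ManagedThreadId;

        // While this await is pending, the worker thread returns to the
        // pool and can serve other requests.
        await Task.Delay(1000); // stands in for an I/O-bound call

        int after = Thread.CurrentThread.ManagedThreadId;
        return Ok($"before await: thread {before}; after await: thread {after}");
    }
}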

In microservices, synchronous messaging is unavoidable. It couples services together through their communication (even though decoupling is usually the very reason for adopting microservices), creating dependencies between services, and those dependencies can easily evolve into long dependency chains.

Asynchronous Messages

The opposite of synchronous messages:

  • No Request/Response loop
  • The requester does not wait for a response
  • The communication pattern is event-driven publish/subscribe
  • A message bus (Service Bus) is usually required
  • Services only need to know about the message bus, not who is on the other end
  • More commonly used between services

The message bus is critical to a microservice architecture; since it is sometimes the only communication medium, it inevitably grows large:

  • Internal communication stalls when the message bus goes down
  • Because the services themselves are decoupled, a bus failure does not stop the services from running
  • Network design, physical-layer persistence, and cluster management for the message bus should all be planned for
  • Communication between a service and the message bus should have a retry policy (see the sketch after this list)
  • Since the message bus is still a queue, publish/subscribe between services should still be designed around small-scope service communication
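
A minimal sketch of such a retry policy using the Polly library (an assumption: the course does not prescribe Polly, and it is shown here on an HTTP client; an AMQP bus such as RabbitMQ would use its own client library's retry options):

using System;
using System.Net.Http;
using Microsoft.Extensions.DependencyInjection;
using Polly;
using Polly.Extensions.Http;

public static class BusClientRegistration
{
    public static IServiceCollection AddBusHttpClient(this IServiceCollection services)
    {
        // Retry up to 3 times with exponential backoff on transient
        // HTTP failures (5xx, 408, HttpRequestException).
        var retryPolicy = HttpPolicyExtensions
            .HandleTransientHttpError()
            .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

        services.AddHttpClient("message-bus")
                .AddPolicyHandler(retryPolicy);

        return services;
    }
}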

Calling the Command Service Synchronously

Define the interface for calling the command service, then implement it.

Creating the Interface

using PlatformService.PlatformDomain;
using System.Threading.Tasks;

namespace PlatformService.Utils.CommandService
{
    public interface ICommandClient
    {
        Task SendMessageToCommand(PlatformReadDto platformReadDto);
    }
}
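
For reference, the PlatformReadDto used by this interface comes from an earlier chapter; a rough sketch of its shape (only PlatformId appears elsewhere in this chapter; the other fields are assumptions):

using System;

namespace PlatformService.PlatformDomain
{
    // Sketch of the read DTO from the earlier chapter; fields other than
    // PlatformId are assumed and may differ from your implementation.
    public class PlatformReadDto
    {
        public int PlatformId { get; set; }
        public string Name { get; set; }
        public string Publisher { get; set; }
        public string Cost { get; set; }
    }
}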

Implementing the Interface

using Microsoft.Extensions.Configuration;
using PlatformService.PlatformDomain;
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

namespace PlatformService.Utils.CommandService
{
    public class CommandClient : ICommandClient
    {
        private readonly HttpClient _httpClient;
        private readonly IConfiguration _configuration;

        public CommandClient(HttpClient httpClient, IConfiguration configuration)
        {
            _httpClient = httpClient;
            _configuration = configuration;
        }

        public async Task SendMessageToCommand(PlatformReadDto platformReadDto)
        {
            var httpContent = new StringContent(
                JsonSerializer.Serialize(platformReadDto),
                Encoding.UTF8,
                "application/json");

            // POST to the CommandService inbound endpoint (api/c/platforms,
            // matching the controller route shown earlier)
            var apiPath = _configuration["CommandService"];
            var response = await _httpClient.PostAsync($"{apiPath}/api/c/platforms/", httpContent);

            if (response.IsSuccessStatusCode)
            {
                Console.WriteLine(">>>Sync Post to CommandService was Ok");
            }
            else
            {
                Console.WriteLine(">>>Sync Post to CommandService was NOT Ok");
            }
        }
    }
}

Note that calling the CommandService requires knowing its address; here the address comes from configuration (currently the development config, appsettings.Development.json) and is injected into the HTTP request.

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "CommandService": "https://localhost:44345"
}

In addition, since the communication is implemented with an HttpClient, the ICommandClient interface and its implementation must be registered at startup; this registers the HttpClient along with it.

public void ConfigureServices(IServiceCollection services)
{
    // Register a typed HttpClient via the HttpClient factory
    services.AddHttpClient<ICommandClient, CommandClient>();
}

Communicating with the Command Service

In the existing PlatformController's create method, add the step of notifying the CommandService that a Platform was created.

private readonly ICommandClient _commandClient;
private readonly IMapper _mapper;
private readonly IPlatformRepository _platformRepository;

public PlatformController(
    IPlatformRepository platformRepository,
    IMapper mapper,
    ICommandClient commandClient)
{
    _commandClient = commandClient;
    _mapper = mapper;
    _platformRepository = platformRepository;
}

[HttpPost]
public async Task<ActionResult<PlatformReadDto>> CreatePlatformAsync(
    [FromBody] PlatformWriteDto platformWriteDto)
{
    if (platformWriteDto is null)
    {
        return this.BadRequest(new
        {
            Message = "Platform data should not be null."
        });
    }

    Console.WriteLine(">>>Creating target Platform...");
    var platform = _mapper.Map<Platform>(platformWriteDto);
    _ = await _platformRepository.CreatePlatformAsync(platform);

    var platformReadDto = _mapper.Map<PlatformReadDto>(platform);

    try
    {
        await _commandClient.SendMessageToCommand(platformReadDto);
    }
    catch (Exception ex)
    {
        Console.WriteLine($">>>Could not send synchronously to Command Service: {ex.Message}");
    }

    return CreatedAtRoute(
        nameof(GetPlatformByIdAsync),
        new { PlatformId = platform.PlatformId },
        platformReadDto);
}

The difference from the earlier API is the call to the registered ICommandClient, which sends the creation notification to the CommandService.

Next, start PlatformService and CommandService together and test the communication.

Testing the Services Locally

Create a Platform through the PlatformService API, then check whether the CommandService received the creation message and whether the PlatformService received the CommandService's response.

[Figure 4-1: calling the local PlatformService via Postman to create a Platform]
[Figure 4-2: the local CommandService receiving the creation message]
[Figure 4-3: the local PlatformService receiving the response from the CommandService]

Deploying Both Services to K8S and Connecting Them

So far the two services communicate on the local machine. Next, we make them communicate inside the K8S cluster. The steps are:

  1. Containerize CommandService
  2. Configure a ClusterIP for each service (as the in-cluster communication address)
  3. Test the communication

Containerizing CommandService

Following the same image-building process used for PlatformService, add a Dockerfile for CommandService:

FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build-env
WORKDIR /app

COPY *.csproj ./
RUN dotnet restore

COPY . ./
RUN dotnet publish -c Release -o out

FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "CommandService.dll"]

Build the image and run it:

>>docker build -t <docker hub id>/commandservice .
[+] Building 0.6s (15/15) FINISHED
 => [internal] load build definition from Dockerfile                                                                                                                                               0.0s
 => => transferring dockerfile: 32B                                                                                                                                                                0.0s
 => [internal] load .dockerignore                                                                                                                                                                  0.0s
 => => transferring context: 2B                                                                                                                                                                    0.0s
 => [internal] load metadata for mcr.microsoft.com/dotnet/aspnet:5.0                                                                                                                               0.5s
 => [internal] load metadata for mcr.microsoft.com/dotnet/sdk:5.0                                                                                                                                  0.5s
 => [build-env 1/6] FROM mcr.microsoft.com/dotnet/sdk:5.0@sha256:3ff465d940de3e2c727794d92fd7bb649c498d4abd91bc9213ea7831ebf01f1e                                                                  0.0s
 => [internal] load build context                                                                                                                                                                  0.0s
 => => transferring context: 11.37kB                                                                                                                                                               0.0s
 => [stage-1 1/3] FROM mcr.microsoft.com/dotnet/aspnet:5.0@sha256:1a7d811242f001673d5d25283b3af03da526de1ee8d3bb5aa295f480b7844d44                                                                 0.0s
 => CACHED [stage-1 2/3] WORKDIR /app                                                                                                                                                              0.0s
 => CACHED [build-env 2/6] WORKDIR /app                                                                                                                                                            0.0s
 => CACHED [build-env 3/6] COPY *.csproj ./                                                                                                                                                        0.0s
 => CACHED [build-env 4/6] RUN dotnet restore                                                                                                                                                      0.0s
 => CACHED [build-env 5/6] COPY . ./                                                                                                                                                               0.0s
 => CACHED [build-env 6/6] RUN dotnet publish -c Release -o out                                                                                                                                    0.0s
 => CACHED [stage-1 3/3] COPY --from=build-env /app/out .                                                                                                                                          0.0s
 => exporting to image                                                                                                                                                                             0.0s
 => => exporting layers                                                                                                                                                                            0.0s
 => => writing image sha256:ad50879b85d9369c016bb7d34feca211f9e6ea5d19536058db4b917af6520e2d                                                                                                       0.0s
 => => naming to docker.io/ricardo/commandservice
>>docker images
REPOSITORY                  TAG            IMAGE ID       CREATED         SIZE
ricardo/commandservice      latest        ad50879b85d9   14 hours ago    231MB
>>docker run -p 8080:80 -d <docker hub id>/commandservice
>>docker ps
CONTAINER ID   IMAGE                        COMMAND                  CREATED         STATUS         PORTS                  NAMES
30ab40602d2c   ricardo/commandservice    "dotnet CommandServi…"   7 seconds ago   Up 5 seconds   0.0.0.0:8080->80/tcp   suspicious_cray

Push the built image to Docker Hub for the later K8S deployment.

>>docker push <docker hub id>/commandservice

[Figure 4-5: the CommandService image pushed to Docker Hub]

Kubernetes IP Types

K8S has several kinds of IPs:

  • Node IP: the IP address of each node's (server's) physical network interface in the Kubernetes cluster, split into internal and external. The internal IP is reachable only inside the cluster and lets the nodes communicate with one another; the external IP is reachable from outside the cluster, letting nodes talk to services outside it.
>>kubectl get nodes -o wide
NAME             STATUS   ROLES                  AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                      CONTAINER-RUNTIME
docker-desktop   Ready    control-plane,master   23d   v1.22.5   192.168.65.4   <none>        Docker Desktop   5.10.16.3-microsoft-standard-WSL2   docker://20.10.12

As shown, the cluster currently has only one node, docker-desktop, whose internal Node IP is 192.168.65.4 and which has no external Node IP. Accessing the node address from a Pod inside the cluster:

root@platforms-depl-5b46897b95-dnrvw:~# ping 192.168.65.4
PING 192.168.65.4 (192.168.65.4) 56(84) bytes of data.
64 bytes from 192.168.65.4: icmp_seq=1 ttl=64 time=0.060 ms
64 bytes from 192.168.65.4: icmp_seq=2 ttl=64 time=0.037 ms
64 bytes from 192.168.65.4: icmp_seq=3 ttl=64 time=0.037 ms
64 bytes from 192.168.65.4: icmp_seq=4 ttl=64 time=0.040 ms
--- 192.168.65.4 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 90ms
  • Pod IP: the IP address of each Pod, allocated by the Docker Engine from the docker bridge's address range, usually a virtual layer-2 network. Pods on the same node and subnet can reach each other via their Pod IPs.
>>kubectl get pods -o wide
NAME                              READY   STATUS             RESTARTS        AGE     IP             NODE             NOMINATED NODE   READINESS GATES
hello-node-87cd7d8f5-bfhb2        0/1     Running            0               2m44s   10.1.0.112     docker-desktop   <none>           <none>
platforms-depl-5b46897b95-dnrvw   1/1     Running            0               5h34m   10.1.0.110     docker-desktop   <none>           <none>

The PlatformService Pod's IP is 10.1.0.110, and the Hello-World sample's[3] Pod IP is 10.1.0.112. Pinging the Hello-World Pod from the PlatformService Pod:

root@platforms-depl-5b46897b95-dnrvw:~# ping 10.1.0.112
PING 10.1.0.112 (10.1.0.112) 56(84) bytes of data.
64 bytes from 10.1.0.112: icmp_seq=1 ttl=64 time=0.144 ms
64 bytes from 10.1.0.112: icmp_seq=2 ttl=64 time=0.040 ms
64 bytes from 10.1.0.112: icmp_seq=3 ttl=64 time=0.037 ms
64 bytes from 10.1.0.112: icmp_seq=4 ttl=64 time=0.032 ms
--- 10.1.0.112 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 150ms
rtt min/avg/max/mdev = 0.032/0.063/0.144/0.047 ms
  • Cluster IP: an IP address that applies only to Service objects. It is a virtual IP and cannot be pinged. A Cluster IP only becomes a usable endpoint when combined with a Service port; on its own it lacks the basis for TCP/IP communication, and it belongs to the closed space of the Kubernetes cluster, accessible only from within it.
>>kubectl describe svc platforms-clusterip-svc
Name:              platforms-clusterip-svc
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=platformservice
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.96.108.221
IPs:               10.96.108.221
Port:              platformservice  80/TCP
TargetPort:        80/TCP
Endpoints:         10.1.0.110:80
Session Affinity:  None
Events:            <none>

Note that although platforms-clusterip-svc has its own IP address and Port, its Endpoints actually map to the PlatformService Pod's IP plus the TargetPort. This mapping is orchestrated by kube-proxy using iptables rules, and its end result includes routing traffic aimed at the Service onto the backing Pods.

  • External IP: used to reach in-cluster Services from outside the cluster, generally in one of two ways: 1) via a NodePort mapped onto the host machine, with the Service type set to NodePort; 2) via a LoadBalancer address provided by the host, with the Service type set to LoadBalancer. (A NodePort sketch follows below.)
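
As a sketch, the NodePort Service used for PlatformService in the previous chapter would look roughly like this (the names and node port mirror the kubectl output later in this chapter; treat the details as illustrative):

apiVersion: v1
kind: Service
metadata:
  name: platforms-np-svc
spec:
  type: NodePort
  selector:
    app: platformservice
  ports:
  - protocol: TCP
    port: 80          # Service port inside the cluster
    targetPort: 80    # container port on the Pod
    nodePort: 30001   # port exposed on every node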

Configuring ClusterIPs for the Two Services

Recall the architecture design from the previous chapter. [Figure 4-13: Cluster IP]

Although PlatformService and CommandService each sit in their own Pods, and the two Pods could talk to each other via Pod IP and targetPort, Pod lifecycles are unpredictable, so the more reliable approach is for the two Pods to communicate through ClusterIP Services (covered in the previous chapter).

So, add the following Service to the existing platforms-depl.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:  
  name: platforms-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platformservice
  template:
    metadata:
      labels:
        app: platformservice
    spec:
      containers:
        - name: platformservice
          image: <Docker Hub Id>/platformservice:latest
---
apiVersion: v1
kind: Service
metadata:
  name: platforms-clusterip-svc
spec:
  selector:
    app: platformservice
  ports:
  - name: platformservice
    protocol: TCP 
    port: 80
    targetPort: 80

The app name in the selector (and the port name) should match the Pods deployed earlier, and the ClusterIP Service's targetPort must equal the Pod's containerPort (a plain Service defaults to type ClusterIP). A quick way to verify the selector is sketched below.
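
To check that the selector actually resolves to the Pod, you can inspect the Service's endpoints (a sketch; the output is illustrative, though the endpoint matches the kubectl describe output earlier):

>>kubectl get endpoints platforms-clusterip-svc
NAME                      ENDPOINTS       AGE
platforms-clusterip-svc   10.1.0.110:80   1m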

Note that the ClusterIP Service configuration could also be split out into a standalone service manifest.

Configure the same kind of file for CommandService:

apiVersion: apps/v1
kind: Deployment
metadata:  
  name: commands-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: commandservice
  template:
    metadata:
      labels:
        app: commandservice
    spec:
      containers:
        - name: commandservice
          image: <Docker Hub Id>/commandservice:latest
---
apiVersion: v1
kind: Service
metadata:
  name: commands-clusterip-svc
spec:
  selector:
    app: commandservice
  ports:
  - name: commandservice
    protocol: TCP 
    port: 80
    targetPort: 80

Since both deployed services communicate inside the cluster (in production), PlatformService should no longer reach CommandService via localhost; the address should be changed to the ClusterIP Service's.

Add a new appsettings.Production.json in PlatformService with the CommandService address:

{
  "CommandService": "http://commands-clusterip-svc:80"
}

When the PlatformService container calls the ClusterIP bound to CommandService, the traffic flows Port (80) → TargetPort (80) → ContainerPort (80), finally reaching the CommandService Pod. An in-cluster check is sketched below.
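
A quick way to exercise that route from inside the cluster (a sketch: it assumes curl is available in the Pod image, which the default aspnet image may not include, and substitutes your actual Pod name):

>>kubectl exec -it platforms-depl-5b46897b95-dnrvw -- curl -X POST http://commands-clusterip-svc/api/c/platforms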

Deploying the Two Services

For PlatformService, since the code was updated, rebuild the image and push it to Docker Hub, then re-apply the PlatformService deployment manifest:

>>kubectl apply -f platforms-depl.yaml
deployment.apps/platforms-depl unchanged
service/platforms-clusterip-svc created

The deployment itself (platforms-depl) is reported as unchanged, while the newly added ClusterIP Service (platforms-clusterip-svc) is reported as created.

>>kubectl get services
NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                          AGE
kubernetes                ClusterIP      10.96.0.1        <none>        443/TCP                          23d
platforms-clusterip-svc   ClusterIP      10.96.108.221    <none>        80/TCP                           23d
platforms-np-svc          NodePort       10.101.93.173    <none>        80:30001/TCP                     23d

So K8S must be forced to pull the latest PlatformService image from Docker Hub (the one configured with the production CommandService address and the updated endpoint that sends messages to CommandService):

>>kubectl rollout restart deployment platforms-depl
deployment.apps/platforms-depl restarted
>>kubectl get pods
NAME                              READY   STATUS              RESTARTS      AGE
platforms-depl-5b46897b95-dnrvw   0/1     ContainerCreating   0             4s
platforms-depl-794fb7f857-4mfdt   1/1     Running             4 (55m ago)   2d11h
>>kubectl get pods
NAME                              READY   STATUS        RESTARTS      AGE
platforms-depl-5b46897b95-dnrvw   1/1     Running       0             21s
platforms-depl-794fb7f857-4mfdt   1/1     Terminating   4 (55m ago)   2d11h
>>kubectl get pods
NAME                              READY   STATUS    RESTARTS      AGE
platforms-depl-5b46897b95-dnrvw   1/1     Running   0             28s

After the command runs, the old Pod keeps running while the new one is being created; once the new Pod is ready, the old one is deleted. Looking inside the new Pod:

>>> Seed data exist...
>>> CommandService Endpoint http://commands-clusterip-svc:80
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /app

The restarted container has printed the CommandService address it can now reach.

Next, deploy CommandService to K8S:

>>kubectl apply -f commands-depl.yaml
deployment.apps/commands-depl created
service/commands-clusterip-svc created
>>kubectl get pods
NAME                              READY   STATUS    RESTARTS      AGE
commands-depl-86d6b859c7-wm7bk    1/1     Running   0             96s
mssql-depl-856b8c48fd-njn24       1/1     Running   7 (67m ago)   23d
platforms-depl-5b46897b95-dnrvw   1/1     Running   0             12m
>>kubectl get services
NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                          AGE
commands-clusterip-svc    ClusterIP      10.97.40.210     <none>        80/TCP                           23d
kubernetes                ClusterIP      10.96.0.1        <none>        443/TCP                          23d
platforms-clusterip-svc   ClusterIP      10.96.108.221    <none>        80/TCP                           23d
platforms-np-svc          NodePort       10.101.93.173    <none>        80:30001/TCP                     23d

Testing the Deployed Services

PlatformService's configured NodePort is 30001 and the IP is localhost. Test the deployed services:

[Figure 4-11: calling PlatformService in K8S to create a Platform]
[Figure 4-12: CommandService receiving the creation message]

The call chain for creating a Platform above is:

  1. Postman → loopback address (localhost, 30001) →
  2. NodePort (NodePort Service, 30001) → TargetPort (NodePort Service, 80) → ContainerPort (PlatformService, 80) →
  3. Port (ClusterIP Service, 80) → TargetPort (ClusterIP Service, 80) → ContainerPort (CommandService, 80).
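
Since the screenshots are omitted above, an equivalent check from the host might look like this (a sketch; the request body fields depend on your PlatformWriteDto and are assumed here):

>>curl -X POST http://localhost:30001/api/v1/platforms -H "Content-Type: application/json" -d "{\"name\":\"Docker\",\"publisher\":\"Docker\",\"cost\":\"Free\"}"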

Configuring the API Gateway

[Figure 4-13: Ingress] Let's revisit the architecture of the whole microservice system. Although the NodePort serves as the entry point to the platform service, the current architecture handles neither access to the command service nor large volumes of traffic converging on both services very well. One solution is to add a reverse-proxy entry point that, with load balancing, allows access to both services.

Ingress Controller

As for why Ingress is needed, this article explains it in detail:

Kubernetes-Ingress Controller

Deploying Ingress-Nginx

Ingress-nginx can be deployed directly with the following command; see the official documentation for details:

>>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml

The deployment may fail here for network reasons. The workaround is to fetch the yaml file through a proxy first, download it locally, and then run the install:

>>kubectl apply -f ingress-nginx-deploy.yml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created

The installation can be checked from the command line; note that the namespace must be specified:

>>kubectl get namespaces
NAME              STATUS        AGE
default           Active        23d
ingress-nginx     Terminating   23d
kube-node-lease   Active        23d
kube-public       Active        23d
kube-system       Active        23d
>>kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS              RESTARTS   AGE
ingress-nginx-admission-create--1-xbkpj     0/1     ImagePullBackOff    0          4m3s
ingress-nginx-admission-patch--1-2rslj      0/1     ImagePullBackOff    0          4m3s
ingress-nginx-controller-666f45c794-jccnm   0/1     ContainerCreating   0          4m3s

Several pods never manage to run, so look into why:

>>kubectl describe pod ingress-nginx-admission-patch--1-2rslj -n ingress-nginx
......
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  5m                   default-scheduler  Successfully assigned ingress-nginx/ingress-nginx-admission-patch--1-2rslj to docker-desktop
  Normal   Pulling    60s (x4 over 4m59s)  kubelet            Pulling image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343@sha256:39c5b2e3310dc4264d638ad28d9d1d96c4cbb2b2dcfb52368fe4e3c63f61e10f"
  Warning  Failed     29s (x4 over 3m57s)  kubelet            Failed to pull image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343@sha256:39c5b2e3310dc4264d638ad28d9d1d96c4cbb2b2dcfb52368fe4e3c63f61e10f": rpc error: code = Unknown desc = Error response from daemon: Get "https://asia-east1-docker.pkg.dev/v2/k8s-artifacts-prod/images/ingress-nginx/kube-webhook-certgen/manifests/sha256:39c5b2e3310dc4264d638ad28d9d1d96c4cbb2b2dcfb52368fe4e3c63f61e10f": dial tcp 64.233.188.82:443: i/o timeout
  Warning  Failed     29s (x4 over 3m57s)  kubelet            Error: ErrImagePull
  Warning  Failed     15s (x6 over 3m57s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    3s (x7 over 3m57s)   kubelet            Back-off pulling image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343@sha256:39c5b2e3310dc4264d638ad28d9d1d96c4cbb2b2dcfb52368fe4e3c63f61e10f"

It turns out the images cannot be pulled. This problem has been written up before; see this article: 解决国内k8s的ingress-nginx镜像无法正常pull拉取问题

After following along, checking the ingress controller again shows the pods:

>>kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create--1-kc22j     0/1     Completed   0          49s
ingress-nginx-controller-6c646f59bb-vxz9q   0/1     Running     0          49s

Configuring the Ingress Service

Configure an Ingress that routes to the platform and command ClusterIP Services:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-svc
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
  - host: microservicetrial.com
    http:
      paths:
      - pathType: Prefix
        path: "/api/v1/platforms"
        backend:
          service:
            name: platforms-clusterip-svc
            port: 
              number: 80
      - pathType: Prefix
        path: "/api/c/platforms"
        backend:
          service:
            name: commands-clusterip-svc
            port:
              number: 80

Notes:

  • host: the hostname the Ingress listens on; requests sent to this URL are picked up by the Ingress. The URL should resolve to the cluster's IP address (configured below via the hosts file)
  • path: the API path being accessed
  • In backend, the service name must match the configured ClusterIP Service name
  • The port should match the ClusterIP Service's port
  • The annotations mainly identify the type of Ingress Controller

Deploying the Ingress Service

From the directory containing the ingress manifest, run:

>>kubectl apply -f ingress-svc.yaml

At this point an error may be reported:

Error from server (InternalError): error when creating "ingress-svc.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/…": dial tcp 10.102.215.51:443: connect: connection refused

This message means the ingress-nginx controller is not responding: when the ingress was set up, some pieces were never fully configured (they need manual configuration), yet global objects that depend on them, such as the ValidatingWebhookConfiguration, still exist. There are two solutions:

  1. Delete the ValidatingWebhookConfiguration: Ingress-nginx服务部署报错的快速解决方案
  2. Alternatively (untried), open port 443 and configure certificate validation when setting up ingress-nginx: Issue resolve

Below is a demonstration of the first solution:

>>kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission

Re-deploying the service afterwards completes smoothly:

>>kubectl apply -f ingress-svc.yaml
ingress.networking.k8s.io/ingress-svc created
>>kubectl get ingress
NAME          CLASS    HOSTS                   ADDRESS   PORTS   AGE
ingress-svc   <none>   microservicetrial.com             80      12s
>>kubectl describe ingress
Name:             ingress-svc
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host                   Path  Backends
  ----                   ----  --------
  microservicetrial.com
                         /api/v1/platforms   platforms-clusterip-svc:80 (10.1.0.110:80)
                         /api/c/platforms    commands-clusterip-svc:80 (10.1.0.110:80)
Annotations:             kubernetes.io/ingress.class: nginx
                         nginx.ingress.kubernetes.io/use-regex: true
Events:                  <none>

Configuring the Hosts File

Open the hosts file at the following location:

C:\Windows\System32\drivers\etc

Add an entry mapping the host URL configured earlier to an IP address. [Figure 4-17: host IP address configuration]

What actually happens here: when we visit microservicetrial.com, the browser first consults the hosts file for the name; if the lookup resolves to an address there, the request is redirected straight to the K8S cluster, where the Ingress picks it up and forwards it to the corresponding service. A sample entry is sketched below.
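
For a local Docker Desktop cluster, where the ingress controller is reachable on the local machine, the entry would look roughly like this (a sketch under that assumption):

127.0.0.1    microservicetrial.com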


Testing Ingress-Nginx

In Postman, test accessing the cluster's platform service through the Ingress: create an object, then fetch the objects.

[Figure 4-19a: accessing the platform service in the cluster through the Ingress (creating an object)]

[Figure 4-19b: accessing the platform service in the cluster through the Ingress (fetching all objects)]
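
Equivalently from the command line, this time going through the Ingress host instead of the NodePort (a sketch; the body fields are assumed, as before):

>>curl -X POST http://microservicetrial.com/api/v1/platforms -H "Content-Type: application/json" -d "{\"name\":\"Kubernetes\",\"publisher\":\"CNCF\",\"cost\":\"Free\"}"
>>curl http://microservicetrial.com/api/v1/platforms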