spring-boot-docker-compose: How It Works Under the Hood

Hi everyone, I'm 程序员可乐 (Programmer Cola). I focus on the Spring ecosystem and on making day-to-day development more convenient.

WeChat official account: 【全栈程序员可乐】

Following the previous post, which covered how to use the spring-boot-docker-compose framework, let's dig into how it works under the hood.

How the containers get started

As we know, there are only a few ways to run logic while a Spring Boot application is starting up:

  1. As a bean in the Spring container: Spring creates singleton beans eagerly by default, so a bean's initialization logic runs as soon as the container starts.
  2. As a listener: Spring registers listeners into the container while the application is loading.
// application startup
public ConfigurableApplicationContext run(String... args) {
    this.prepareContext(bootstrapContext, context, environment, listeners, applicationArguments, printedBanner);
}


private void prepareContext(DefaultBootstrapContext bootstrapContext, ConfigurableApplicationContext context, ConfigurableEnvironment environment, SpringApplicationRunListeners listeners, ApplicationArguments applicationArguments, Banner printedBanner) {
    // load listeners into the context
    listeners.contextLoaded(context);
}
  3. By implementing the ApplicationRunner or CommandLineRunner interface:
public ConfigurableApplicationContext run(String... args) {
    this.callRunners(context, applicationArguments);
}

private void callRunners(ConfigurableApplicationContext context, ApplicationArguments args) {
    instancesToBeanNames.keySet().stream().sorted(comparator).forEach((runner) -> {
            this.callRunner(runner, args);
        });
}

private void callRunner(Runner runner, ApplicationArguments args) {
    if (runner instanceof ApplicationRunner) {
        this.callRunner(ApplicationRunner.class, runner, (applicationRunner) -> {
            applicationRunner.run(args);
        });
    }
    if (runner instanceof CommandLineRunner) {
        this.callRunner(CommandLineRunner.class, runner, (commandLineRunner) -> {
            commandLineRunner.run(args.getSourceArgs());
        });
    }
}
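
For completeness, this is what the third option looks like from the application side; a minimal sketch with a hypothetical class name, not part of the framework:

import java.util.Arrays;

import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.stereotype.Component;

@Component
class StartupTaskRunner implements ApplicationRunner {

    // Invoked by callRunners() once the ApplicationContext is ready,
    // just before SpringApplication.run(...) returns.
    @Override
    public void run(ApplicationArguments args) {
        System.out.println("started with args: " + Arrays.toString(args.getSourceArgs()));
    }
}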

In this framework, the author implemented a listener to perform the initialization:

class DockerComposeListener implements ApplicationListener<ApplicationPreparedEvent> {
    private final SpringApplicationShutdownHandlers shutdownHandlers;

    DockerComposeListener() {
        this(SpringApplication.getShutdownHandlers());
    }

    DockerComposeListener(SpringApplicationShutdownHandlers shutdownHandlers) {
        this.shutdownHandlers = shutdownHandlers;
    }

    public void onApplicationEvent(ApplicationPreparedEvent event) {
        ConfigurableApplicationContext applicationContext = event.getApplicationContext();
        Binder binder = Binder.get(applicationContext.getEnvironment());
        DockerComposeProperties properties = DockerComposeProperties.get(binder);
        Set<ApplicationListener<?>> eventListeners = event.getSpringApplication().getListeners();
        this.createDockerComposeLifecycleManager(applicationContext, binder, properties, eventListeners).start();
    }

    protected DockerComposeLifecycleManager createDockerComposeLifecycleManager(ConfigurableApplicationContext applicationContext, Binder binder, DockerComposeProperties properties, Set<ApplicationListener<?>> eventListeners) {
        return new DockerComposeLifecycleManager(applicationContext, binder, this.shutdownHandlers, properties, eventListeners);
    }
}

This listener's onApplicationEvent() method is invoked while the Spring context is being prepared (ApplicationPreparedEvent). Its core logic is the call this.createDockerComposeLifecycleManager(applicationContext, binder, properties, eventListeners).start(); that is, it creates the Docker Compose lifecycle manager and invokes its start() method.
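
How does this listener get picked up in the first place? It is declared in the module's META-INF/spring.factories, so Spring Boot registers it automatically. If you want to hook the same phase in your own application, a listener can be registered like this (hypothetical names, shown purely to illustrate the mechanism):

import org.springframework.boot.context.event.ApplicationPreparedEvent;
import org.springframework.context.ApplicationListener;

class ContextPreparedLogger implements ApplicationListener<ApplicationPreparedEvent> {

    @Override
    public void onApplicationEvent(ApplicationPreparedEvent event) {
        // Fires after the ApplicationContext has been created but before it is refreshed,
        // which is the same moment at which DockerComposeListener brings the containers up.
        System.out.println("context prepared: " + event.getApplicationContext().getId());
    }
}

// Such a listener cannot be a @Component (no beans exist yet at this point); it has to be
// registered via SpringApplication.addListeners(new ContextPreparedLogger()) or spring.factories.

With that aside out of the way, here is start() itself: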

void start() {
        if (!Boolean.getBoolean("spring.aot.processing") && !AotDetector.useGeneratedArtifacts()) {
            if (!this.properties.isEnabled()) {
                logger.trace("Docker Compose support not enabled");
            } else if (this.skipCheck.shouldSkip(this.classLoader, this.properties.getSkip())) {
                logger.trace("Docker Compose support skipped");
            } else {
                // locate the compose.yaml file
                DockerComposeFile composeFile = this.getComposeFile();
                Set<String> activeProfiles = this.properties.getProfiles().getActive();
                // create a DockerCompose instance from that file
                DockerCompose dockerCompose = this.getDockerCompose(composeFile, activeProfiles);
                if (!dockerCompose.hasDefinedServices()) {
                    logger.warn(LogMessage.format("No services defined in Docker Compose file '%s' with active profiles %s", composeFile, activeProfiles));
                } else {
                    LifecycleManagement lifecycleManagement = this.properties.getLifecycleManagement();
                    DockerComposeProperties.Start start = this.properties.getStart();
                    DockerComposeProperties.Stop stop = this.properties.getStop();
                    DockerComposeProperties.Readiness.Wait wait = this.properties.getReadiness().getWait();
                    List<RunningService> runningServices = dockerCompose.getRunningServices();
                    if (lifecycleManagement.shouldStart()) {
                        DockerComposeProperties.Start.Skip skip = this.properties.getStart().getSkip();
                        if (skip.shouldSkip(runningServices)) {
                            logger.info(skip.getLogMessage());
                        } else {
                            start.getCommand().applyTo(dockerCompose, start.getLogLevel());
                            runningServices = dockerCompose.getRunningServices();
                            if (wait == Wait.ONLY_IF_STARTED) {
                                wait = Wait.ALWAYS;
                            }

                            if (lifecycleManagement.shouldStop()) {
                                this.shutdownHandlers.add(() -> {
                                    stop.getCommand().applyTo(dockerCompose, stop.getTimeout());
                                });
                            }
                        }
                    }

                    List<RunningService> relevantServices = new ArrayList(runningServices);
                    relevantServices.removeIf(this::isIgnored);
                    if (wait == Wait.ALWAYS || wait == null) {
                        this.serviceReadinessChecks.waitUntilReady(relevantServices);
                    }

                    this.publishEvent(new DockerComposeServicesReadyEvent(this.applicationContext, relevantServices));
                }
            }
        } else {
            logger.trace("Docker Compose support disabled with AOT and native images");
        }
    }

The execution logic here breaks down into roughly three phases:

  1. Read the compose.yaml configuration file.
  2. Create a DockerCompose instance from that file.
  3. Check the container state and, based on the configuration, start or stop the relevant containers.

Let's look at how the containers are actually started:

start.getCommand().applyTo(dockerCompose, start.getLogLevel());

This line captures the overall flow for starting the containers: fetch the configured start command, apply it to the DockerCompose instance, and log the container output at the configured log level.

public StartCommand getCommand() {
     return this.command;
}

public enum StartCommand {
    UP(DockerCompose::up),
    START(DockerCompose::start);

    private final BiConsumer<DockerCompose, LogLevel> action;

    private StartCommand(BiConsumer<DockerCompose, LogLevel> action) {
        this.action = action;
    }

    void applyTo(DockerCompose dockerCompose, LogLevel logLevel) {
        this.action.accept(dockerCompose, logLevel);
    }
}

The command returned by getCommand() is an enum with two constants, UP and START. Each constant carries a function (a BiConsumer), and calling applyTo() invokes that function.
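
The pattern itself, enum constants that carry their behaviour as a function, is easy to reproduce outside the framework. A stripped-down sketch with made-up names:

import java.io.PrintStream;
import java.util.function.BiConsumer;

public enum GreetCommand {

    // Like UP(DockerCompose::up) and START(DockerCompose::start), each constant
    // is bound to a method reference or lambda.
    HELLO(PrintStream::println),
    SHOUT((out, text) -> out.println(text.toUpperCase() + "!"));

    private final BiConsumer<PrintStream, String> action;

    GreetCommand(BiConsumer<PrintStream, String> action) {
        this.action = action;
    }

    void applyTo(PrintStream out, String text) {
        // Mirrors start.getCommand().applyTo(dockerCompose, logLevel)
        this.action.accept(out, text);
    }

    public static void main(String[] args) {
        GreetCommand.SHOUT.applyTo(System.out, "compose up");
    }
}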

For the START constant, the bound method looks like this:

public void start(LogLevel logLevel) {
     this.cli.run(new DockerCliCommand.ComposeStart(logLevel));
}

static final class ComposeStart extends DockerCliCommand<Void> {
   ComposeStart(LogLevel logLevel) {
      super(DockerCliCommand.Type.DOCKER_COMPOSE, logLevel, Void.class, false, "start");
    }
}

<R> R run(DockerCliCommand<R> dockerCommand) {
    List<String> command = this.createCommand(dockerCommand.getType());
    command.addAll(dockerCommand.getCommand());
    Consumer<String> outputConsumer = this.createOutputConsumer(dockerCommand.getLogLevel());
    String json = this.processRunner.run(outputConsumer, (String[])command.toArray(new String[0]));
    return dockerCommand.deserialize(json);
}

private List<String> createCommand(Type type) {
	return switch (type) {
		case DOCKER -> new ArrayList<>(this.dockerCommands.get(type));
		case DOCKER_COMPOSE -> {
			List<String> result = new ArrayList<>(this.dockerCommands.get(type));
			if (this.composeFile != null) {
				result.add("--file");
				result.add(this.composeFile.toString());
			}
			result.add("--ansi");
			result.add("never");
			for (String profile : this.activeProfiles) {
				result.add("--profile");
				result.add(profile);
			}
			yield result;
		}
	};
}

At this point the assembled Command is simply the docker compose CLI invocation built by createCommand(): the base docker compose command, followed by --file <compose file>, --ansi never, any --profile flags, and finally the subcommand itself.

The JSON output captured in run() looks like this:

{"name":"spring-docker-compose","networks":{"default":{"name":"spring-docker-compose_default","ipam":{},"external":false}},"services":{"mongodb":{"command":null,"entrypoint":null,"environment":{"MONGO_INITDB_DATABASE":"mydatabase","MONGO_INITDB_ROOT_PASSWORD":"secret","MONGO_INITDB_ROOT_USERNAME":"root"},"image":"mongo:latest","networks":{"default":null},"ports":[{"mode":"ingress","target":27017,"protocol":"tcp"}]},"mysql":{"command":null,"entrypoint":null,"environment":{"MYSQL_DATABASE":"mydatabase","MYSQL_PASSWORD":"secret","MYSQL_ROOT_PASSWORD":"verysecret","MYSQL_USER":"myuser"},"image":"mysql:latest","networks":{"default":null},"ports":[{"mode":"ingress","target":3306,"protocol":"tcp"}]},"rabbitmq":{"command":null,"entrypoint":null,"environment":{"RABBITMQ_DEFAULT_PASS":"secret","RABBITMQ_DEFAULT_USER":"myuser"},"image":"rabbitmq:latest","networks":{"default":null},"ports":[{"mode":"ingress","target":5672,"protocol":"tcp"}]},"redis":{"command":null,"entrypoint":null,"image":"redis:latest","networks":{"default":null},"ports":[{"mode":"ingress","target":6379,"protocol":"tcp"}]}}}

DockerCli hands the assembled command to a ProcessRunner, which executes the actual docker binary. Whatever the process writes to stdout, such as the JSON above for inspection commands like docker compose config, is captured and turned back into Java objects by dockerCommand.deserialize(json). In this way the Docker containers are created quietly while the project starts up.
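
The framework's ProcessRunner adds output streaming (the outputConsumer above) and richer error reporting, but the core idea is simply to start an OS process and capture its stdout. A simplified sketch, not the actual Spring Boot class:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.List;

class SimpleProcessRunner {

    // Runs a command such as "docker compose --file compose.yaml --ansi never config --format=json"
    // and returns whatever the process wrote to stdout.
    String run(List<String> command) throws IOException, InterruptedException {
        Process process = new ProcessBuilder(command)
                .redirectError(ProcessBuilder.Redirect.INHERIT) // keep stderr out of the captured output
                .start();
        String stdout = new String(process.getInputStream().readAllBytes(), StandardCharsets.UTF_8);
        int exitCode = process.waitFor();
        if (exitCode != 0) {
            throw new IllegalStateException("'" + String.join(" ", command) + "' exited with code " + exitCode);
        }
        return stdout;
    }
}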

How the connection settings are wired up

Starting the containers is only half the story. We never configured any connection settings for the middleware, yet Spring Boot can still connect to it, so there must be something clever going on.

Remember the start() method above? Here is its last line again:

publishEvent(new DockerComposeServicesReadyEvent(this.applicationContext, relevantServices));

In other words, once the containers have been created, an event is published. The relevantServices argument is the list of services running in Docker, which in our example includes MySQL, Redis, MongoDB and RabbitMQ. Naturally, the next step is to find the listener for this event.

Here it is: DockerComposeServiceConnectionsApplicationListener.

class DockerComposeServiceConnectionsApplicationListener
		implements ApplicationListener<DockerComposeServicesReadyEvent> {

	private final ConnectionDetailsFactories factories;

	DockerComposeServiceConnectionsApplicationListener() {
		this(new ConnectionDetailsFactories());
	}

	DockerComposeServiceConnectionsApplicationListener(ConnectionDetailsFactories factories) {
		this.factories = factories;
	}

	@Override
	public void onApplicationEvent(DockerComposeServicesReadyEvent event) {
		ApplicationContext applicationContext = event.getSource();
		if (applicationContext instanceof BeanDefinitionRegistry registry) {
			registerConnectionDetails(registry, event.getRunningServices());
		}
	}

	private void registerConnectionDetails(BeanDefinitionRegistry registry, List<RunningService> runningServices) {
		for (RunningService runningService : runningServices) {
			DockerComposeConnectionSource source = new DockerComposeConnectionSource(runningService);
			this.factories.getConnectionDetails(source, false).forEach((connectionDetailsType, connectionDetails) -> {
				register(registry, runningService, connectionDetailsType, connectionDetails);
				this.factories.getConnectionDetails(connectionDetails, false)
					.forEach((adaptedType, adaptedDetails) -> register(registry, runningService, adaptedType,
							adaptedDetails));
			});
		}
	}

	@SuppressWarnings("unchecked")
	private <T> void register(BeanDefinitionRegistry registry, RunningService runningService,
			Class<?> connectionDetailsType, ConnectionDetails connectionDetails) {
		String beanName = getBeanName(runningService, connectionDetailsType);
		Class<T> beanType = (Class<T>) connectionDetails.getClass();
		Supplier<T> beanSupplier = () -> (T) connectionDetails;
		registry.registerBeanDefinition(beanName, new RootBeanDefinition(beanType, beanSupplier));
	}

	private String getBeanName(RunningService runningService, Class<?> connectionDetailsType) {
		List<String> parts = new ArrayList<>();
		parts.add(ClassUtils.getShortNameAsProperty(connectionDetailsType));
		parts.add("for");
		parts.addAll(Arrays.asList(runningService.name().split("-")));
		return StringUtils.uncapitalize(parts.stream().map(StringUtils::capitalize).collect(Collectors.joining()));
	}

}

Notice the factories field in this class. As soon as we see it, we recognize a textbook factory pattern.

So what is inside the factory?

public class ConnectionDetailsFactories {
	private final List<Registration<?, ?>> registrations = new ArrayList<>();
}

Sure enough, the factory holds a collection of registrations, and we can guess that each Registration describes the connection support for one piece of middleware such as MySQL or Redis.
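
Where do the registrations come from? The concrete factories are declared under the ConnectionDetailsFactory key in META-INF/spring.factories and discovered through SpringFactoriesLoader. Conceptually it boils down to something like this (a simplified sketch, not the actual constructor code):

import java.util.List;

import org.springframework.boot.autoconfigure.service.connection.ConnectionDetailsFactory;
import org.springframework.core.io.support.SpringFactoriesLoader;

class ConnectionDetailsFactoryDiscovery {

    // Every ConnectionDetailsFactory listed in META-INF/spring.factories is instantiated
    // here and later matched against the running services.
    @SuppressWarnings("rawtypes")
    static List<ConnectionDetailsFactory> discover(ClassLoader classLoader) {
        return SpringFactoriesLoader.loadFactories(ConnectionDetailsFactory.class, classLoader);
    }
}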

In the listener shown above, note the registerConnectionDetails() method:

private void registerConnectionDetails(BeanDefinitionRegistry registry, List<RunningService> runningServices) {
		for (RunningService runningService : runningServices) {
			DockerComposeConnectionSource source = new DockerComposeConnectionSource(runningService);
			this.factories.getConnectionDetails(source, false).forEach((connectionDetailsType, connectionDetails) -> {
				register(registry, runningService, connectionDetailsType, connectionDetails);
				this.factories.getConnectionDetails(connectionDetails, false)
					.forEach((adaptedType, adaptedDetails) -> register(registry, runningService, adaptedType,
							adaptedDetails));
			});
	}
}

public <S> Map<Class<?>, ConnectionDetails> getConnectionDetails(S source, boolean required)
			throws ConnectionDetailsFactoryNotFoundException, ConnectionDetailsNotFoundException {
		List<Registration<S, ?>> registrations = getRegistrations(source, required);
		Map<Class<?>, ConnectionDetails> result = new LinkedHashMap<>();
		for (Registration<S, ?> registration : registrations) {
			ConnectionDetails connectionDetails = registration.factory().getConnectionDetails(source);
			if (connectionDetails != null) {
				Class<?> connectionDetailsType = registration.connectionDetailsType();
				ConnectionDetails previous = result.put(connectionDetailsType, connectionDetails);
				Assert.state(previous == null, () -> "Duplicate connection details supplied for %s"
					.formatted(connectionDetailsType.getName()));
			}
		}
		if (required && result.isEmpty()) {
			throw new ConnectionDetailsNotFoundException(source);
		}
		return Map.copyOf(result);
	}

Let me briefly walk through the logic of this code:

  1. For each running service, create a DockerComposeConnectionSource wrapping it.

  2. Use that source to obtain the matching ConnectionDetails instances.

  3. ConnectionDetails is an interface, and each supported middleware has its own implementation produced by a dedicated factory.

This is the breakthrough: the ConnectionDetails object created here is resolved, based on the container type, to a middleware-specific factory:

MySQL: MySqlJdbcDockerComposeConnectionDetailsFactory

Redis: RedisDockerComposeConnectionDetailsFactory

MongoDB: MongoDockerComposeConnectionDetailsFactory

RabbitMQ: RabbitDockerComposeConnectionDetailsFactory

The framework ships a factory like these for every middleware technology it supports.

Take MySQL as an example:

class MySqlJdbcDockerComposeConnectionDetailsFactory
		extends DockerComposeConnectionDetailsFactory<JdbcConnectionDetails> {

	private static final String[] MYSQL_CONTAINER_NAMES = { "mysql", "bitnami/mysql" };

	protected MySqlJdbcDockerComposeConnectionDetailsFactory() {
		super(MYSQL_CONTAINER_NAMES);
	}

	@Override
	protected JdbcConnectionDetails getDockerComposeConnectionDetails(DockerComposeConnectionSource source) {
		return new MySqlJdbcDockerComposeConnectionDetails(source.getRunningService());
	}

	/**
	 * {@link JdbcConnectionDetails} backed by a {@code mysql} {@link RunningService}.
	 */
	static class MySqlJdbcDockerComposeConnectionDetails extends DockerComposeConnectionDetails
			implements JdbcConnectionDetails {

		private static final JdbcUrlBuilder jdbcUrlBuilder = new JdbcUrlBuilder("mysql", 3306);

		private final MySqlEnvironment environment;

		private final String jdbcUrl;

		MySqlJdbcDockerComposeConnectionDetails(RunningService service) {
			super(service);
			this.environment = new MySqlEnvironment(service.env());
			this.jdbcUrl = jdbcUrlBuilder.build(service, this.environment.getDatabase());
		}

		@Override
		public String getUsername() {
			return this.environment.getUsername();
		}

		@Override
		public String getPassword() {
			return this.environment.getPassword();
		}

		@Override
		public String getJdbcUrl() {
			return this.jdbcUrl;
		}

	}

}
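
JdbcUrlBuilder is a small helper that combines the container's host, the host port mapped to container port 3306, and the database name taken from the MYSQL_DATABASE environment variable. The resulting URL has roughly this shape (an illustrative sketch, not the framework's class):

class MySqlJdbcUrlSketch {

    // e.g. buildJdbcUrl("127.0.0.1", 52817, "mydatabase") -> "jdbc:mysql://127.0.0.1:52817/mydatabase"
    static String buildJdbcUrl(String host, int mappedPort, String database) {
        return "jdbc:mysql://" + host + ":" + mappedPort + "/" + database;
    }
}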

Back to DockerComposeServiceConnectionsApplicationListener: so far we have only covered how the connection details are obtained, not how they actually reach the middleware clients. That is the job of the final register() method, which registers each set of connection details as a Spring bean, giving us a genuinely configuration-free connection:

	private void registerConnectionDetails(BeanDefinitionRegistry registry, List<RunningService> runningServices) {
		for (RunningService runningService : runningServices) {
			DockerComposeConnectionSource source = new DockerComposeConnectionSource(runningService);
			this.factories.getConnectionDetails(source, false).forEach((connectionDetailsType, connectionDetails) -> {
				register(registry, runningService, connectionDetailsType, connectionDetails);
				this.factories.getConnectionDetails(connectionDetails, false)
					.forEach((adaptedType, adaptedDetails) -> register(registry, runningService, adaptedType,
							adaptedDetails));
			});
		}
	}

	@SuppressWarnings("unchecked")
	private <T> void register(BeanDefinitionRegistry registry, RunningService runningService,
			Class<?> connectionDetailsType, ConnectionDetails connectionDetails) {
		String beanName = getBeanName(runningService, connectionDetailsType);
		Class<T> beanType = (Class<T>) connectionDetails.getClass();
		Supplier<T> beanSupplier = () -> (T) connectionDetails;
		registry.registerBeanDefinition(beanName, new RootBeanDefinition(beanType, beanSupplier));
	}

	private String getBeanName(RunningService runningService, Class<?> connectionDetailsType) {
		List<String> parts = new ArrayList<>();
		parts.add(ClassUtils.getShortNameAsProperty(connectionDetailsType));
		parts.add("for");
		parts.addAll(Arrays.asList(runningService.name().split("-")));
		return StringUtils.uncapitalize(parts.stream().map(StringUtils::capitalize).collect(Collectors.joining()));
	}
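
Because each registered bean implements a well-known interface such as JdbcConnectionDetails, the corresponding auto-configuration picks it up instead of the usual spring.datasource.* properties, and we can inject it into our own code as well. A quick way to see what was registered (hypothetical class, for illustration only):

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.autoconfigure.jdbc.JdbcConnectionDetails;
import org.springframework.stereotype.Component;

@Component
class JdbcConnectionDetailsLogger implements CommandLineRunner {

    private final JdbcConnectionDetails connectionDetails;

    JdbcConnectionDetailsLogger(JdbcConnectionDetails connectionDetails) {
        this.connectionDetails = connectionDetails;
    }

    @Override
    public void run(String... args) {
        // Prints the URL derived from the running MySQL container's host and mapped port.
        System.out.println("JDBC url resolved from Docker Compose: " + connectionDetails.getJdbcUrl());
    }
}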

Closing thoughts

  1. If we need to support an extra container that the framework does not cover, we simply follow the same recipe: implement the ConnectionDetails interface plus the related factory classes (a rough sketch follows after this list).
  2. The framework combines the observer pattern (listeners) with the factory pattern. The listeners decouple the individual steps, which makes the vertical flow easy to extend, while the factory pattern makes it easy to plug in additional container types, extending the framework horizontally. It is well worth studying.
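
Modeled on the MySQL factory above, a custom factory for a hypothetical acme-queue container could look roughly like this. All of the Acme* names are made up, and the factory would additionally need to be listed under the ConnectionDetailsFactory key in META-INF/spring.factories so that ConnectionDetailsFactories can discover it:

import org.springframework.boot.autoconfigure.service.connection.ConnectionDetails;
import org.springframework.boot.docker.compose.core.RunningService;
import org.springframework.boot.docker.compose.service.connection.DockerComposeConnectionDetailsFactory;
import org.springframework.boot.docker.compose.service.connection.DockerComposeConnectionSource;

// Hypothetical connection contract for a middleware the framework does not know about.
interface AcmeQueueConnectionDetails extends ConnectionDetails {

    String getHost();

    int getPort();
}

class AcmeQueueDockerComposeConnectionDetailsFactory
        extends DockerComposeConnectionDetailsFactory<AcmeQueueConnectionDetails> {

    AcmeQueueDockerComposeConnectionDetailsFactory() {
        // Only containers whose image matches "acme/acme-queue" are handled by this factory.
        super("acme/acme-queue");
    }

    @Override
    protected AcmeQueueConnectionDetails getDockerComposeConnectionDetails(DockerComposeConnectionSource source) {
        return new AcmeQueueDockerComposeConnectionDetails(source.getRunningService());
    }

    static class AcmeQueueDockerComposeConnectionDetails extends DockerComposeConnectionDetails
            implements AcmeQueueConnectionDetails {

        private final RunningService service;

        AcmeQueueDockerComposeConnectionDetails(RunningService service) {
            super(service);
            this.service = service;
        }

        @Override
        public String getHost() {
            return this.service.host();
        }

        @Override
        public int getPort() {
            // 5670 stands for the container port declared in compose.yaml; ports() maps it to the host port.
            return this.service.ports().get(5670);
        }
    }
}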