Reading the ShardingJDBC Source Code (7): Execution

Preface

This chapter analyzes one of ShardingJDBC's core steps: execution.

Keep two questions in mind:

  • How many database connections does executing one logical SQL require?
  • Are the multiple actual SQLs produced from one logical SQL executed in parallel or serially?

I. Initializing PreparedStatementExecutor

Back to the execute method of ShardingPreparedStatement.

@Override
public boolean execute() throws SQLException {
    try {
        // clean up resources
        clearPrevious();
        // parse, route, rewrite
        prepare();
        // initialize the PreparedStatementExecutor
        initPreparedStatementExecutor();
        // execute the SQL
        return preparedStatementExecutor.execute();
    } finally {
        // clean up batch state
        clearBatch();
    }
}

ShardingPreparedStatement#initPreparedStatementExecutor creates the Connections and Statements.

private void initPreparedStatementExecutor() throws SQLException {
    // create connections and statements
    preparedStatementExecutor.init(executionContext);
    // set parameters on each statement
    setParametersForStatements();
    // replay the other recorded method invocations on each statement
    replayMethodForStatements();
}

1. SQL Grouping & Creating Connections and Statements

PreparedStatementExecutor#init

public void init(final ExecutionContext executionContext) throws SQLException {
    // store the SQLStatementContext in a member field
    setSqlStatementContext(executionContext.getSqlStatementContext());
    // obtain connections and statements, wrap them as StatementExecuteUnits, and add them to the inputGroups field
    Collection<ExecutionUnit> executionUnits = executionContext.getExecutionUnits();
    Collection<InputGroup<StatementExecuteUnit>> inputGroups = obtainExecuteGroups(executionUnits);
    getInputGroups().addAll(inputGroups);
    // flatten inputGroups into the statement list and parameter lists (omitted)
    cacheStatements();
}

PreparedStatementExecutor#obtainExecuteGroups delegates to SQLExecutePrepareTemplate#getExecuteUnitGroups to build the execution groups; the callbacks that create database connections and Statements are passed in as the second argument.

private Collection<InputGroup<StatementExecuteUnit>> obtainExecuteGroups(final Collection<ExecutionUnit> executionUnits) throws SQLException {
  return getSqlExecutePrepareTemplate().getExecuteUnitGroups(executionUnits, new SQLExecutePrepareCallback() {
	 // callback that obtains database connections
      @Override
      public List<Connection> getConnections(final ConnectionMode connectionMode, final String dataSourceName, final int connectionSize) throws SQLException {
          return PreparedStatementExecutor.super.getConnection().getConnections(connectionMode, dataSourceName, connectionSize);
      }
	 // callback that creates a PreparedStatement
      @Override
      public StatementExecuteUnit createStatementExecuteUnit(final Connection connection, final ExecutionUnit executionUnit, final ConnectionMode connectionMode) throws SQLException {
          PreparedStatement preparedStatement = createPreparedStatement(connection, executionUnit.getSqlUnit().getSql());
          return new StatementExecuteUnit(executionUnit, preparedStatement, connectionMode);
      }
  });
}

InputGroup here is just a thin wrapper around a List.

public final class InputGroup<T> {
    private final List<T> inputs;
}

SQLExecutePrepareTemplate#getExecuteUnitGroups is the only public method SQLExecutePrepareTemplate exposes; its job is to group the SQL, obtain database connections, and create the Statements.

getSQLUnitGroups first groups the ExecutionUnits by data source (a sketch of it follows the code below).

public Collection<InputGroup<StatementExecuteUnit>> getExecuteUnitGroups(final Collection<ExecutionUnit> executionUnits, final SQLExecutePrepareCallback callback) throws SQLException {
  return getSynchronizedExecuteUnitGroups(executionUnits, callback);
}

private Collection<InputGroup<StatementExecuteUnit>> getSynchronizedExecuteUnitGroups(
      final Collection<ExecutionUnit> executionUnits, final SQLExecutePrepareCallback callback) throws SQLException {
  // group by data source
  Map<String, List<SQLUnit>> sqlUnitGroups = getSQLUnitGroups(executionUnits);
  Collection<InputGroup<StatementExecuteUnit>> result = new LinkedList<>();
  for (Entry<String, List<SQLUnit>> entry : sqlUnitGroups.entrySet()) {
      // the key method
      List<InputGroup<StatementExecuteUnit>> sqlExecuteGroups = getSQLExecuteGroups(entry.getKey(), entry.getValue(), callback);
      result.addAll(sqlExecuteGroups);
  }
  return result;
}
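
getSQLUnitGroups itself is not shown here; judging from how it is used, it simply buckets each ExecutionUnit's SQLUnit by data source name, roughly as follows (a sketch based on the accessors used elsewhere in this class, not the verbatim source):

// sketch of the omitted getSQLUnitGroups: group SQLUnits by data source name
private Map<String, List<SQLUnit>> getSQLUnitGroups(final Collection<ExecutionUnit> executionUnits) {
    Map<String, List<SQLUnit>> result = new LinkedHashMap<>(executionUnits.size(), 1);
    for (ExecutionUnit each : executionUnits) {
        result.computeIfAbsent(each.getDataSourceName(), unused -> new LinkedList<>()).add(each.getSqlUnit());
    }
    return result;
}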

The key method getSQLExecuteGroups further partitions the SQLs of a single data source, creates the Connections, and creates the Statements.

@RequiredArgsConstructor
public final class SQLExecutePrepareTemplate {
  // the max.connections.size.per.query property, default 1
  private final int maxConnectionsSizePerQuery;
  private List<InputGroup<StatementExecuteUnit>> getSQLExecuteGroups(final String dataSourceName, final List<SQLUnit> sqlUnits, final SQLExecutePrepareCallback callback) throws SQLException {
      List<InputGroup<StatementExecuteUnit>> result = new LinkedList<>();
      // compute how many SQLs each connection has to execute
      int desiredPartitionSize = Math.max(0 == sqlUnits.size() % maxConnectionsSizePerQuery ? sqlUnits.size() / maxConnectionsSizePerQuery : sqlUnits.size() / maxConnectionsSizePerQuery + 1, 1);
      // partition: each partition shares one connection
      List<List<SQLUnit>> sqlUnitPartitions = Lists.partition(sqlUnits, desiredPartitionSize);
      // choose the ConnectionMode
      ConnectionMode connectionMode = maxConnectionsSizePerQuery < sqlUnits.size() ? ConnectionMode.CONNECTION_STRICTLY : ConnectionMode.MEMORY_STRICTLY;
      // invoke the callback passed into getExecuteUnitGroups to obtain the connections
      List<Connection> connections = callback.getConnections(connectionMode, dataSourceName, sqlUnitPartitions.size());
      int count = 0;
      for (List<SQLUnit> each : sqlUnitPartitions) {
          // use the callback to create a statement on each connection and wrap it as a StatementExecuteUnit (details omitted)
          InputGroup<StatementExecuteUnit> sqlExecuteGroup =
                  getSQLExecuteGroup(connectionMode, connections.get(count++), dataSourceName, each, callback);
          result.add(sqlExecuteGroup);
      }
      return result;
  }
}

This method is critical; two questions need answering:

  1. If one data source has to execute x SQLs, how many database connections are opened?
  2. Which result-set merge strategy is used, i.e. how is the ConnectionMode chosen?

For the first question, the key is how many partitions the SQLs are split into: one connection is created per partition, and the number of partitions depends on maxConnectionsSizePerQuery (the maximum number of connections per query).

Suppose there are 8 SQLs and maxConnectionsSizePerQuery=1 (the default): desiredPartitionSize works out to 8, i.e. 8 SQLs per partition, so there is a single partition and only one database connection is created.

With 8 SQLs and maxConnectionsSizePerQuery=2, desiredPartitionSize works out to 4, i.e. 4 SQLs per partition, so there are two partitions (4 SQLs each, 8 in total) and 2 database connections are created.

For the second question, the ConnectionMode is decided by comparing maxConnectionsSizePerQuery against the total number of SQLs for the data source:

If maxConnectionsSizePerQuery < the number of SQLs, CONNECTION_STRICTLY (connection-limit mode) is used: results are merged in memory, reading the whole ResultSet into memory at once, which reduces database connection overhead.

If maxConnectionsSizePerQuery >= the number of SQLs, MEMORY_STRICTLY (memory-limit mode) is used: results are merged in a streaming fashion, reading data by moving the ResultSet cursor, which reduces memory overhead.
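
To make the grouping arithmetic and the mode choice concrete, here is a tiny standalone check that reuses the desiredPartitionSize formula and Guava's Lists.partition; only the formula comes from the source, the demo class itself is made up:

import com.google.common.collect.Lists;

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Illustrative demo: reproduces the partitioning arithmetic of getSQLExecuteGroups.
public class PartitionDemo {

    public static void main(String[] args) {
        int maxConnectionsSizePerQuery = 2;
        // pretend 8 actual SQLs were routed to one data source
        List<String> sqlUnits = IntStream.rangeClosed(1, 8)
                .mapToObj(i -> "sql-" + i)
                .collect(Collectors.toList());
        // ceil(8 / 2) = 4 SQLs per connection
        int desiredPartitionSize = Math.max(
                sqlUnits.size() % maxConnectionsSizePerQuery == 0
                        ? sqlUnits.size() / maxConnectionsSizePerQuery
                        : sqlUnits.size() / maxConnectionsSizePerQuery + 1,
                1);
        List<List<String>> partitions = Lists.partition(sqlUnits, desiredPartitionSize);
        String connectionMode = maxConnectionsSizePerQuery < sqlUnits.size() ? "CONNECTION_STRICTLY" : "MEMORY_STRICTLY";
        // prints: 2 connections, CONNECTION_STRICTLY -> [[sql-1, sql-2, sql-3, sql-4], [sql-5, sql-6, sql-7, sql-8]]
        System.out.println(partitions.size() + " connections, " + connectionMode + " -> " + partitions);
    }
}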

Digging into callback.getConnections to see how connections are obtained leads to AbstractConnectionAdapter#getConnections.

private final Multimap<String, Connection> cachedConnections = LinkedHashMultimap.create();
    
public final List<Connection> getConnections(final ConnectionMode connectionMode, final String dataSourceName, final int connectionSize) throws SQLException {
  DataSource dataSource = getDataSourceMap().get(dataSourceName);
  Collection<Connection> connections;
  synchronized (cachedConnections) {
      connections = cachedConnections.get(dataSourceName);
  }
  List<Connection> result;
  if (connections.size() >= connectionSize) {
      result = new ArrayList<>(connections).subList(0, connectionSize);
  } else if (!connections.isEmpty()) {
      result = new ArrayList<>(connectionSize);
      result.addAll(connections);
      List<Connection> newConnections = createConnections(dataSourceName, connectionMode, dataSource, connectionSize - connections.size());
      result.addAll(newConnections);
      synchronized (cachedConnections) {
          cachedConnections.putAll(dataSourceName, newConnections);
      }
  } else {
      result = new ArrayList<>(createConnections(dataSourceName, connectionMode, dataSource, connectionSize));
      synchronized (cachedConnections) {
          cachedConnections.putAll(dataSourceName, result);
      }
  }
  return result;
}

Connections are first taken from the cachedConnections field; any that cannot be served from the cache are created via createConnections.

private List<Connection> createConnections(final String dataSourceName, final ConnectionMode connectionMode, final DataSource dataSource, final int connectionSize) throws SQLException {
  if (1 == connectionSize) {
      Connection connection = createConnection(dataSourceName, dataSource);
      replayMethodsInvocation(connection);
      return Collections.singletonList(connection);
  }
  if (ConnectionMode.CONNECTION_STRICTLY == connectionMode) {
      return createConnections(dataSourceName, dataSource, connectionSize);
  }
  synchronized (dataSource) {
      return createConnections(dataSourceName, dataSource, connectionSize);
  }
}

If only one connection needs to be created, or connection-limit mode is in effect, the connections are created directly and all recorded method invocations on the Connection are replayed (for example set autocommit = 0); in memory-limit mode, all connections are created inside a synchronized block and the method invocations are then replayed.

Why does only memory-limit mode need synchronized?

Consider max.connections.size.per.query=2 with 3 SQLs: they are split into 2 partitions, so 2 Connections are needed, and since the SQL count > max.connections.size.per.query, connection-limit mode is chosen. If two identical logical SQLs ran concurrently and the connection pool's maximum size were 3, connection contention should presumably occur in this case as well.

The answer comes from the Automated Execution Engine article:

To avoid deadlocks, Sharding-Sphere synchronizes the acquisition of database connections. When creating execution units, it atomically acquires, in one go, all the database connections the current SQL request needs, ruling out the possibility of a query obtaining only part of its resources on each request. This locking approach does solve the deadlock problem, but it also costs some concurrency. To show how we are different, we made two further optimizations for this problem: 1. Skip the lock for operations that only ever need one database connection at a time. Since only a single connection is acquired, two requests can never end up waiting on each other, so no locking is required. Most OLTP operations route to a single data node via the sharding key, so cross deadlocks are not a concern and no locking is needed, which limits the impact on concurrency. Besides single-shard routing, read/write splitting also falls into this category. 2. Lock connection resources only in memory-limit mode. In connection-limit mode, the database connection resources are released once all query result sets have been loaded into memory, so deadlock waits and locking do not need to be considered.

In other words, streaming merge (memory-limit mode) has to keep its database connections open, whereas with in-memory merge (connection-limit mode) the connections are released as soon as all result sets have been loaded into memory, so the locking concern does not arise.
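
The deadlock the quoted passage is guarding against can be reproduced in miniature with a plain Semaphore standing in for a connection pool; this is purely illustrative and has nothing to do with ShardingSphere's actual classes. Two "queries" each need 2 connections from a pool of 2: grabbing them one by one can leave each query holding one connection and waiting forever, while acquiring the whole set atomically (what memory-limit mode's synchronized block achieves) cannot.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Purely illustrative: a Semaphore models a pool with only 2 connections.
public class ConnectionDeadlockDemo {

    private static final Semaphore POOL = new Semaphore(2);
    private static final Object LOCK = new Object();

    public static void main(String[] args) throws InterruptedException {
        runBoth(ConnectionDeadlockDemo::acquireOneByOne);   // risky pattern
        runBoth(ConnectionDeadlockDemo::acquireAtomically); // safe pattern
    }

    private static void acquireOneByOne(String name) throws InterruptedException {
        POOL.acquire(1);                                   // got the 1st connection
        Thread.sleep(100);                                 // give the other query time to grab one too
        if (!POOL.tryAcquire(1, 1, TimeUnit.SECONDS)) {    // the 2nd connection never arrives
            System.out.println(name + ": would deadlock here with a blocking acquire");
            POOL.release(1);
            return;
        }
        POOL.release(2);
    }

    private static void acquireAtomically(String name) throws InterruptedException {
        synchronized (LOCK) {     // one query acquires its whole set at a time
            POOL.acquire(2);
        }
        System.out.println(name + ": got both connections");
        POOL.release(2);
    }

    private interface Task {
        void run(String name) throws InterruptedException;
    }

    private static void runBoth(Task task) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(2);
        for (String name : new String[] {"query-1", "query-2"}) {
            new Thread(() -> {
                try {
                    task.run(name);
                } catch (InterruptedException ignored) {
                } finally {
                    done.countDown();
                }
            }).start();
        }
        done.await();
    }
}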

2. Setting Statement Parameters

Back in ShardingPreparedStatement#initPreparedStatementExecutor, once the Statements have been created the next step is setting their parameters. sharding-jdbc does this via reflection: the parameter-setting calls are first recorded into the setParameterMethodInvocations field of the parent class AbstractShardingPreparedStatementAdapter and then replayed reflectively.

public final class ShardingPreparedStatement extends AbstractShardingPreparedStatementAdapter {
  private final PreparedStatementExecutor preparedStatementExecutor;
  // take each statement and its params from preparedStatementExecutor and call the parent's replaySetParameter
  private void setParametersForStatements() {
    for (int i = 0; i < preparedStatementExecutor.getStatements().size(); i++) {
        PreparedStatement statement = (PreparedStatement) preparedStatementExecutor.getStatements().get(i);
        List<Object> params = preparedStatementExecutor.getParameterSets().get(i);
        replaySetParameter(statement, params);
    }
  }
}
public abstract class AbstractShardingPreparedStatementAdapter extends AbstractUnsupportedOperationPreparedStatement {
  // records the parameter-setting reflective invocations
  private final List<SetParameterMethodInvocation> setParameterMethodInvocations = new LinkedList<>();
  protected final void replaySetParameter(final PreparedStatement preparedStatement, final List<Object> parameters) {
      // clear previous recordings
      setParameterMethodInvocations.clear();
      // record the reflective invocations into setParameterMethodInvocations
      addParameters(parameters);
      // replay setParameterMethodInvocations against the real PreparedStatement
      for (SetParameterMethodInvocation each : setParameterMethodInvocations) {
          each.invoke(preparedStatement);
      }
  }
}

3. Replaying Other Recorded Statement Invocations

public final class ShardingPreparedStatement extends AbstractShardingPreparedStatementAdapter {
  private final PreparedStatementExecutor preparedStatementExecutor;
  private void replayMethodForStatements() {
      // loop over every Statement
      for (Statement each : preparedStatementExecutor.getStatements()) {
          // call WrapperAdapter#replayMethodsInvocation
          replayMethodsInvocation(each);
      }
  }
}
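
Sections 2 and 3 both rely on the same record-and-replay trick: while the user is calling setters on the logical ShardingPreparedStatement, the real statements do not exist yet, so each call is recorded (method plus arguments) and later invoked via reflection against every real statement and connection. A minimal, self-contained sketch of the idea (hypothetical class name, not the actual WrapperAdapter / SetParameterMethodInvocation code):

import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Sketch of the record-and-replay mechanism (hypothetical, for illustration only).
public class MethodInvocationRecorder<T> {

    // one recorded call: the target method plus the arguments it was invoked with
    private static final class RecordedInvocation {

        private final Method method;

        private final Object[] arguments;

        private RecordedInvocation(final Method method, final Object[] arguments) {
            this.method = method;
            this.arguments = arguments;
        }
    }

    private final List<RecordedInvocation> invocations = new ArrayList<>();

    // called while only the logical statement exists, e.g.
    // recorder.record(PreparedStatement.class, "setObject", new Class[]{int.class, Object.class}, new Object[]{1, "foo"});
    public void record(final Class<T> targetType, final String methodName, final Class<?>[] argumentTypes, final Object[] arguments) throws NoSuchMethodException {
        invocations.add(new RecordedInvocation(targetType.getMethod(methodName, argumentTypes), arguments));
    }

    // called once the real JDBC objects have been created
    public void replay(final T target) throws ReflectiveOperationException {
        for (RecordedInvocation each : invocations) {
            each.method.invoke(target, each.arguments);
        }
    }
}

replaySetParameter and replayMethodsInvocation do essentially this against each real PreparedStatement and Connection.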

II. Execution

PreparedStatementExecutor executes the SQL.

public final class PreparedStatementExecutor extends AbstractStatementExecutor {

  public boolean execute() throws SQLException {
    // whether exceptions should be thrown
    boolean isExceptionThrown = ExecutorExceptionHandler.isExceptionThrown();
    // create the SQLExecuteCallback via the factory
    SQLExecuteCallback<Boolean> executeCallback = SQLExecuteCallbackFactory.getPreparedSQLExecuteCallback(getDatabaseType(), isExceptionThrown);
    // execute
    List<Boolean> result = executeCallback(executeCallback);
    if (null == result || result.isEmpty() || null == result.get(0)) {
        return false;
    }
    return result.get(0);
  }
}

SQLExecuteCallbackFactory creates a SQLExecuteCallback whose anonymous subclass implements the abstract executeSQL method by simply calling PreparedStatement#execute.

public static SQLExecuteCallback<Boolean> getPreparedSQLExecuteCallback(final DatabaseType databaseType, final boolean isExceptionThrown) {
  return new SQLExecuteCallback<Boolean>(databaseType, isExceptionThrown) {

      @Override
      protected Boolean executeSQL(final String sql, final Statement statement, final ConnectionMode connectionMode) throws SQLException {
          return ((PreparedStatement) statement).execute();
      }
  };
}

The SQLExecuteCallback comes into play when the SQL is actually executed. Continuing, we enter the executeCallback method of the abstract parent class.

public abstract class AbstractStatementExecutor {
  private final SQLExecuteTemplate sqlExecuteTemplate;
  protected final <T> List<T> executeCallback(final SQLExecuteCallback<T> executeCallback) throws SQLException {
      // execute
      List<T> result = sqlExecuteTemplate.execute((Collection) inputGroups, executeCallback);
      // if the SQL changed the table structure, refresh the ShardingSphereMetaData held by ShardingRuntimeContext
      refreshMetaDataIfNeeded(connection.getRuntimeContext(), sqlStatementContext);
      return result;
  }
}

We then enter SQLExecuteTemplate's execute method, which calls ExecutorEngine#execute.

@RequiredArgsConstructor
public final class SQLExecuteTemplate {
    // the executor engine
    private final ExecutorEngine executorEngine;
    // whether to execute serially
    private final boolean serial;
    public <T> List<T> execute(final Collection<InputGroup<? extends StatementExecuteUnit>> inputGroups, final SQLExecuteCallback<T> callback) throws SQLException {
        return execute(inputGroups, null, callback);
    }
    
    public <T> List<T> execute(final Collection<InputGroup<? extends StatementExecuteUnit>> inputGroups,
                               final SQLExecuteCallback<T> firstCallback, final SQLExecuteCallback<T> callback) throws SQLException {
        try {
            return executorEngine.execute((Collection) inputGroups, firstCallback, callback, serial);
        } catch (final SQLException ex) {
            ExecutorExceptionHandler.handleException(ex);
            return Collections.emptyList();
        }
    }
}

Note the serial flag here, which decides whether the SQLs are executed serially or in parallel. It is passed in when PreparedStatementExecutor's parent class AbstractStatementExecutor constructs the SQLExecuteTemplate.

private final ShardingConnection connection;
private final SQLExecutePrepareTemplate sqlExecutePrepareTemplate;
private final SQLExecuteTemplate sqlExecuteTemplate;
public AbstractStatementExecutor(final int resultSetType, final int resultSetConcurrency, final int resultSetHoldability, final ShardingConnection shardingConnection) {
    // ...
    this.connection = shardingConnection;
    sqlExecuteTemplate = new SQLExecuteTemplate(executorEngine, connection.isHoldTransaction());
}

So serial depends on ShardingConnection#isHoldTransaction, which returns true when a transaction is in progress and false otherwise. In other words, inside a local transaction or an XA transaction the SQLs are executed serially; in all other cases they are executed in parallel.

public boolean isHoldTransaction() {
    // inside a local transaction
    return (TransactionType.LOCAL == transactionType && !getAutoCommit()) 
        // inside an XA transaction
        || (TransactionType.XA == transactionType && isInShardingTransaction());
}

Digging further into ExecutorEngine#execute:

public <I, O> List<O> execute(final Collection<InputGroup<I>> inputGroups, 
                                  final GroupedCallback<I, O> firstCallback,
                                  final GroupedCallback<I, O> callback, final boolean serial) throws SQLException {
  if (inputGroups.isEmpty()) {
      return Collections.emptyList();
  }
  if (serial) { // serial execution
      return serialExecute(inputGroups, firstCallback, callback);
  } else { // parallel execution
      return parallelExecute(inputGroups, firstCallback, callback);
  }
}

Serial and parallel execution are much alike, so let's look at the parallel path. Parallel execution submits the second and subsequent tasks to a thread pool, runs the first task synchronously on the calling thread, and finally merges the results. For SQL execution, each task handles the SQLs of one InputGroup, and one InputGroup corresponds to one Connection, so one task corresponds to one Connection.

private <I, O> List<O> parallelExecute(final Collection<InputGroup<I>> inputGroups, final GroupedCallback<I, O> firstCallback, final GroupedCallback<I, O> callback) throws SQLException {
    Iterator<InputGroup<I>> inputGroupsIterator = inputGroups.iterator();
    InputGroup<I> firstInputs = inputGroupsIterator.next();
    // execute the 2nd and subsequent groups asynchronously
    Collection<ListenableFuture<Collection<O>>> restResultFutures = asyncExecute(Lists.newArrayList(inputGroupsIterator), callback);
    // execute the 1st group synchronously
    Collection<O> syncExecute = syncExecute(firstInputs, null == firstCallback ? callback : firstCallback);
    // merge the results
    return getGroupResults(syncExecute, restResultFutures);
}

// merge the results
private <O> List<O> getGroupResults(final Collection<O> firstResults, final Collection<ListenableFuture<Collection<O>>> restFutures) throws SQLException {
  List<O> result = new LinkedList<>(firstResults);
  for (ListenableFuture<Collection<O>> each : restFutures) {
      try {
          result.addAll(each.get()); // 等待Future执行完成
      } catch (final InterruptedException | ExecutionException ex) {
          return throwException(ex);
      }
  }
  return result;
}

ExecutorEngine#asyncExecute submits the asynchronous tasks.

// the thread pool
private final ShardingSphereExecutorService executorService;

private <I, O> Collection<ListenableFuture<Collection<O>>> asyncExecute(final List<InputGroup<I>> inputGroups, final GroupedCallback<I, O> callback) {
  Collection<ListenableFuture<Collection<O>>> result = new LinkedList<>();
  for (InputGroup<I> each : inputGroups) {
      ListenableFuture<Collection<O>> future = asyncExecute(each, callback);
      result.add(future);
  }
  return result;
}

private <I, O> ListenableFuture<Collection<O>> asyncExecute(final InputGroup<I> inputGroup, final GroupedCallback<I, O> callback) {
  final Map<String, Object> dataMap = ExecutorDataMap.getValue();
  return executorService.getExecutorService()
       .submit(() -> callback.execute(inputGroup.getInputs(), false, dataMap));
}
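
Stripped of ShardingSphere types, the "run the first group on the calling thread, submit the rest to the pool, then merge" pattern looks like this standalone sketch using Guava's ListenableFuture (class and group names are made up):

import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.MoreExecutors;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Executors;

// Standalone illustration of ExecutorEngine's parallel pattern (not ShardingSphere code).
public class ParallelPatternDemo {

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        ListeningExecutorService pool = MoreExecutors.listeningDecorator(Executors.newCachedThreadPool());
        // pretend each string is one InputGroup, i.e. one connection's worth of SQLs
        List<String> inputGroups = Arrays.asList("ds0-conn0", "ds1-conn0", "ds1-conn1");

        // submit the 2nd and subsequent groups asynchronously
        List<ListenableFuture<String>> restFutures = new LinkedList<>();
        for (String each : inputGroups.subList(1, inputGroups.size())) {
            Callable<String> task = () -> execute(each);
            restFutures.add(pool.submit(task));
        }
        // execute the 1st group synchronously on the calling thread
        List<String> results = new ArrayList<>();
        results.add(execute(inputGroups.get(0)));
        // merge: block on each future, exactly like getGroupResults
        for (ListenableFuture<String> each : restFutures) {
            results.add(each.get());
        }
        System.out.println(results);
        pool.shutdown();
    }

    private static String execute(final String inputGroup) {
        return inputGroup + " executed on " + Thread.currentThread().getName();
    }
}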

Execution then reaches the execute method of SQLExecuteCallback (which implements the GroupedCallback interface). It loops over all the SQLs, runs the SQLExecutionHook hooks around each one, and finally calls the executeSQL method of the anonymous class created by SQLExecuteCallbackFactory, which simply calls Statement.execute (see above).

public abstract class SQLExecuteCallback<T> implements GroupedCallback<StatementExecuteUnit, T> {
    
    @Override
    public final Collection<T> execute(final Collection<StatementExecuteUnit> statementExecuteUnits, 
                                       final boolean isTrunkThread, final Map<String, Object> dataMap) throws SQLException {
        Collection<T> result = new LinkedList<>();
        // loop over the SQLs
        for (StatementExecuteUnit each : statementExecuteUnits) {
            // execute a single SQL
            T t = execute0(each, isTrunkThread, dataMap);
            result.add(t);
        }
        return result;
    }
    
    private T execute0(final StatementExecuteUnit statementExecuteUnit, final boolean isTrunkThread, final Map<String, Object> dataMap) throws SQLException {
        // obtain the data source metadata
        DataSourceMetaData dataSourceMetaData = getDataSourceMetaData(statementExecuteUnit.getStatement().getConnection().getMetaData());
        // holds the collection of all registered hooks (loaded via SPI)
        SQLExecutionHook sqlExecutionHook = new SPISQLExecutionHook();
        ExecutionUnit executionUnit = statementExecuteUnit.getExecutionUnit();
        // run all start hooks (tracing, Seata, ...)
        sqlExecutionHook.start(executionUnit.getDataSourceName(), executionUnit.getSqlUnit().getSql(), executionUnit.getSqlUnit().getParameters(), dataSourceMetaData, isTrunkThread, dataMap);
        // execute the SQL
        T result = executeSQL(executionUnit.getSqlUnit().getSql(), statementExecuteUnit.getStatement(), statementExecuteUnit.getConnectionMode());
        // run all success hooks (tracing, Seata, ...)
        sqlExecutionHook.finishSuccess();
        return result;
    }
    
    // implemented by the anonymous class from SQLExecuteCallbackFactory; simply calls Statement.execute
    protected abstract T executeSQL(String sql, Statement statement, ConnectionMode connectionMode) throws SQLException;
}

III. Thread Pool Configuration for Asynchronous SQL Execution

Let's look at how ShardingSphereExecutorService configures its thread pool and what the defaults are. The configuration path is: ShardingDataSource constructor -> ShardingRuntimeContext constructor -> AbstractRuntimeContext constructor -> ExecutorEngine constructor -> ShardingSphereExecutorService constructor.

protected AbstractRuntimeContext(final T rule, final Properties props, final DatabaseType databaseType) {
   // pass in the executor.size property
  executorEngine = new ExecutorEngine(properties.<Integer>getValue(ConfigurationPropertyKey.EXECUTOR_SIZE));
}

ConfigurationPropertyKey.EXECUTOR_SIZE; the default is executor.size=0.

/**
 * Worker thread max size.
 * 
 * <p>
 * Execute SQL Statement and PrepareStatement will use this thread pool.
 * One sharding data source will use a independent thread pool, it does not share thread pool even different data source in same JVM.
 * Default: infinite.
 * </p>
 */
EXECUTOR_SIZE("executor.size", String.valueOf(0), int.class),

The ExecutorEngine constructor:

public ExecutorEngine(final int executorSize) {
  executorService = new ShardingSphereExecutorService(executorSize);
}

The ShardingSphereExecutorService constructor:

public final class ShardingSphereExecutorService {
    
    private static final String DEFAULT_NAME_FORMAT = "%d";
    
    private static final ExecutorService SHUTDOWN_EXECUTOR = Executors.newSingleThreadExecutor(ShardingSphereThreadFactoryBuilder.build("Executor-Engine-Closer"));
    
    private ListeningExecutorService executorService;
    
    public ShardingSphereExecutorService(final int executorSize) {
        this(executorSize, DEFAULT_NAME_FORMAT);
    }
    
    public ShardingSphereExecutorService(final int executorSize, final String nameFormat) {
        ExecutorService delegate = getExecutorService(executorSize, nameFormat);
        this.executorService = MoreExecutors.listeningDecorator(delegate);
        MoreExecutors.addDelayedShutdownHook(this.executorService, 60, TimeUnit.SECONDS);
    }
    
    private ExecutorService getExecutorService(final int executorSize, final String nameFormat) {
        ThreadFactory threadFactory = ShardingSphereThreadFactoryBuilder.build(nameFormat);
        // with the default of 0, newCachedThreadPool is used
        return 0 == executorSize ? Executors.newCachedThreadPool(threadFactory) : Executors.newFixedThreadPool(executorSize, threadFactory);
    }
}

With the default executorSize=0, the pool is created via Executors.newCachedThreadPool; otherwise Executors.newFixedThreadPool is used, with corePoolSize = maxPoolSize = executor.size.
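
For reference, the two knobs discussed in this article are ordinary entries in the Properties handed to sharding-jdbc when the data source is built. A minimal sketch (key names as used by ConfigurationPropertyKey above; how the Properties object is wired in depends on whether you use the Java API, YAML, or the Spring integration):

import java.util.Properties;

// Hypothetical helper collecting the two tuning knobs covered in this article.
public final class ShardingExecutionProps {

    public static Properties build() {
        Properties props = new Properties();
        // 0 (default) -> Executors.newCachedThreadPool; any other value -> fixed pool of that size
        props.setProperty("executor.size", "16");
        // caps the connections one query may open per data source and decides the ConnectionMode
        props.setProperty("max.connections.size.per.query", "2");
        return props;
    }
}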

Summary

  • How many database connections does one logical SQL open? One logical SQL may map to multiple data sources, and each data source to multiple actual SQLs. The number of connections opened per data source depends on the number of actual SQLs and on max.connections.size.per.query (default 1): connections opened per data source = number of SQL partitions, where each partition holds ceil(actual SQL count / max.connections.size.per.query) SQLs. With the default setting, each data source therefore opens only one database connection.

  • How are the results of the n SQLs on the same data source merged?

    • max.connections.size.per.query < SQL count: CONNECTION_STRICTLY (connection-limit mode); results are merged in memory by reading the whole ResultSet at once, reducing database connection overhead.

    • max.connections.size.per.query >= SQL count: MEMORY_STRICTLY (memory-limit mode); results are merged in a streaming fashion by moving the ResultSet cursor, reducing memory overhead.

  • Serial or parallel? Inside a local or XA transaction the SQLs are executed serially; otherwise in parallel, with one asynchronous task per database connection.

  • The thread pool for asynchronous SQL execution depends on executor.size: with the default of 0, Executors.newCachedThreadPool is used; otherwise Executors.newFixedThreadPool with core thread count = executor.size.