DruidDataSource Internals


1 Example

<bean id="dataSource" class="com.alibaba.druid.pool.DruidDataSource" init-method="init" destroy-method="close"> 
     <property name="url" value="${jdbc_url}" />
     <property name="username" value="${jdbc_user}" />
     <property name="password" value="${jdbc_password}" />

     <property name="filters" value="stat" />

     <property name="maxActive" value="20" />
     <property name="initialSize" value="1" />
     <property name="maxWait" value="6000" />
     <property name="minIdle" value="1" />

     <property name="timeBetweenEvictionRunsMillis" value="60000" />
     <property name="minEvictableIdleTimeMillis" value="300000" />

     <property name="testWhileIdle" value="true" />
     <property name="testOnBorrow" value="false" />
     <property name="testOnReturn" value="false" />

     <property name="poolPreparedStatements" value="true" />
     <property name="maxOpenPreparedStatements" value="20" />

     <property name="asyncInit" value="true" />
 </bean>

2 Parameters

Property (default value): description

  • url: the JDBC URL; it differs per database, e.g. jdbc:mysql://localhost:3306/test_1?autoReconnect=true&useUnicode=true&characterset=utf8mb4
  • username: the database user name.
  • password: the database password. If you prefer not to keep the password in the config file in plain text, use ConfigFilter.
  • driverClassName (default: derived from url): optional; if omitted, Druid infers the dbType from the url and picks the matching driverClassName.
  • initialSize (default: 0): number of physical connections created at initialization. Initialization happens on an explicit call to init() or on the first getConnection().
  • maxActive (default: 8): maximum pool size.
  • maxIdle (default: 8): no longer used; configuring it has no effect.
  • minIdle: minimum pool size.
  • maxWait: maximum wait time, in milliseconds, when acquiring a connection. When maxWait is configured, a fair lock is enabled by default, which reduces concurrency; set useUnfairLock to true if you need the unfair lock instead.
  • validationQuery: SQL used to check whether a connection is valid; must be a query, commonly select 'x'. If validationQuery is null, testOnBorrow, testOnReturn, and testWhileIdle all have no effect.
  • validationQueryTimeout: timeout, in seconds, for the validity check; implemented via the JDBC Statement method void setQueryTimeout(int seconds).
  • testOnBorrow (default: true): run validationQuery when a connection is borrowed, to check that it is valid; enabling this reduces performance.
  • testOnReturn (default: false): run validationQuery when a connection is returned, to check that it is valid; enabling this reduces performance.
  • testWhileIdle (default: false): recommended to set to true; it does not hurt performance and keeps connections safe. On borrow, if the connection's idle time exceeds timeBetweenEvictionRunsMillis, validationQuery is run to check it.
  • keepAlive (default: false, since 1.0.28): for connections within the minIdle count whose idle time exceeds minEvictableIdleTimeMillis, perform a keepAlive operation.
  • timeBetweenEvictionRunsMillis (default: 1 minute, since 1.0.14): has two meanings: 1) the interval at which the Destroy thread checks connections, closing the physical connection when idle time >= minEvictableIdleTimeMillis; 2) the threshold used by testWhileIdle.
  • minEvictableIdleTimeMillis: minimum time a connection may stay idle before it becomes eligible for eviction.
  • connectionInitSqls: SQL executed when a physical connection is initialized.
  • exceptionSorter (default: derived from dbType): discards a connection when the database throws an unrecoverable exception.
  • filters: a String of aliases configuring extension plugins; common ones are stat (monitoring), log4j (logging), and wall (SQL-injection defense).
  • proxyFilters: of type List<com.alibaba.druid.filter.Filter>; if both filters and proxyFilters are configured, they are combined, not replaced.
  • name: useful for telling multiple data sources apart in monitoring. If unset, a name is generated as "DataSource-" + System.identityHashCode(this). Note that in at least version 1.0.5 this property does not work, and forcing a name fails.
  • poolPreparedStatements (default: false): whether to cache PreparedStatements, i.e. PSCache. PSCache gives a large performance boost on databases with cursor support, such as Oracle; on MySQL it is recommended to leave it off.
  • maxPoolPreparedStatementPerConnectionSize (default: -1): must be > 0 to enable PSCache; when > 0, poolPreparedStatements is automatically switched to true. Druid does not suffer from the Oracle PSCache memory-bloat problem, so a fairly large value such as 100 is fine.
  • numTestsPerEvictionRun (default: 30 minutes, since 1.0.14): no longer used; a DruidDataSource supports only one EvictionRun.

For more details, see the DruidDataSource configuration property list.

3 How It Works

DruidDataSource works as illustrated in the figure 连接池工作机制.png (connection-pool workflow). The main flow:

  • When a business thread requests a Connection, the pool first checks whether an idle Connection is available; if so, one is taken from the pool, otherwise CreateConnectionThread is woken up.
  • CreateConnectionThread creates connections and puts them into the pool.
  • When a business thread finishes with a connection, it returns the connection to the pool.
  • DestroyConnectionThread runs periodically and cleans up connections in the pool.
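The borrow/create handshake described above can be sketched with a ReentrantLock and two Conditions, the same primitives Druid uses internally. MiniPool and all of its members are illustrative names, not Druid classes; the real pool adds statistics, fail-fast handling, and the background creator thread.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch of the pool handshake: a borrower that finds the pool empty
// signals `empty` (compare Druid's emptySignal()) and parks on `notEmpty`;
// the creator/recycler side puts a connection in and signals `notEmpty`.
public class MiniPool {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition empty = lock.newCondition();    // a creator thread would wait here (omitted)
    private final Condition notEmpty = lock.newCondition(); // borrowers wait here
    private final Object[] connections;
    private int poolingCount;

    public MiniPool(int maxActive) {
        this.connections = new Object[maxActive];
    }

    // borrower side (compare takeLast)
    public Object take() throws InterruptedException {
        lock.lock();
        try {
            while (poolingCount == 0) {
                empty.signal();   // ask the creator side for a new connection
                notEmpty.await(); // park until one is created or recycled
            }
            Object last = connections[--poolingCount];
            connections[poolingCount] = null;
            return last;
        } finally {
            lock.unlock();
        }
    }

    // creator / recycler side (compare put in recycle())
    public boolean put(Object conn) {
        lock.lock();
        try {
            if (poolingCount >= connections.length) {
                return false; // pool full; caller should close the physical connection
            }
            connections[poolingCount++] = conn;
            notEmpty.signal(); // wake one waiting borrower
            return true;
        } finally {
            lock.unlock();
        }
    }
}
```

Like Druid's takeLast, take() hands out the most recently returned connection (LIFO from the top of the array).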

4 Startup

4.1 Initialization

DruidDataSource is started by init(), which mainly does the following:

  • 1 take the lock to guard against concurrent initialization;
  • 2 initialize the filters;
  • 3 run various checks to ensure the configuration is valid;
  • 4 load filters from SPI;
  • 5 resolve the driver class;
  • 6 initialize the exceptionSorter;
  • 7 initialize the validConnectionChecker;
  • 8 create dataSourceStat;
  • 9 create the connections array;
  • 10 create createConnectionThread and destroyConnectionThread;
  • 11 release the lock.

The code is shown below:

public void init() throws SQLException {
    if (inited) {
        return;
    }

    try {
        lock.lockInterruptibly();
    } catch (InterruptedException e) {
        throw new SQLException("interrupt", e);
    }

    boolean init = false;
    try {
        if (inited) {
            return;
        }

        init = true;

        initStackTrace = IOUtils.toString(Thread.currentThread().getStackTrace());

        this.id = DruidDriver.createDataSourceId();

        loadFilterFromSystemProperty();
        
        if (this.dbType == null || this.dbType.length() == 0) {
            this.dbType = JdbcUtils.getDbType(jdbcUrl, null);
        }

        for (Filter filter : filters) {
            filter.init(this);
        }

        if (maxActive <= 0) {
            throw new IllegalArgumentException("illegal maxActive " + maxActive);
        }

        if (maxActive < minIdle) {
            throw new IllegalArgumentException("illegal maxActive " + maxActive);
        }

        if (getInitialSize() > maxActive) {
            throw new IllegalArgumentException("illegal initialSize " + this.initialSize + ", maxActive "
                                               + maxActive);
        }

        if (this.driverClass != null) {
            this.driverClass = driverClass.trim();
        }

        validationQueryCheck();

        if (this.jdbcUrl != null) {
            this.jdbcUrl = this.jdbcUrl.trim();
            initFromWrapDriverUrl();
        }

        initFromSPIServiceLoader();

        if (this.driver == null) {
            if (this.driverClass == null || this.driverClass.isEmpty()) {
                this.driverClass = JdbcUtils.getDriverClassName(this.jdbcUrl);
            }

            if (MockDriver.class.getName().equals(driverClass)) {
                driver = MockDriver.instance;
            } else {
                driver = JdbcUtils.createDriver(driverClassLoader, driverClass);
            }
        } else {
            if (this.driverClass == null) {
                this.driverClass = driver.getClass().getName();
            }
        }

        if (this.dbType == null || this.dbType.length() == 0) {
            this.dbType = JdbcUtils.getDbType(jdbcUrl, driverClass.getClass().getName());
        }

        initCheck();

        initExceptionSorter();
        initValidConnectionChecker();

        if (driver.getClass().getName().equals("com.mysql.jdbc.Driver")) {
            if (this.isPoolPreparedStatements()) {
                LOG.error("mysql should not use 'PoolPreparedStatements'");
            }
        }

        dataSourceStat = new JdbcDataSourceStat(this.name, this.jdbcUrl, this.dbType);

        connections = new DruidConnectionHolder[maxActive];

        SQLException connectError = null;

        try {
            // create the initial connections
            for (int i = 0, size = getInitialSize(); i < size; ++i) {
                Connection conn = createPhysicalConnection();
                DruidConnectionHolder holder = new DruidConnectionHolder(this, conn);
                connections[poolingCount++] = holder;
            }

            if (poolingCount > 0) {
                poolingPeak = poolingCount;
                poolingPeakTime = System.currentTimeMillis();
            }
        } catch (SQLException ex) {
            LOG.error("init datasource error", ex);
            connectError = ex;
        }

        createAndStartCreatorThread();
        createAndStartDestroyThread();

        initedLatch.await();

        initedTime = new Date();
        ObjectName objectName = DruidDataSourceStatManager.addDataSource(this, this.name);
        this.setObjectName(objectName);

        if (connectError != null && poolingCount == 0) {
            throw connectError;
        }
    } catch (SQLException e) {
        LOG.error("dataSource init error", e);
        throw e;
    } catch (InterruptedException e) {
        throw new SQLException(e.getMessage(), e);
    } finally {
        inited = true;
        lock.unlock();

        if (init && LOG.isInfoEnabled()) {
            LOG.info("{dataSource-" + this.getID() + "} inited");
        }
    }
}

4.2 Locking & unlocking

The lock ensures that connection creation and connection recycling never run at the same time, so operations on connections are free of concurrency problems.

4.3 filters

Load and initialize the filters.

- inside init()
loadFilterFromSystemProperty();
...... other code
for (Filter filter : filters) {
    filter.init(this);
}
...... other code
initFromSPIServiceLoader();

private void loadFilterFromSystemProperty() throws SQLException {
    String property = System.getProperty("druid.filters");

    if (property == null || property.length() == 0) {
        return;
    }

    this.setFilters(property);
}
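The property-driven loading above can be exercised directly: a `-Ddruid.filters=stat,wall` system property is read and its comma-separated aliases resolved to filter classes. A minimal sketch of that reading step (parseFilterAliases is a made-up helper; in Druid the actual alias resolution happens inside setFilters):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Sketch of how a comma-separated alias list such as
// `-Ddruid.filters=stat,wall` is read from the system property and split.
public class FilterPropertyDemo {
    public static List<String> parseFilterAliases() {
        String property = System.getProperty("druid.filters");
        if (property == null || property.length() == 0) {
            return Collections.emptyList();
        }
        return Arrays.asList(property.split(","));
    }
}
```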

- custom extension filters can be loaded here
private void initFromSPIServiceLoader() {
    String property = System.getProperty("druid.load.spifilter.skip");
    if (property != null) {
        return;
    }

    ServiceLoader<Filter> druidAutoFilterLoader = ServiceLoader.load(Filter.class);

    for (Filter autoFilter : druidAutoFilterLoader) {
        AutoLoad autoLoad = autoFilter.getClass().getAnnotation(AutoLoad.class);
        if (autoLoad != null && autoLoad.value()) {
            if (LOG.isInfoEnabled()) {
                LOG.info("load filter from spi :" + autoFilter.getClass().getName());
            }
            addFilter(autoFilter);
        }
    }
}
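The SPI path above only registers filters whose class carries @AutoLoad with value() == true. The reflection check can be modeled with a stand-in annotation (the real one is com.alibaba.druid.filter.AutoLoad; everything below is illustrative):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Models Druid's @AutoLoad check: a filter discovered via ServiceLoader is
// registered only when the annotation is present and value() is true.
public class AutoLoadDemo {
    @Retention(RetentionPolicy.RUNTIME)
    @interface AutoLoad {                 // stand-in for com.alibaba.druid.filter.AutoLoad
        boolean value() default true;
    }

    @AutoLoad
    static class MyFilter {}              // would be picked up

    static class PlainFilter {}           // no annotation, skipped

    public static boolean shouldRegister(Class<?> filterClass) {
        AutoLoad autoLoad = filterClass.getAnnotation(AutoLoad.class);
        return autoLoad != null && autoLoad.value();
    }
}
```

To have Druid's real loader find a custom filter, it must also be declared in a META-INF/services/com.alibaba.druid.filter.Filter resource, since discovery goes through ServiceLoader.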

Druid ships with many built-in filters, listed below (alias → class name):

  • default → com.alibaba.druid.filter.stat.StatFilter
  • stat → com.alibaba.druid.filter.stat.StatFilter
  • mergeStat → com.alibaba.druid.filter.stat.MergeStatFilter
  • encoding → com.alibaba.druid.filter.encoding.EncodingConvertFilter
  • log4j → com.alibaba.druid.filter.logging.Log4jFilter
  • log4j2 → com.alibaba.druid.filter.logging.Log4j2Filter
  • slf4j → com.alibaba.druid.filter.logging.Slf4jLogFilter
  • commonlogging → com.alibaba.druid.filter.logging.CommonsLogFilter
  • wall → com.alibaba.druid.wall.WallFilter

For more, see the list of built-in Filter aliases.

4.4 ExceptionSorter

ExceptionSorter is an exception classifier: when the database throws a fatal error, DruidPooledConnection discards the failed Connection, as the following code shows.

public void handleConnectionException(DruidPooledConnection pooledConnection, Throwable t) throws SQLException {
    final DruidConnectionHolder holder = pooledConnection.getConnectionHolder();

    errorCount.incrementAndGet();
    lastError = t;
    lastErrorTimeMillis = System.currentTimeMillis();

    if (t instanceof SQLException) {
        SQLException sqlEx = (SQLException) t;

        // broadcastConnectionError
        ConnectionEvent event = new ConnectionEvent(pooledConnection, sqlEx);
        for (ConnectionEventListener eventListener : holder.getConnectionEventListeners()) {
            eventListener.connectionErrorOccurred(event);
        }

        // exceptionSorter.isExceptionFatal
        if (exceptionSorter != null && exceptionSorter.isExceptionFatal(sqlEx)) {
            if (pooledConnection.isTraceEnable()) {
                synchronized (activeConnections) {
                    if (pooledConnection.isTraceEnable()) {
                        activeConnections.remove(pooledConnection);
                        pooledConnection.setTraceEnable(false);
                    }
                }
            }
            this.discardConnection(holder.getConnection());
            pooledConnection.disable();
        }

        throw sqlEx;
    } else {
        throw new SQLException("Error", t);
    }
}
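What counts as "fatal" is up to the ExceptionSorter implementation; Druid ships database-specific sorters such as MySqlExceptionSorter and OracleExceptionSorter with rich vendor error-code lists. A rough, illustrative sketch of such a check, keyed off SQLState class 08 (connection exceptions):

```java
import java.sql.SQLException;
import java.sql.SQLRecoverableException;

// Illustrative ExceptionSorter-style fatality check; real sorters consult
// vendor-specific error codes as well, so these rules are only a sketch.
public class SimpleExceptionSorter {
    public boolean isExceptionFatal(SQLException e) {
        if (e instanceof SQLRecoverableException) {
            return true; // JDBC 4 marks connection-level failures this way
        }
        String sqlState = e.getSQLState();
        // SQLState class 08 = connection exception (link failure, refused, ...)
        return sqlState != null && sqlState.startsWith("08");
    }
}
```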

4.5 validConnectionChecker

ValidConnectionChecker verifies whether a connection is still usable. Each time a Connection is created, it is checked through validConnectionChecker, and only Connections that pass are placed into DruidConnectionHolder[] connections.

public void validateConnection(Connection conn) throws SQLException {
    String query = getValidationQuery();
    if (conn.isClosed()) {
        throw new SQLException("validateConnection: connection closed");
    }

    if (validConnectionChecker != null) {
        if (!validConnectionChecker.isValidConnection(conn, validationQuery, validationQueryTimeout)) {
            throw new SQLException("validateConnection false");
        }
        return;
    }

    if (null != query) {
        Statement stmt = null;
        ResultSet rs = null;
        try {
            stmt = conn.createStatement();
            if (getValidationQueryTimeout() > 0) {
                stmt.setQueryTimeout(getValidationQueryTimeout());
            }
            rs = stmt.executeQuery(query);
            if (!rs.next()) {
                throw new SQLException("validationQuery didn't return a row");
            }
        } finally {
            JdbcUtils.close(rs);
            JdbcUtils.close(stmt);
        }
    }
}

4.6 dataSourceStat

Essentially all statistics about the data source are collected through this object.

Initializing connections

All of DruidDataSource's connection state lives in the DruidConnectionHolder[] connections array.

- the code in init() that manages connections:
connections = new DruidConnectionHolder[maxActive];

SQLException connectError = null;

try {
    // create the initial connections
    for (int i = 0, size = getInitialSize(); i < size; ++i) {
        Connection conn = createPhysicalConnection();
        DruidConnectionHolder holder = new DruidConnectionHolder(this, conn);
        connections[poolingCount++] = holder;
    }

    if (poolingCount > 0) {
        poolingPeak = poolingCount;
        poolingPeakTime = System.currentTimeMillis();
    }
} catch (SQLException ex) {
    LOG.error("init datasource error", ex);
    connectError = ex;
}

4.7 Creating the create and destroy threads

  • createConnectionThread: creates new Connections.
  • destroyConnectionThread: destroys or recycles Connections.

After createConnectionThread and destroyConnectionThread are created in init(), they do not begin real work immediately; init() coordinates with them through initedLatch (the initedLatch.await() call above). This prevents Connections from being created or destroyed before init() has finished.

5 Acquiring a Connection

5.1 Configuration items relevant to acquiring a Connection

  • maxWait: maximum wait time when acquiring a connection.
  • testOnBorrow: whether to validate that a connection is usable on borrow.
  • validationQuery: when testOnBorrow is true, this SQL is executed after obtaining the connection to verify it works.
  • validationQueryTimeout: when testOnBorrow is true, the timeout for executing validationQuery; no timeout is set by default.
  • testWhileIdle: when testOnBorrow is false, test whether a connection that has exceeded the idle threshold is still usable.
  • timeBetweenEvictionRunsMillis: the idle threshold, used when testWhileIdle=true.

5.2 Connection acquisition flow

(figure: 获取Connection.png) The main steps for acquiring a Connection:

  • Loop until a Connection is obtained or the attempt fails (timeout, exception, and so on).
  • Obtain a Connection via getConnectionInternal; most of what follows is validation.
  • If testOnBorrow=true, validate the connection with validationQuery; if it is invalid, reclaim it via discardConnection and loop again; otherwise continue.
  • If testOnBorrow=false:
    • If the Connection is already closed, reclaim it via discardConnection and loop again.
    • If testWhileIdle=true, check whether the connection's idle time exceeds timeBetweenEvictionRunsMillis, and if so validate the connection. If it is invalid, reclaim it via discardConnection and loop again; otherwise continue.
  • If removeAbandoned=true, put the Connection into the activeConnections map, from which leaked connections are later removed.
  • If all of the above succeed, return the Connection.

The code:

public PooledConnection getPooledConnection() throws SQLException {
    return getConnection(maxWait);
}

public DruidPooledConnection getConnectionDirect(long maxWaitMillis) throws SQLException {
    int notFullTimeoutRetryCnt = 0;
    for (;;) {
        // handle notFullTimeoutRetry
        DruidPooledConnection poolableConnection;
        try {
            poolableConnection = getConnectionInternal(maxWaitMillis);
        } catch (GetConnectionTimeoutException ex) {
            if (notFullTimeoutRetryCnt <= this.notFullTimeoutRetryCount && !isFull()) {
                notFullTimeoutRetryCnt++;
                if (LOG.isWarnEnabled()) {
                    LOG.warn("get connection timeout retry : " + notFullTimeoutRetryCnt);
                }
                continue;
            }
            throw ex;
        }

        if (testOnBorrow) {
            boolean validate = testConnectionInternal(poolableConnection.holder, poolableConnection.conn);
            if (!validate) {
                if (LOG.isDebugEnabled()) {
                    LOG.debug("skip not validate connection.");
                }

                discardConnection(poolableConnection.holder);
                continue;
            }
        } else {
            if (poolableConnection.conn.isClosed()) {
                discardConnection(poolableConnection.holder); // discard the closed connection
                continue;
            }

            if (testWhileIdle) {
                final DruidConnectionHolder holder = poolableConnection.holder;
                long currentTimeMillis             = System.currentTimeMillis();
                long lastActiveTimeMillis          = holder.lastActiveTimeMillis;
                long lastExecTimeMillis            = holder.lastExecTimeMillis;
                long lastKeepTimeMillis            = holder.lastKeepTimeMillis;

                if (checkExecuteTime
                        && lastExecTimeMillis != lastActiveTimeMillis) {
                    lastActiveTimeMillis = lastExecTimeMillis;
                }

                if (lastKeepTimeMillis > lastActiveTimeMillis) {
                    lastActiveTimeMillis = lastKeepTimeMillis;
                }

                long idleMillis                    = currentTimeMillis - lastActiveTimeMillis;

                long timeBetweenEvictionRunsMillis = this.timeBetweenEvictionRunsMillis;

                if (timeBetweenEvictionRunsMillis <= 0) {
                    timeBetweenEvictionRunsMillis = DEFAULT_TIME_BETWEEN_EVICTION_RUNS_MILLIS;
                }

                if (idleMillis >= timeBetweenEvictionRunsMillis
                        || idleMillis < 0 // unexcepted branch
                        ) {
                    boolean validate = testConnectionInternal(poolableConnection.holder, poolableConnection.conn);
                    if (!validate) {
                        if (LOG.isDebugEnabled()) {
                            LOG.debug("skip not validate connection.");
                        }

                        discardConnection(poolableConnection.holder);
                        continue;
                    }
                }
            }
        }

        if (removeAbandoned) {
            StackTraceElement[] stackTrace = Thread.currentThread().getStackTrace();
            poolableConnection.connectStackTrace = stackTrace;
            poolableConnection.setConnectedTimeNano();
            poolableConnection.traceEnable = true;

            activeConnectionLock.lock();
            try {
                activeConnections.put(poolableConnection, PRESENT);
            } finally {
                activeConnectionLock.unlock();
            }
        }

        if (!this.defaultAutoCommit) {
            poolableConnection.setAutoCommit(false);
        }

        return poolableConnection;
    }
}
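The testWhileIdle branch in getConnectionDirect boils down to a small idle-time predicate. A sketch, assuming Druid's 60-second DEFAULT_TIME_BETWEEN_EVICTION_RUNS_MILLIS fallback (IdleCheckDemo and needsValidation are illustrative names):

```java
// The connection is revalidated only when it has sat idle for at least
// timeBetweenEvictionRunsMillis; Druid falls back to a 60s default when
// the configured value is <= 0.
public class IdleCheckDemo {
    static final long DEFAULT_TIME_BETWEEN_EVICTION_RUNS_MILLIS = 60 * 1000L;

    public static boolean needsValidation(long nowMillis,
                                          long lastActiveTimeMillis,
                                          long timeBetweenEvictionRunsMillis) {
        if (timeBetweenEvictionRunsMillis <= 0) {
            timeBetweenEvictionRunsMillis = DEFAULT_TIME_BETWEEN_EVICTION_RUNS_MILLIS;
        }
        long idleMillis = nowMillis - lastActiveTimeMillis;
        // idleMillis < 0 covers clock adjustments: validate to be safe
        return idleMillis >= timeBetweenEvictionRunsMillis || idleMillis < 0;
    }
}
```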

5.3 getConnectionInternal

Druid obtains a Connection internally in one of two ways:

private DruidPooledConnection getConnectionInternal(long maxWait) throws SQLException {
    // code ....
    if (maxWait > 0) {
        holder = pollLast(nanos);
    } else {
        holder = takeLast();
    }
    // code ....
}

The main differences between the two:

  • takeLast
    • If the pool is not empty, return an available connection directly.
    • If the pool is empty: send an "empty pool" signal via emptySignal, then wait on notEmpty.await(). On receiving the signal, createConnectionThread creates a Connection and, once it is ready, wakes the waiting thread, which then returns the Connection.
  • pollLast: works much like takeLast, but adds retries and a maximum wait time, because in some situations a connection cannot be obtained:
    • if the pool is full, no new connection can be created;
    • under concurrency, a connection created by createConnectionThread may be grabbed by another thread, so the current thread may still come up empty even after one is created;
    • pollLast therefore retries: if one loop iteration fails to obtain a connection, it tries again, as long as the maximum wait time is not exceeded.
DruidConnectionHolder takeLast() throws InterruptedException, SQLException {
    try {
        while (poolingCount == 0) {
            emptySignal(); // send signal to CreateThread create connection

            if (failFast && isFailContinuous()) {
                throw new DataSourceNotAvailableException(createError);
            }

            notEmptyWaitThreadCount++;
            if (notEmptyWaitThreadCount > notEmptyWaitThreadPeak) {
                notEmptyWaitThreadPeak = notEmptyWaitThreadCount;
            }
            try {
                notEmpty.await(); // signal by recycle or creator
            } finally {
                notEmptyWaitThreadCount--;
            }
            notEmptyWaitCount++;

            if (!enable) {
                connectErrorCountUpdater.incrementAndGet(this);
                if (disableException != null) {
                    throw disableException;
                }

                throw new DataSourceDisableException();
            }
        }
    } catch (InterruptedException ie) {
        notEmpty.signal(); // propagate to non-interrupted thread
        notEmptySignalCount++;
        throw ie;
    }

    decrementPoolingCount();
    DruidConnectionHolder last = connections[poolingCount];
    connections[poolingCount] = null;

    return last;
}

private DruidConnectionHolder pollLast(long nanos) throws InterruptedException, SQLException {
    long estimate = nanos;

    for (;;) {
        if (poolingCount == 0) {
            emptySignal(); // send signal to CreateThread create connection

            if (failFast && isFailContinuous()) {
                throw new DataSourceNotAvailableException(createError);
            }

            if (estimate <= 0) {
                waitNanosLocal.set(nanos - estimate);
                return null;
            }

            notEmptyWaitThreadCount++;
            if (notEmptyWaitThreadCount > notEmptyWaitThreadPeak) {
                notEmptyWaitThreadPeak = notEmptyWaitThreadCount;
            }

            try {
                long startEstimate = estimate;
                estimate = notEmpty.awaitNanos(estimate); // signal by recycle or creator
                notEmptyWaitCount++;
                notEmptyWaitNanos += (startEstimate - estimate);

                if (!enable) {
                    connectErrorCountUpdater.incrementAndGet(this);

                    if (disableException != null) {
                        throw disableException;
                    }

                    throw new DataSourceDisableException();
                }
            } catch (InterruptedException ie) {
                notEmpty.signal(); // propagate to non-interrupted thread
                notEmptySignalCount++;
                throw ie;
            } finally {
                notEmptyWaitThreadCount--;
            }

            if (poolingCount == 0) {
                if (estimate > 0) {
                    continue;
                }

                waitNanosLocal.set(nanos - estimate);
                return null;
            }
        }

        decrementPoolingCount();
        DruidConnectionHolder last = connections[poolingCount];
        connections[poolingCount] = null;

        long waitNanos = nanos - estimate;
        last.setLastNotEmptyWaitNanos(waitNanos);

        return last;
    }
}
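The timeout bookkeeping in pollLast rests on Condition.awaitNanos, which returns the remaining wait time, so `estimate` shrinks across spurious wakeups. A stripped-down version (TimedPool is illustrative; error counters, fail-fast, and emptySignal are omitted):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Stripped-down pollLast: wait up to `nanos` for the pool to become
// non-empty, returning null on timeout.
public class TimedPool {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private final Object[] connections = new Object[8];
    private int poolingCount;

    public Object pollLast(long nanos) throws InterruptedException {
        lock.lock();
        try {
            long estimate = nanos;
            while (poolingCount == 0) {
                if (estimate <= 0) {
                    return null; // timed out without getting a connection
                }
                // awaitNanos returns the remaining time, mirroring Druid
                estimate = notEmpty.awaitNanos(estimate);
            }
            Object last = connections[--poolingCount];
            connections[poolingCount] = null;
            return last;
        } finally {
            lock.unlock();
        }
    }

    public void put(Object conn) {
        lock.lock();
        try {
            connections[poolingCount++] = conn;
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }
}
```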

6 Recycling Connections

6.1 Parameters relevant to recycling a Connection

  • testOnReturn: whether to check that a returned connection is still valid and usable.

6.2 Recycle, not destroy

Druid's close method does not actually close the connection; instead it returns the connection to the pool via recycle(), so that it can be used again. DruidPooledConnection#close looks like this:

public void close() throws SQLException {
    ...code...
    recycle();
    ...code...
}

6.3 The recycle flow

(figure: 回收Connection.png) Recycling a Connection involves the following steps:

  • If the connection has exceeded phyMaxUseCount (the maximum use count of a physical connection), discard the Connection; otherwise continue.
  • If testOnReturn=true, first validate the connection; if it is invalid, close it; otherwise continue.
  • If the DruidDataSource has enable=false (the data source is disabled), discard the connection.
  • If the Connection has been connected longer than the physical connection timeout, discard it.
  • Put the Connection back into the connections array, where other threads needing a connection can pick it up directly.

As the following code shows:

protected void recycle(DruidPooledConnection pooledConnection) throws SQLException {
    ...... code ......
    if (phyMaxUseCount > 0 && holder.useCount >= phyMaxUseCount) {
        discardConnection(holder);
        return;
    }
    ...... code ......
    if (testOnReturn) {
        boolean validate = testConnectionInternal(holder, physicalConnection);
        if (!validate) {
            JdbcUtils.close(physicalConnection);

            destroyCountUpdater.incrementAndGet(this);

            lock.lock();
            try {
                if (holder.active) {
                    activeCount--;
                    holder.active = false;
                }
                closeCount++;
            } finally {
                lock.unlock();
            }
            return;
        }
    }
    ...... code ......
    if (!enable) {
        discardConnection(holder);
        return;
    }
    ...... code ......
    if (phyTimeoutMillis > 0) {
        long phyConnectTimeMillis = currentTimeMillis - holder.connectTimeMillis;
        if (phyConnectTimeMillis > phyTimeoutMillis) {
            discardConnection(holder);
            return;
        }
    }
    ...... code ......
    lock.lock();
    try {
        if (holder.active) {
            activeCount--;
            holder.active = false;
        }
        closeCount++;

        result = putLast(holder, currentTimeMillis);
        recycleCount++;
    } finally {
        lock.unlock();
    }

    ...... code ......
}
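The decision order that recycle() applies can be summarized as a small pure function (the enum and method below are illustrative, not Druid API; the real method interleaves these checks with locking and statistics):

```java
// Sketch of the recycle-time decision order described above.
public class RecycleDecisionDemo {
    enum Action { DISCARD, CLOSE_INVALID, RETURN_TO_POOL }

    public static Action decide(long useCount, long phyMaxUseCount,
                                boolean testOnReturn, boolean validateOk,
                                boolean enable,
                                long connectedMillis, long phyTimeoutMillis) {
        if (phyMaxUseCount > 0 && useCount >= phyMaxUseCount) {
            return Action.DISCARD;        // physical connection is worn out
        }
        if (testOnReturn && !validateOk) {
            return Action.CLOSE_INVALID;  // failed validation on return
        }
        if (!enable) {
            return Action.DISCARD;        // data source has been disabled
        }
        if (phyTimeoutMillis > 0 && connectedMillis > phyTimeoutMillis) {
            return Action.DISCARD;        // exceeded physical lifetime
        }
        return Action.RETURN_TO_POOL;     // putLast back into connections
    }
}
```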

6.4 shrink

In DestroyTask, before connections are actually reclaimed, shrink() examines the pool's connections, discarding and closing those that fail its checks and adjusting the pool accordingly.

public class DestroyTask implements Runnable {
    public DestroyTask() {

    }

    @Override
    public void run() {
        shrink(true, keepAlive);

        if (isRemoveAbandoned()) {
            removeAbandoned();
        }
    }

}

shrink() walks the pool and decides which connections should be reclaimed. In short, the method maintains two collections:

  • keepAliveConnections: connections to keep alive; everything that should not be evicted outright ends up here. With checkCount = poolingCount - minIdle, each idle connection is examined roughly as follows:
    • If the physical connection timeout phyTimeoutMillis is set, and the connection's physical lifetime already exceeds phyTimeoutMillis, it goes into evictConnections.
    • A connection idle longer than minEvictableIdleTimeMillis whose index in the pool is below checkCount goes into evictConnections.
    • A connection idle less than minEvictableIdleTimeMillis needs no reclaiming. A connection idle longer than minEvictableIdleTimeMillis with an index at or above checkCount goes into evictConnections if its idle time also exceeds maxEvictableIdleTimeMillis; otherwise it goes into keepAliveConnections for a keepAlive check.
  • evictConnections: connections to evict; they are finally closed with JdbcUtils.close().
public void shrink(boolean checkTime, boolean keepAlive) {
    try {
        lock.lockInterruptibly();
    } catch (InterruptedException e) {
        return;
    }

    boolean needFill = false;
    int evictCount = 0;
    int keepAliveCount = 0;
    int fatalErrorIncrement = fatalErrorCount - fatalErrorCountLastShrink;
    fatalErrorCountLastShrink = fatalErrorCount;
    
    try {
        if (!inited) {
            return;
        }

        final int checkCount = poolingCount - minIdle;
        final long currentTimeMillis = System.currentTimeMillis();
        for (int i = 0; i < poolingCount; ++i) {
            DruidConnectionHolder connection = connections[i];

            if ((onFatalError || fatalErrorIncrement > 0) && (lastFatalErrorTimeMillis > connection.connectTimeMillis))  {
                keepAliveConnections[keepAliveCount++] = connection;
                continue;
            }

            if (checkTime) {
                if (phyTimeoutMillis > 0) {
                    long phyConnectTimeMillis = currentTimeMillis - connection.connectTimeMillis;
                    if (phyConnectTimeMillis > phyTimeoutMillis) {
                        evictConnections[evictCount++] = connection;
                        continue;
                    }
                }

                long idleMillis = currentTimeMillis - connection.lastActiveTimeMillis;

                if (idleMillis < minEvictableIdleTimeMillis
                        && idleMillis < keepAliveBetweenTimeMillis
                ) {
                    break;
                }

                if (idleMillis >= minEvictableIdleTimeMillis) {
                    if (checkTime && i < checkCount) {
                        evictConnections[evictCount++] = connection;
                        continue;
                    } else if (idleMillis > maxEvictableIdleTimeMillis) {
                        evictConnections[evictCount++] = connection;
                        continue;
                    }
                }

                if (keepAlive && idleMillis >= keepAliveBetweenTimeMillis) {
                    keepAliveConnections[keepAliveCount++] = connection;
                }
            } else {
                if (i < checkCount) {
                    evictConnections[evictCount++] = connection;
                } else {
                    break;
                }
            }
        }

        int removeCount = evictCount + keepAliveCount;
        if (removeCount > 0) {
            System.arraycopy(connections, removeCount, connections, 0, poolingCount - removeCount);
            Arrays.fill(connections, poolingCount - removeCount, poolingCount, null);
            poolingCount -= removeCount;
        }
        keepAliveCheckCount += keepAliveCount;

        if (keepAlive && poolingCount + activeCount < minIdle) {
            needFill = true;
        }
    } finally {
        lock.unlock();
    }

    if (evictCount > 0) {
        for (int i = 0; i < evictCount; ++i) {
            DruidConnectionHolder item = evictConnections[i];
            Connection connection = item.getConnection();
            JdbcUtils.close(connection);
            destroyCountUpdater.incrementAndGet(this);
        }
        Arrays.fill(evictConnections, null);
    }

    if (keepAliveCount > 0) {
        // keep order
        for (int i = keepAliveCount - 1; i >= 0; --i) {
            DruidConnectionHolder holer = keepAliveConnections[i];
            Connection connection = holer.getConnection();
            holer.incrementKeepAliveCheckCount();

            boolean validate = false;
            try {
                this.validateConnection(connection);
                validate = true;
            } catch (Throwable error) {
                if (LOG.isDebugEnabled()) {
                    LOG.debug("keepAliveErr", error);
                }
                // skip
            }

            boolean discard = !validate;
            if (validate) {
                holer.lastKeepTimeMillis = System.currentTimeMillis();
                boolean putOk = put(holer, 0L);
                if (!putOk) {
                    discard = true;
                }
            }

            if (discard) {
                try {
                    connection.close();
                } catch (Exception e) {
                    // skip
                }

                lock.lock();
                try {
                    discardCount++;

                    if (activeCount + poolingCount <= minIdle) {
                        emptySignal();
                    }
                } finally {
                    lock.unlock();
                }
            }
        }
        this.getDataSourceStat().addKeepAliveCheckCount(keepAliveCount);
        Arrays.fill(keepAliveConnections, null);
    }

    if (needFill) {
        lock.lock();
        try {
            int fillCount = minIdle - (activeCount + poolingCount + createTaskCount);
            for (int i = 0; i < fillCount; ++i) {
                emptySignal();
            }
        } finally {
            lock.unlock();
        }
    } else if (onFatalError || fatalErrorIncrement > 0) {
        lock.lock();
        try {
            emptySignal();
        } finally {
            lock.unlock();
        }
    }
}
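The partitioning rule that shrink() applies to idle connections can be condensed into a small sketch (ShrinkDemo and classify are illustrative names; the real method also handles keepAlive timing, fatal errors, and the early break for young connections):

```java
import java.util.ArrayList;
import java.util.List;

// Given per-connection idle times in pool order (index 0 = longest idle,
// as in Druid's connections array), the first checkCount = poolingCount -
// minIdle idle-expired entries are evicted; later entries are evicted only
// past maxEvictableIdleTimeMillis and otherwise kept for a keepAlive check.
public class ShrinkDemo {
    public static List<String> classify(long[] idleMillis, int minIdle,
                                        long minEvictableIdleTimeMillis,
                                        long maxEvictableIdleTimeMillis) {
        int poolingCount = idleMillis.length;
        int checkCount = poolingCount - minIdle;
        List<String> result = new ArrayList<>();
        for (int i = 0; i < poolingCount; i++) {
            long idle = idleMillis[i];
            if (idle >= minEvictableIdleTimeMillis) {
                if (i < checkCount) {
                    result.add("evict");            // within the shrinkable head
                } else if (idle > maxEvictableIdleTimeMillis) {
                    result.add("evict");            // idle too long even to keep alive
                } else {
                    result.add("keepAlive");        // candidate for keepAlive check
                }
            } else {
                result.add("keep");                 // young connection, leave alone
            }
        }
        return result;
    }
}
```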

7 References

Druid

druid-wiki

DruidDataSource configuration property list

DruidDataSource in the Druid connection pool