Thread Pool Parameters
```java
public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler)
```
| Parameter | Meaning |
|---|---|
| corePoolSize | Core pool size. Core threads are created on demand as tasks are submitted, up to this count (or eagerly via prestartAllCoreThreads), and by default stay alive even when idle |
| maximumPoolSize | Maximum number of threads the pool is allowed to hold |
| keepAliveTime | Longest time an idle thread beyond the core size may stay alive before being reclaimed |
| unit | Time unit for keepAliveTime |
| workQueue | Blocking queue for pending tasks. Common choices: the array-based bounded ArrayBlockingQueue, the linked-list-based (unbounded by default) LinkedBlockingQueue, the zero-capacity hand-off SynchronousQueue, and the priority-ordered PriorityBlockingQueue. When all core threads are busy, new tasks go into this queue; once the queue is full, the pool creates additional threads up to maximumPoolSize, and beyond that the rejection handler below takes over |
| threadFactory | Factory that creates every thread in the pool. Prefer a custom factory that embeds a pool name tied to the business it serves, which makes logs much easier to trace |
| handler | Rejection policy. ThreadPoolExecutor ships four static nested implementations: AbortPolicy (the default, throws RejectedExecutionException), DiscardPolicy (silently drops the submitted task), DiscardOldestPolicy (polls and discards the oldest task in the workQueue, then submits the current one), and CallerRunsPolicy (runs the task on the submitting thread, which in turn slows down the submitter's submission rate) |

PS: The Executors utility class can build several ready-made pools, but the Alibaba Java coding guidelines advise against using it directly: the fixed-size pool and the single-thread pool are backed by an unbounded blocking queue, so under extreme load, when tasks pile up faster than they are processed, the queue can grow until it triggers an OutOfMemoryError. The cached pool's queue, SynchronousQueue, is special: both put and take block when used alone, so a take returns a task only once it is matched with a put, and likewise a put hands a task over only once it is matched with a take.
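The hand-off behavior of SynchronousQueue described above can be observed in a small stand-alone sketch (the class and variable names here are illustrative, not part of the experiment code):

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

public class SynchronousQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> queue = new SynchronousQueue<>();

        // offer() fails immediately when no consumer is waiting:
        // the queue itself holds no elements
        boolean accepted = queue.offer("task-1");
        System.out.println("offer without a taker: " + accepted);

        // start a consumer; its take() blocks until a producer arrives
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("taken: " + queue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // give the consumer a moment to block in take(), then hand off;
        // put() returns only because a taker is waiting to match it
        TimeUnit.MILLISECONDS.sleep(100);
        queue.put("task-2");
        consumer.join();
    }
}
```

This is exactly why a cached pool backed by SynchronousQueue never queues work: every execute() either matches an idle worker's take or spawns a new thread.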
Choosing the Thread Pool Size
```java
package com.jysun.practice.concurrent;

import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * @author Jysun
 * @description Blocking-factor experiment
 * @date 2020/9/6 13:00
 */
public class TestBlockingFactor {

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        CountDownLatch countDownLatch = new CountDownLatch(ThreadConstants.TASK_SIZE);
        ThreadPoolExecutor threadPool = new ThreadPoolExecutor(ThreadConstants.CORE_POOL_SIZE, ThreadConstants.MAXIMUM_POOL_SIZE, ThreadConstants.KEEP_ALIVE_TIME,
                ThreadConstants.TIME_UNIT, ThreadConstants.WORKER_QUEUE, ThreadConstants.THREAD_FACTORY, new ThreadPoolExecutor.CallerRunsPolicy());
        try {
            for (int i = 0; i < ThreadConstants.TASK_SIZE; i++) {
                threadPool.execute(new Task(countDownLatch));
            }
            countDownLatch.await();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            threadPool.shutdownNow();
        }
        long end = System.currentTimeMillis();
        System.out.println("Cost time:" + (end - start));
        int availableProcessors = Runtime.getRuntime().availableProcessors();
        System.out.println("CPU core size:" + availableProcessors);
        double blockingFactor = (double) ThreadConstants.BLOCKING_TIME / (ThreadConstants.BLOCKING_TIME + ThreadConstants.WORKER_TIME);
        System.out.println("Blocking factor:" + blockingFactor);
        int poolSize = (int) (availableProcessors / (1 - blockingFactor));
        System.out.println("Now pool size:" + ThreadConstants.CORE_POOL_SIZE);
        System.out.println("Recommend pool size:" + poolSize);
    }
}

class Task implements Runnable {

    private final CountDownLatch countDownLatch;

    public Task(CountDownLatch countDownLatch) {
        this.countDownLatch = countDownLatch;
    }

    @Override
    public void run() {
        try {
            // busy-wait for WORKER_TIME ms to simulate CPU-bound work
            long start = System.currentTimeMillis();
            for (; ; ) {
                if (System.currentTimeMillis() - start >= ThreadConstants.WORKER_TIME) {
                    break;
                }
            }
            // sleep for BLOCKING_TIME ms to simulate blocking (e.g. I/O wait)
            TimeUnit.MILLISECONDS.sleep(ThreadConstants.BLOCKING_TIME);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            countDownLatch.countDown();
        }
    }
}

interface ThreadConstants {
    int CORE_POOL_SIZE = 100;
    int MAXIMUM_POOL_SIZE = CORE_POOL_SIZE;
    int KEEP_ALIVE_TIME = 60;
    TimeUnit TIME_UNIT = TimeUnit.SECONDS;
    ThreadFactory THREAD_FACTORY = new NamedThreadFactory("test-blocking-factory");
    BlockingQueue<Runnable> WORKER_QUEUE = new ArrayBlockingQueue<>(1000);
    int TASK_SIZE = 1000;
    int BLOCKING_TIME = 5;
    int WORKER_TIME = 5;
}

class NamedThreadFactory implements ThreadFactory {

    private final ThreadGroup group;
    private final AtomicInteger threadNumber = new AtomicInteger(1);
    private final String namePrefix;

    NamedThreadFactory(String poolName) {
        SecurityManager s = System.getSecurityManager();
        group = (s != null) ? s.getThreadGroup() :
                Thread.currentThread().getThreadGroup();
        namePrefix = "pool-" + poolName + "-thread-";
    }

    public Thread newThread(Runnable r) {
        Thread t = new Thread(group, r,
                namePrefix + threadNumber.getAndIncrement(),
                0);
        if (t.isDaemon())
            t.setDaemon(false);
        if (t.getPriority() != Thread.NORM_PRIORITY)
            t.setPriority(Thread.NORM_PRIORITY);
        return t;
    }
}
```
First, two formulas:

blocking factor = blocking time / (blocking time + CPU time)
pool size = number of CPUs / (1 - blocking factor)

For heavily I/O-bound work the blocking factor typically lands between 0.8 and 0.9. The first formula shows that the larger the blocking factor, the larger the share of time a thread spends blocked, so more threads are needed to keep the CPU fully utilized instead of sitting idle while threads wait.
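Plugging the two experiments below into these formulas can be done in a few lines (the class and method names here are illustrative, separate from the experiment code above):

```java
public class PoolSizeFormula {

    // pool size = cores / (1 - blocking factor),
    // where blocking factor = blockingTime / (blockingTime + cpuTime)
    static int recommendedPoolSize(int cpuCores, double blockingTime, double cpuTime) {
        double blockingFactor = blockingTime / (blockingTime + cpuTime);
        return (int) (cpuCores / (1 - blockingFactor));
    }

    public static void main(String[] args) {
        // experiment 1: 5 ms CPU work + 5 ms blocking on 12 hardware threads
        System.out.println(recommendedPoolSize(12, 5, 5));   // 24
        // experiment 2: 1 ms CPU work + 5 ms blocking
        System.out.println(recommendedPoolSize(12, 5, 1));   // 72
    }
}
```

These two values, 24 and 72, are the recommended pool sizes tested in the experiments that follow.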
Test environment
CPU: 6 cores, 12 hardware threads
1. Experiment 1
1000 tasks, 5 ms of CPU work and 5 ms of blocking per task; blocking factor 0.5, recommended pool size 24
| poolSize | costTime (ms) |
|---|---|
| 12 | 894 |
| 16 | 683 |
| 20 | 564 |
| 24 | 500 |
| 28 | 486 |
| 32 | 492 |
| 100 | 450 |
The costTime column shows that throughput nearly doubles going from 12 threads to 24, but beyond that, adding threads brings only marginal gains.
2. Experiment 2
1000 tasks, 1 ms of CPU work and 5 ms of blocking per task; blocking factor about 0.83, recommended pool size 72
| poolSize | costTime (ms) |
|---|---|
| 12 | 558 |
| 24 | 284 |
| 48 | 147 |
| 60 | 128 |
| 72 | 118 |
| 84 | 115 |
| 100 | 110 |
| 200 | 139 |
Likewise, throughput scales almost linearly from 12 threads to 48, the gains taper off around 72, and at 200 threads processing actually slows down, likely because scheduling and context-switch overhead start to outweigh the extra concurrency.
Summary
The pool size recommended by the blocking-factor formula is best treated as an idealized baseline: real workloads rarely have fixed CPU and blocking times, so measure and tune around it.
For reference only; corrections are welcome.