Threads
Four Ways to Create a Thread
- Extend the Thread class
@Slf4j
public class MyThread extends Thread {
@Override
public void run() {
log.info("my thread");
}
public static void main(String[] args) {
    MyThread myThread = new MyThread();
    // calling run() directly just runs the method on the current thread
    myThread.run();
    // this is what actually starts a new thread
    myThread.start();
}
}
- Implement the Runnable interface
@Slf4j
public class MyRunnable implements Runnable{
@Override
public void run() {
log.info("需要继承其他类的时候记得用我");
}
public static void main(String args[]){
new Thread(new MyRunnable()).start();
}
}
- Use ExecutorService and Callable to create threads that return a value
@AllArgsConstructor
@Slf4j
public class MyCallable implements Callable<Integer> {
private Integer i;
@Override
public Integer call() throws Exception {
log.info("执行线程逻辑"+Thread.currentThread().getName());
return i;
}
public static void main(String[] args) throws Exception {
    ExecutorService executorService = Executors.newFixedThreadPool(5);
    List<Future<Integer>> list = new ArrayList<>();
for (int i = 0; i < 5; i++) {
Future<Integer> future = executorService.submit(new MyCallable(i));
list.add(future);
}
executorService.shutdown();
for (Future<Integer> future : list) {
try {
log.info(future.get(1, TimeUnit.SECONDS).toString());
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
- Thread pool
Thread Pools
Thread reuse: create one Thread and hand it a Runnable; when that task finishes, switch to the next Runnable, keeping the waiting Runnables in a queue. That gives simple thread reuse and avoids the cost of repeatedly creating and destroying threads (a minimal sketch follows).
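A minimal sketch of this reuse idea (the SimpleWorker class is made up for illustration; ThreadPoolExecutor's real worker loop is more involved):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SimpleWorker {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    public void start() {
        new Thread(() -> {
            while (true) {
                try {
                    queue.take().run();   // block until a task arrives, then run it on this same thread
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;               // exit the worker when interrupted
                }
            }
        }).start();
    }

    public void submit(Runnable task) {
        queue.offer(task);                // extra tasks simply wait in the queue
    }
}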
Core constructor
Thread pools involve the core classes Executors, ExecutorService, ThreadPoolExecutor, Callable, Future, and FutureTask, with ThreadPoolExecutor providing the core constructor.
/**
* Creates a new {@code ThreadPoolExecutor} with the given initial
* parameters and default thread factory and rejected execution handler.
* It may be more convenient to use one of the {@link Executors} factory
* methods instead of this general purpose constructor.
*
* @param corePoolSize the number of threads to keep in the pool, even
* if they are idle, unless {@code allowCoreThreadTimeOut} is set
* @param maximumPoolSize the maximum number of threads to allow in the
* pool
* @param keepAliveTime when the number of threads is greater than
* the core, this is the maximum time that excess idle threads
* will wait for new tasks before terminating.
* @param unit the time unit for the {@code keepAliveTime} argument
* @param workQueue the queue to use for holding tasks before they are
* executed. This queue will hold only the {@code Runnable}
* tasks submitted by the {@code execute} method.
* @throws IllegalArgumentException if one of the following holds:<br>
* {@code corePoolSize < 0}<br>
* {@code keepAliveTime < 0}<br>
* {@code maximumPoolSize <= 0}<br>
* {@code maximumPoolSize < corePoolSize}
* @throws NullPointerException if {@code workQueue} is null
*/
public ThreadPoolExecutor(int corePoolSize,
int maximumPoolSize,
long keepAliveTime,
TimeUnit unit,
BlockingQueue<Runnable> workQueue) {
this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
Executors.defaultThreadFactory(), defaultHandler);
}
ThreadPoolExecutor parameters
| Parameter | Description |
|---|---|
| corePoolSize | number of core threads kept in the pool |
| maximumPoolSize | maximum number of threads the pool may hold |
| keepAliveTime | how long idle threads beyond corePoolSize are kept alive |
| unit | time unit of keepAliveTime |
| workQueue | queue holding tasks that are waiting to execute |
| threadFactory | factory used to create worker threads |
| handler | rejection policy (RejectedExecutionHandler) |
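A sketch of wiring all seven parameters together (the class name and the sizes are arbitrary example values, not a recommendation):
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CustomPoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                    // corePoolSize
                4,                                    // maximumPoolSize
                60L, TimeUnit.SECONDS,                // keepAliveTime + unit for idle non-core threads
                new ArrayBlockingQueue<>(100),        // bounded workQueue
                Executors.defaultThreadFactory(),     // threadFactory
                new ThreadPoolExecutor.AbortPolicy()  // handler (rejection policy)
        );
        pool.execute(() -> System.out.println("running on " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}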
Thread pool workflow
- When the pool is created, it sets up the work queue and the bookkeeping needed to manage its worker threads
- execute() is called with a task
- If the number of running threads < corePoolSize, a new core thread is started to run the task
- If running threads >= corePoolSize and the work queue is not full, the task is put into the queue
- If corePoolSize <= running threads < maximumPoolSize and the queue is full, a non-core thread is started
- If running threads >= maximumPoolSize and the queue is full, the rejection policy is applied (default AbortPolicy, which throws RejectedExecutionException)
- When a worker thread finishes its current task, it takes the next task from the queue
- If a thread stays idle longer than keepAliveTime and the number of running threads > corePoolSize, the thread terminates, so the pool eventually shrinks back to corePoolSize (see the sketch after this list)
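A small sketch of that flow (class name and sizes are made up): with corePoolSize=1, maximumPoolSize=2 and a queue of capacity 1, submitting four slow tasks exercises every branch above — task 1 gets the core thread, task 2 is queued, task 3 triggers a non-core thread, task 4 is rejected.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolFlowDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1));
        for (int i = 1; i <= 4; i++) {
            final int id = i;
            try {
                pool.execute(() -> {
                    System.out.println("task " + id + " on " + Thread.currentThread().getName());
                    try { Thread.sleep(3000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                });
            } catch (RejectedExecutionException e) {
                System.out.println("task " + id + " rejected");   // default AbortPolicy throws
            }
        }
        pool.shutdown();
    }
}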
Rejection policies (static nested classes of ThreadPoolExecutor)
- AbortPolicy: throws RejectedExecutionException
/**
* A handler for rejected tasks that throws a
* {@code RejectedExecutionException}.
*/
public static class AbortPolicy implements RejectedExecutionHandler {
/**
* Creates an {@code AbortPolicy}.
*/
public AbortPolicy() { }
/**
* Always throws RejectedExecutionException.
*
* @param r the runnable task requested to be executed
* @param e the executor attempting to execute this task
* @throws RejectedExecutionException always
*/
public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
throw new RejectedExecutionException("Task " + r.toString() +
" rejected from " +
e.toString());
}
}
- CallerRunsPolicy: if the executor has not been shut down, the rejected task is run directly in the calling thread
/**
* A handler for rejected tasks that runs the rejected task
* directly in the calling thread of the {@code execute} method,
* unless the executor has been shut down, in which case the task
* is discarded.
*/
public static class CallerRunsPolicy implements RejectedExecutionHandler {
/**
* Creates a {@code CallerRunsPolicy}.
*/
public CallerRunsPolicy() { }
/**
* Executes task r in the caller's thread, unless the executor
* has been shut down, in which case the task is discarded.
*
* @param r the runnable task requested to be executed
* @param e the executor attempting to execute this task
*/
public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
if (!e.isShutdown()) {
r.run();
}
}
}
- DiscardOldestPolicy: discards the oldest task in the queue and retries executing the current task
/**
* A handler for rejected tasks that discards the oldest unhandled
* request and then retries {@code execute}, unless the executor
* is shut down, in which case the task is discarded.
*/
public static class DiscardOldestPolicy implements RejectedExecutionHandler {
/**
* Creates a {@code DiscardOldestPolicy} for the given executor.
*/
public DiscardOldestPolicy() { }
/**
* Obtains and ignores the next task that the executor
* would otherwise execute, if one is immediately available,
* and then retries execution of task r, unless the executor
* is shut down, in which case task r is instead discarded.
*
* @param r the runnable task requested to be executed
* @param e the executor attempting to execute this task
*/
public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
if (!e.isShutdown()) {
e.getQueue().poll();
e.execute(r);
}
}
}
- DiscardPolicy: silently discards the current task
/**
* A handler for rejected tasks that silently discards the
* rejected task.
*/
public static class DiscardPolicy implements RejectedExecutionHandler {
/**
* Creates a {@code DiscardPolicy}.
*/
public DiscardPolicy() { }
/**
* Does nothing, which has the effect of discarding task r.
*
* @param r the runnable task requested to be executed
* @param e the executor attempting to execute this task
*/
public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
}
}
- Custom handler (do nothing except write a log line)
public static void main(String[] args) throws Exception {
    ThreadPoolExecutor threadPoolExecutor =
        new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(1), new MyDiscardPolicy());
    for (int i = 0; i < 5; i++) {
        threadPoolExecutor.execute(() -> System.out.println(Thread.currentThread().getName()));
    }
    threadPoolExecutor.shutdown();
}
public static class MyDiscardPolicy implements RejectedExecutionHandler {
public MyDiscardPolicy() {}
@Override
public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
log.info("触发拒绝策略");
}
}
Common thread pools (Executors factory methods)
- newCachedThreadPool: no core threads, idle threads are reclaimed after 60 seconds, and the SynchronousQueue holds no elements. Taken together, it suits large volumes of very short tasks and reuses threads effectively.
/**
* Creates a thread pool that creates new threads as needed, but
* will reuse previously constructed threads when they are
* available. These pools will typically improve the performance
* of programs that execute many short-lived asynchronous tasks.
* Calls to {@code execute} will reuse previously constructed
* threads if available. If no existing thread is available, a new
* thread will be created and added to the pool. Threads that have
* not been used for sixty seconds are terminated and removed from
* the cache. Thus, a pool that remains idle for long enough will
* not consume any resources. Note that pools with similar
* properties but different details (for example, timeout parameters)
* may be created using {@link ThreadPoolExecutor} constructors.
*
* @return the newly created thread pool
*/
public static ExecutorService newCachedThreadPool() {
return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
60L, TimeUnit.SECONDS,
new SynchronousQueue<Runnable>());
}
- newFixedThreadPool: core threads = maximum threads, backed by an unbounded LinkedBlockingQueue. A fixed set of threads does the work while everything else waits in the queue.
/**
* Creates a thread pool that reuses a fixed number of threads
* operating off a shared unbounded queue. At any point, at most
* {@code nThreads} threads will be active processing tasks.
* If additional tasks are submitted when all threads are active,
* they will wait in the queue until a thread is available.
* If any thread terminates due to a failure during execution
* prior to shutdown, a new one will take its place if needed to
* execute subsequent tasks. The threads in the pool will exist
* until it is explicitly {@link ExecutorService#shutdown shutdown}.
*
* @param nThreads the number of threads in the pool
* @return the newly created thread pool
* @throws IllegalArgumentException if {@code nThreads <= 0}
*/
public static ExecutorService newFixedThreadPool(int nThreads) {
return new ThreadPoolExecutor(nThreads, nThreads,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>());
}
- newScheduledThreadPool: thread pool for scheduled tasks (schedule() runs a task once after the delay; a periodic variant is sketched below)
public static void main(String[] args) throws Exception {
    ScheduledExecutorService scheduledExecutorService = Executors.newScheduledThreadPool(1);
    scheduledExecutorService.schedule(() -> System.out.println("runs once, 1 second later"), 1, TimeUnit.SECONDS);
}
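For a genuinely periodic task, ScheduledExecutorService also provides scheduleAtFixedRate/scheduleWithFixedDelay. A minimal sketch (class name is made up; the pool keeps running until it is shut down):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduledDemo {
    public static void main(String[] args) {
        ScheduledExecutorService ses = Executors.newScheduledThreadPool(1);
        // initial delay of 1 second, then run every 1 second
        ses.scheduleAtFixedRate(() -> System.out.println("once per second"), 1, 1, TimeUnit.SECONDS);
    }
}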
- newSingleThreadExecutor: essentially newFixedThreadPool(1), but wrapped so the returned executor cannot be reconfigured to use more threads
/**
* Creates an Executor that uses a single worker thread operating
* off an unbounded queue. (Note however that if this single
* thread terminates due to a failure during execution prior to
* shutdown, a new one will take its place if needed to execute
* subsequent tasks.) Tasks are guaranteed to execute
* sequentially, and no more than one task will be active at any
* given time. Unlike the otherwise equivalent
* {@code newFixedThreadPool(1)} the returned executor is
* guaranteed not to be reconfigurable to use additional threads.
*
* @return the newly created single-threaded Executor
*/
public static ExecutorService newSingleThreadExecutor() {
return new FinalizableDelegatedExecutorService
(new ThreadPoolExecutor(1, 1,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>()));
}
- newWorkStealingPool: returns a ForkJoinPool whose parallelism defaults to the number of available processors; idle threads steal queued work from busy ones to finish computations faster
/**
* Creates a work-stealing thread pool using all
* {@link Runtime#availableProcessors available processors}
* as its target parallelism level.
* @return the newly created thread pool
* @see #newWorkStealingPool(int)
* @since 1.8
*/
public static ExecutorService newWorkStealingPool() {
return new ForkJoinPool
(Runtime.getRuntime().availableProcessors(),
ForkJoinPool.defaultForkJoinWorkerThreadFactory,
null, true);
}
- Alibaba Java guidelines (p3c): always use a thread pool, and create it via ThreadPoolExecutor with explicit parameters rather than the Executors factory methods!
Thread life cycle
| State | Description |
|---|---|
| New | new Thread() has been called |
| Runnable | start() has been called; waiting to be scheduled |
| Running | the thread gets the CPU and executes run() |
| Blocked | the thread gives up the CPU (voluntarily or not) and waits |
| Dead | normal completion / uncaught exception / forced termination (stop() is discouraged) |
Blocking
- Wait blocking: wait()
- Synchronization blocking: waiting for a synchronized monitor
- Other blocking: sleep(), join() (waiting for another thread to finish), I/O
Common thread methods
- wait: a method on Object; releases the monitor and blocks the thread
- notify/notifyAll: methods on Object; notify wakes one arbitrary thread waiting on the object's monitor, notifyAll wakes all of them
- sleep: a method on Thread; does not release any lock
- yield: gives up the CPU and competes for it again together with the other threads
- interrupt: only sets the interrupt flag; whether the thread actually stops depends on how the program reacts to that flag, as in the example below
@Slf4j
public class SafeInterruptThread extends Thread {
@Override
public void run() {
if (!Thread.currentThread().isInterrupted()) {
try {
Thread.sleep(1000);
log.info("业务逻辑1");
Thread.sleep(2000);
log.info("业务逻辑2");
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
} else {
log.info("资源释放");
}
}
public static void main(String[] args) throws Exception {
SafeInterruptThread safeInterruptThread = new SafeInterruptThread();
safeInterruptThread.start();
Thread.sleep(2000);
log.info("执行中断");
safeInterruptThread.interrupt();
}
}
- join: calling B.join() inside thread A blocks A until B finishes, after which A becomes runnable again
- setDaemon (daemon thread): ends automatically once there are no user threads left to serve, e.g. the garbage-collection thread
Differences between sleep and wait
- sleep is Thread.sleep, wait is Object.wait
- sleep pauses for the given time and yields the CPU but keeps any monitor it holds; when the time is up the thread becomes runnable again
- wait releases the monitor and yields the CPU, entering the object's wait set; once notified it competes for the monitor again and, after re-acquiring it, becomes runnable (sketch below)
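A minimal wait/notify sketch (class name made up) showing that wait releases the monitor, which is exactly what lets the notifier enter the same synchronized block:
public class WaitNotifyDemo {
    private static final Object LOCK = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (LOCK) {
                try {
                    System.out.println("waiting, monitor released");
                    LOCK.wait();                       // releases LOCK and blocks
                    System.out.println("notified, monitor re-acquired");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        waiter.start();
        Thread.sleep(500);                             // sleep never releases a monitor (none held here)
        synchronized (LOCK) {
            LOCK.notify();                             // wake one thread waiting on LOCK
        }
    }
}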
synchronized & volatile
A few concepts
- Visibility: a write to a shared variable by one thread can be seen by other threads in a timely manner
- Shared variable: a variable that has a copy in the working memory of several threads is shared by those threads
- Java Memory Model (JMM): describes the access rules for variables in a Java program and how the JVM stores variables to and reads them from memory at a low level
- Rules
- A thread must perform all operations on a shared variable in its own working memory; it cannot read or write main memory directly
- Threads cannot access each other's working memory; values are passed between threads through main memory
Why writes may not be visible
- Interleaved execution of threads
- Reordering (the compiler/processor executes code in a different order from the written order for performance) combined with interleaving
- The shared variable's value is not synchronized between working memory and main memory in time (a classic sketch follows)
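A classic sketch of the visibility problem (class name made up): with a plain boolean the reader thread may never see the writer's update; volatile forces the re-read from main memory.
public class VisibilityDemo {
    // remove volatile and the reader may cache 'running' and loop forever
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) {
                // busy loop; with a non-volatile flag this change might never become visible
            }
            System.out.println("reader saw running = false and stopped");
        });
        reader.start();
        Thread.sleep(1000);
        running = false;   // the volatile write is flushed to main memory and becomes visible
    }
}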
synchronized
- Rules
- Before a thread releases the lock, it must flush the latest value of the shared variable back to main memory
- When a thread acquires the lock, it clears the shared variable from its working memory and must re-read the latest value from main memory before using it
- Process
- Acquire the mutex
- Clear the working memory
- Copy the latest value of the variable from main memory into working memory
- Execute the code
- Flush the modified shared variable back to main memory
- Release the mutex
volatile
- Implemented by inserting memory barriers and forbidding reordering optimizations
- A store barrier is inserted after every write to a volatile variable
- A load barrier is inserted before every read of a volatile variable
- Each time a thread accesses a volatile variable it is forced to re-read the value from main memory, and each time it changes the variable it is forced to flush the latest value back to main memory
- volatile cannot guarantee atomicity
private volatile int number = 0;
1. volatile cannot guarantee atomicity (which is why AQS updates state through CAS):
number++; // three steps: 1. read 2. add 1 3. write back
2. synchronized guarantees atomicity as well as visibility:
synchronized (this) {
    number++;
}
3. An explicit lock works too:
private Lock lock = new ReentrantLock();
lock.lock();
try {
    number++;
} catch (Exception e) {
    e.printStackTrace();
} finally {
    lock.unlock();
}
4. AtomicInteger (sketched below)
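A minimal sketch of option 4 (class name made up): AtomicInteger does the read-modify-write atomically via CAS, so two threads never lose an increment.
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo {
    private static final AtomicInteger number = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                number.incrementAndGet();   // atomic increment via CAS, no lock needed
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(number.get());   // always 20000, unlike a plain volatile int
    }
}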
- Scenarios where volatile is appropriate
- The write to the variable does not depend on its current value
- a++ or a = a * 5 does not qualify
- a boolean flag does qualify
- The variable is not part of an invariant that involves other variables
- e.g. an invariant such as low < up does not qualify
AQS
AQS (AbstractQueuedSynchronizer) is an abstract queued synchronizer built on a volatile int state plus a FIFO wait queue. Apart from synchronized, most locks such as ReentrantLock are implemented on top of AQS.
Synchronization state (state)
- Definition
private volatile int state;
- Methods
| Method | Description |
|---|---|
| protected final int getState() | read the state value |
| protected final void setState(int newState) | set the state value |
| protected final boolean compareAndSetState(int expect, int update) | update state via CAS |
Methods that must be overridden for exclusive mode
| Method | Description |
|---|---|
| protected boolean tryAcquire(int arg) | try to acquire the exclusive lock |
| protected boolean tryRelease(int arg) | release the exclusive lock |
| protected boolean isHeldExclusively() | whether the current thread holds the lock |
Exclusive lock example
@Slf4j
public class MyLock {
private static class Sync extends AbstractQueuedSynchronizer {
/**
 * Exclusive acquire: if state is currently 0, CAS it from 0 to 1; success means the
 * lock has been obtained and true is returned, otherwise false.
 *
 * @param arg acquire argument (ignored by this simple lock)
 * @return whether the lock was acquired
 */
@Override
protected boolean tryAcquire(int arg) {
return compareAndSetState(0, 1);
}
/**
 * Exclusive release: reset state to 0.
 *
 * @param arg release argument (ignored by this simple lock)
 * @return always true
 */
@Override
protected boolean tryRelease(int arg) {
setState(0);
return true;
}
/**
 * Whether this synchronizer is currently held in exclusive mode.
 *
 * @return true when state == 1
 */
@Override
protected boolean isHeldExclusively() {
return getState() == 1;
}
}
private final static Sync sync = new Sync();
public void lock() {
sync.acquire(1);
}
public void unlock() {
sync.release(1);
}
}
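A small usage sketch of the MyLock above (demo class and counts are made up): two threads increment a shared counter under the lock.
public class MyLockDemo {
    private static final MyLock LOCK = new MyLock();
    private static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                LOCK.lock();          // delegates to sync.acquire(1) -> tryAcquire
                try {
                    count++;
                } finally {
                    LOCK.unlock();    // delegates to sync.release(1) -> tryRelease
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(count);    // 20000 when mutual exclusion holds
    }
}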
Methods that must be overridden for shared mode
| Method | Description |
|---|---|
| protected int tryAcquireShared(int arg) | try to acquire the shared lock (success when the returned value >= 0) |
| protected boolean tryReleaseShared(int arg) | release the shared lock |
Shared lock example
@Slf4j
public class MySharedLock {
private static class Sync extends AbstractQueuedSynchronizer {
public Sync() {
super();
// set the initial synchronization state (2 permits)
setState(2);
}
/**
 * Shared acquire: CAS state down by arg; if the result stays >= 0 the acquire succeeds.
 *
 * @param arg number of permits to acquire
 * @return remaining permits on success, a negative value on failure
 */
@Override
protected int tryAcquireShared(int arg) {
while (true) {
int cur = getState();
int next = getState() - arg;
if (next < 0) {
return -1;
}
if (compareAndSetState(cur, next)) {
log.info("获取锁后state:{}", next);
return next;
}
}
}
/**
 * Shared release: CAS state up by arg; success means the permits were returned.
 *
 * @param arg number of permits to release
 * @return true once the CAS succeeds
 */
@Override
protected boolean tryReleaseShared(int arg) {
while (true) {
int cur = getState();
int next = cur + arg;
if (compareAndSetState(cur, next)) {
return true;
}
}
}
}
private final static Sync sync = new Sync();
public void lock() {
sync.acquireShared(1);
}
public void unlock() {
boolean b = sync.releaseShared(1);
}
}
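A small usage sketch of MySharedLock above (demo class is made up): state starts at 2, so at most two threads hold the lock at once and the third blocks until a permit is released.
public class MySharedLockDemo {
    private static final MySharedLock LOCK = new MySharedLock();

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                LOCK.lock();                      // acquireShared(1): blocks when state would go below 0
                try {
                    System.out.println(Thread.currentThread().getName() + " got a permit");
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    LOCK.unlock();                // releaseShared(1): returns the permit
                }
            }).start();
        }
    }
}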
The Node class
Main fields of the static nested class Node
volatile int waitStatus;
volatile Node prev;
volatile Node next;
volatile Thread thread;
Node nextWaiter;
The waitStatus field
| Constant | Value | Meaning |
|---|---|---|
| CANCELLED | 1 | thread has cancelled |
| (no constant) | 0 | initial value |
| SIGNAL | -1 | successor's thread needs unparking |
| CONDITION | -2 | thread is waiting on condition |
| PROPAGATE | -3 | the next acquireShared should unconditionally propagate |
Head and tail pointers of the wait queue
/**
* Head of the wait queue, lazily initialized. Except for
* initialization, it is modified only via method setHead. Note:
* If head exists, its waitStatus is guaranteed not to be
* CANCELLED.
*/
private transient volatile Node head;
/**
* Tail of the wait queue, lazily initialized. Modified only via
* method enq to add new wait node.
*/
private transient volatile Node tail;
Walking through the exclusive lock above (MyLock)
- Locking
public void lock() {
sync.acquire(1);
}
- acquire is called
public final void acquire(int arg) {
if (!tryAcquire(arg) &&
acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
selfInterrupt();
}
- acquire uses a short-circuit &&: if tryAcquire(arg) returns true the lock is obtained and the whole process ends; below we follow the failure path
- addWaiter is called to wrap the current thread in a Node and append it to the FIFO queue; first consider the case where no head node exists yet
private Node addWaiter(Node mode) {
Node node = new Node(Thread.currentThread(), mode);
// Try the fast path of enq; backup to full enq on failure
Node pred = tail;
if (pred != null) {
node.prev = pred;
if (compareAndSetTail(pred, node)) {
pred.next = node;
return node;
}
}
enq(node);
return node;
}
- enq runs: if there is no head node yet, a dummy head node is CASed in first; on the second pass of the loop the head exists and the current thread's node is appended as the second node of the FIFO queue
private Node enq(final Node node) {
for (;;) {
Node t = tail;
if (t == null) { // Must initialize
if (compareAndSetHead(new Node()))
tail = head;
} else {
node.prev = t;
if (compareAndSetTail(t, node)) {
t.next = node;
return t;
}
}
}
}
- After enq finishes, the FIFO queue consists of a dummy head node followed by the current thread's node
- That completes addWaiter; next comes acquireQueued, which defines how the node spins to compete for the lock
- acquireQueued is called
final boolean acquireQueued(final Node node, int arg) {
boolean failed = true;
try {
boolean interrupted = false;
for (;;) {
final Node p = node.predecessor();
if (p == head && tryAcquire(arg)) {
setHead(node);
p.next = null; // help GC
failed = false;
return interrupted;
}
if (shouldParkAfterFailedAcquire(p, node) &&
parkAndCheckInterrupt())
interrupted = true;
}
} finally {
if (failed)
cancelAcquire(node);
}
}
- Get the node's predecessor; if it is the head node, try to acquire the lock, and on success call setHead
private void setHead(Node node) {
head = node;
node.thread = null;
node.prev = null;
}
- The current node then becomes the head. Otherwise shouldParkAfterFailedAcquire runs: the predecessor's waitStatus is still the initial value 0, so compareAndSetWaitStatus CASes it to SIGNAL and the method returns false; on the next loop iteration it returns true and parkAndCheckInterrupt is executed
private static boolean shouldParkAfterFailedAcquire(Node pred, Node node) {
int ws = pred.waitStatus;
if (ws == Node.SIGNAL)
/*
* This node has already set status asking a release
* to signal it, so it can safely park.
*/
return true;
if (ws > 0) {
/*
* Predecessor was cancelled. Skip over predecessors and
* indicate retry.
*/
do {
node.prev = pred = pred.prev;
} while (pred.waitStatus > 0);
pred.next = node;
} else {
/*
* waitStatus must be 0 or PROPAGATE. Indicate that we
* need a signal, but don't park yet. Caller will need to
* retry to make sure it cannot acquire before parking.
*/
compareAndSetWaitStatus(pred, ws, Node.SIGNAL);
}
return false;
}
- parkAndCheckInterrupt parks (blocks) the current thread with LockSupport.park; when the thread is later unparked it returns the thread's interrupt status
private final boolean parkAndCheckInterrupt() {
LockSupport.park(this);
return Thread.interrupted();
}
- If the spin fails with an exception, cancelAcquire runs in the finally block and gives up the current node
/**
* Cancels an ongoing attempt to acquire.
*
* @param node the node
*/
private void cancelAcquire(Node node) {
// Ignore if node doesn't exist
if (node == null)
return;
node.thread = null;
// Skip cancelled predecessors
Node pred = node.prev;
while (pred.waitStatus > 0)
node.prev = pred = pred.prev;
// predNext is the apparent node to unsplice. CASes below will
// fail if not, in which case, we lost race vs another cancel
// or signal, so no further action is necessary.
Node predNext = pred.next;
// Can use unconditional write instead of CAS here.
// After this atomic step, other Nodes can skip past us.
// Before, we are free of interference from other threads.
node.waitStatus = Node.CANCELLED;
// If we are the tail, remove ourselves.
if (node == tail && compareAndSetTail(node, pred)) {
compareAndSetNext(pred, predNext, null);
} else {
// If successor needs signal, try to set pred's next-link
// so it will get one. Otherwise wake it up to propagate.
int ws;
if (pred != head &&
((ws = pred.waitStatus) == Node.SIGNAL ||
(ws <= 0 && compareAndSetWaitStatus(pred, ws, Node.SIGNAL))) &&
pred.thread != null) {
Node next = node.next;
if (next != null && next.waitStatus <= 0)
compareAndSetNext(pred, predNext, next);
} else {
unparkSuccessor(node);
}
node.next = node; // help GC
}
}
- Unlocking
public void unlock() {
sync.release(1);
}
- release is called
public final boolean release(int arg) {
if (tryRelease(arg)) {
Node h = head;
if (h != null && h.waitStatus != 0)
unparkSuccessor(h);
return true;
}
return false;
}
- unparkSuccessor is called to wake the next non-cancelled node in the queue so its thread can resume
private void unparkSuccessor(Node node) {
/*
* If status is negative (i.e., possibly needing signal) try
* to clear in anticipation of signalling. It is OK if this
* fails or if status is changed by waiting thread.
*/
int ws = node.waitStatus;
if (ws < 0)
compareAndSetWaitStatus(node, ws, 0);
/*
* Thread to unpark is held in successor, which is normally
* just the next node. But if cancelled or apparently null,
* traverse backwards from tail to find the actual
* non-cancelled successor.
*/
Node s = node.next;
if (s == null || s.waitStatus > 0) {
s = null;
for (Node t = tail; t != null && t != node; t = t.prev)
if (t.waitStatus <= 0)
s = t;
}
if (s != null)
LockSupport.unpark(s.thread);
}
Locks in Java
Lock categories
- Optimistic vs. pessimistic: optimistic lock / pessimistic lock
- Fairness: fair lock / unfair lock
- Resource sharing: shared lock / exclusive lock
- Lock state (synchronized): biased lock / lightweight lock / heavyweight lock
- Others: spin lock, etc.
Optimistic lock
Assumes that nobody else will modify the data while it is being read and only checks at update time whether someone changed it in the meantime. It suffers from the ABA problem, which is usually solved with a version number, as in the sketch below.
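A sketch of the version-number idea using AtomicStampedReference, which pairs the value with a stamp so an A->B->A change is still detected (class name and values are made up):
import java.util.concurrent.atomic.AtomicStampedReference;

public class OptimisticDemo {
    public static void main(String[] args) {
        // value 100 with version 0; every update must also bump the version
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);

        int stamp = ref.getStamp();            // read the current version
        Integer value = ref.getReference();    // read the current value

        // the CAS only succeeds if both the value AND the version are unchanged
        boolean updated = ref.compareAndSet(value, value + 1, stamp, stamp + 1);
        System.out.println("updated=" + updated + ", value=" + ref.getReference());
    }
}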
Pessimistic lock
Assumes every read and write may conflict with other threads, so both reads and writes take the lock. Most pessimistic locks in Java are built on AQS: they first try to grab the lock with CAS and, if that fails, fall back to blocking in the wait queue.
Spin lock
Assumes the thread holding the lock will release it very soon, so the current thread simply busy-waits for a short while.
- Pros: avoids CPU context switches; under light contention it can improve performance noticeably (the spin costs less CPU time than suspending the thread and waking it up again, which takes two context switches)
- Cons: under heavy contention, long spins that never obtain the lock waste CPU
- Spin threshold: a fixed duration in JDK 1.5; since 1.6 adaptive spinning decides based on the previous spin time and the state of the lock owner
synchronized
- synchronized on a non-static method locks the instance object (the same as synchronized(this))
@Slf4j
public class SynchronizedDemo {
public synchronized void test1() {
log.info("执行方法1");
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
log.info("方法1完毕");
}
public synchronized void test2() {
log.info("执行方法2");
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
log.info("方法2完毕");
}
public static void main(String args[]) {
SynchronizedDemo synchronizedDemo = new SynchronizedDemo();
new Thread(new Runnable() {
@Override
public void run() {
synchronizedDemo.test1();
}
}).start();
new Thread(new Runnable() {
@Override
public void run() {
synchronizedDemo.test2();
}
}).start();
}
}
Result: both threads call synchronized methods on the same instance, so they serialize; method 2 only starts after method 1 finishes about a second later.
- synchronized on a static method locks the Class object
@Slf4j
public class SynchronizedDemo {
public static synchronized void test1() {
log.info("执行方法1");
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
log.info("方法1完毕");
}
public static synchronized void test2() {
log.info("执行方法2");
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
log.info("方法2完毕");
}
public static void main(String args[]) {
new Thread(new Runnable() {
@Override
public void run() {
test1();
}
}).start();
new Thread(new Runnable() {
@Override
public void run() {
test2();
}
}).start();
}
}
Result: both static methods lock SynchronizedDemo.class, so the two threads still serialize.
- synchronized on a block, synchronized(obj) { ... }, locks the object named inside the parentheses
How synchronized is implemented (HotSpot monitor)
| Region | Description |
|---|---|
| ContentionList | contention queue; every thread requesting the lock is placed here first |
| EntryList | candidate list; threads from ContentionList that are eligible to compete for the lock |
| WaitSet | threads blocked by wait() |
| OnDeck | the single candidate that is allowed to compete for the lock at any one moment |
| Owner | the thread currently holding the lock |
| !Owner | the thread that has just released the lock |
When the owner releases the lock, part of ContentionList is moved into EntryList and one thread is designated OnDeck; the OnDeck thread then competes for the lock together with threads that have just arrived (every new thread first spins for the lock and enters ContentionList only if that fails).
ReentrantLock
ReentrantLock is an exclusive lock built on AQS and is reentrant: it simply has to be released as many times as it was acquired. Because AQS supports interruptible and timed acquisition, ReentrantLock does too, which gives an effective way to avoid deadlock (see the example below).
| Method | Description |
|---|---|
| tryLock | try to acquire the lock and return immediately |
| tryLock(long timeout, TimeUnit unit) | try to acquire the lock within the given time |
| lock | block until the lock is acquired |
| lockInterruptibly | acquire the lock but throw InterruptedException if interrupted while waiting |
public static void main(String args[]) {
ReentrantLock lock1 = new ReentrantLock();
ReentrantLock lock2 = new ReentrantLock();
new Thread(
new Runnable() {
@Override
public void run() {
try {
log.info("线程1执行");
lock1.lockInterruptibly();
Thread.sleep(1000);
lock2.lockInterruptibly();
log.info("线程1完毕");
} catch (Exception e) {
e.printStackTrace();
} finally {
if (lock1.isHeldByCurrentThread()) {
lock1.unlock();
}
if (lock2.isHeldByCurrentThread()) {
lock2.unlock();
}
}
}
})
.start();
Thread thread =
new Thread(
new Runnable() {
@Override
public void run() {
try {
log.info("线程2执行");
lock2.lockInterruptibly();
Thread.sleep(1000);
lock1.lockInterruptibly();
log.info("线程1完毕");
} catch (Exception e) {
e.printStackTrace();
} finally {
if (lock2.isHeldByCurrentThread()) {
lock2.unlock();
}
if (lock1.isHeldByCurrentThread()) {
lock1.unlock();
}
}
}
});
thread.start();
try {
Thread.sleep(5000);
} catch (InterruptedException e) {
e.printStackTrace();
}
thread.interrupt();
}
Result: the two threads deadlock on each other's lock; after 5 seconds the main thread interrupts thread 2, lockInterruptibly throws InterruptedException, the finally block releases its lock, and thread 1 can then finish.
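tryLock with a timeout is another way to back out instead of deadlocking; a minimal sketch (class name and timing are made up):
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();   // main thread holds the lock so the worker cannot get it

        Thread worker = new Thread(() -> {
            try {
                // give up after 1 second instead of blocking forever
                if (lock.tryLock(1, TimeUnit.SECONDS)) {
                    try {
                        System.out.println("worker got the lock");
                    } finally {
                        lock.unlock();
                    }
                } else {
                    System.out.println("worker gave up after the timeout");
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();
        worker.join();
        lock.unlock();
    }
}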
synchronized & ReentrantLock
Similarities
- Both are reentrant locks
Differences
- synchronized is implemented at the JVM level, ReentrantLock at the API level
- synchronized acquires and releases the lock implicitly; ReentrantLock does so explicitly and offers more flexible mechanisms such as fair locking, interruptible and timed acquisition
- ReentrantLock can bind multiple wait conditions through Condition (sketched below) // TODO
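Filling in the TODO above with a minimal Condition sketch (names are made up): await releases the lock much like wait, and signal wakes a thread waiting on that particular condition.
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionDemo {
    private static final ReentrantLock lock = new ReentrantLock();
    private static final Condition notEmpty = lock.newCondition();
    private static boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread consumer = new Thread(() -> {
            lock.lock();
            try {
                while (!ready) {
                    notEmpty.await();           // releases the lock while waiting
                }
                System.out.println("consumer woke up");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                lock.unlock();
            }
        });
        consumer.start();
        Thread.sleep(500);
        lock.lock();
        try {
            ready = true;
            notEmpty.signal();                  // wakes a thread waiting on this condition
        } finally {
            lock.unlock();
        }
    }
}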
Semaphore
A shared lock built on AQS: the number of permits is specified at construction, acquire() takes a permit and release() returns it (sketch below).
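A minimal Semaphore sketch (class name and permit count are made up):
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    public static void main(String[] args) {
        Semaphore semaphore = new Semaphore(2);   // 2 permits -> at most 2 threads inside at once
        for (int i = 0; i < 5; i++) {
            new Thread(() -> {
                try {
                    semaphore.acquire();          // blocks when no permit is available
                    System.out.println(Thread.currentThread().getName() + " working");
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    semaphore.release();          // give the permit back
                }
            }).start();
        }
    }
}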
ReentrantReadWriteLock
Reads and writes exclude each other, reads share with other reads, and writes exclude other writes (sketch below).
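A minimal ReentrantReadWriteLock sketch (class name is made up): concurrent reads are allowed, writes are exclusive.
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteDemo {
    private static final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private static int value = 0;

    public static int read() {
        rw.readLock().lock();          // many readers may hold this at the same time
        try {
            return value;
        } finally {
            rw.readLock().unlock();
        }
    }

    public static void write(int v) {
        rw.writeLock().lock();         // exclusive: blocks all readers and other writers
        try {
            value = v;
        } finally {
            rw.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        write(42);
        System.out.println(read());
    }
}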