GCD Source Code Exploration (Part 2)


Tags: multithreading, GCD, GCD source code


This article covers the following parts of the GCD source code:

  1. GCD barrier functions: exploring the underlying implementation
  2. GCD semaphores: exploring the underlying implementation
  3. GCD dispatch groups: exploring the underlying implementation
  4. GCD dispatch_source

Preparation: download the GCD (libdispatch) source code.

1 GCD Barrier Functions: Exploring the Underlying Implementation

The most direct purpose of a barrier function is to control the order in which tasks execute, effectively making them run in sequence. GCD provides two commonly used barrier functions:

  • Synchronous barrier, dispatch_barrier_sync (runs on the calling thread): it does not execute until every task submitted before it has finished, and it also blocks the calling thread, so the code after it has to wait.
  • Asynchronous barrier, dispatch_barrier_async: it does not execute until every task submitted before it has finished, but it returns immediately and does not block the calling thread.

Things to note about barrier functions

  • A barrier function only orders the tasks within one and the same concurrent queue.
  • When a synchronous barrier is submitted to the queue, the current thread is locked; only after the tasks ahead of the barrier and the barrier task itself have finished does the current thread resume and execute the next line of code.
  • A barrier function is only meaningful on a custom concurrent queue. On a serial queue, or on one of the system-provided global concurrent queues, it degenerates into an ordinary dispatch call and the barrier semantics are lost.

Here is a code example. There are four tasks in total, and the first two have a dependency: task 2 must run after task 1 finishes. A barrier function handles this.

  • The asynchronous barrier does not block the main thread; what an asynchronous barrier blocks is the queue.
    dispatch_queue_t concurrentQueue = dispatch_queue_create("com.ypy.barrier", DISPATCH_QUEUE_CONCURRENT);
    dispatch_async(concurrentQueue, ^{
        NSLog(@"task 1");
    });
    dispatch_barrier_async(concurrentQueue, ^{
        NSLog(@"dispatch_barrier_async task 2");
    });
    dispatch_async(concurrentQueue, ^{
        NSLog(@"task 3");
    });
    NSLog(@"task 4");
2021-08-04 15:54:36.465715+0800 001---函数与队列[34852:1997051] task 4
2021-08-04 15:54:36.465721+0800 001---函数与队列[34852:1997229] task 1
2021-08-04 15:54:36.465834+0800 001---函数与队列[34852:1997229] dispatch_barrier_async task 2
2021-08-04 15:54:36.465937+0800 001---函数与队列[34852:1997229] task 3

The synchronous barrier does block the main thread; what a synchronous barrier blocks is the current (calling) thread.

  dispatch_queue_t concurrentQueue = dispatch_queue_create("com.ypy.barrier", DISPATCH_QUEUE_CONCURRENT);
    dispatch_async(concurrentQueue, ^{
        NSLog(@"task 1");
    });
    dispatch_barrier_sync(concurrentQueue, ^{
        NSLog(@"dispatch_barrier_sync task 2");
    });
    dispatch_async(concurrentQueue, ^{
        NSLog(@"task 3");
    });
    NSLog(@"task 4");
2021-08-04 15:57:38.095946+0800 001---函数与队列[34897:2000657] task 1
2021-08-04 15:57:38.096087+0800 001---函数与队列[34897:2000511] dispatch_barrier_sync task 2
2021-08-04 15:57:38.096253+0800 001---函数与队列[34897:2000511] task 4
2021-08-04 15:57:38.096269+0800 001---函数与队列[34897:2000657] task 3

Summary: an asynchronous barrier blocks the queue, which must be a custom concurrent queue, and it does not affect the execution of tasks on the main thread; a synchronous barrier blocks the calling thread (here the main thread), so other work on that thread has to wait. Use cases: besides ordering dependent tasks, barrier functions can also be used for data safety. NSMutableArray is not thread-safe, so code like the following can crash:

   NSMutableArray *array = [[NSMutableArray alloc] init];
        dispatch_queue_t concurrentQueue = dispatch_queue_create("com.ypy.barrier", DISPATCH_QUEUE_CONCURRENT);
    for (NSInteger index = 0; index < 1000; index++) {
        dispatch_async(concurrentQueue, ^{
            [array addObject:[NSString stringWithFormat:@"%ld",(long)index]];
        });
    }

Fixing the crash

  • By adding a barrier function
    NSMutableArray *array = [[NSMutableArray alloc] init];
        dispatch_queue_t concurrentQueue = dispatch_queue_create("com.ypy.barrier", DISPATCH_QUEUE_CONCURRENT);
    for (NSInteger index = 0; index < 1000; index++) {
        dispatch_async(concurrentQueue, ^{
            dispatch_barrier_async(concurrentQueue, ^{
                [array addObject:[NSString stringWithFormat:@"%ld",(long)index]];
            });
        });
    }
  • By using the mutex @synchronized (self) {}
     NSMutableArray *array = [[NSMutableArray alloc] init];
        dispatch_queue_t concurrentQueue = dispatch_queue_create("com.ypy.barrier", DISPATCH_QUEUE_CONCURRENT);
    for (NSInteger index = 0; index < 1000; index++) {
        dispatch_async(concurrentQueue, ^{
            @synchronized (self) {
                [array addObject:[NSString stringWithFormat:@"%ld",(long)index]];
            };
        });
    }

Note

  • If the barrier is used with a global concurrent queue, it does not protect anything: the system itself uses the global concurrent queues, so GCD does not let a barrier block them, and the barrier block behaves like an ordinary dispatch, which means the crash above can still occur (see the sketch below).
  • If the custom concurrent queue is replaced with a serial queue, the serial queue already runs its tasks one by one in order, so adding a barrier only wastes performance.
  • A barrier function only blocks once.
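
To make the global-queue caveat above concrete, here is a minimal sketch (queue choice and log strings are mine) showing that a barrier submitted to a global concurrent queue degenerates into an ordinary dispatch_async, so it no longer orders the surrounding tasks:

    dispatch_queue_t globalQueue = dispatch_get_global_queue(0, 0);
    dispatch_async(globalQueue, ^{
        NSLog(@"task 1");
    });
    dispatch_barrier_async(globalQueue, ^{
        // On a global queue this behaves like dispatch_async: no barrier semantics
        NSLog(@"barrier task");
    });
    dispatch_async(globalQueue, ^{
        NSLog(@"task 2");
    });
    // The three log lines can appear in any order.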

1.1 dispatch_barrier_async source analysis

#ifdef __BLOCKS__
void
dispatch_barrier_async(dispatch_queue_t dq, dispatch_block_t work)
{
    dispatch_continuation_t dc = _dispatch_continuation_alloc();
    uintptr_t dc_flags = DC_FLAG_CONSUME | DC_FLAG_BARRIER;
    dispatch_qos_t qos;

    qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
    _dispatch_continuation_async(dq, dc, qos, dc_flags);
}
#endif

The dispatch_barrier_async implementation above is very similar to dispatch_async, so it is not analyzed again here; feel free to explore it on your own.

1.2 dispatch_barrier_sync source analysis

The dispatch_barrier_sync source looks like this:

void
dispatch_barrier_sync(dispatch_queue_t dq, dispatch_block_t work)
{
    uintptr_t dc_flags = DC_FLAG_BARRIER | DC_FLAG_BLOCK;
    if (unlikely(_dispatch_block_has_private_data(work))) {
        return _dispatch_sync_block_with_privdata(dq, work, dc_flags);
    }
    _dispatch_barrier_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}
DISPATCH_NOINLINE
static void
_dispatch_barrier_sync_f(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func, uintptr_t dc_flags)
{
	_dispatch_barrier_sync_f_inline(dq, ctxt, func, dc_flags);
}

1.2.1 _dispatch_barrier_sync_f_inline

Following _dispatch_barrier_sync_f -> _dispatch_barrier_sync_f_inline:

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_barrier_sync_f_inline(dispatch_queue_t dq, void *ctxt,
        dispatch_function_t func, uintptr_t dc_flags)
{
    dispatch_tid tid = _dispatch_tid_self();//get the thread id, i.e. the thread's unique identifier
    
    ...
    
    //check the queue state: does the caller need to wait, has the state been reclaimed
    if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(dl, tid))) {//a barrier function can deadlock too
        return _dispatch_sync_f_slow(dl, ctxt, func, DC_FLAG_BARRIER, dl,//not reclaimed
                DC_FLAG_BARRIER | dc_flags);
    }
    //check whether a target queue exists; if so, recurse through the barrier's targets to decide whether to wait
    if (unlikely(dl->do_targetq->do_targetq)) {
        return _dispatch_sync_recurse(dl, ctxt, func,
                DC_FLAG_BARRIER | dc_flags);
    }
    _dispatch_introspection_sync_begin(dl);
    _dispatch_lane_barrier_sync_invoke_and_complete(dl, ctxt, func
            DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
                    dq, ctxt, func, dc_flags | DC_FLAG_BARRIER)));//execute
}

The implementation breaks down into the following parts:

  • Get the thread ID via _dispatch_tid_self
  • Check the queue state via _dispatch_queue_try_acquire_barrier_sync
 DISPATCH_ALWAYS_INLINE DISPATCH_WARN_RESULT
static inline bool
_dispatch_queue_try_acquire_barrier_sync(dispatch_queue_class_t dq, uint32_t tid)
{
	return _dispatch_queue_try_acquire_barrier_sync_and_suspend(dq._dl, tid, 0);
}
  • Step into _dispatch_queue_try_acquire_barrier_sync_and_suspend, where the state is checked and the loop gives up if the queue is not completely idle
/* Used by _dispatch_barrier_{try,}sync
 *
 * Note, this fails if any of e:1 or dl!=0, but that allows this code to be a
 * simple cmpxchg which is significantly faster on Intel, and makes a
 * significant difference on the uncontended codepath.
 *
 * See discussion for DISPATCH_QUEUE_DIRTY in queue_internal.h
 *
 * Initial state must be `completely idle`
 * Final state forces { ib:1, qf:1, w:0 }
 */
DISPATCH_ALWAYS_INLINE DISPATCH_WARN_RESULT
static inline bool
_dispatch_queue_try_acquire_barrier_sync_and_suspend(dispatch_lane_t dq,
		uint32_t tid, uint64_t suspend_count)
{
	uint64_t init  = DISPATCH_QUEUE_STATE_INIT_VALUE(dq->dq_width);
	uint64_t value = DISPATCH_QUEUE_WIDTH_FULL_BIT | DISPATCH_QUEUE_IN_BARRIER |
			_dispatch_lock_value_from_tid(tid) |
			(suspend_count * DISPATCH_QUEUE_SUSPEND_INTERVAL);
	uint64_t old_state, new_state;

	return os_atomic_rmw_loop2o(dq, dq_state, old_state, new_state, acquire, {
		uint64_t role = old_state & DISPATCH_QUEUE_ROLE_MASK;
		if (old_state != (init | role)) {
			os_atomic_rmw_loop_give_up(break);//give up and break out first
		}
		new_state = value | role;
	});
}

  • Recursively walk the barrier's target queues via _dispatch_sync_recurse
DISPATCH_NOINLINE
static void
_dispatch_sync_recurse(dispatch_lane_t dq, void *ctxt,
   	dispatch_function_t func, uintptr_t dc_flags)
{
   dispatch_tid tid = _dispatch_tid_self();
   dispatch_queue_t tq = dq->do_targetq;

   do {
   	if (likely(tq->dq_width == 1)) {
   		if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(tq, tid))) {
   			return _dispatch_sync_f_slow(dq, ctxt, func, dc_flags, tq,
   					DC_FLAG_BARRIER);
   		}
   	} else {
   		dispatch_queue_concurrent_t dl = upcast(tq)._dl;
   		if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
   			return _dispatch_sync_f_slow(dq, ctxt, func, dc_flags, tq, 0);
   		}
   	}
   	tq = tq->do_targetq;
   } while (unlikely(tq->do_targetq));

   _dispatch_introspection_sync_begin(dq);
   _dispatch_sync_invoke_and_complete_recurse(dq, ctxt, func, dc_flags
   		DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
   				dq, ctxt, func, dc_flags)));
}
  • Handle introspection/debug bookkeeping via _dispatch_introspection_sync_begin
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_introspection_sync_begin(dispatch_queue_class_t dq)
{
	if (!_dispatch_introspection.debug_queue_inversions) return;
	_dispatch_introspection_order_record(dq._dq);//record the ordering
}
  • Invoke the block and complete/release via _dispatch_lane_barrier_sync_invoke_and_complete
/*
 * For queues we can cheat and inline the unlock code, which is invalid
 * for objects with a more complex state machine (sources or mach channels)
 */
DISPATCH_NOINLINE
static void
_dispatch_lane_barrier_sync_invoke_and_complete(dispatch_lane_t dq,
		void *ctxt, dispatch_function_t func DISPATCH_TRACE_ARG(void *dc))
{
	_dispatch_sync_function_invoke_inline(dq, ctxt, func);
	_dispatch_trace_item_complete(dc);
	if (unlikely(dq->dq_items_tail || dq->dq_width > 1)) {
		return _dispatch_lane_barrier_complete(dq, 0, 0);
	}

	// Presence of any of these bits requires more work that only
	// _dispatch_*_barrier_complete() handles properly
	//
	// Note: testing for RECEIVED_OVERRIDE or RECEIVED_SYNC_WAIT without
	// checking the role is sloppy, but is a super fast check, and neither of
	// these bits should be set if the lock was never contended/discovered.
	const uint64_t fail_unlock_mask = DISPATCH_QUEUE_SUSPEND_BITS_MASK |
			DISPATCH_QUEUE_ENQUEUED | DISPATCH_QUEUE_DIRTY |
			DISPATCH_QUEUE_RECEIVED_OVERRIDE | DISPATCH_QUEUE_SYNC_TRANSFER |
			DISPATCH_QUEUE_RECEIVED_SYNC_WAIT;
	uint64_t old_state, new_state;

	// similar to _dispatch_queue_drain_try_unlock
	os_atomic_rmw_loop2o(dq, dq_state, old_state, new_state, release, {
		new_state  = old_state - DISPATCH_QUEUE_SERIAL_DRAIN_OWNED;
		new_state &= ~DISPATCH_QUEUE_DRAIN_UNLOCK_MASK;
		new_state &= ~DISPATCH_QUEUE_MAX_QOS_MASK;
		if (unlikely(old_state & fail_unlock_mask)) {
			os_atomic_rmw_loop_give_up({
				return _dispatch_lane_barrier_complete(dq, 0, 0);//report that the barrier has finished
			});
		}
	});
	if (_dq_state_is_base_wlh(old_state)) {
		_dispatch_event_loop_assert_not_owned((dispatch_wlh_t)dq);
	}
}

2 GCD Semaphores: Exploring the Underlying Implementation

Semaphores are generally used to make tasks execute synchronously, similar to a mutex, and they can also cap the maximum number of tasks GCD runs concurrently. Typical usage looks like this (a fuller sketch follows the snippet):

//semaphore
dispatch_semaphore_t sem = dispatch_semaphore_create(1);
dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
dispatch_semaphore_signal(sem);
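
As a usage sketch of the "maximum concurrency" point (loop count and log text are arbitrary), a semaphore created with a value greater than 0 caps how many blocks run at the same time; here at most two tasks execute concurrently:

    dispatch_semaphore_t sem = dispatch_semaphore_create(2); // at most 2 tasks run at once
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    for (int i = 0; i < 6; i++) {
        // value--: blocks the submitting thread whenever two tasks are already running
        dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
        dispatch_async(queue, ^{
            NSLog(@"task %d", i);
            sleep(1);
            dispatch_semaphore_signal(sem); // value++: lets the next iteration proceed
        });
    }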

2.1 dispatch_semaphore_create: creation

The implementation is shown below. It mainly initializes the semaphore with the given value, which also serves as the maximum concurrency; the value must not be negative, otherwise DISPATCH_BAD_INPUT is returned.

dispatch_semaphore_t
dispatch_semaphore_create(long value)
{
    dispatch_semaphore_t dsema;

    // If the internal value is negative, then the absolute of the value is
    // equal to the number of waiting threads. Therefore it is bogus to
    // initialize the semaphore with a negative value.
    if (value < 0) {
        return DISPATCH_BAD_INPUT;
    }

    dsema = _dispatch_object_alloc(DISPATCH_VTABLE(semaphore),
            sizeof(struct dispatch_semaphore_s));
    dsema->do_next = DISPATCH_OBJECT_LISTLESS;
    dsema->do_targetq = _dispatch_get_default_queue(false);
    dsema->dsema_value = value;
    _dispatch_sema4_init(&dsema->dsema_sema, _DSEMA4_POLICY_FIFO);
    dsema->dsema_orig = value;
    return dsema;
}

2.2 dispatch_semaphore_wait: locking (wait)

The source is shown below. Its main job is to decrement (--) the semaphore dsema via os_atomic_dec2o, which expands to the C atomic operation atomic_fetch_sub_explicit.

  • If the decremented value is greater than or equal to 0, no waiting is needed and the call returns 0 (success) immediately
  • If the decremented value is less than 0, the call enters the long-wait path _dispatch_semaphore_wait_slow
  • (The LONG_MIN overflow check lives in dispatch_semaphore_signal, covered below)
long
dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout)
{
    // decrement dsema_value (--)
    long value = os_atomic_dec2o(dsema, dsema_value, acquire);
    if (likely(value >= 0)) {//no waiting needed, return success right away
        return 0;
    }
    return _dispatch_semaphore_wait_slow(dsema, timeout);//long wait (slow path)
}

The macro expansion of os_atomic_dec2o is as follows:

#define os_atomic_dec2o(p, f, m) \
		os_atomic_sub2o(p, f, 1, m)

#define os_atomic_sub2o(p, f, v, m) \
		os_atomic_sub(&(p)->f, (v), m)	

#define os_atomic_sub(p, v, m) \
		_os_atomic_c11_op((p), (v), m, sub, -)
		
#define _os_atomic_c11_op(p, v, m, o, op) \
		({ _os_atomic_basetypeof(p) _v = (v), _r = \
		atomic_fetch_##o##_explicit(_os_atomic_c11_atomic(p), _v, \
		memory_order_##m); (__typeof__(_r))(_r op _v); })

Substituting the actual arguments gives:

os_atomic_dec2o(dsema, dsema_value, acquire);

os_atomic_sub2o(dsema, dsema_value, 1, m)

os_atomic_sub(dsema->dsema_value, 1, m)

_os_atomic_c11_op(dsema->dsema_value, 1, m, sub, -)

_r = atomic_fetch_sub_explicit(dsema->dsema_value, 1),
which is equivalent to dsema->dsema_value - 1
  • In _dispatch_semaphore_wait_slow, entered when the value drops below 0, different actions are taken depending on the timeout argument (a small usage sketch follows the listing):
DISPATCH_NOINLINE
static long
_dispatch_semaphore_wait_slow(dispatch_semaphore_t dsema,
		dispatch_time_t timeout)
{
	long orig;

	_dispatch_sema4_create(&dsema->dsema_sema, _DSEMA4_POLICY_FIFO);
	switch (timeout) {
	default:
		if (!_dispatch_sema4_timedwait(&dsema->dsema_sema, timeout)) {
			break;
		}
		// Fall through and try to undo what the fast path did to
		// dsema->dsema_value
	case DISPATCH_TIME_NOW:
		orig = dsema->dsema_value;
		while (orig < 0) {
			if (os_atomic_cmpxchgvw2o(dsema, dsema_value, orig, orig + 1,
					&orig, relaxed)) {
				return _DSEMA4_TIMEOUT();
			}
		}
		// Another thread called semaphore_signal().
		// Fall through and drain the wakeup.
	case DISPATCH_TIME_FOREVER:
		_dispatch_sema4_wait(&dsema->dsema_sema);
		break;
	}
	return 0;
}
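
A small usage sketch of the timeout path (the one-second timeout is arbitrary): if nothing signals the semaphore before the deadline, the fall-through into the DISPATCH_TIME_NOW branch above undoes the decrement and the call returns a non-zero value:

    dispatch_semaphore_t sem = dispatch_semaphore_create(0);
    dispatch_time_t deadline = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1 * NSEC_PER_SEC));
    long result = dispatch_semaphore_wait(sem, deadline); // nobody signals, so this times out
    if (result != 0) {
        NSLog(@"wait timed out");
    }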

2.3 dispatch_semaphore_signal: unlocking (signal)

The source is shown below. Its core is incrementing (++) the value via os_atomic_inc2o, which expands to the atomic operation atomic_fetch_add_explicit.

  • If the incremented value is greater than 0, nobody is waiting and the call returns 0 (success) immediately
  • If the value equals LONG_MIN, the call is unbalanced and GCD crashes; otherwise (the value is 0 or negative) there are waiters, and _dispatch_semaphore_signal_slow is called to wake one of them
long
dispatch_semaphore_signal(dispatch_semaphore_t dsema)
{
    //signal increments (++) the value
    long value = os_atomic_inc2o(dsema, dsema_value, release);
    if (likely(value > 0)) {//return 0: nobody is waiting, the signal succeeds immediately
        return 0;
    }
    if (unlikely(value == LONG_MIN)) {
        DISPATCH_CLIENT_CRASH(value,
                "Unbalanced call to dispatch_semaphore_signal()");
    }
    return _dispatch_semaphore_signal_slow(dsema);//there are waiters: wake one on the slow path
}

The macro expansion of os_atomic_inc2o is as follows:

#define os_atomic_inc2o(p, f, m) \
		os_atomic_add2o(p, f, 1, m)
		
#define os_atomic_add2o(p, f, v, m) \
		os_atomic_add(&(p)->f, (v), m)	
		
#define os_atomic_add(p, v, m) \
		_os_atomic_c11_op((p), (v), m, add, +)

#define _os_atomic_c11_op(p, v, m, o, op) \
		({ _os_atomic_basetypeof(p) _v = (v), _r = \
		atomic_fetch_##o##_explicit(_os_atomic_c11_atomic(p), _v, \
		memory_order_##m); (__typeof__(_r))(_r op _v); })		

Substituting the actual arguments gives:

os_atomic_inc2o(dsema, dsema_value, release);

os_atomic_add2o(dsema, dsema_value, 1, m) 

os_atomic_add(&(dsema)->dsema_value, (1), m)

_os_atomic_c11_op((dsema->dsema_value), (1), m, add, +)

_r = atomic_fetch_add_explicit(dsema->dsema_value, 1),
which is equivalent to dsema->dsema_value + 1

2.4 Summary

  • dispatch_semaphore_create mainly initializes the semaphore with a given value
  • dispatch_semaphore_wait decrements (--) the semaphore's value, i.e. the lock/wait operation
  • dispatch_semaphore_signal increments (++) the semaphore's value, i.e. the unlock/signal operation. In short, the semaphore APIs are thin wrappers around these atomic -- and ++ operations plus the slow-path wait/wake. A small usage sketch follows.
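
Here is the usage sketch mentioned above, showing the "make a task synchronous" use of a semaphore (the work in the block is made up for illustration). A semaphore created with 0 blocks the caller until the asynchronous task signals completion; avoid blocking the main thread like this in production code:

    dispatch_semaphore_t sem = dispatch_semaphore_create(0);
    __block NSInteger result = 0;
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        result = 42;                    // some asynchronous work (made up)
        dispatch_semaphore_signal(sem); // ++, wakes the waiting thread
    });
    dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER); // --, blocks until the signal arrives
    NSLog(@"result = %ld", (long)result);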

3 GCD Dispatch Groups: Exploring the Underlying Implementation

The most direct purpose of a dispatch group is to control the order in which tasks complete. The commonly used APIs are:

dispatch_group_create   creates a group
dispatch_group_async    submits a task to the group
dispatch_group_notify   notifies once the tasks in the group have finished
dispatch_group_wait     waits (with a timeout) for the tasks in the group to finish

//enter and leave are normally used in matched pairs
dispatch_group_enter    enters the group
dispatch_group_leave    leaves the group

Suppose there are two tasks and the UI should only be updated after both of them have finished; a dispatch group handles this:

  dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        sleep(1);
        NSLog(@"task 1");
        dispatch_group_leave(group);
    });
    
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        NSLog(@"task 2");
        dispatch_group_leave(group);
    });
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        NSLog(@"dispatch_group_notify task 3");
    });
    
    NSLog(@"main thread task 4");
2021-08-04 17:14:46.485864+0800 001---函数与队列[35918:2084037] task 2
2021-08-04 17:14:46.485864+0800 001---函数与队列[35918:2083991] main thread task 4
2021-08-04 17:14:47.490749+0800 001---函数与队列[35918:2084039] task 1
2021-08-04 17:14:47.491021+0800 001---函数与队列[35918:2083991] dispatch_group_notify task 3
  • [Variation 1] If dispatch_group_notify is moved to the very top, will it still fire?
    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        NSLog(@"dispatch_group_notify task 3");
    });
    
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        sleep(1);
        NSLog(@"task 1");
        dispatch_group_leave(group);
    });
    
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        NSLog(@"task 2");
        dispatch_group_leave(group);
    });

    
    NSLog(@"main thread task 4");
2021-08-04 17:17:27.918027+0800 001---函数与队列[35951:2086835] main thread task 4
2021-08-04 17:17:27.918031+0800 001---函数与队列[35951:2086993] task 2
2021-08-04 17:17:27.927527+0800 001---函数与队列[35951:2086835] dispatch_group_notify task 3
2021-08-04 17:17:28.921646+0800 001---函数与队列[35951:2086996] task 1

Yes, it fires. The notify block runs as soon as the group's enter/leave count balances out to zero; it does not wait for both tasks. Here the group is still empty when notify is registered, so the notify can fire right away, before task 1 has finished. In other words, notify executes whenever enter and leave are balanced.

  • [Variation 2] Add one more enter, so enter:leave becomes 3:2. Will notify fire?
 dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        sleep(1);
        NSLog(@"task 1");
        dispatch_group_leave(group);
    });
    
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        NSLog(@"task 2");
        dispatch_group_leave(group);
    });
    dispatch_group_enter(group);
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        NSLog(@"dispatch_group_notify task 3");
    });
    
    NSLog(@"main thread task 4");
2021-08-04 17:19:54.681371+0800 001---函数与队列[35985:2090360] main thread task 4
2021-08-04 17:19:54.681373+0800 001---函数与队列[35985:2090502] task 2
2021-08-04 17:19:55.684775+0800 001---函数与队列[35985:2090500] task 1

No. It waits forever; only after one more leave arrives would notify fire.

  • [Variation 3] What if enter:leave is 2:3? Will notify fire?
  dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        sleep(1);
        NSLog(@"task 1");
        dispatch_group_leave(group);
    });
    
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        NSLog(@"task 2");
        dispatch_group_leave(group);
    });
    dispatch_group_leave(group);
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        NSLog(@"dispatch_group_notify task 3");
    });
    
    NSLog(@"main thread task 4");

It crashes, because enter and leave are unbalanced (one leave too many). The crash surfaces inside one of the async blocks' leave calls rather than at the extra leave in the middle, because the async tasks run later, so that is where the count first goes past zero.
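
dispatch_group_wait, listed among the group APIs above but not demonstrated yet, is the blocking counterpart of dispatch_group_notify: it returns 0 once the group has balanced out, or a non-zero value if the timeout expires first. A minimal sketch (the two-second timeout is arbitrary):

    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    dispatch_group_async(group, queue, ^{
        sleep(1);
        NSLog(@"task 1");
    });
    dispatch_group_async(group, queue, ^{
        NSLog(@"task 2");
    });
    // Blocks the current thread until both tasks finish or 2 seconds pass
    long result = dispatch_group_wait(group, dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2 * NSEC_PER_SEC)));
    NSLog(@"%@", result == 0 ? @"all tasks finished" : @"timed out");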

3.1 dispatch_group_create: creating a group

It mainly creates the group object and sets up its properties; at this point the group's value is 0.

  • The dispatch_group_create source:
dispatch_group_t
dispatch_group_create(void)
{
    return _dispatch_group_create_with_count(0);
}
  • _dispatch_group_create_with_count assigns the group object's properties and returns the group object; here n is 0.
DISPATCH_ALWAYS_INLINE
static inline dispatch_group_t
_dispatch_group_create_with_count(uint32_t n)
{
    //create the group object, of type OS_dispatch_group
    dispatch_group_t dg = _dispatch_object_alloc(DISPATCH_VTABLE(group),
            sizeof(struct dispatch_group_s));
    //assign the group object's properties
    dg->do_next = DISPATCH_OBJECT_LISTLESS;
    dg->do_targetq = _dispatch_get_default_queue(false);
    if (n) {
        os_atomic_store2o(dg, dg_bits,
                (uint32_t)-n * DISPATCH_GROUP_VALUE_INTERVAL, relaxed);
        os_atomic_store2o(dg, do_ref_cnt, 1, relaxed); // <rdar://22318411>
    }
    return dg;
}

3.2 dispatch_group_enter: entering the group

In dispatch_group_enter, os_atomic_sub_orig2o decrements (--) dg->dg_bits to update the counter.

void
dispatch_group_enter(dispatch_group_t dg)
{
    // The value is decremented on a 32bits wide atomic so that the carry
    // for the 0 -> -1 transition is not propagated to the upper 32bits.
    uint32_t old_bits = os_atomic_sub_orig2o(dg, dg_bits,//atomic decrement: 0 -> -1
            DISPATCH_GROUP_VALUE_INTERVAL, acquire);
    uint32_t old_value = old_bits & DISPATCH_GROUP_VALUE_MASK;
    if (unlikely(old_value == 0)) {//first enter: take an extra reference on the group
        _dispatch_retain(dg); // <rdar://problem/22318411>
    }
    if (unlikely(old_value == DISPATCH_GROUP_VALUE_MAX)) {//hit the limit: crash
        DISPATCH_CLIENT_CRASH(old_bits,
                "Too many nested calls to dispatch_group_enter()");
    }
}

3.3 dispatch_group_leave: leaving the group

The dispatch_group_leave source:

  • Increments (++) the value, i.e. -1 -> 0
  • Depending on the state, runs a do-while loop and wakes the group so the notify blocks can execute
  • If the value goes from 0 to 1 (0 + 1 = 1), enter and leave are unbalanced (leave was called too many times) and it crashes
void
dispatch_group_leave(dispatch_group_t dg)
{
    // The value is incremented on a 64bits wide atomic so that the carry for
    // the -1 -> 0 transition increments the generation atomically.
    uint64_t new_state, old_state = os_atomic_add_orig2o(dg, dg_state,//atomic increment ++
            DISPATCH_GROUP_VALUE_INTERVAL, release);
    uint32_t old_value = (uint32_t)(old_state & DISPATCH_GROUP_VALUE_MASK);
    //wake up depending on the state
    if (unlikely(old_value == DISPATCH_GROUP_VALUE_1)) {
        old_state += DISPATCH_GROUP_VALUE_INTERVAL;
        do {
            new_state = old_state;
            if ((old_state & DISPATCH_GROUP_VALUE_MASK) == 0) {
                new_state &= ~DISPATCH_GROUP_HAS_WAITERS;
                new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
            } else {
                // If the group was entered again since the atomic_add above,
                // we can't clear the waiters bit anymore as we don't know for
                // which generation the waiters are for
                new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
            }
            if (old_state == new_state) break;
        } while (unlikely(!os_atomic_cmpxchgv2o(dg, dg_state,
                old_state, new_state, &old_state, relaxed)));
        return _dispatch_group_wake(dg, old_state, true);//wake
    }
    //-1 -> 0 is fine, but 0 + 1 -> 1 means leave was called too many times and it crashes; in short, enter and leave are unbalanced
    if (unlikely(old_value == 0)) {
        DISPATCH_CLIENT_CRASH((uintptr_t)old_value,
                "Unbalanced call to dispatch_group_leave()");
    }
}
  • In _dispatch_group_wake, a do-while loop walks the registered notify blocks and submits each one via _dispatch_continuation_async:
DISPATCH_NOINLINE
static void
_dispatch_group_wake(dispatch_group_t dg, uint64_t dg_state, bool needs_release)
{
    uint16_t refs = needs_release ? 1 : 0; // <rdar://problem/22318411>

    if (dg_state & DISPATCH_GROUP_HAS_NOTIFS) {
        dispatch_continuation_t dc, next_dc, tail;

        // Snapshot before anything is notified/woken <rdar://problem/8554546>
        dc = os_mpsc_capture_snapshot(os_mpsc(dg, dg_notify), &tail);
        do {
            dispatch_queue_t dsn_queue = (dispatch_queue_t)dc->dc_data;
            next_dc = os_mpsc_pop_snapshot_head(dc, tail, do_next);
            _dispatch_continuation_async(dsn_queue, dc,
                    _dispatch_qos_from_pp(dc->dc_priority), dc->dc_flags);//submit the block task for execution
            _dispatch_release(dsn_queue);
        } while ((dc = next_dc));//do-while loop over the pending notify blocks

        refs++;
    }

    if (dg_state & DISPATCH_GROUP_HAS_WAITERS) {
        _dispatch_wake_by_address(&dg->dg_gen);//wake threads waiting on this address (dispatch_group_wait)
    }

    if (refs) _dispatch_release_n(dg, refs);//release references
}
  • The _dispatch_continuation_async source:
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_continuation_async(dispatch_queue_class_t dqu,
        dispatch_continuation_t dc, dispatch_qos_t qos, uintptr_t dc_flags)
{
#if DISPATCH_INTROSPECTION
    if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
        _dispatch_trace_item_push(dqu, dc);//trace logging
    }
#else
    (void)dc_flags;
#endif
    return dx_push(dqu._dq, dc, qos);//like dx_invoke, this is a macro
}

This step is the same as how an ordinary asynchronous function's block callback is executed, so it is not covered again here.

3.4 dispatch_group_notify: notification

In the dispatch_group_notify source (the internal _dispatch_group_notify), if old_state equals 0 the group is already balanced and can be woken immediately:

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_group_notify(dispatch_group_t dg, dispatch_queue_t dq,
        dispatch_continuation_t dsn)
{
    uint64_t old_state, new_state;
    dispatch_continuation_t prev;

    dsn->dc_data = dq;
    _dispatch_retain(dq);
    //push the notify continuation onto the group's dg_notify list and obtain the previous tail
    prev = os_mpsc_push_update_tail(os_mpsc(dg, dg_notify), dsn, do_next);
    if (os_mpsc_push_was_empty(prev)) _dispatch_retain(dg);
    os_mpsc_push_update_prev(os_mpsc(dg, dg_notify), prev, dsn, do_next);
    if (os_mpsc_push_was_empty(prev)) {
        os_atomic_rmw_loop2o(dg, dg_state, old_state, new_state, release, {
            new_state = old_state | DISPATCH_GROUP_HAS_NOTIFS;
            if ((uint32_t)old_state == 0) { //if the state is 0, the group is already balanced: wake immediately
                os_atomic_rmw_loop_give_up({
                    return _dispatch_group_wake(dg, new_state, false);//wake
                });
            }
        });
    }
}

So leave is not the only way to reach _dispatch_group_wake; dispatch_group_notify can wake the group as well (when the group is already balanced at registration time).

  • os_mpsc_push_update_tail is a macro that pushes the notify continuation onto the group's list and returns the previous tail:
#define os_mpsc_push_update_tail(Q, tail, _o_next)  ({ \
    os_mpsc_node_type(Q) _tl = (tail); \
    os_atomic_store2o(_tl, _o_next, NULL, relaxed); \
    os_atomic_xchg(_os_mpsc_tail Q, _tl, release); \
})

3.5 dispatch_group_async

The dispatch_group_async source mainly wraps the task and then processes it asynchronously:

#ifdef __BLOCKS__
void
dispatch_group_async(dispatch_group_t dg, dispatch_queue_t dq,
        dispatch_block_t db)
{
    
    dispatch_continuation_t dc = _dispatch_continuation_alloc();
    uintptr_t dc_flags = DC_FLAG_CONSUME | DC_FLAG_GROUP_ASYNC;
    dispatch_qos_t qos;
    //wrap the task
    qos = _dispatch_continuation_init(dc, dq, db, 0, dc_flags);
    //process the task
    _dispatch_continuation_group_async(dg, dq, dc, qos);
}
#endif

_dispatch_continuation_group_async mainly wraps a dispatch_group_enter call before dispatching:

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_continuation_group_async(dispatch_group_t dg, dispatch_queue_t dq,
        dispatch_continuation_t dc, dispatch_qos_t qos)
{
    dispatch_group_enter(dg);//enter the group
    dc->dc_data = dg;
    _dispatch_continuation_async(dq, dc, qos, dc->dc_flags);//dispatch asynchronously
}

From there, _dispatch_continuation_async performs the usual low-level asynchronous dispatch. Since there is an enter, there must be a matching leave, so we can guess that leave is performed implicitly after the block executes. Setting a breakpoint, printing the stack, and searching for callers of _dispatch_client_callout leads to _dispatch_continuation_with_group_invoke:

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_continuation_with_group_invoke(dispatch_continuation_t dc)
{
    struct dispatch_object_s *dou = dc->dc_data;
    unsigned long type = dx_type(dou);
    if (type == DISPATCH_GROUP_TYPE) {//if this is a dispatch group type
        _dispatch_client_callout(dc->dc_ctxt, dc->dc_func);//invoke the block callback
        _dispatch_trace_item_complete(dc);
        dispatch_group_leave((dispatch_group_t)dou);//leave the group
    } else {
        DISPATCH_INTERNAL_CRASH(dx_type(dou), "Unexpected object type");
    }
}

This confirms that dispatch_group_async is implemented as an enter-leave pair under the hood.

3.6 Summary

  • enter and leave only need to be paired; how far apart they are does not matter
  • dispatch_group_enter atomically decrements (--) the group's value (i.e. 0 -> -1)
  • dispatch_group_leave atomically increments (++) the group's value (i.e. -1 -> 0)
  • dispatch_group_notify mainly checks whether the group's state equals 0; when it does, the notification fires
  • The block task can be woken either by dispatch_group_leave or by dispatch_group_notify
  • dispatch_group_async is equivalent to enter - leave; its implementation is exactly an enter-leave pair (see the sketch below)

Putting it all together: enter and leave adjust the group's counter atomically, and notify (or wait) fires once the counter returns to zero.
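
As a rough usage-level sketch of the last point (not a literal expansion of the source), the two snippets below behave the same way:

    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);

    // Using dispatch_group_async ...
    dispatch_group_async(group, queue, ^{
        NSLog(@"task");
    });

    // ... is equivalent in effect to an explicit enter/leave pair:
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        NSLog(@"task");
        dispatch_group_leave(group);
    });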

4 GCD dispatch_source

Overview

  • dispatch_source is a basic data type used to coordinate the handling of specific low-level system events.
  • dispatch_source replaces asynchronous callback functions for handling system-related events. When configuring a dispatch source you specify the event to monitor, the dispatch queue, and the code that handles the event (a block or a function). When the event occurs, the dispatch source submits your block or function to the specified queue for execution.
  • The rough flow: after dispatch_source_merge_data is called on any thread, the handler previously defined for the dispatch source is executed (you can think of a handler simply as a block). This is called a custom event, or user event, one of the kinds of event a dispatch source can handle.

Characteristics

  • Its CPU load is very low; it uses almost no resources

  • After any thread calls dispatch_source_merge_data, the pre-defined handler runs; this is the custom event mentioned above

    As a general aside on the term "handle": a handle is essentially a pointer to a pointer, referring to a class or structure that is closely tied to the system. The term comes from Windows programming, where the generic handle type is HANDLE and common handle kinds include: 1. instance handles HINSTANCE, 2. bitmap handles HBITMAP, 3. device context handles HDC, 4. icon handles HICON.

4.1 dispatch_source_create

  • Parameter description
/*
type	the kind of event the dispatch source handles
handle	can be understood as a handle, index, or id; for example, to monitor a process, pass in the process ID
mask	can be understood as a description, giving more detail about exactly what should be monitored
queue	the queue the custom source uses to process all of its response handlers
*/
dispatch_source_t source = dispatch_source_create(dispatch_source_type_t type, uintptr_t handle, unsigned long mask, dispatch_queue_t queue)
  • Dispatch Source types

The type parameter can be one of the following:

DISPATCH_SOURCE_TYPE_DATA_ADD	custom event; merged values are accumulated by addition
DISPATCH_SOURCE_TYPE_DATA_OR	custom event; merged values are accumulated with a bitwise OR
DISPATCH_SOURCE_TYPE_MACH_SEND	Mach port send
DISPATCH_SOURCE_TYPE_MACH_RECV	Mach port receive
DISPATCH_SOURCE_TYPE_MEMORYPRESSURE	memory pressure (available since iOS 8)
DISPATCH_SOURCE_TYPE_PROC	process monitoring, e.g. the process exits, creates one or more child processes, or receives a UNIX signal
DISPATCH_SOURCE_TYPE_READ	IO operations, e.g. read readiness on a file or socket
DISPATCH_SOURCE_TYPE_SIGNAL	fires when a UNIX signal is received
DISPATCH_SOURCE_TYPE_TIMER	timer
DISPATCH_SOURCE_TYPE_VNODE	file-status monitoring: the file is deleted, moved, or renamed
DISPATCH_SOURCE_TYPE_WRITE	IO operations, e.g. write readiness on a file or socket

Note:

  • DISPATCH_SOURCE_TYPE_DATA_ADD: when one event fires at a very high rate over a short period, the dispatch source coalesces the responses by ADDing the values together and delivers them once the system has a chance to process them; if the triggers are spread out, the dispatch source responds to each one separately (see the sketch after the function list below).

  • DISPATCH_SOURCE_TYPE_DATA_OR: the same kind of custom event as above, but the values are coalesced with a bitwise OR

  • Common functions

//suspend the queue
dispatch_suspend(queue)

//a dispatch source is created in a suspended state; it must be resumed before it will deliver events
dispatch_resume(source)

//merge data into the dispatch source (send it an event); note that 0 must not be passed (the event would not fire), and negative values are not allowed either
dispatch_source_merge_data

//set the block that responds to the source's events; it runs on the source's queue
dispatch_source_set_event_handler

//get the dispatch source's data
dispatch_source_get_data

//get the handle the source was created with, i.e. the second argument to dispatch_source_create
uintptr_t dispatch_source_get_handle(dispatch_source_t source);

//get the mask the source was created with, i.e. the third argument to dispatch_source_create
unsigned long dispatch_source_get_mask(dispatch_source_t source);

//cancel the source's event handling, i.e. the block is no longer called; calling dispatch_suspend, by contrast, only pauses the source
void dispatch_source_cancel(dispatch_source_t source);

//test whether the source has been canceled; a non-zero return value means it has
long dispatch_source_testcancel(dispatch_source_t source);

//the block invoked when the source is canceled, generally used to close files or sockets and release related resources
void dispatch_source_set_cancel_handler(dispatch_source_t source, dispatch_block_t cancel_handler);

//set a block invoked when the source is registered/started; it is released after it has run. This function can also be called at any time while the source is running.
void dispatch_source_set_registration_handler(dispatch_source_t source, dispatch_block_t registration_handler);
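
Here is the custom-event sketch referred to in the DISPATCH_SOURCE_TYPE_DATA_ADD note above (queue choice and the merged value of 1 are arbitrary). Values merged while the source is busy are added together and delivered in a single handler invocation, and dispatch_source_get_data returns the total accumulated since the last invocation:

    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    dispatch_source_t source = dispatch_source_create(DISPATCH_SOURCE_TYPE_DATA_ADD, 0, 0, queue);
    dispatch_source_set_event_handler(source, ^{
        // the sum of everything merged since the last handler call
        NSLog(@"accumulated: %lu", (unsigned long)dispatch_source_get_data(source));
    });
    dispatch_resume(source); // sources are created suspended
    for (int i = 0; i < 5; i++) {
        dispatch_source_merge_data(source, 1); // must be non-zero
    }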

Let's get familiar with it through an example (source type DISPATCH_SOURCE_TYPE_TIMER):

 //countdown time
    __block int timeout = 3;
    
    //create the queue
    dispatch_queue_t globalQueue = dispatch_get_global_queue(0, 0);
    
    //create the timer
    dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, globalQueue);
    
    //fire once every 1s, with 0s leeway
    /*
     - source: the dispatch source
     - start: controls when the timer first fires. Its type is dispatch_time_t, an opaque type that we cannot manipulate directly; use the dispatch_time or dispatch_walltime function to create one. The constants DISPATCH_TIME_NOW and DISPATCH_TIME_FOREVER are often useful.
     - interval: the firing interval
     - leeway: how much leeway (imprecision) is allowed in the firing time
     */
    dispatch_source_set_timer(timer,dispatch_walltime(NULL, 0),1.0*NSEC_PER_SEC, 0);
    
     //the event handler, triggered on each fire
    dispatch_source_set_event_handler(timer, ^{
        //countdown finished: shut down
        if (timeout <= 0) {
            //cancel the dispatch source
            dispatch_source_cancel(timer);
        }else{
            timeout--;
            
            dispatch_async(dispatch_get_main_queue(), ^{
                //update the UI on the main queue
                NSLog(@"倒计时 - %d", timeout);
            });
        }
    });
    
    //start (resume) the dispatch source
    dispatch_resume(timer);
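
One practical note on the example above: under ARC the timer is only a local variable. Here it stays alive because the event handler captures it (through the dispatch_source_cancel call), but if the handler did not reference the timer, the source could be released as soon as the enclosing method returns and would silently stop firing. In practice the timer is usually kept in a property or instance variable, for example:

    @property (nonatomic, strong) dispatch_source_t timer; // keeps the source alive beyond the method scope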

Q: How does a dispatch_source_t timer compare with NSTimer and CADisplayLink?

  • NSTimer
    • Can fire late; accuracy depends on the RunLoop and the RunLoop mode
    • (If the runloop is busy with a long-running operation, the timer fires late.) It must be added to a RunLoop manually, and the mode should be NSRunLoopCommonModes (in NSDefaultRunLoopMode, touch events pause the timer)
  • CADisplayLink
    • A timer class invoked on each screen refresh, so it draws content at a rate synchronized with the display's refresh rate.
    • Under normal conditions CADisplayLink is called after every refresh, with very high precision.
    • If the invoked method takes longer than one refresh period, several callback opportunities are skipped.
  • dispatch_source_t timer
    • Accurate timing, and it can run on a background thread, avoiding UI stalls caused by work on the main thread
    • Does not depend on the runloop; it is driven by the kernel, so its accuracy is very high

4.2 dispatch_source_create source

dispatch_source_t
dispatch_source_create(dispatch_source_type_t dst, uintptr_t handle,
		unsigned long mask, dispatch_queue_t dq)
{
	dispatch_source_refs_t dr;
	dispatch_source_t ds;

	dr = dux_create(dst, handle, mask)._dr;
	if (unlikely(!dr)) {
		return DISPATCH_BAD_INPUT;
	}

	ds = _dispatch_queue_alloc(source,
			dux_type(dr)->dst_strict ? DSF_STRICT : DQF_MUTABLE, 1,
			DISPATCH_QUEUE_INACTIVE | DISPATCH_QUEUE_ROLE_INNER)._ds;
	ds->dq_label = "source";
	ds->ds_refs = dr;
	dr->du_owner_wref = _dispatch_ptr2wref(ds);

	if (unlikely(!dq)) {
		dq = _dispatch_get_default_queue(true);
	} else {
		_dispatch_retain((dispatch_queue_t _Nonnull)dq);
	}
	ds->do_targetq = dq;
	if (dr->du_is_timer && (dr->du_timer_flags & DISPATCH_TIMER_INTERVAL)) {
		dispatch_source_set_timer(ds, DISPATCH_TIME_NOW, handle, UINT64_MAX);
	}
	_dispatch_object_debug(ds, "%s", __func__);
	return ds;
}

4.3 dispatch_source_set_timer source

DISPATCH_NOINLINE
void
dispatch_source_set_timer(dispatch_source_t ds, dispatch_time_t start,
		uint64_t interval, uint64_t leeway)
{
	dispatch_timer_source_refs_t dt = ds->ds_timer_refs;
	dispatch_timer_config_t dtc;

	if (unlikely(!dt->du_is_timer)) {
		DISPATCH_CLIENT_CRASH(ds, "Attempt to set timer on a non-timer source");
	}

	if (dt->du_timer_flags & DISPATCH_TIMER_INTERVAL) {
		dtc = _dispatch_interval_config_create(start, interval, leeway, dt);
	} else {
		dtc = _dispatch_timer_config_create(start, interval, leeway, dt);
	}
	if (_dispatch_timer_flags_to_clock(dt->du_timer_flags) != dtc->dtc_clock &&
			dt->du_filter == DISPATCH_EVFILT_TIMER_WITH_CLOCK) {
		DISPATCH_CLIENT_CRASH(0, "Attempting to modify timer clock");
	}

	_dispatch_source_timer_telemetry(ds, dtc->dtc_clock, &dtc->dtc_timer);
	dtc = os_atomic_xchg2o(dt, dt_pending_config, dtc, release);
	if (dtc) free(dtc);
	dx_wakeup(ds, 0, DISPATCH_WAKEUP_MAKE_DIRTY);
}

4.4 dispatch_source_set_event_handler source

#ifdef __BLOCKS__
void
dispatch_source_set_event_handler(dispatch_source_t ds,
		dispatch_block_t handler)
{
	_dispatch_source_set_handler(ds, handler, DS_EVENT_HANDLER, true);
}
DISPATCH_NOINLINE
static void
_dispatch_source_set_handler(dispatch_source_t ds, void *func,
		uintptr_t kind, bool is_block)
{
	dispatch_continuation_t dc;

	dc = _dispatch_source_handler_alloc(ds, func, kind, is_block);

	if (_dispatch_lane_try_inactive_suspend(ds)) {
		_dispatch_source_handler_replace(ds, kind, dc);
		return _dispatch_lane_resume(ds, DISPATCH_RESUME);
	}

	dispatch_queue_flags_t dqf = _dispatch_queue_atomic_flags(ds);
	if (unlikely(dqf & DSF_STRICT)) {
		DISPATCH_CLIENT_CRASH(kind, "Cannot change a handler of this source "
				"after it has been activated");
	}
	// Ignore handlers mutations past cancelation, it's harmless
	if ((dqf & DSF_CANCELED) == 0) {
		_dispatch_ktrace1(DISPATCH_PERF_post_activate_mutation, ds);
		if (kind == DS_REGISTN_HANDLER) {
			_dispatch_bug_deprecated("Setting registration handler after "
					"the source has been activated");
		} else if (func == NULL) {
			_dispatch_bug_deprecated("Clearing handler after "
					"the source has been activated");
		}
	}
	dc->dc_data = (void *)kind;
	_dispatch_barrier_trysync_or_async_f(ds, dc,
			_dispatch_source_set_handler_slow, 0);
}

4.5 dispatch_resume source

void
dispatch_resume(dispatch_object_t dou)
{
	DISPATCH_OBJECT_TFB(_dispatch_objc_resume, dou);
	if (unlikely(_dispatch_object_is_global(dou) ||
			_dispatch_object_is_root_or_base_queue(dou))) {
		return;
	}
	if (dx_cluster(dou._do) == _DISPATCH_QUEUE_CLUSTER) {
		_dispatch_lane_resume(dou._dl, DISPATCH_RESUME);
	}
}

4.6 dispatch_source_cancel source

void
dispatch_source_cancel(dispatch_source_t ds)
{
	_dispatch_object_debug(ds, "%s", __func__);
	// Right after we set the cancel flag, someone else
	// could potentially invoke the source, do the cancellation,
	// unregister the source, and deallocate it. We would
	// need to therefore retain/release before setting the bit
	_dispatch_retain_2(ds);

	if (_dispatch_queue_atomic_flags_set_orig(ds, DSF_CANCELED) & DSF_CANCELED){
		_dispatch_release_2_tailcall(ds);
	} else {
		dx_wakeup(ds, 0, DISPATCH_WAKEUP_MAKE_DIRTY | DISPATCH_WAKEUP_CONSUME_2);
	}
}