[iOS Multithreading] How DispatchQueue Works: Asynchronous Execution


This article was written while I was studying the iOS internals. I try to use precise wording throughout, with the goal of giving readers who want a moderately deep dive a quick picture of how iOS implements this. The content is internally consistent, but its correctness should be verified by the reader; treat it as a reference.

The previous article covered how GCD queues are created. In this one, we look at how an asynchronous task is added to a queue, and how it eventually gets executed.

dispatch_async

What dispatch_async does depends on the type of the queue (dq) passed in. Submitting an async task to a custom concurrent queue is one of the common cases:

let queue = DispatchQueue(label: "com.concurrent", attributes: .concurrent)
queue.async {
    // do something
}

This article follows that case as the main line of the analysis. Along the way we will see that the execution paths of the main queue, the global queues, and custom serial queues are subsets of the custom concurrent queue's path.

The main flow

// work is the task; dq is the queue the task is submitted to
void
dispatch_async(dispatch_queue_t dq, dispatch_block_t work)
{
	// allocate memory for the continuation (dc)
	dispatch_continuation_t dc = _dispatch_continuation_alloc();
	uintptr_t dc_flags = DC_FLAG_CONSUME;
	dispatch_qos_t qos;

	// initialize dc's fields. Note that dc now holds work -- it is dc that
	// eventually runs the task -- and the priority (qos) comes from the queue
	qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
	_dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}
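To make the continuation idea concrete, here is a minimal C sketch of a record that captures a work function, its context, and flags, and is consumed when invoked. All names here (continuation_t, continuation_create, continuation_invoke) are hypothetical and far simpler than libdispatch's real dispatch_continuation_s:

```c
#include <stdint.h>
#include <stdlib.h>

// Hypothetical, simplified stand-in for dispatch_continuation_s: it captures
// the work (a function pointer plus a context) and some flags.
typedef struct continuation_s {
    void (*func)(void *);  // the work to run later
    void *ctxt;            // context handed to func when it runs
    uintptr_t flags;       // e.g. a "consume after invoking" bit
} continuation_t;

continuation_t *continuation_create(void (*func)(void *), void *ctxt,
                                    uintptr_t flags) {
    continuation_t *dc = malloc(sizeof *dc);
    dc->func = func;
    dc->ctxt = ctxt;
    dc->flags = flags;
    return dc;
}

// Invoking the continuation calls the stored function with its context, then
// frees the record (mirroring the DC_FLAG_CONSUME idea).
void continuation_invoke(continuation_t *dc) {
    dc->func(dc->ctxt);
    free(dc);
}

// Example work function used below: adds 41 through the context pointer.
void add_forty_one(void *ctxt) { *(int *)ctxt += 41; }
```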

Next, look at _dispatch_continuation_async:

static inline void
_dispatch_continuation_async(dispatch_queue_class_t dqu,
		dispatch_continuation_t dc, dispatch_qos_t qos, uintptr_t dc_flags)
{
#if DISPATCH_INTROSPECTION
	if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
		_dispatch_trace_item_push(dqu, dc);
	}
#else
	(void)dc_flags;
#endif
	// #define dx_push(x, y, z) dx_vtable(x)->dq_push(x, y, z)
	// #define dx_vtable(x) (&(x)->do_vtable->_os_obj_vtable)
	// Expanding the macros, dx_push ends up calling dq_push(x, y, z) from
	// _dq's vtable; the arguments are the queue (_dq), the task (dc), and the priority (qos)
	return dx_push(dqu._dq, dc, qos);
}
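The dx_push macro above is plain vtable dispatch: the queue carries a table of function pointers, and the macro forwards through it. A minimal sketch of the pattern in C, with hypothetical names standing in for the libdispatch types:

```c
#include <stddef.h>

// Hypothetical sketch of vtable dispatch: each queue type carries a table of
// function pointers, set once when the queue is created, and dx_push-style
// macros forward through it.
typedef struct queue_s queue_t;

typedef struct vtable_s {
    const char *name;
    void (*dq_push)(queue_t *q, void *item);
} vtable_t;

struct queue_s {
    const vtable_t *do_vtable;  // fixed per queue type
    void *last_item;            // toy state, so pushes are observable
};

void concurrent_push(queue_t *q, void *item) { q->last_item = item; }

const vtable_t queue_concurrent_vtable = {
    .name = "queue_concurrent",
    .dq_push = concurrent_push,
};

// Mirrors: #define dx_push(x, y, z) dx_vtable(x)->dq_push(x, y, z)
#define dx_push(q, item) ((q)->do_vtable->dq_push((q), (item)))
```

Swapping in a different vtable instance changes what dx_push does without touching any call site, which is exactly how the different queue types diverge below.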

In the previous article on queue creation we saw that every queue has a vtable member. The dq_push executed here comes from the vtable that was stored into the queue back then. Let's see what dq_push in that vtable does.

// This is the vtable instance for a custom concurrent queue. Other queue
// types use different values; we will see more vtable instances later.
DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_concurrent, lane,
	.do_type        = DISPATCH_QUEUE_CONCURRENT_TYPE,
	.do_dispose     = _dispatch_lane_dispose,
	.do_debug       = _dispatch_queue_debug,
	.do_invoke      = _dispatch_lane_invoke,

	.dq_activate    = _dispatch_lane_activate,
	.dq_wakeup      = _dispatch_lane_wakeup,
	.dq_push        = _dispatch_lane_concurrent_push,
);

So this dx_push ends up calling _dispatch_lane_concurrent_push. Let's keep going:

void
_dispatch_lane_concurrent_push(dispatch_lane_t dq, dispatch_object_t dou,
		dispatch_qos_t qos)
{
	// <rdar://problem/24738102&24743140> reserving non barrier width
	// doesn't fail if only the ENQUEUED bit is set (unlike its barrier
	// width equivalent), so we have to check that this thread hasn't
	// enqueued anything ahead of this call or we can break ordering

	// if dq is currently empty and not executing any task
	if (dq->dq_items_tail == NULL &&
			!_dispatch_object_is_waiter(dou) &&
			!_dispatch_object_is_barrier(dou) &&
			_dispatch_queue_try_acquire_async(dq)) {
        // hand the task over to dq's target queue (i.e. a root queue)
		return _dispatch_continuation_redirect_push(dq, dou, qos);
	}

	// otherwise, store the task in dq itself
	_dispatch_lane_push(dq, dou, qos);
}

The logic here: if the queue has nothing queued and nothing in flight, hand the task to the target queue; if a task is already being processed, _dispatch_queue_try_acquire_async(dq) should return false, and the task is stored in the current queue instead.

As you might expect, the target queue also just stores the task at first; the task only runs some time later.

At this point we have the outline of the dispatch_async main flow:

  1. Wrap the task (work) into a dc (dispatch_continuation_t);
  2. Take the priority (qos) from the queue;
  3. Either hand the task to a root queue (the path taken by the main queue and the global queues), or store it in the current queue dq (the path taken by custom serial queues).

We will now look at both branches of step 3 in detail.

Handing the task to the root queue

Here is the source of _dispatch_continuation_redirect_push:

static void
_dispatch_continuation_redirect_push(dispatch_lane_t dl,
		dispatch_object_t dou, dispatch_qos_t qos)
{
	// likely: this branch is taken most of the time
	if (likely(!_dispatch_object_is_redirection(dou))) {
		// a new dc is created here, with its vtable replaced by
		// DC_VTABLE(ASYNC_REDIRECT); dou still holds the task (work)
		dou._dc = _dispatch_async_redirect_wrap(dl, dou);
	} else if (!dou._dc->dc_ctxt) {
		// find first queue in descending target queue order that has
		// an autorelease frequency set, and use that as the frequency for
		// this continuation.
		dou._dc->dc_ctxt = (void *)
		(uintptr_t)_dispatch_queue_autorelease_frequency(dl);
	}

	// Important: the queue changes here!
	// dq (i.e. dl) used to be our custom concurrent queue; from here on,
	// dq is the target queue, i.e. a root queue
	dispatch_queue_t dq = dl->do_targetq;
	// get the qos
	if (!qos) qos = _dispatch_priority_qos(dq->dq_priority);
	// dx_push again -- we already ran it once when dq was our custom
	// concurrent queue; this time dq is a root queue.
	// dou holds the task, qos is the priority
	dx_push(dq, dou, qos);
}
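The key move here is following do_targetq. As a sketch of the idea only, assuming a toy queue type where a NULL do_targetq marks a root queue (all names hypothetical), a redirecting push walks the target chain and lands the item on the root:

```c
#include <stddef.h>

// Hypothetical sketch of target-queue redirection: a custom queue does not
// run work itself; a push is forwarded along do_targetq until a root queue
// is reached. In this sketch, a NULL do_targetq marks a root queue.
typedef struct queue_s {
    struct queue_s *do_targetq;  // NULL => this is a root queue
    int pushed;                  // how many items landed on this queue
} queue_t;

// Walk the target chain to the root, record the push there, and return the
// queue that actually received the item.
queue_t *redirect_push(queue_t *q) {
    while (q->do_targetq != NULL) {
        q = q->do_targetq;
    }
    q->pushed++;
    return q;
}
```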

To find out which function this dx_push resolves to, here is the vtable instance again, this time for queue_global.

Why is the root queue related to queue_global? Going back to the previous article on queue creation and searching for DISPATCH_GLOBAL_OBJECT_HEADER(queue_global) gives the answer: the root queues are defined under the name queue_global.

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_global, lane,
	.do_type        = DISPATCH_QUEUE_GLOBAL_ROOT_TYPE,
	.do_dispose     = _dispatch_object_no_dispose,
	.do_debug       = _dispatch_queue_debug,
	.do_invoke      = _dispatch_object_no_invoke,

	.dq_activate    = _dispatch_queue_no_activate,
	.dq_wakeup      = _dispatch_root_queue_wakeup,
	.dq_push        = _dispatch_root_queue_push,
);

Next, _dispatch_root_queue_push:

void
_dispatch_root_queue_push(dispatch_queue_global_t rq, dispatch_object_t dou,
		dispatch_qos_t qos)
{
	// (a batch of conditional-compilation code removed here)

	// the line that matters:
	// rq is the queue, dou is the task; the trailing 1 can be ignored
	_dispatch_root_queue_push_inline(rq, dou, dou, 1);
}

Next, _dispatch_root_queue_push_inline:

static inline void
_dispatch_root_queue_push_inline(dispatch_queue_global_t dq,
		dispatch_object_t _head, dispatch_object_t _tail, int n)
{
	// hd and tl are both the dou from the caller; both stand for the same
	// task (work) we originally submitted
	struct dispatch_object_s *hd = _head._do, *tl = _tail._do;
	// To read os_mpsc_push_list(os_mpsc(dq, dq_items), hd, tl, do_next),
	// recall dq's layout: it is a dispatch_queue_t with two relevant fields,
	// dq_items_head and dq_items_tail. dq_items_head is the linked list
	// holding all of the queue's tasks; dq_items_tail has the same node type
	// but is not used as a list -- its next is always empty. My understanding
	// is that dq_items_tail simply points at the most recently enqueued task.
	// With that in mind, os_mpsc_push_list does the following: point
	// dq_items_tail at the new task, and link the last task in dq_items_head
	// to the new task as well.
	// Finally it checks whether the previous tail was empty. My guess is that
	// this is true the first time this code runs after app launch, making the
	// condition below hold; afterwards the tail is non-empty and the
	// expression is false.
	if (unlikely(os_mpsc_push_list(os_mpsc(dq, dq_items), hd, tl, do_next))) {
		return _dispatch_root_queue_poke(dq, n, 0);
	}
}
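The os_mpsc macros implement a multi-producer, single-consumer linked queue. Below is a minimal sketch of the push side in C11 atomics, with hypothetical names; as in libdispatch, the return value tells the caller whether the queue was empty before the push, which is what gates the _dispatch_root_queue_poke call:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

// Hypothetical sketch of the os_mpsc push: atomically swap the tail, then
// link the previous tail (or the head, if the queue was empty) to the new
// node. Many producers may push concurrently; one consumer drains.
typedef struct node_s {
    struct node_s *_Atomic do_next;
    void *payload;
} node_t;

typedef struct {
    node_t *_Atomic dq_items_head;
    node_t *_Atomic dq_items_tail;
} mpsc_queue_t;

// Returns true when the queue was empty before this push -- the caller is
// then responsible for waking a worker (the _dispatch_root_queue_poke role).
bool mpsc_push(mpsc_queue_t *q, node_t *n) {
    atomic_store(&n->do_next, NULL);
    node_t *prev = atomic_exchange(&q->dq_items_tail, n);
    if (prev == NULL) {
        atomic_store(&q->dq_items_head, n);  // queue was empty: n is the head
        return true;
    }
    atomic_store(&prev->do_next, n);  // append behind the previous tail
    return false;
}
```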

Next, _dispatch_root_queue_poke:

void
_dispatch_root_queue_poke(dispatch_queue_global_t dq, int n, int floor)
{
	// a second check: we got here because the queue just went from empty to
	// non-empty; if the probe now finds no pending work, return directly
	if (!_dispatch_queue_class_probe(dq)) {
		return;
	}

	// n is 1, floor is 0
	return _dispatch_root_queue_poke_slow(dq, n, floor);
}

Next, _dispatch_root_queue_poke_slow:

static void
_dispatch_root_queue_poke_slow(dispatch_queue_global_t dq, int n, int floor)
{
	int remaining = n;
#if !defined(_WIN32)
	int r = ENOSYS;
#endif
	// initialize the root queues
	_dispatch_root_queues_init();
	// runs under debug; collects debug info
	_dispatch_debug_root_queue(dq, __func__);
	// #define _dispatch_trace_runtime_event(evt, ptr, value) \
	// 	 do { (void)(ptr); (void)(value); } while(0)
	// going by the name, this presumably traces the worker_request event for
	// dq; in the build shown above the macro expands to a no-op
	_dispatch_trace_runtime_event(worker_request, dq, (uint64_t)n);

#if !DISPATCH_USE_INTERNAL_WORKQUEUE
#if DISPATCH_USE_PTHREAD_ROOT_QUEUES
	if (dx_type(dq) == DISPATCH_QUEUE_GLOBAL_ROOT_TYPE)
#endif
	{
		_dispatch_root_queue_debug("requesting new worker thread for global "
				"queue: %p", dq);
		// this is the path taken the first time a task is pushed onto a root
		// queue; going by the name, it requests worker threads -- presumably
		// the threads that will later run our tasks
		r = _pthread_workqueue_addthreads(remaining,
				_dispatch_priority_to_pp_prefer_fallback(dq->dq_priority));
		(void)dispatch_assume_zero(r);
		return;
	}
#endif // !DISPATCH_USE_INTERNAL_WORKQUEUE
}

I don't fully understand this function, so treat the notes above as tentative.

One more function worth reading: _dispatch_root_queues_init.

static inline void
_dispatch_root_queues_init(void)
{
	dispatch_once_f(&_dispatch_root_queues_pred, NULL,
			_dispatch_root_queues_init_once);
}
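dispatch_once_f guarantees that _dispatch_root_queues_init_once runs exactly once for the lifetime of the process, no matter how many threads call _dispatch_root_queues_init. The same guarantee can be sketched with POSIX pthread_once (a real API; the surrounding names mirror the libdispatch ones but are otherwise hypothetical):

```c
#include <pthread.h>

// Sketch of the once-only guarantee using pthread_once: the init function
// runs exactly once per process, however many times (or from however many
// threads) root_queues_init is called.
static pthread_once_t root_queues_pred = PTHREAD_ONCE_INIT;
static int init_count = 0;

static void root_queues_init_once(void) { init_count++; }

void root_queues_init(void) {
    pthread_once(&root_queues_pred, root_queues_init_once);
}
```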

And the function it runs once, _dispatch_root_queues_init_once:

static void
_dispatch_root_queues_init_once(void *context DISPATCH_UNUSED)
{
	// focus on this one call: it initializes the workqueue and registers
	// the function _dispatch_worker_thread2 with it. We will use the call
	// stack shown in Xcode to infer what the workqueue does.
	int r; // declared earlier in the real source; other code elided here
	if (unlikely(!_dispatch_kevent_workqueue_enabled)) {
		r = _pthread_workqueue_init(_dispatch_worker_thread2,
				offsetof(struct dispatch_queue_s, dq_serialnum), 0);
	}
}

From the call stack we can see start_wqthread being invoked, then _pthread_wqthread, then _dispatch_worker_thread2. So we can infer that the workqueue is tied to the kernel: the kernel wakes the workqueue, the workqueue calls the _dispatch_worker_thread2 registered at init time, and the task runs from there. The execution side is covered in detail later in this article.

At this point we know how a root queue handles an async task:

  1. Update the queue's tail pointer;
  2. Update the queue's head list;
  3. If nothing had been pushed before, request a worker thread and register the entry function to call on wakeup.

In short: the async task is simply stored. Execution is a separate mechanism; at this stage only the entry function is registered.

Storing the task in the current queue

Now for the case where the queue is already processing a task: _dispatch_lane_push.

DISPATCH_NOINLINE
void
_dispatch_lane_push(dispatch_lane_t dq, dispatch_object_t dou,
		dispatch_qos_t qos)
{
	dispatch_wakeup_flags_t flags = 0;
	struct dispatch_object_s *prev;

	// we will not consider these waiter cases here
	if (unlikely(_dispatch_object_is_waiter(dou))) {
		return _dispatch_lane_push_waiter(dq, dou._dsc, qos);
	}

	dispatch_assert(!_dispatch_object_is_global(dq));
	qos = _dispatch_queue_push_qos(dq, qos);

	// If we are going to call dx_wakeup(), the queue must be retained before
	// the item we're pushing can be dequeued, which means:
	// - before we exchange the tail if we have to override
	// - before we set the head if we made the queue non empty.
	// Otherwise, if preempted between one of these and the call to dx_wakeup()
	// the blocks submitted to the queue may release the last reference to the
	// queue when invoked by _dispatch_lane_drain. <rdar://problem/6932776>

	// this macro amounts to:
	// let prev = dq->dq_items_tail
	// dq->dq_items_tail = dou._do
	// dou._do->do_next = NULL
	// return prev
	// i.e. it swaps in the new tail
	prev = os_mpsc_push_update_tail(os_mpsc(dq, dq_items), dou._do, do_next);
	if (unlikely(os_mpsc_push_was_empty(prev))) {
		_dispatch_retain_2_unsafe(dq);
		flags = DISPATCH_WAKEUP_CONSUME_2 | DISPATCH_WAKEUP_MAKE_DIRTY;
	} else if (unlikely(_dispatch_queue_need_override(dq, qos))) {
		// There's a race here, _dispatch_queue_need_override may read a stale
		// dq_state value.
		//
		// If it's a stale load from the same drain streak, given that
		// the max qos is monotonic, too old a read can only cause an
		// unnecessary attempt at overriding which is harmless.
		//
		// We'll assume here that a stale load from an a previous drain streak
		// never happens in practice.
		_dispatch_retain_2_unsafe(dq);
		flags = DISPATCH_WAKEUP_CONSUME_2;
	}
	// this macro links dou._do into the dq_items_head list
	// (writing the head directly if the queue was empty)
	os_mpsc_push_update_prev(os_mpsc(dq, dq_items), prev, dou._do, do_next);

	// flags is 0 unless the push made the queue non-empty or raised its qos
	if (flags) {
		return dx_wakeup(dq, qos, flags);
	}
}

All this does is store the task in the current queue. That completes the storage half of dispatch_async. Next, let's see how these stored tasks get executed.

Executing the async task

A stored task runs some time later. Putting a breakpoint inside the async task and inspecting the call stack shows how execution gets there.

From answers online, start_wqthread is started by the kernel to bring up a GCD-created thread; it runs _pthread_wqthread, which calls _dispatch_worker_thread2 -- the function we saw registered with the workqueue the first time a task was pushed onto a root queue.

So the worker thread is awake. Here is the source of _dispatch_worker_thread2, where we can trace how our task (work) is run:

static void
_dispatch_worker_thread2(pthread_priority_t pp)
{
	// check whether this thread is an overcommit thread
	bool overcommit = pp & _PTHREAD_PRIORITY_OVERCOMMIT_FLAG;
	dispatch_queue_global_t dq;

	// normalize pp; it is used below to look up the queue
	pp &= _PTHREAD_PRIORITY_OVERCOMMIT_FLAG | ~_PTHREAD_PRIORITY_FLAGS_MASK;
	_dispatch_thread_setspecific(dispatch_priority_key, (void *)(uintptr_t)pp);
	// _dispatch_get_root_queue is an old friend: it picks the root queue by qos and the overcommit flag
	dq = _dispatch_get_root_queue(_dispatch_qos_from_pp(pp), overcommit);

	_dispatch_introspection_thread_add();
	_dispatch_trace_runtime_event(worker_unpark, dq, 0);

	int pending = os_atomic_dec2o(dq, dgq_pending, relaxed);
	dispatch_assert(pending >= 0);

	// continue here
	_dispatch_root_queue_drain(dq, dq->dq_priority,
			DISPATCH_INVOKE_WORKER_DRAIN | DISPATCH_INVOKE_REDIRECTING_DRAIN);
	_dispatch_voucher_debug("root queue clear", NULL);
	_dispatch_reset_voucher(NULL, DISPATCH_THREAD_PARK);
	_dispatch_trace_runtime_event(worker_park, NULL, 0);
}

Next, _dispatch_root_queue_drain:

static void
_dispatch_root_queue_drain(dispatch_queue_global_t dq,
		dispatch_priority_t pri, dispatch_invoke_flags_t flags)
{
#if DISPATCH_DEBUG
	dispatch_queue_t cq;
	if (unlikely(cq = _dispatch_queue_get_current())) {
		DISPATCH_INTERNAL_CRASH(cq, "Premature thread recycling");
	}
#endif
	_dispatch_queue_set_current(dq);
	_dispatch_init_basepri(pri);
	_dispatch_adopt_wlh_anon();

	struct dispatch_object_s *item;
	bool reset = false;
	dispatch_invoke_context_s dic = { };
#if DISPATCH_COCOA_COMPAT
	_dispatch_last_resort_autorelease_pool_push(&dic);
#endif // DISPATCH_COCOA_COMPAT
	_dispatch_queue_drain_init_narrowing_check_deadline(&dic, pri);
	_dispatch_perfmon_start();
	// take the head of dq, in FIFO order
	while (likely(item = _dispatch_root_queue_drain_one(dq))) {
		if (reset) _dispatch_wqthread_override_reset();
        // then run the item here
		_dispatch_continuation_pop_inline(item, &dic, flags, dq);
		reset = _dispatch_reset_basepri_override();
		if (unlikely(_dispatch_queue_drain_should_narrow(&dic))) {
			break;
		}
	}

	// overcommit or not. worker thread
	if (pri & DISPATCH_PRIORITY_FLAG_OVERCOMMIT) {
		_dispatch_perfmon_end(perfmon_thread_worker_oc);
	} else {
		_dispatch_perfmon_end(perfmon_thread_worker_non_oc);
	}

#if DISPATCH_COCOA_COMPAT
	_dispatch_last_resort_autorelease_pool_pop(&dic);
#endif // DISPATCH_COCOA_COMPAT
	_dispatch_reset_wlh();
	_dispatch_clear_basepri();
	_dispatch_queue_set_current(NULL);
}

Let's see how the head is taken out, in _dispatch_root_queue_drain_one:

static inline struct dispatch_object_s *
_dispatch_root_queue_drain_one(dispatch_queue_global_t dq)
{
	struct dispatch_object_s *head, *next;

start:
	// The MEDIATOR value acts both as a "lock" and a signal
    // swap the head out, storing DISPATCH_ROOT_QUEUE_MEDIATOR in its place
	head = os_atomic_xchg2o(dq, dq_items_head,
			DISPATCH_ROOT_QUEUE_MEDIATOR, relaxed);

	// the queue was empty (head is NULL)
	if (unlikely(head == NULL)) {
		// The first xchg on the tail will tell the enqueueing thread that it
		// is safe to blindly write out to the head pointer. A cmpxchg honors
		// the algorithm.
		if (unlikely(!os_atomic_cmpxchg2o(dq, dq_items_head,
				DISPATCH_ROOT_QUEUE_MEDIATOR, NULL, relaxed))) {
			goto start;
		}
		if (unlikely(dq->dq_items_tail)) { // <rdar://problem/14416349>
			if (__DISPATCH_ROOT_QUEUE_CONTENDED_WAIT__(dq,
					_dispatch_root_queue_head_tail_quiesced)) {
				goto start;
			}
		}
		_dispatch_root_queue_debug("no work on global queue: %p", dq);
		return NULL;
	}

	// head is the MEDIATOR: another thread won the race for this queue
	if (unlikely(head == DISPATCH_ROOT_QUEUE_MEDIATOR)) {
		// This thread lost the race for ownership of the queue.
		if (likely(__DISPATCH_ROOT_QUEUE_CONTENDED_WAIT__(dq,
				_dispatch_root_queue_mediator_is_gone))) {
			goto start;
		}
		return NULL;
	}

	// Restore the head pointer to a sane value before returning.
	// If 'next' is NULL, then this item _might_ be the last item.
    // read head's next
	next = head->do_next;

	if (unlikely(!next)) {
		os_atomic_store2o(dq, dq_items_head, NULL, relaxed);
		// 22708742: set tail to NULL with release, so that NULL write to head
		//           above doesn't clobber head from concurrent enqueuer
		if (os_atomic_cmpxchg2o(dq, dq_items_tail, head, NULL, release)) {
			// both head and tail are NULL now
			goto out;
		}
		// There must be a next item now.
		next = os_mpsc_get_next(head, do_next);
	}

	// store next as the new head
	os_atomic_store2o(dq, dq_items_head, next, relaxed);
    // request a new worker thread, or reuse one from the pool, as needed
	_dispatch_root_queue_poke(dq, 1, 0);
out:
	// return the head we took out
	return head;
}
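Stripped of the MEDIATOR locking dance, _dispatch_root_queue_drain_one amounts to taking the head and advancing it to head->do_next. A sketch under that simplification (hypothetical names; exactly one drainer assumed):

```c
#include <stddef.h>

// Single-consumer pop, minus the MEDIATOR handling: take the head, advance
// it to head->do_next, and clear the tail when the queue empties.
typedef struct node_s {
    struct node_s *do_next;
    int tag;  // toy payload, so results are observable
} node_t;

typedef struct {
    node_t *dq_items_head;
    node_t *dq_items_tail;
} queue_t;

// Returns the oldest item (FIFO), or NULL when the queue is empty.
node_t *drain_one(queue_t *q) {
    node_t *head = q->dq_items_head;
    if (head == NULL) return NULL;        // nothing enqueued
    q->dq_items_head = head->do_next;     // the next item becomes the head
    if (q->dq_items_head == NULL) {
        q->dq_items_tail = NULL;          // queue is now empty
    }
    return head;
}
```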

Back to _dispatch_continuation_pop_inline:

static inline void
_dispatch_continuation_pop_inline(dispatch_object_t dou,
		dispatch_invoke_context_t dic, dispatch_invoke_flags_t flags,
		dispatch_queue_class_t dqu)
{
	dispatch_pthread_root_queue_observer_hooks_t observer_hooks =
			_dispatch_get_pthread_root_queue_observer_hooks();
	if (observer_hooks) observer_hooks->queue_will_execute(dqu._dq);
	flags &= _DISPATCH_INVOKE_PROPAGATE_MASK;
	// if the object implements invoke in its vtable,
	if (_dispatch_object_has_vtable(dou)) {
		// run dx_invoke
		dx_invoke(dou._dq, dic, flags);
	} else {
		// otherwise run this
		_dispatch_continuation_invoke_inline(dou, flags, dqu);
	}
	if (observer_hooks) observer_hooks->queue_did_execute(dqu._dq);
}

Here we need the vtable values again:

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_concurrent, lane,
	.do_type        = DISPATCH_QUEUE_CONCURRENT_TYPE,
	.do_dispose     = _dispatch_lane_dispose,
	.do_debug       = _dispatch_queue_debug,
	.do_invoke      = _dispatch_lane_invoke,

	.dq_activate    = _dispatch_lane_activate,
	.dq_wakeup      = _dispatch_lane_wakeup,
	.dq_push        = _dispatch_lane_concurrent_push,
);

DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_global, lane,
	.do_type        = DISPATCH_QUEUE_GLOBAL_ROOT_TYPE,
	.do_dispose     = _dispatch_object_no_dispose,
	.do_debug       = _dispatch_queue_debug,
	.do_invoke      = _dispatch_object_no_invoke,

	.dq_activate    = _dispatch_queue_no_activate,
	.dq_wakeup      = _dispatch_root_queue_wakeup,
	.dq_push        = _dispatch_root_queue_push,
);

It is clear that queue_global's do_invoke is _dispatch_object_no_invoke, i.e. not implemented. So items redirected from a custom queue take the dx_invoke branch, while items pushed directly to a global queue take the else branch.

dx_invoke eventually runs the _dispatch_continuation_invoke_inline code as well, which is why the custom concurrent queue's path contains the paths of custom serial queues, the main queue, and the global queues.

dx_invoke runs DC_VTABLE(ASYNC_REDIRECT), which is _dispatch_async_redirect_invoke. Let's keep going:

void
_dispatch_async_redirect_invoke(dispatch_continuation_t dc,
		dispatch_invoke_context_t dic, dispatch_invoke_flags_t flags)
{
	dispatch_thread_frame_s dtf;
	struct dispatch_continuation_s *other_dc = dc->dc_other;
	dispatch_invoke_flags_t ctxt_flags = (dispatch_invoke_flags_t)dc->dc_ctxt;
	// if we went through _dispatch_root_queue_push_override,
	// the "right" root queue was stuffed into dc_func
	dispatch_queue_global_t assumed_rq = (dispatch_queue_global_t)dc->dc_func;
	dispatch_lane_t dq = dc->dc_data;
	dispatch_queue_t rq, old_dq;
	dispatch_priority_t old_dbp;

	// some checks and setup
	if (ctxt_flags) {
		flags &= ~_DISPATCH_INVOKE_AUTORELEASE_MASK;
		flags |= ctxt_flags;
	}
	old_dq = _dispatch_queue_get_current();
	if (assumed_rq) {
		old_dbp = _dispatch_root_queue_identity_assume(assumed_rq);
		_dispatch_set_basepri(dq->dq_priority);
	} else {
		old_dbp = _dispatch_set_basepri(dq->dq_priority);
	}

	uintptr_t dc_flags = DC_FLAG_CONSUME | DC_FLAG_NO_INTROSPECTION;
	_dispatch_thread_frame_push(&dtf, dq);
    // this runs _dispatch_continuation_pop
	_dispatch_continuation_pop_forwarded(dc, dc_flags, NULL, {
		_dispatch_continuation_pop(other_dc, dic, flags, dq);
	});
	_dispatch_thread_frame_pop(&dtf);
	if (assumed_rq) _dispatch_queue_set_current(old_dq);
	_dispatch_reset_basepri(old_dbp);

	rq = dq->do_targetq;
	while (unlikely(rq->do_targetq && rq != old_dq)) {
		_dispatch_lane_non_barrier_complete(upcast(rq)._dl, 0);
		rq = rq->do_targetq;
	}

	// pairs with _dispatch_async_redirect_wrap
	_dispatch_lane_non_barrier_complete(dq, DISPATCH_WAKEUP_CONSUME_2);
}

Next, _dispatch_continuation_pop:

void
_dispatch_continuation_pop(dispatch_object_t dou, dispatch_invoke_context_t dic,
		dispatch_invoke_flags_t flags, dispatch_queue_class_t dqu)
{
	_dispatch_continuation_pop_inline(dou, dic, flags, dqu._dq);
}

Which brings us back to _dispatch_continuation_pop_inline:

static inline void
_dispatch_continuation_pop_inline(dispatch_object_t dou,
		dispatch_invoke_context_t dic, dispatch_invoke_flags_t flags,
		dispatch_queue_class_t dqu)
{
	dispatch_pthread_root_queue_observer_hooks_t observer_hooks =
			_dispatch_get_pthread_root_queue_observer_hooks();
	if (observer_hooks) observer_hooks->queue_will_execute(dqu._dq);
	flags &= _DISPATCH_INVOKE_PROPAGATE_MASK;
	// if the object implements invoke in its vtable,
	if (_dispatch_object_has_vtable(dou)) {
		// run dx_invoke
		dx_invoke(dou._dq, dic, flags);
	} else {
		// otherwise run this
		_dispatch_continuation_invoke_inline(dou, flags, dqu);
	}
	if (observer_hooks) observer_hooks->queue_did_execute(dqu._dq);
}

This time the else branch runs, _dispatch_continuation_invoke_inline:

static inline void
_dispatch_continuation_invoke_inline(dispatch_object_t dou,
		dispatch_invoke_flags_t flags, dispatch_queue_class_t dqu)
{
	dispatch_continuation_t dc = dou._dc, dc1;
	dispatch_invoke_with_autoreleasepool(flags, {
		uintptr_t dc_flags = dc->dc_flags;
		// Add the item back to the cache before calling the function. This
		// allows the 'hot' continuation to be used for a quick callback.
		//
		// The ccache version is per-thread.
		// Therefore, the object has not been reused yet.
		// This generates better assembly.
		_dispatch_continuation_voucher_adopt(dc, dc_flags);
		if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
			_dispatch_trace_item_pop(dqu, dou);
		}
		if (dc_flags & DC_FLAG_CONSUME) {
			dc1 = _dispatch_continuation_free_cacheonly(dc);
		} else {
			dc1 = NULL;
		}
		if (unlikely(dc_flags & DC_FLAG_GROUP_ASYNC)) {
			_dispatch_continuation_with_group_invoke(dc);
		} else {
        	// this is the call we care about
			_dispatch_client_callout(dc->dc_ctxt, dc->dc_func);
			_dispatch_trace_item_complete(dc);
		}
		if (unlikely(dc1)) {
			_dispatch_continuation_free_to_cache_limit(dc1);
		}
	});
	_dispatch_perfmon_workitem_inc();
}

Next, _dispatch_client_callout:

static inline void
_dispatch_client_callout(void *ctxt, dispatch_function_t f)
{
	return f(ctxt);
}

At this point we have to go back to where dc was initialized, in _dispatch_continuation_init_f (which _dispatch_continuation_init calls through to):

static inline dispatch_qos_t
_dispatch_continuation_init_f(dispatch_continuation_t dc,
		dispatch_queue_class_t dqu, void *ctxt, dispatch_function_t f,
		dispatch_block_flags_t flags, uintptr_t dc_flags)
{
	pthread_priority_t pp = 0;
	dc->dc_flags = dc_flags | DC_FLAG_ALLOCATED;
	dc->dc_func = f;
	dc->dc_ctxt = ctxt;
	// in this context DISPATCH_BLOCK_HAS_PRIORITY means that the priority
	// should not be propagated, only taken from the handler if it has one
	if (!(flags & DISPATCH_BLOCK_HAS_PRIORITY)) {
		pp = _dispatch_priority_propagate();
	}
	_dispatch_continuation_voucher_set(dc, flags);
	return _dispatch_continuation_priority_set(dc, dqu, pp, flags);
}

The func stored there is _dispatch_call_block_and_release, and ctxt holds the work we passed in. Here is _dispatch_call_block_and_release:

void
_dispatch_call_block_and_release(void *block)
{
	void (^b)(void) = block;
	b();
	Block_release(b);
}

And that is the complete path of an async task on a custom concurrent queue through dispatch_async.

Summary

Task storage

  1. Put work, qos, and flags into a dc;
  2. Depending on dq's type (here, a custom concurrent queue), call the dq_push function from dq's vtable;
  3. If dq already has a task executing, just store the new task in dq;
  4. If dq has no task executing, redirect the task to the target queue (a root queue);
  5. The dc's vtable is replaced, and dq (from here on, the root queue) runs its dq_push; (from this point the flow matches the global/main queue push path)
  6. _dispatch_root_queue_push runs;
  7. If dq has had tasks added before, just append the new task to dq;
  8. If this is dq's first task, perform some initialization and request a worker thread.

Task execution

  1. The kernel wakes a thread at start_wqthread;
  2. The matching dq (a root queue) is looked up by qos;
  3. The head task is taken from dq and its do_invoke runs, i.e. the _dispatch_async_redirect_invoke forwarding logic;
  4. After some bookkeeping, this falls back into the global queues' execution path, _dispatch_continuation_invoke_inline;
  5. Finally, _dispatch_client_callout runs the work we submitted.

Closing remarks

Given my limited understanding, the article likely contains omissions or outright errors; corrections are welcome.

If this article helped you, a like would be appreciated. Thanks!