GCD Functions


The previous article, GCD Basics, reviewed some fundamental GCD concepts. This article focuses on two important GCD functions, dispatch_async and dispatch_sync. When reading these two functions, what matters most, in my view, is when the submitted task gets invoked and how threads are handled, so those are the two aspects this article concentrates on.

Synchronous Function

First, let's look at the definition of the synchronous function:

DISPATCH_NOINLINE
void dispatch_sync(dispatch_queue_t dq, dispatch_block_t work)
{
	uintptr_t dc_flags = DC_FLAG_BLOCK;
	if (unlikely(_dispatch_block_has_private_data(work))) {
		return _dispatch_sync_block_with_privdata(dq, work, dc_flags);
	}
	_dispatch_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}
DISPATCH_NOINLINE
static void _dispatch_sync_f(dispatch_queue_t dq, void *ctxt, dispatch_function_t func,
		uintptr_t dc_flags)
{
	_dispatch_sync_f_inline(dq, ctxt, func, dc_flags);
}

#define _dispatch_Block_invoke(bb) \
		((dispatch_function_t)((struct Block_layout *)bb)->invoke)
  • The two parameters passed in are the queue dq and the task work.
  • unlikely (the counterpart of the old slowpath(x) macro, just as likely corresponds to fastpath(x)) is a branch-prediction hint for the compiler.
  • The task work is wrapped once by _dispatch_Block_invoke into a uniform function-pointer form: ((dispatch_function_t)((struct Block_layout *)bb)->invoke). A minimal sketch of this wrapping follows this list.
  • Finally _dispatch_sync_f is called to handle the synchronous dispatch, and _dispatch_sync_f simply forwards to _dispatch_sync_f_inline.
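
To make that wrapping concrete, here is a minimal sketch. Block_layout_sketch is my own simplified mirror of the public Block ABI (not libdispatch's header); it shows how a block's invoke entry point can be reused as a plain C function pointer that takes the block itself as its context:

#include <stdio.h>

typedef void (*dispatch_function_t_sketch)(void *);

// Simplified layout: the real Block ABI places isa, flags and reserved
// in front of the invoke pointer; captured variables follow it.
struct Block_layout_sketch {
	void *isa;
	int flags;
	int reserved;
	void (*invoke)(void *, ...);
};

int main(void) {
	void (^work)(void) = ^{ printf("block invoked\n"); };
	// Mirrors _dispatch_Block_invoke(work): pull out the invoke pointer...
	dispatch_function_t_sketch f = (dispatch_function_t_sketch)
			((struct Block_layout_sketch *)(void *)work)->invoke;
	// ...and call it with the block itself as the ctxt argument, which is
	// what _dispatch_client_callout's f(ctxt) ends up doing later.
	f(work);
	return 0;
}
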
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_sync_f_inline(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func, uintptr_t dc_flags)
{
	if (likely(dq->dq_width == 1)) {
		return _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags);
	}

	if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
		DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
	}

	dispatch_lane_t dl = upcast(dq)._dl;
	// Global concurrent queues and queues bound to non-dispatch threads
	// always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
	if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
		return _dispatch_sync_f_slow(dl, ctxt, func, 0, dl, dc_flags);
	}

	if (unlikely(dq->do_targetq->do_targetq)) {
		return _dispatch_sync_recurse(dl, ctxt, func, dc_flags);
	}
	_dispatch_introspection_sync_begin(dl);
	_dispatch_sync_invoke_and_complete(dl, ctxt, func DISPATCH_TRACE_ARG(
			_dispatch_trace_item_sync_push_pop(dq, ctxt, func, dc_flags)));
}

Here we want to analyze when the task gets invoked, which mostly means following where the func parameter goes. In _dispatch_sync_f_inline, however, several different branches could end up invoking func, which gets in the way of a single answer: dispatch takes a different path depending on the situation, so we need to look at each case separately.

Concurrent Queue + Synchronous Function

dispatch_queue_t queue = dispatch_queue_create("fm", DISPATCH_QUEUE_CONCURRENT);

To find out which function is ultimately called, we can set symbolic breakpoints and follow the flow. With a custom concurrent queue, _dispatch_sync_f_inline calls _dispatch_sync_invoke_and_complete, which calls _dispatch_sync_function_invoke_inline, which calls _dispatch_client_callout, which finally invokes the block via f(ctxt). So for a concurrent queue the call flow is:

[dispatch_sync - queue and block passed in] --> [_dispatch_sync_f] --> [_dispatch_sync_f_inline] --> [_dispatch_sync_invoke_and_complete] --> [_dispatch_sync_function_invoke_inline] --> [_dispatch_client_callout] --> [block invoked]
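
From the caller's side, the important property of this path is that no thread is created: f(ctxt) runs inline on the thread that called dispatch_sync. A minimal sketch to verify that (the queue label is arbitrary); both prints should show the same thread:

#include <dispatch/dispatch.h>
#include <pthread.h>
#include <stdio.h>

int main(void) {
	dispatch_queue_t queue = dispatch_queue_create("fm", DISPATCH_QUEUE_CONCURRENT);

	printf("caller thread: %p\n", (void *)pthread_self());
	dispatch_sync(queue, ^{
		// dispatch_sync does not spawn a thread: the block runs right here,
		// on the caller's thread, before dispatch_sync returns.
		printf("block thread:  %p\n", (void *)pthread_self());
	});
	printf("dispatch_sync returned after the block ran\n");
	return 0;
}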

Global Concurrent Queue + Synchronous Function

Within the concurrent-queue case there is a special member: the global concurrent queue, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0). Stepping through with breakpoints shows that after [dispatch_sync - queue and block passed in] --> [_dispatch_sync_f] --> [_dispatch_sync_f_inline], the global queue goes to _dispatch_sync_f_slow instead. _dispatch_sync_f_slow has two call sites that could run the task; symbolic breakpoints on both show that it calls _dispatch_sync_function_invoke.

_dispatch_sync_function_invoke then calls _dispatch_sync_function_invoke_inline, followed by _dispatch_client_callout. So the full flow for the global concurrent queue is: [dispatch_sync - queue and block passed in] --> [_dispatch_sync_f] --> [_dispatch_sync_f_inline] --> [_dispatch_sync_f_slow] --> [_dispatch_sync_function_invoke] --> [_dispatch_sync_function_invoke_inline] --> [_dispatch_client_callout] --> [block invoked]

Serial Queue + Synchronous Function

What about a serial queue? There, dq_width == 1, so _dispatch_sync_f_inline calls _dispatch_barrier_sync_f.

_dispatch_barrier_sync_f forwards to _dispatch_barrier_sync_f_inline, which again has several possible callees. Setting symbolic breakpoints on all three candidates shows that, for a plain serial queue, _dispatch_barrier_sync_f_inline calls _dispatch_lane_barrier_sync_invoke_and_complete.

_dispatch_lane_barrier_sync_invoke_and_complete immediately calls _dispatch_sync_function_invoke_inline, which then calls _dispatch_client_callout and runs the block's code via f(ctxt).

So for a serial queue the call flow is: [dispatch_sync - queue and block passed in] --> [_dispatch_sync_f] --> [_dispatch_sync_f_inline] --> [_dispatch_barrier_sync_f] --> [_dispatch_barrier_sync_f_inline] --> [_dispatch_lane_barrier_sync_invoke_and_complete] --> [_dispatch_sync_function_invoke_inline] --> [_dispatch_client_callout] --> [block invoked]

Main Queue + Synchronous Function

The serial-queue case also has a special member: the main queue, dispatch_get_main_queue. In the previous article, GCD Basics, we said this combination deadlocks; let's see why from the call flow and the source:

It starts out like the serial-queue flow, i.e. [dispatch_sync - queue and block passed in] --> [_dispatch_sync_f] --> [_dispatch_sync_f_inline] --> [_dispatch_barrier_sync_f], and then _dispatch_barrier_sync_f_inline. Inside _dispatch_barrier_sync_f_inline, however, the path taken is _dispatch_sync_f_slow, and _dispatch_sync_f_slow's call to __DISPATCH_WAIT_FOR_QUEUE__(&dsc, dq) is what triggers the deadlock.

So up to the point of the deadlock, the flow is: [dispatch_sync - queue and block passed in] --> [_dispatch_sync_f] --> [_dispatch_sync_f_inline] --> [_dispatch_barrier_sync_f] --> [_dispatch_sync_f_slow] --> [__DISPATCH_WAIT_FOR_QUEUE__ triggers the deadlock]. A minimal reproduction is sketched below.
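
For reference, a minimal reproduction of that check firing. It uses a custom serial queue, but dispatch_sync onto the main queue from the main thread of an app trips the very same owner check:

#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void) {
	dispatch_queue_t serial = dispatch_queue_create("fm.serial", DISPATCH_QUEUE_SERIAL);

	dispatch_sync(serial, ^{
		printf("outer block\n");
		// The calling thread is currently draining `serial` (it is running
		// this very block), yet the nested dispatch_sync waits for a block
		// queued behind it on the same queue. __DISPATCH_WAIT_FOR_QUEUE__
		// sees that the queue is already owned by the current thread and
		// crashes with the DISPATCH_CLIENT_CRASH message shown below.
		dispatch_sync(serial, ^{
			printf("never reached\n");
		});
	});
	return 0;
}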

Deadlock

So why does this deadlock? The __DISPATCH_WAIT_FOR_QUEUE__ function it calls shows us:

DISPATCH_NOINLINE
static void
__DISPATCH_WAIT_FOR_QUEUE__(dispatch_sync_context_t dsc, dispatch_queue_t dq)
{
	uint64_t dq_state = _dispatch_wait_prepare(dq);
	if (unlikely(_dq_state_drain_locked_by(dq_state, dsc->dsc_waiter))) {
		DISPATCH_CLIENT_CRASH((uintptr_t)dq_state,
				"dispatch_sync called on queue"
				"already owned by current thread");
	}
// most of the function omitted; in the deadlock case the code below is never reached
}

DISPATCH_ALWAYS_INLINE
static inline bool
_dq_state_drain_locked_by(uint64_t dq_state, dispatch_tid tid)
{
	return _dispatch_lock_is_locked_by((dispatch_lock)dq_state, tid);
}
// this is the function that ultimately performs the check
DISPATCH_ALWAYS_INLINE
static inline bool
_dispatch_lock_is_locked_by(dispatch_lock lock_value, dispatch_tid tid)
{
	// equivalent to _dispatch_lock_owner(lock_value) == tid
	return ((lock_value ^ tid) & DLOCK_OWNER_MASK) == 0;
}

#define DLOCK_OWNER_MASK        ((dispatch_lock)0xfffffffc)

So the check ultimately lands in _dispatch_lock_is_locked_by, where:

  • lock_value is the queue's lock word, derived from its state; it encodes the tid of the thread currently draining (owning) the queue;
  • tid is the id of the current thread;
  • DLOCK_OWNER_MASK, per its definition (0xfffffffc), simply masks off the two low flag bits, so ((lock_value ^ tid) & DLOCK_OWNER_MASK) == 0 holds exactly when lock_value and tid agree on every remaining bit, i.e. the thread that currently owns this serial queue is the calling thread (a worked example follows this list).
    In that case the current thread is occupied running the queue's current task and cannot execute the newly submitted one, while dispatch_sync waits for that new task to finish: that is the deadlock. This also gives us the three ingredients needed to trigger it:
  • the current thread: the thread that is executing the queue's task;
  • a synchronous call: no new thread is created, and the new task must run immediately;
  • synchronously submitting another task of the same serial queue from inside a task that this serial queue is currently running.
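
A tiny, self-contained sketch of that ownership check; the mask value is copied from the definition above, and the thread ids are made-up values purely for illustration:

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define DLOCK_OWNER_MASK ((uint32_t)0xfffffffc)

// Mirrors _dispatch_lock_is_locked_by: true when the owner encoded in
// lock_value is the thread tid, ignoring the two low flag bits.
static bool is_locked_by(uint32_t lock_value, uint32_t tid) {
	return ((lock_value ^ tid) & DLOCK_OWNER_MASK) == 0;
}

int main(void) {
	uint32_t current_tid    = 0x1a2b3c40;        // made-up current thread id
	uint32_t owned_by_us    = current_tid | 0x1; // same owner, one flag bit set
	uint32_t owned_by_other = 0x99887760;        // some other thread owns the queue

	printf("owned by us:    %d\n", is_locked_by(owned_by_us, current_tid));    // 1 -> deadlock crash
	printf("owned by other: %d\n", is_locked_by(owned_by_other, current_tid)); // 0 -> proceeds to wait
	return 0;
}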

Summary:

Looking over the whole synchronous call flow, a few points stand out:

  • The synchronous function never opens a new thread; the task can only run on the current thread.
  • The newly submitted task is executed immediately (dispatch_sync does not return until it has run).
  • Synchronously submitting another (later) task of the current serial queue from inside that same serial queue deadlocks; the fix is to send the nested synchronous work to a different queue (thread), as sketched below.
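
A sketch of that fix: the nested synchronous work targets a different serial queue, so the current thread does not own the queue it is waiting on and the owner check passes. (Submitting the nested work asynchronously back onto the same queue also avoids the deadlock, at the cost of not waiting for it.)

#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void) {
	dispatch_queue_t serial = dispatch_queue_create("fm.serial", DISPATCH_QUEUE_SERIAL);
	dispatch_queue_t other  = dispatch_queue_create("fm.other", DISPATCH_QUEUE_SERIAL);

	dispatch_sync(serial, ^{
		printf("outer block on 'serial'\n");
		// 'other' is not the queue this thread is currently draining,
		// so the nested dispatch_sync completes instead of deadlocking.
		dispatch_sync(other, ^{
			printf("inner block on 'other'\n");
		});
	});
	return 0;
}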

Asynchronous Function

Next comes the asynchronous function, dispatch_async. Let's start with its source definition:

static inline dispatch_continuation_t
_dispatch_continuation_alloc(void)
{
	dispatch_continuation_t dc =
			_dispatch_continuation_alloc_cacheonly();
	if (unlikely(!dc)) {
		return _dispatch_continuation_alloc_from_heap();
	}
	return dc;
}

void
dispatch_async(dispatch_queue_t dq, dispatch_block_t work)
{
	dispatch_continuation_t dc = _dispatch_continuation_alloc();
	uintptr_t dc_flags = DC_FLAG_CONSUME;
	dispatch_qos_t qos;

	qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
	_dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}
  • _dispatch_continuation_alloc allocates the memory used to store the wrapped block;
  • _dispatch_continuation_init is the initializer: it saves the block task's context and decides which function will execute the block.
DISPATCH_ALWAYS_INLINE
static inline dispatch_qos_t
_dispatch_continuation_init(dispatch_continuation_t dc,
		dispatch_queue_class_t dqu, dispatch_block_t work,
		dispatch_block_flags_t flags, uintptr_t dc_flags)
{
	// copy (wrap) the block task
	void *ctxt = _dispatch_Block_copy(work);

	dc_flags |= DC_FLAG_BLOCK | DC_FLAG_ALLOCATED;
	if (unlikely(_dispatch_block_has_private_data(work))) {
		dc->dc_flags = dc_flags;
		dc->dc_ctxt = ctxt;
		// will initialize all fields but requires dc_flags & dc_ctxt to be set
		return _dispatch_continuation_init_slow(dc, dqu, flags);
	}

	dispatch_function_t func = _dispatch_Block_invoke(work);
	if (dc_flags & DC_FLAG_CONSUME) {
		func = _dispatch_call_block_and_release;
	}
	return _dispatch_continuation_init_f(dc, dqu, ctxt, func, flags, dc_flags);
}

DISPATCH_ALWAYS_INLINE
static inline dispatch_qos_t
_dispatch_continuation_init_f(dispatch_continuation_t dc,
		dispatch_queue_class_t dqu, void *ctxt, dispatch_function_t f,
		dispatch_block_flags_t flags, uintptr_t dc_flags)
{
	pthread_priority_t pp = 0;
	dc->dc_flags = dc_flags | DC_FLAG_ALLOCATED;
	dc->dc_func = f;
	dc->dc_ctxt = ctxt;
	// in this context DISPATCH_BLOCK_HAS_PRIORITY means that the priority
	// should not be propagated, only taken from the handler if it has one
	if (!(flags & DISPATCH_BLOCK_HAS_PRIORITY)) {
		pp = _dispatch_priority_propagate();
	}
	_dispatch_continuation_voucher_set(dc, flags);
	return _dispatch_continuation_priority_set(dc, dqu, pp, flags);
}

typedef uint32_t dispatch_qos_t;
typedef uint32_t dispatch_priority_t;

#define DISPATCH_QOS_UNSPECIFIED        ((dispatch_qos_t)0)
dispatch_qos_t qos = DISPATCH_QOS_UNSPECIFIED;

_dispatch_continuation_init mainly saves the block task and returns the qos priority parameter:

  • dc is the continuation that wraps the block: dc->dc_ctxt stores the task's context and dc->dc_func wraps how the task will be invoked.
  • dispatch_qos_t is the wrapped task's priority; from the typedefs above it is just a uint32_t, so the qos returned by _dispatch_continuation_priority_set is a uint32_t value. A simplified model of the whole continuation is sketched below.
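
A deliberately simplified model of what gets stored (my own continuation_sketch, not the real dispatch_continuation_s): a function pointer plus its context, invoked later as dc_func(dc_ctxt) when the target queue drains.

#include <stdint.h>
#include <stdio.h>

typedef void (*dispatch_function_t_sketch)(void *);

struct continuation_sketch {
	dispatch_function_t_sketch dc_func; // how to run the task
	void *dc_ctxt;                      // what to run it with (the copied block / context)
	uintptr_t dc_flags;
	uint32_t qos;                       // dispatch_qos_t is just a uint32_t
};

static void say_hello(void *ctxt) {
	printf("hello from: %s\n", (const char *)ctxt);
}

int main(void) {
	struct continuation_sketch dc = {
		.dc_func  = say_hello,
		.dc_ctxt  = "a saved context",
		.dc_flags = 0,
		.qos      = 0, // DISPATCH_QOS_UNSPECIFIED
	};
	// Later, when the queue drains, the saved pair is invoked:
	dc.dc_func(dc.dc_ctxt);
	return 0;
}
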
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_continuation_async(dispatch_queue_class_t dqu,
		dispatch_continuation_t dc, dispatch_qos_t qos, uintptr_t dc_flags)
{
#if DISPATCH_INTROSPECTION
	if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
		_dispatch_trace_item_push(dqu, dc);
	}
#else
	(void)dc_flags;
#endif
	return dx_push(dqu._dq, dc, qos);
}

Once the data is saved, _dispatch_continuation_async is called, which ends in dx_push. dx_push expands to the queue's dq_push vtable entry, so which function actually runs depends on the type of queue:

#define dx_push(x, y, z) dx_vtable(x)->dq_push(x, y, z)


Here we take a custom concurrent queue as the example; its dq_push entry is _dispatch_lane_concurrent_push:

void
_dispatch_lane_concurrent_push(dispatch_lane_t dq, dispatch_object_t dou,
		dispatch_qos_t qos)
{
	// <rdar://problem/24738102&24743140> reserving non barrier width
	// doesn't fail if only the ENQUEUED bit is set (unlike its barrier
	// width equivalent), so we have to check that this thread hasn't
	// enqueued anything ahead of this call or we can break ordering
	if (dq->dq_items_tail == NULL &&
			!_dispatch_object_is_waiter(dou) &&
			!_dispatch_object_is_barrier(dou) &&
			_dispatch_queue_try_acquire_async(dq)) {
		return _dispatch_continuation_redirect_push(dq, dou, qos);
	}

	_dispatch_lane_push(dq, dou, qos);
}

static void
_dispatch_continuation_redirect_push(dispatch_lane_t dl,
		dispatch_object_t dou, dispatch_qos_t qos)
{
	// ... (partially omitted) ...
	dispatch_queue_t dq = dl->do_targetq;
	if (!qos) qos = _dispatch_priority_qos(dq->dq_priority);
	dx_push(dq, dou, qos);
}

In the non-barrier case, _dispatch_continuation_redirect_push is called, and inside it dx_push is called again. The only difference from the previous call is the dq argument, so what is dq this time?

Here dq is dl->do_targetq, and dl is the queue we passed in when creating the async task. That queue was created with the following code:

dispatch_queue_t
dispatch_queue_create(const char *label, dispatch_queue_attr_t attr)
{
	return _dispatch_lane_create_with_target(label, attr,
			DISPATCH_TARGET_QUEUE_DEFAULT, true);
}

static dispatch_queue_t
_dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
		dispatch_queue_t tq, bool legacy)
{

	// ... (omitted) ...
	if (!tq) {
		tq = _dispatch_get_root_queue(
				qos == DISPATCH_QOS_UNSPECIFIED ? DISPATCH_QOS_DEFAULT : qos,
				overcommit == _dispatch_queue_attr_overcommit_enabled)->_as_dq;
		if (unlikely(!tq)) {
			DISPATCH_CLIENT_CRASH(qos, "Invalid queue attribute");
		}
	}

	// ... (omitted) ...
	_dispatch_retain(tq);
	dq->do_targetq = tq;

	
	return _dispatch_trace_queue_create(dq)._dq;
}
static inline dispatch_queue_global_t
_dispatch_get_root_queue(dispatch_qos_t qos, bool overcommit)
{
	if (unlikely(qos < DISPATCH_QOS_MIN || qos > DISPATCH_QOS_MAX)) {
		DISPATCH_CLIENT_CRASH(qos, "Corrupted priority");
	}
	return &_dispatch_root_queues[2 * (qos - 1) + overcommit];
}

We know DISPATCH_TARGET_QUEUE_DEFAULT is NULL, so tq = _dispatch_get_root_queue(...). _dispatch_get_root_queue returns a dispatch_queue_global_t, so tq is a dispatch_queue_global_t, and dq->do_targetq = tq makes the custom queue target one of the global root queues. That root queue is what the second dx_push is called on; a worked example of the index arithmetic follows.
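
As a worked example of that subscript, assuming the QoS numbering of the open-source libdispatch (levels 1 through 6, with DISPATCH_QOS_DEFAULT == 4) and the 12-entry _dispatch_root_queues array (6 QoS levels x overcommit/non-overcommit):

#include <stdio.h>

int main(void) {
	int qos = 4; // DISPATCH_QOS_DEFAULT in the open-source libdispatch (assumption stated above)
	for (int overcommit = 0; overcommit <= 1; overcommit++) {
		// Same arithmetic as _dispatch_get_root_queue's array subscript.
		int index = 2 * (qos - 1) + overcommit;
		printf("qos=%d overcommit=%d -> _dispatch_root_queues[%d]\n",
				qos, overcommit, index);
	}
	// A default-priority custom queue therefore targets root queue 6 or 7,
	// depending on the overcommit attribute.
	return 0;
}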


So when dx_push is called the second time, it resolves to the root queue's dq_push, which is _dispatch_root_queue_push:

void
_dispatch_root_queue_push(dispatch_queue_global_t rq, dispatch_object_t dou,
		dispatch_qos_t qos)
{
	// ... (large parts omitted) ...
	_dispatch_root_queue_push_inline(rq, dou, dou, 1);
}
static inline void
_dispatch_root_queue_push_inline(dispatch_queue_global_t dq,
		dispatch_object_t _head, dispatch_object_t _tail, int n)
{
	struct dispatch_object_s *hd = _head._do, *tl = _tail._do;
	if (unlikely(os_mpsc_push_list(os_mpsc(dq, dq_items), hd, tl, do_next))) {
		return _dispatch_root_queue_poke(dq, n, 0);
	}
}
void
_dispatch_root_queue_poke(dispatch_queue_global_t dq, int n, int floor)
{
	// ... (large parts omitted) ...
	return _dispatch_root_queue_poke_slow(dq, n, floor);
}
static void
_dispatch_root_queue_poke_slow(dispatch_queue_global_t dq, int n, int floor)
{
	// ... (large parts omitted) ...

	_dispatch_root_queues_init();
	// ... (large parts omitted) ...
}

After _dispatch_root_queue_push we go through a chain of state checks and parameter setup, eventually reaching _dispatch_root_queues_init, which sets up the binding between the root queues and their worker threads.

So when does the block actually get invoked? Let's first look at the call stack:

The earliest frame on the worker thread is _dispatch_worker_thread2; a global search of the source shows how it gets hooked up:

Every place that wires up _dispatch_worker_thread2 is inside _dispatch_root_queues_init_once, and _dispatch_root_queues_init_once itself is called from only one place:

static inline void
_dispatch_root_queues_init(void)
{
	dispatch_once_f(&_dispatch_root_queues_pred, NULL,
			_dispatch_root_queues_init_once);
}
  • dispatch_once_f guarantees this initialization runs only once;
  • _dispatch_root_queues_init_once is where _dispatch_worker_thread2 is ultimately registered, binding the worker threads to the tasks they will execute.

The calls after that are exactly what the stack trace shows, so I won't repeat them here.

Summary:

  • The asynchronous function can open child threads and executes the task on one of them.
  • The task is not executed immediately, and the current thread is not blocked. A minimal demonstration follows.
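
A minimal caller-side sketch of both points (the worker thread id is whatever the pool hands out; the semaphore only keeps the process alive long enough to see the print):

#include <dispatch/dispatch.h>
#include <pthread.h>
#include <stdio.h>

int main(void) {
	dispatch_queue_t queue = dispatch_queue_create("fm", DISPATCH_QUEUE_CONCURRENT);
	dispatch_semaphore_t done = dispatch_semaphore_create(0);

	printf("caller thread: %p\n", (void *)pthread_self());
	dispatch_async(queue, ^{
		// Usually a pool worker thread, not the caller's thread, and
		// dispatch_async has already returned by the time this runs.
		printf("block thread:  %p\n", (void *)pthread_self());
		dispatch_semaphore_signal(done);
	});
	printf("dispatch_async returned without waiting\n");
	dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
	return 0;
}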

Interview Questions

With the summaries above in mind, let's look at a few interview questions:

Question 1

    dispatch_queue_t queue = dispatch_queue_create("fm", DISPATCH_QUEUE_CONCURRENT);
    dispatch_async(queue, ^{
        NSLog(@"1");
    });
    dispatch_async(queue, ^{
        NSLog(@"2");
    });
    dispatch_sync(queue, ^{
        NSLog(@"3");
    });
    NSLog(@"0");
    dispatch_async(queue, ^{
        NSLog(@"7");
    });
    dispatch_async(queue, ^{
        NSLog(@"8");
    });
    dispatch_async(queue, ^{
        NSLog(@"9");
    });
  • 1 and 2 are dispatched asynchronously, so they run on worker threads and exactly when they print is not fixed;
  • 3 and 0 are ordered: 3 prints before 0, because dispatch_sync does not return until its block has run;
  • since the code runs top to bottom, 7, 8 and 9 are only enqueued after 0, so they print after 0;
  • 7, 8 and 9 also run on worker threads, so their relative order is not guaranteed and varies from run to run.


Question 2

    dispatch_queue_t t = dispatch_queue_create("fm", DISPATCH_QUEUE_CONCURRENT);
    NSLog(@"1");
    dispatch_sync(t, ^{
        NSLog(@"2");
        dispatch_async(t, ^{
            NSLog(@"3");
        });
        NSLog(@"4");
    });
    NSLog(@"5");
  • The program runs top to bottom, so 1 prints first.
  • dispatch_sync executes its block immediately, printing 2; the nested dispatch_async for 3 goes to a worker thread, so when 3 prints is not fixed.
  • The block then prints 4, and 5 only prints after the sync block has finished, so 4 always comes before 5; 3 can interleave anywhere after it is enqueued.


Question 3

What changes if we swap Question 2's concurrent queue for a serial one?

    dispatch_queue_t t = dispatch_queue_create("fm", DISPATCH_QUEUE_SERIAL);
    NSLog(@"1");
    dispatch_sync(t, ^{
        NSLog(@"2");
        dispatch_async(t, ^{
            NSLog(@"3");
        });
        NSLog(@"4");
        sleep(3);
    });
    NSLog(@"5");
  • The program runs top to bottom, so 1 prints first.
  • Then dispatch_sync runs: its block is enqueued on the serial queue (the same happens in Question 2) and executed immediately, printing 2.
  • Next comes dispatch_async, which also enqueues its block on the serial queue (Question 2 enqueues it too). But a serial queue is strictly FIFO and, unlike a concurrent queue, waits for the current task to finish before starting the next one. Therefore:
  • after 2 we get 4, then sleep(3); only after the whole sync block finishes does the queue get around to running 3 asynchronously.
  • The relative order of 3 and 5 is not fixed; they race once the sync block returns.


Question 4

- (void)test4 {
    self.num = 0;
    for (int i = 0; i < 100; i ++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            self.num ++;
        });
    }
    NSLog(@"self.num = %d",self.num);
}
  • The for loop is only used to submit the async blocks; after the 100 submissions it immediately falls through and prints num.
  • At that point only some of the async blocks may have run: creating threads and executing blocks takes time, and the for loop does not wait for any of it, so the printed value is less than or equal to 100. (A sketch of what explicit waiting changes follows this list.)
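
As a hedged sketch of what waiting would change: if each block is tracked with a dispatch_group and the increment is made atomic, the printed value is exactly 100 (plain C here instead of the Objective-C property above):

#include <dispatch/dispatch.h>
#include <stdatomic.h>
#include <stdio.h>

static _Atomic int num = 0; // global so the blocks can update it; atomic to avoid lost increments

int main(void) {
	dispatch_group_t group = dispatch_group_create();
	dispatch_queue_t q = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

	for (int i = 0; i < 100; i++) {
		dispatch_group_async(group, q, ^{
			atomic_fetch_add(&num, 1);
		});
	}
	// Unlike the loop above, here we explicitly wait for all 100 blocks.
	dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
	printf("num = %d\n", atomic_load(&num)); // always 100
	return 0;
}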

In contrast with that question, consider the following:

Question 5

- (void)test3 {
    self.num = 0;
    while (self.num < 100) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            self.num ++;
        });
    }
    NSLog(@"self.num = %d",self.num);
}
  • With the for loop replaced by a while loop, we know num must have reached 100, otherwise the loop could never exit.
  • Between leaving the loop and executing the NSLog, blocks still running on worker threads may keep incrementing it, so the result here is always greater than or equal to 100.