iOS Internals (16) - GCD (Part 3)

1. The Underlying Principle of Queue Creation

Create a queue and dispatch a task to it synchronously:

dispatch_sync(dispatch_queue_create("com.test", NULL), ^{
    NSLog(@"GCD");
});

When this GCD code runs, dispatch_sync is called first, and then the block inside it is executed.

Calling the synchronous function can trigger a deadlock. Let's look at how the underlying implementation handles this.

Grab the official libdispatch source code.

From the API hint we know the synchronous call is declared as dispatch_sync(dispatch_queue_t queue, dispatch_block_t block). Search for dispatch_sync(dispatch_ in libdispatch:

Following the call chain dispatch_sync --> _dispatch_sync_f --> _dispatch_sync_f_inline:

static inline void
_dispatch_sync_f_inline(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func, uintptr_t dc_flags)
{
	if (likely(dq->dq_width == 1)) {
		return _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags);
	}

	if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
		DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
	}

	dispatch_lane_t dl = upcast(dq)._dl;
	// Global concurrent queues and queues bound to non-dispatch threads
	// always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
	if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
		return _dispatch_sync_f_slow(dl, ctxt, func, 0, dl, dc_flags);
	}

	if (unlikely(dq->do_targetq->do_targetq)) {
		return _dispatch_sync_recurse(dl, ctxt, func, dc_flags);
	}
	_dispatch_introspection_sync_begin(dl);
	_dispatch_sync_invoke_and_complete(dl, ctxt, func DISPATCH_TRACE_ARG(
			_dispatch_trace_item_sync_push_pop(dq, ctxt, func, dc_flags)));
}

After entering the function, checks are performed in order. When dq_width == 1 (a serial queue), it calls _dispatch_barrier_sync_f, a barrier function. In other words, a synchronous dispatch onto a serial queue is implemented as a barrier.
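As a quick illustration (my own minimal example, not from the source), on a serial queue a plain dispatch_sync and dispatch_barrier_sync end up on the same barrier path, so they behave the same way:

dispatch_queue_t serial = dispatch_queue_create("com.test.serial", DISPATCH_QUEUE_SERIAL);

// dq_width == 1, so dispatch_sync takes the _dispatch_barrier_sync_f path anyway;
// both calls below block the caller and run their block exclusively on the queue.
dispatch_sync(serial, ^{
    NSLog(@"plain sync");
});
dispatch_barrier_sync(serial, ^{
    NSLog(@"barrier sync");
});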

Step into _dispatch_barrier_sync_f --> _dispatch_barrier_sync_f_inline:

static inline void
_dispatch_barrier_sync_f_inline(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func, uintptr_t dc_flags)
{
	// Get the thread ID, used to identify the thread -- mach pthread --
	dispatch_tid tid = _dispatch_tid_self();

	if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
		DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
	}

	dispatch_lane_t dl = upcast(dq)._dl;
	
	
	if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(dl, tid))) {
		return _dispatch_sync_f_slow(dl, ctxt, func, DC_FLAG_BARRIER, dl,
				DC_FLAG_BARRIER | dc_flags);
	}

	if (unlikely(dl->do_targetq->do_targetq)) {
		return _dispatch_sync_recurse(dl, ctxt, func,
				DC_FLAG_BARRIER | dc_flags);
	}
	_dispatch_introspection_sync_begin(dl);
	_dispatch_lane_barrier_sync_invoke_and_complete(dl, ctxt, func
			DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
					dq, ctxt, func, dc_flags | DC_FLAG_BARRIER)));
}

It first obtains a thread ID, and _dispatch_queue_try_acquire_barrier_sync uses this tid. Enter that function:

_dispatch_queue_try_acquire_barrier_sync --> _dispatch_queue_try_acquire_barrier_sync_and_suspend :

static inline bool
_dispatch_queue_try_acquire_barrier_sync_and_suspend(dispatch_lane_t dq,
		uint32_t tid, uint64_t suspend_count)
{
	uint64_t init  = DISPATCH_QUEUE_STATE_INIT_VALUE(dq->dq_width);
	uint64_t value = DISPATCH_QUEUE_WIDTH_FULL_BIT | DISPATCH_QUEUE_IN_BARRIER |
			_dispatch_lock_value_from_tid(tid) |
			(suspend_count * DISPATCH_QUEUE_SUSPEND_INTERVAL);
	uint64_t old_state, new_state;
	// Atomically read the state from the bottom layer -- state info - current queue - thread
	return os_atomic_rmw_loop2o(dq, dq_state, old_state, new_state, acquire, {
		uint64_t role = old_state & DISPATCH_QUEUE_ROLE_MASK;
		if (old_state != (init | role)) {
			os_atomic_rmw_loop_give_up(break);
		}
		new_state = value | role;
	});
}

It reads the queue state from the system and returns whether the barrier could be acquired. Back at the call site of _dispatch_queue_try_acquire_barrier_sync: if acquiring the barrier fails, _dispatch_sync_f_slow is called. When a deadlock crashes the program, _dispatch_sync_f_slow shows up in the backtrace, so the place where _dispatch_queue_try_acquire_barrier_sync fails is exactly where the deadlock arises.

Enter _dispatch_sync_f_slow:

static void
_dispatch_sync_f_slow(dispatch_queue_class_t top_dqu, void *ctxt,
		dispatch_function_t func, uintptr_t top_dc_flags,
		dispatch_queue_class_t dqu, uintptr_t dc_flags)
{
	//.. omitted

	_dispatch_trace_item_push(top_dq, &dsc);
	__DISPATCH_WAIT_FOR_QUEUE__(&dsc, dq);

	if (dsc.dsc_func == NULL) {
		dispatch_queue_t stop_dq = dsc.dc_other;
		return _dispatch_sync_complete_recurse(top_dq, stop_dq, top_dc_flags);
	}

	_dispatch_introspection_sync_begin(top_dq);
	_dispatch_trace_item_pop(top_dq, &dsc);
	_dispatch_sync_invoke_and_complete_recurse(top_dq, ctxt, func,top_dc_flags
			DISPATCH_TRACE_ARG(&dsc));
}

Step into _dispatch_trace_item_push to take a closer look:

static inline void
_dispatch_trace_item_push(dispatch_queue_class_t dqu, dispatch_object_t _tail)
{
	if (unlikely(DISPATCH_QUEUE_PUSH_ENABLED())) {
		_dispatch_trace_continuation(dqu._dq, _tail._do, DISPATCH_QUEUE_PUSH);
	}

	_dispatch_trace_item_push_inline(dqu._dq, _tail._do);
	_dispatch_introspection_queue_push(dqu, _tail);
}

You can see it pushes the current task onto the queue. A queue is a data structure, and this is plainly an enqueue (push) operation.
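As a small illustration (my own example, not from the source), tasks pushed onto a serial queue are drained in FIFO order:

dispatch_queue_t serial = dispatch_queue_create("com.test.fifo", DISPATCH_QUEUE_SERIAL);

// The three tasks are pushed in order and dequeued in the same FIFO order: 1, 2, 3.
dispatch_async(serial, ^{ NSLog(@"task 1"); });
dispatch_async(serial, ^{ NSLog(@"task 2"); });
dispatch_async(serial, ^{ NSLog(@"task 3"); });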

After the push, execution moves on to __DISPATCH_WAIT_FOR_QUEUE__, which makes the caller wait; this kind of wait call mostly shows up in deadlocks. In fact, when a deadlock crashes the program, this function also appears in the thread backtrace.

Enter this function:

static void
__DISPATCH_WAIT_FOR_QUEUE__(dispatch_sync_context_t dsc, dispatch_queue_t dq)
{
	uint64_t dq_state = _dispatch_wait_prepare(dq);
	if (unlikely(_dq_state_drain_locked_by(dq_state, dsc->dsc_waiter))) {
		DISPATCH_CLIENT_CRASH((uintptr_t)dq_state,
				"dispatch_sync called on queue "
				"already owned by current thread");
	}
    //... omitted
}

It uses dq to obtain the wait state and, if the queue is already being drained by the current thread, raises the crash: the queue is already owned by the current thread. Follow _dq_state_drain_locked_by --> _dispatch_lock_is_locked_by:

static inline bool
_dispatch_lock_is_locked_by(dispatch_lock lock_value, dispatch_tid tid)
{
	// equivalent to _dispatch_lock_owner(lock_value) == tid
	// ^ (XOR): identical bits yield 0, otherwise 1
	return ((lock_value ^ tid) & DLOCK_OWNER_MASK) == 0;
}

It XORs the lock value with the tid to decide whether the thread that owns the lock is the same thread that is about to wait. If they are the same, it eventually returns YES, and we land back in the crash above.
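A small worked sketch of that check (hypothetical values; the mask here only illustrates "strip the low flag bits"):

#include <stdint.h>
#include <stdio.h>

// Illustrative mask: the low bits carry flags and are ignored when comparing owners.
#define DLOCK_OWNER_MASK ((uint32_t)0xfffffffc)

int main(void) {
    uint32_t tid        = 0x1a2b3c40;        // hypothetical thread id
    uint32_t lock_value = 0x1a2b3c40 | 0x1;  // same owner, plus a low flag bit

    // Same shape as _dispatch_lock_is_locked_by: XOR cancels identical owner bits,
    // the mask drops the flag bits, and a zero result means "locked by tid".
    int owned = ((lock_value ^ tid) & DLOCK_OWNER_MASK) == 0;
    printf("owned by current thread: %d\n", owned);  // prints 1
    return 0;
}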

Summary:

  1. Tasks are added to the queue via a push; the queue follows FIFO, so synchronous tasks run in order.
  2. A barrier and a plain sync are almost the same on a serial queue.
  3. First the thread ID (tid) is obtained.
  4. Then the thread/queue state is obtained.
  5. The tid is compared with the waiting lock value; if they match, the exception is raised: that is the deadlock (see the sketch below).
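A minimal sketch of the deadlock described above (my own example; the queue name is arbitrary):

dispatch_queue_t serial = dispatch_queue_create("com.test.deadlock", DISPATCH_QUEUE_SERIAL);

dispatch_async(serial, ^{
    // We are now running on `serial`; its lock state records this thread's tid.
    dispatch_sync(serial, ^{
        // Never reached: the tid matches the lock owner, so __DISPATCH_WAIT_FOR_QUEUE__
        // crashes with "dispatch_sync called on queue already owned by current thread".
        NSLog(@"inner block");
    });
});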

2. Execution of Synchronous Tasks

Set a breakpoint inside the block above and run the program:

The backtrace shows two key function calls. Enter _dispatch_lane_barrier_sync_invoke_and_complete:

static void
_dispatch_lane_barrier_sync_invoke_and_complete(dispatch_lane_t dq,
		void *ctxt, dispatch_function_t func DISPATCH_TRACE_ARG(void *dc))
{
	_dispatch_sync_function_invoke_inline(dq, ctxt, func);
	_dispatch_trace_item_complete(dc);
	if (unlikely(dq->dq_items_tail || dq->dq_width > 1)) {
		return _dispatch_lane_barrier_complete(dq, 0, 0);
	}

	// Presence of any of these bits requires more work that only
	// _dispatch_*_barrier_complete() handles properly
	//
	// Note: testing for RECEIVED_OVERRIDE or RECEIVED_SYNC_WAIT without
	// checking the role is sloppy, but is a super fast check, and neither of
	// these bits should be set if the lock was never contended/discovered.
	const uint64_t fail_unlock_mask = DISPATCH_QUEUE_SUSPEND_BITS_MASK |
			DISPATCH_QUEUE_ENQUEUED | DISPATCH_QUEUE_DIRTY |
			DISPATCH_QUEUE_RECEIVED_OVERRIDE | DISPATCH_QUEUE_SYNC_TRANSFER |
			DISPATCH_QUEUE_RECEIVED_SYNC_WAIT;
	uint64_t old_state, new_state;

	// similar to _dispatch_queue_drain_try_unlock
	os_atomic_rmw_loop2o(dq, dq_state, old_state, new_state, release, {
		new_state  = old_state - DISPATCH_QUEUE_SERIAL_DRAIN_OWNED;
		new_state &= ~DISPATCH_QUEUE_DRAIN_UNLOCK_MASK;
		new_state &= ~DISPATCH_QUEUE_MAX_QOS_MASK;
		if (unlikely(old_state & fail_unlock_mask)) {
			os_atomic_rmw_loop_give_up({
				return _dispatch_lane_barrier_complete(dq, 0, 0);
			});
		}
	});
	if (_dq_state_is_base_wlh(old_state)) {
		_dispatch_event_loop_assert_not_owned((dispatch_wlh_t)dq);
	}
}

You can see that here it again fetches the thread/queue state from the OS bottom layer. Invoking the block we set the breakpoint in depends on a thread, so the state has to be obtained from the system first; only with a good thread state can the block be executed.

Move on to _dispatch_lane_barrier_complete:

static void
_dispatch_lane_barrier_complete(dispatch_lane_class_t dqu, dispatch_qos_t qos,
		dispatch_wakeup_flags_t flags)
{
	dispatch_queue_wakeup_target_t target = DISPATCH_QUEUE_WAKEUP_NONE;
	dispatch_lane_t dq = dqu._dl;

	if (dq->dq_items_tail && !DISPATCH_QUEUE_IS_SUSPENDED(dq)) {
		struct dispatch_object_s *dc = _dispatch_queue_get_head(dq);
		if (likely(dq->dq_width == 1 || _dispatch_object_is_barrier(dc))) {
			if (_dispatch_object_is_waiter(dc)) {
				return _dispatch_lane_drain_barrier_waiter(dq, dc, flags, 0);
			}
		} else if (dq->dq_width > 1 && !_dispatch_object_is_barrier(dc)) {
			return _dispatch_lane_drain_non_barriers(dq, dc, flags);
		}

		if (!(flags & DISPATCH_WAKEUP_CONSUME_2)) {
			_dispatch_retain_2(dq);
			flags |= DISPATCH_WAKEUP_CONSUME_2;
		}
		target = DISPATCH_QUEUE_WAKEUP_TARGET;
	}

	uint64_t owned = DISPATCH_QUEUE_IN_BARRIER +
			dq->dq_width * DISPATCH_QUEUE_WIDTH_INTERVAL;
	return _dispatch_lane_class_barrier_complete(dq, qos, flags, target, owned);
}

When the queue is serial, or the item at the head is a barrier, execution goes into _dispatch_lane_drain_barrier_waiter. This again shows how similar serial queues and barriers are.
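For context (an illustrative example of my own), on a concurrent queue a barrier block runs exclusively, like a one-off serial section, which is why the serial and barrier code paths overlap so much:

dispatch_queue_t conc = dispatch_queue_create("com.test.concurrent", DISPATCH_QUEUE_CONCURRENT);

dispatch_async(conc, ^{ NSLog(@"read 1"); });
dispatch_async(conc, ^{ NSLog(@"read 2"); });
// The barrier waits for the reads above, runs alone, then lets later tasks run.
dispatch_barrier_async(conc, ^{ NSLog(@"exclusive write"); });
dispatch_async(conc, ^{ NSLog(@"read 3"); });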

Continue down into _dispatch_lane_class_barrier_complete --> _dispatch_queue_push_queue:

static inline void
_dispatch_queue_push_queue(dispatch_queue_t tq, dispatch_queue_class_t dq,
		uint64_t dq_state)
{
	_dispatch_trace_item_push(tq, dq);
	return dx_push(tq, dq, _dq_state_max_qos(dq_state));
}

Expanding dx_push gives:

#define dx_push(x, y, z) dx_vtable(x)->dq_push(x, y, z)

Next, look for dq_push. This member is set in many places; take the most typical one, .dq_push = _dispatch_root_queue_push, and search for _dispatch_root_queue_push:

void
_dispatch_root_queue_push(dispatch_queue_global_t rq, dispatch_object_t dou,
		dispatch_qos_t qos)
{
    //... omitted
#if HAVE_PTHREAD_WORKQUEUE_QOS
	if (_dispatch_root_queue_push_needs_override(rq, qos)) {
		return _dispatch_root_queue_push_override(rq, dou, qos);
	}
#else
	(void)qos;
#endif
	_dispatch_root_queue_push_inline(rq, dou, dou, 1);
}

Step into _dispatch_root_queue_push_inline --> _dispatch_root_queue_poke --> _dispatch_root_queue_poke_slow --> _dispatch_root_queues_init --> dispatch_once_f:

void
dispatch_once_f(dispatch_once_t *val, void *ctxt, dispatch_function_t func)
{
	// If this has already run once, it will not run again
	dispatch_once_gate_t l = (dispatch_once_gate_t)val;
	//DLOCK_ONCE_DONE
#if !DISPATCH_ONCE_INLINE_FASTPATH || DISPATCH_ONCE_USE_QUIESCENT_COUNTER
	uintptr_t v = os_atomic_load(&l->dgo_once, acquire);
	if (likely(v == DLOCK_ONCE_DONE)) {
		return;
	}
#if DISPATCH_ONCE_USE_QUIESCENT_COUNTER
	if (likely(DISPATCH_ONCE_IS_GEN(v))) {
		return _dispatch_once_mark_done_if_quiesced(l, v);
	}
#endif
#endif
	// Condition met -- try to enter the gate
	if (_dispatch_once_gate_tryenter(l)) {
		// One-shot (singleton-style) call -- v -> DLOCK_ONCE_DONE
		return _dispatch_once_callout(l, ctxt, func);
	}
	return _dispatch_once_wait(l);
}

static void
_dispatch_once_callout(dispatch_once_gate_t l, void *ctxt,
		dispatch_function_t func)
{
	// block()
	_dispatch_client_callout(ctxt, func);
	_dispatch_once_gate_broadcast(l);
}

This is in fact the implementation behind dispatch_once (the singleton mechanism). _dispatch_once_callout is also one of the symbols we saw printed earlier, which confirms this flow is correct.
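For reference, the user-level counterpart is the familiar dispatch_once singleton pattern (an illustrative example of my own, not from the source):

+ (instancetype)sharedInstance {
    static id instance = nil;
    static dispatch_once_t onceToken;
    // Backed by dispatch_once_f: the block runs exactly once, then onceToken is
    // marked DLOCK_ONCE_DONE and subsequent calls return early.
    dispatch_once(&onceToken, ^{
        instance = [[self alloc] init];
    });
    return instance;
}

Now enter _dispatch_client_callout: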

void
_dispatch_client_callout(void *ctxt, dispatch_function_t f)
{
	_dispatch_get_tsd_base();
	void *u = _dispatch_get_unwind_tsd();
	if (likely(!u)) return f(ctxt);
	_dispatch_set_unwind_tsd(NULL);
	f(ctxt);
	_dispatch_free_unwind_tsd();
	_dispatch_set_unwind_tsd(u);
}

The f(ctxt) here is exactly the function form of the block we wrote. In the synchronous path, the system does not need to save the block or worry about preserving the thread, because the block is invoked right away during execution. In the asynchronous path, however, it does have to be saved.
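To make the difference concrete (my own example, not from the source), a synchronous block is invoked before the call returns, while an asynchronous block is wrapped, saved and invoked later on a worker thread:

dispatch_queue_t q = dispatch_queue_create("com.test.invoke", DISPATCH_QUEUE_SERIAL);

dispatch_sync(q, ^{
    NSLog(@"sync: invoked immediately, caller is blocked until this returns");
});
NSLog(@"the sync block has already finished here");

dispatch_async(q, ^{
    NSLog(@"async: wrapped into a continuation and invoked later");
});
NSLog(@"the async block has probably not run yet here");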

3. Creation and Execution of Asynchronous Tasks

Search for dispatch_async( directly in the source, and follow dispatch_async --> _dispatch_continuation_init:

static inline dispatch_qos_t
_dispatch_continuation_init(dispatch_continuation_t dc,
		dispatch_queue_class_t dqu, dispatch_block_t work,
		dispatch_block_flags_t flags, uintptr_t dc_flags)
{
    // omitted...
    
	dispatch_function_t func = _dispatch_Block_invoke(work);
	if (dc_flags & DC_FLAG_CONSUME) {
		func = _dispatch_call_block_and_release;
	}
	return _dispatch_continuation_init_f(dc, dqu, ctxt, func, flags, dc_flags);
}

Here the current task is processed once by _dispatch_Block_invoke, converting it into a dispatch_function_t. Continue down to _dispatch_continuation_init_f:

static inline dispatch_qos_t
_dispatch_continuation_init_f(dispatch_continuation_t dc,
		dispatch_queue_class_t dqu, void *ctxt, dispatch_function_t f,
		dispatch_block_flags_t flags, uintptr_t dc_flags)
{
	pthread_priority_t pp = 0;
	dc->dc_flags = dc_flags | DC_FLAG_ALLOCATED;
	dc->dc_func = f;
	dc->dc_ctxt = ctxt;
	// in this context DISPATCH_BLOCK_HAS_PRIORITY means that the priority
	// should not be propagated, only taken from the handler if it has one
	if (!(flags & DISPATCH_BLOCK_HAS_PRIORITY)) {
		pp = _dispatch_priority_propagate();
	}
	_dispatch_continuation_voucher_set(dc, flags);
	return _dispatch_continuation_priority_set(dc, dqu, pp, flags);
}

You can see it stores the task function directly into dc_func, and wraps the other parameters up into the dc (a dispatch_continuation) to be saved.
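As a rough sketch of that idea (a simplified analogue of my own, not libdispatch code; work_func_t and call_block_and_release are hypothetical names), a block can be saved as a (ctxt, func) pair and later invoked exactly like f(ctxt):

#import <Foundation/Foundation.h>

typedef void (*work_func_t)(void *ctxt);

// Hypothetical analogue of _dispatch_call_block_and_release (illustration only).
static void call_block_and_release(void *ctxt) {
    dispatch_block_t block = (__bridge_transfer dispatch_block_t)ctxt;
    block();
}

int main(void) {
    @autoreleasepool {
        // "init": wrap the block into a continuation-like (ctxt, func) pair and store it.
        dispatch_block_t work = ^{ NSLog(@"async work"); };
        void *dc_ctxt = (__bridge_retained void *)[work copy];
        work_func_t dc_func = call_block_and_release;

        // Later (e.g. on a worker thread) it is invoked just like f(ctxt).
        dc_func(dc_ctxt);
    }
    return 0;
}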

So in the asynchronous case, the block is saved in function form. That gives a rough picture of _dispatch_continuation_init under dispatch_async. Continuing inward, look at _dispatch_continuation_async --> dx_push under dispatch_async, which expands to roughly the same macro as in the synchronous path:

#define dx_push(x, y, z) dx_vtable(x)->dq_push(x, y, z)

Then look for dq_push. All queues are ultimately derived from the root queues, so we again follow .dq_push = _dispatch_root_queue_push.

Go to _dispatch_root_queue_push --> _dispatch_root_queue_push_inline --> _dispatch_root_queue_poke --> _dispatch_root_queue_poke_slow. We already looked at _dispatch_root_queues_init here above; keep going down to:

static void
_dispatch_root_queue_poke_slow(dispatch_queue_global_t dq, int n, int floor)
{
    //.. omitted
    do {
		can_request = t_count < floor ? 0 : t_count - floor;
		if (remaining > can_request) {
			_dispatch_root_queue_debug("pthread pool reducing request from %d to %d",
					remaining, can_request);
			os_atomic_sub2o(dq, dgq_pending, remaining - can_request, relaxed);
			remaining = can_request;
		}
		if (remaining == 0) {
			_dispatch_root_queue_debug("pthread pool is full for root queue: "
					"%p", dq);
			return;
		}
	} while (!os_atomic_cmpxchgvw2o(dq, dgq_thread_pool_size, t_count,
			t_count - remaining, &t_count, acquire));
	//.. omitted
}

Here, by checking whether t_count has dropped below floor, it works out how many threads can still be requested (can_request). If remaining exceeds can_request, os_atomic_sub2o subtracts the excess from dgq_pending and remaining is clamped down to can_request; if nothing can be requested, the function returns because the pool is full. This is essentially bookkeeping on the thread pool, checking whether it can still create threads, in preparation for the thread creation below.
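A tiny worked example of that clamp (hypothetical numbers, plain C for illustration):

#include <stdio.h>

int main(void) {
    int t_count   = 2;  // threads the pool can still provide (hypothetical)
    int floor     = 0;  // threads that must be kept in reserve
    int remaining = 5;  // threads we would like to poke awake

    int can_request = t_count < floor ? 0 : t_count - floor;  // 2
    if (remaining > can_request) {
        remaining = can_request;  // the request is reduced from 5 to 2
    }
    printf("threads actually requested: %d\n", remaining);    // prints 2
    return 0;
}

Next there is another do-while loop: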

static void
_dispatch_root_queue_poke_slow(dispatch_queue_global_t dq, int n, int floor)
{
    //.. omitted
	do {
		_dispatch_retain(dq); // released in _dispatch_worker_thread
		while ((r = pthread_create(pthr, attr, _dispatch_worker_thread, dq))) {
			if (r != EAGAIN) {
				(void)dispatch_assume_zero(r);
			}
			_dispatch_temporary_resource_shortage();
		}
	} while (--remaining);
	//.. omitted
}

Here the required threads are created in a loop with pthread_create.
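A minimal sketch of that pattern (my own simplified example, not libdispatch's actual worker setup; worker_main is a hypothetical stand-in for _dispatch_worker_thread):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

// Hypothetical worker entry point; libdispatch passes _dispatch_worker_thread here.
static void *worker_main(void *ctxt) {
    printf("worker started for queue %p\n", ctxt);
    return NULL;
}

int main(void) {
    int remaining = 3;   // how many threads the pool decided to create
    void *queue = NULL;  // stand-in for the root queue pointer
    do {
        pthread_t thread;
        if (pthread_create(&thread, NULL, worker_main, queue) != 0) {
            break;       // the real code keeps retrying only on EAGAIN
        }
        pthread_detach(thread);
    } while (--remaining);

    sleep(1);            // crude wait so the workers get a chance to run
    return 0;
}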