GCD needs no introduction: most of us reach for it whenever we need multithreading. So today, let's talk about GCD.
I. What is GCD?
- GCD stands for Grand Central Dispatch.
- It is a pure C API offering a large set of powerful functions.
- GCD is Apple's solution for parallel execution on multicore hardware.
- GCD automatically takes advantage of the available CPU cores (dual-core, quad-core, and so on).
- GCD manages the thread lifecycle for you (creating threads, scheduling tasks, destroying threads).
- You only tell GCD what work to run; you never write thread-management code yourself.

GCD clearly earns its keep, so let's talk about how to use it.
II. Using GCD
Two important concepts come first.
- Queue: a container for tasks (not threads). Tasks leave a queue in the order they entered it (FIFO). GCD has two kinds of queue: serial and concurrent.
- Serial queue: like candied haws on a stick, tasks line up single file and are eaten one at a time.
- Concurrent queue: like a multi-lane road, several tasks can be in flight at once, and one may overtake another.
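The FIFO rule can be sketched as a tiny ring buffer in plain C. This is a toy model for the ordering guarantee only, not GCD's real data structure:

```c
#include <assert.h>

// Toy fixed-capacity FIFO: what goes in first comes out first.
// No overflow/underflow checks, for brevity.
#define CAP 8
typedef struct { int buf[CAP]; unsigned head, tail; } fifo_t;

static void fifo_push(fifo_t *q, int v) { q->buf[q->tail++ % CAP] = v; }
static int  fifo_pop (fifo_t *q)        { return q->buf[q->head++ % CAP]; }
```

Pushing tasks 1, 2, 3 and popping yields 1, 2, 3 in that order, which is exactly the dequeue order a GCD queue guarantees (even a concurrent queue *starts* tasks in FIFO order; it just doesn't wait for one to finish before starting the next).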
Task: a task is one candied haw on that stick, or one car in those lanes: the piece of code you want to run, usually packaged in a block. There are two ways to submit a task: synchronously and asynchronously.
- Synchronous (dispatch_sync): never creates a new thread. The call blocks the current thread until the submitted task has finished; anything queued after it must wait its turn.
- Asynchronous (dispatch_async): returns immediately, like tapping through whichever subway gate is open instead of waiting in one line. Note that async has the *ability* to create a new thread, but it does not always do so.
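The sync/async distinction can be sketched in portable C with pthreads. This is an analogy, not GCD's implementation: real dispatch_async draws from a thread pool rather than calling pthread_create per task:

```c
#include <pthread.h>
#include <stdbool.h>

// "Task": records which thread ran it.
typedef struct { pthread_t ran_on; bool done; } task_t;

static void run_task(task_t *t) { t->ran_on = pthread_self(); t->done = true; }

// Synchronous submit: execute inline on the calling thread; return only when done.
static void submit_sync(task_t *t) { run_task(t); }

static void *worker(void *arg) { run_task((task_t *)arg); return NULL; }

// Asynchronous submit: hand the task to another thread and return immediately.
static pthread_t submit_async(task_t *t) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, t);
    return tid;
}
```

A sync-submitted task runs on the submitting thread itself; an async-submitted task runs on a different one.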
Usage: easiest to just show the code.
// Create a serial queue
dispatch_queue_t queue_serial = dispatch_queue_create("serialQueue", DISPATCH_QUEUE_SERIAL);
// Create a concurrent queue
dispatch_queue_t queue_concurrent = dispatch_queue_create("concurrentQueue", DISPATCH_QUEUE_CONCURRENT);
// Sync task + serial queue
dispatch_sync(queue_serial, ^{
    NSLog(@"我是任务1---%@", [NSThread currentThread]);
});
// Async task + serial queue
dispatch_async(queue_serial, ^{
    NSLog(@"我是任务2---%@", [NSThread currentThread]);
});
// Sync task + concurrent queue
dispatch_sync(queue_concurrent, ^{
    NSLog(@"我是任务3---%@", [NSThread currentThread]);
});
// Async task + concurrent queue
dispatch_async(queue_concurrent, ^{
    NSLog(@"我是任务4---%@", [NSThread currentThread]);
});
As you can see, there are four possible combinations:
- Sync task + serial queue:
for (int i = 0; i < 10; i++) {
    dispatch_sync(queue_serial, ^{
        NSLog(@"我是任务1---%@", [NSThread currentThread]);
    });
}
The log output:
2021-04-20 14:55:03.187698+0800 newTest[73597:6009377] 我是任务1---<NSThread: 0x282514980>{number = 1, name = main}
2021-04-20 14:55:03.187840+0800 newTest[73597:6009377] 我是任务1---<NSThread: 0x282514980>{number = 1, name = main}
2021-04-20 14:55:03.187901+0800 newTest[73597:6009377] 我是任务1---<NSThread: 0x282514980>{number = 1, name = main}
2021-04-20 14:55:03.187950+0800 newTest[73597:6009377] 我是任务1---<NSThread: 0x282514980>{number = 1, name = main}
2021-04-20 14:55:03.187996+0800 newTest[73597:6009377] 我是任务1---<NSThread: 0x282514980>{number = 1, name = main}
2021-04-20 14:55:03.188041+0800 newTest[73597:6009377] 我是任务1---<NSThread: 0x282514980>{number = 1, name = main}
2021-04-20 14:55:03.188087+0800 newTest[73597:6009377] 我是任务1---<NSThread: 0x282514980>{number = 1, name = main}
2021-04-20 14:55:03.188232+0800 newTest[73597:6009377] 我是任务1---<NSThread: 0x282514980>{number = 1, name = main}
2021-04-20 14:55:03.188325+0800 newTest[73597:6009377] 我是任务1---<NSThread: 0x282514980>{number = 1, name = main}
2021-04-20 14:55:03.188404+0800 newTest[73597:6009377] 我是任务1---<NSThread: 0x282514980>{number = 1, name = main}
No new thread is created: every task runs on the main thread, because dispatch_sync executes the block on the calling thread.
- Async task + serial queue:
for (int i = 0; i < 10; i++) {
    dispatch_async(queue_serial, ^{
        NSLog(@"我是任务2---%@", [NSThread currentThread]);
    });
}
Log output:
2021-04-20 15:02:46.011832+0800 newTest[73645:6011697] 我是任务2---<NSThread: 0x280a7ce00>{number = 6, name = (null)}
2021-04-20 15:02:46.011963+0800 newTest[73645:6011697] 我是任务2---<NSThread: 0x280a7ce00>{number = 6, name = (null)}
2021-04-20 15:02:46.012014+0800 newTest[73645:6011697] 我是任务2---<NSThread: 0x280a7ce00>{number = 6, name = (null)}
2021-04-20 15:02:46.012062+0800 newTest[73645:6011697] 我是任务2---<NSThread: 0x280a7ce00>{number = 6, name = (null)}
2021-04-20 15:02:46.012109+0800 newTest[73645:6011697] 我是任务2---<NSThread: 0x280a7ce00>{number = 6, name = (null)}
2021-04-20 15:02:46.012155+0800 newTest[73645:6011697] 我是任务2---<NSThread: 0x280a7ce00>{number = 6, name = (null)}
2021-04-20 15:02:46.012209+0800 newTest[73645:6011697] 我是任务2---<NSThread: 0x280a7ce00>{number = 6, name = (null)}
2021-04-20 15:02:46.012366+0800 newTest[73645:6011697] 我是任务2---<NSThread: 0x280a7ce00>{number = 6, name = (null)}
2021-04-20 15:02:46.013143+0800 newTest[73645:6011697] 我是任务2---<NSThread: 0x280a7ce00>{number = 6, name = (null)}
2021-04-20 15:02:46.013275+0800 newTest[73645:6011697] 我是任务2---<NSThread: 0x280a7ce00>{number = 6, name = (null)}
We are off the main thread, but all ten tasks share the same worker thread: async on a serial queue creates at most one new thread.
- Sync task + concurrent queue:
for (int i = 0; i < 10; i++) {
    dispatch_sync(queue_concurrent, ^{
        NSLog(@"我是任务3---%@", [NSThread currentThread]);
    });
}
Log output:
2021-04-20 15:08:42.200996+0800 newTest[73669:6012830] 我是任务3---<NSThread: 0x280f008c0>{number = 1, name = main}
2021-04-20 15:08:42.201159+0800 newTest[73669:6012830] 我是任务3---<NSThread: 0x280f008c0>{number = 1, name = main}
2021-04-20 15:08:42.201217+0800 newTest[73669:6012830] 我是任务3---<NSThread: 0x280f008c0>{number = 1, name = main}
2021-04-20 15:08:42.201267+0800 newTest[73669:6012830] 我是任务3---<NSThread: 0x280f008c0>{number = 1, name = main}
2021-04-20 15:08:42.201315+0800 newTest[73669:6012830] 我是任务3---<NSThread: 0x280f008c0>{number = 1, name = main}
2021-04-20 15:08:42.201361+0800 newTest[73669:6012830] 我是任务3---<NSThread: 0x280f008c0>{number = 1, name = main}
2021-04-20 15:08:42.201408+0800 newTest[73669:6012830] 我是任务3---<NSThread: 0x280f008c0>{number = 1, name = main}
2021-04-20 15:08:42.201585+0800 newTest[73669:6012830] 我是任务3---<NSThread: 0x280f008c0>{number = 1, name = main}
2021-04-20 15:08:42.201684+0800 newTest[73669:6012830] 我是任务3---<NSThread: 0x280f008c0>{number = 1, name = main}
2021-04-20 15:08:42.201769+0800 newTest[73669:6012830] 我是任务3---<NSThread: 0x280f008c0>{number = 1, name = main}
Again no new thread: even on a concurrent queue, sync runs each block on the calling thread, here the main thread.
- Async task + concurrent queue:
for (int i = 0; i < 10; i++) {
    dispatch_async(queue_concurrent, ^{
        NSLog(@"我是任务4---%@", [NSThread currentThread]);
    });
}
Log output:
2021-04-20 15:10:55.334249+0800 newTest[73681:6013654] 我是任务4---<NSThread: 0x2804e66c0>{number = 5, name = (null)}
2021-04-20 15:10:55.334402+0800 newTest[73681:6013654] 我是任务4---<NSThread: 0x2804e66c0>{number = 5, name = (null)}
2021-04-20 15:10:55.334454+0800 newTest[73681:6013654] 我是任务4---<NSThread: 0x2804e66c0>{number = 5, name = (null)}
2021-04-20 15:10:55.334503+0800 newTest[73681:6013654] 我是任务4---<NSThread: 0x2804e66c0>{number = 5, name = (null)}
2021-04-20 15:10:55.334551+0800 newTest[73681:6013654] 我是任务4---<NSThread: 0x2804e66c0>{number = 5, name = (null)}
2021-04-20 15:10:55.334598+0800 newTest[73681:6013654] 我是任务4---<NSThread: 0x2804e66c0>{number = 5, name = (null)}
2021-04-20 15:10:55.334653+0800 newTest[73681:6013654] 我是任务4---<NSThread: 0x2804e66c0>{number = 5, name = (null)}
2021-04-20 15:10:55.334730+0800 newTest[73681:6013653] 我是任务4---<NSThread: 0x2804e6880>{number = 6, name = (null)}
2021-04-20 15:10:55.334859+0800 newTest[73681:6013654] 我是任务4---<NSThread: 0x2804e66c0>{number = 5, name = (null)}
2021-04-20 15:10:55.335202+0800 newTest[73681:6013652] 我是任务4---<NSThread: 0x2804e30c0>{number = 7, name = (null)}
This time multiple threads are created and the tasks interleave across them.
From these experiments we can draw the following conclusions:
- Synchronous execution never creates a new thread; the block always runs on the calling thread.
- Asynchronous execution on a serial queue creates at most one new thread.
- Only the async + concurrent combination creates multiple threads.
For convenience, GCD also provides two special queues:
- Main queue: dispatch_get_main_queue(), a serial queue bound to the main thread. Note that calling dispatch_sync on the main queue from the main thread crashes the app: it is a deadlock.
- Global queue: dispatch_get_global_queue(0, 0), a concurrent queue.
Now that we know queues and tasks, consider the most common real-world scenario: a batch of time-consuming jobs must run on background threads (often with asynchronous callbacks of their own), and once *all* of them finish we must return to the main thread to refresh the UI. How do we express that?
III. Dispatch Groups
For the scenario above, the first tool to reach for is a dispatch group:
// Create a dispatch group
dispatch_group_t group = dispatch_group_create();
for (int i = 0; i < 10; i++) {
    // Enter the group before starting the task
    dispatch_group_enter(group);
    [HttpManger POST:@"url" parameters:para constructingBodyWithBlock:nil progress:nil success:^(NSURLSessionDataTask * _Nonnull task, id _Nullable responseObject) {
        NSLog(@"Task %d succeeded", i);
        // Leave the group when the task completes
        dispatch_group_leave(group);
    } failure:^(NSURLSessionDataTask * _Nullable task, NSError * _Nonnull error) {
        NSLog(@"Task %d failed", i);
        // Leave the group on failure too
        dispatch_group_leave(group);
    }];
}
// Once every task has left the group, notify back on the main queue
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    NSLog(@"All tasks finished");
});
This is very easy to use, with one caveat: every dispatch_group_enter must be balanced by exactly one dispatch_group_leave. An extra leave crashes, and an extra enter means the notify block never fires; try it if you are curious.
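The enter/leave mechanism is essentially a counter plus a wake-up: enter increments, leave decrements, and the group "completes" when the count returns to zero. A simplified portable-C sketch of that idea (GCD's actual implementation is lock-free and different in detail):

```c
#include <pthread.h>

// Minimal group: a pending-task counter guarded by a mutex,
// plus a condition variable to wake waiters when it hits zero.
typedef struct {
    pthread_mutex_t mu;
    pthread_cond_t  cv;
    int pending;
} group_t;

static void group_init(group_t *g) {
    pthread_mutex_init(&g->mu, NULL);
    pthread_cond_init(&g->cv, NULL);
    g->pending = 0;
}

static void group_enter(group_t *g) {
    pthread_mutex_lock(&g->mu);
    g->pending++;
    pthread_mutex_unlock(&g->mu);
}

static void group_leave(group_t *g) {
    pthread_mutex_lock(&g->mu);
    if (--g->pending == 0) pthread_cond_broadcast(&g->cv);
    pthread_mutex_unlock(&g->mu);
}

// Blocking wait, like dispatch_group_wait; dispatch_group_notify is
// the callback-based flavor of the same completion event.
static void group_wait(group_t *g) {
    pthread_mutex_lock(&g->mu);
    while (g->pending > 0) pthread_cond_wait(&g->cv, &g->mu);
    pthread_mutex_unlock(&g->mu);
}
```

This also makes the pairing rule obvious: a missing leave keeps pending above zero forever, so the wait/notify never completes.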
- There is an even simpler way to use a group:
dispatch_group_t group = dispatch_group_create();
dispatch_group_async(group, dispatch_get_global_queue(0, 0), ^{
    sleep(1);
    NSLog(@"1");
});
dispatch_group_async(group, dispatch_get_global_queue(0, 0), ^{
    NSLog(@"2");
});
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    NSLog(@"3");
});
Log output:
2021-04-20 16:54:38.890454+0800 newTest[74251:6032187] 2
2021-04-20 16:54:42.259026+0800 newTest[74251:6032191] 1
2021-04-20 16:54:42.304722+0800 newTest[74251:6032077] 3
With groups covered, only semaphores remain before we can use GCD with confidence, so let's get to it.
IV. Semaphores
(The original post showed a benchmark chart comparing the common iOS locks; dispatch_semaphore ranks near the top for performance in such comparisons.) Usage looks like this:
// Create a semaphore; the initial value is how many tasks may run at the same time
dispatch_semaphore_t semaphore = dispatch_semaphore_create(2);
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
// Task 1
dispatch_async(queue, ^{
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
    NSLog(@"run task 1");
    sleep(1);
    NSLog(@"complete task 1");
    dispatch_semaphore_signal(semaphore);
});
// Task 2
dispatch_async(queue, ^{
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
    NSLog(@"run task 2");
    sleep(1);
    NSLog(@"complete task 2");
    dispatch_semaphore_signal(semaphore);
});
// Task 3
dispatch_async(queue, ^{
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
    NSLog(@"run task 3");
    sleep(1);
    NSLog(@"complete task 3");
    dispatch_semaphore_signal(semaphore);
});
Log output:
2021-04-20 16:10:55.334249+0800 newTest[73681:6013654] run task 1
2021-04-20 16:10:55.334402+0800 newTest[73681:6013654] run task 2
2021-04-20 16:10:55.334551+0800 newTest[73681:6013654] complete task 1
2021-04-20 16:10:55.334730+0800 newTest[73681:6013653] complete task 2
2021-04-20 16:10:55.334859+0800 newTest[73681:6013654] run task 3
2021-04-20 16:10:55.335202+0800 newTest[73681:6013652] complete task 3
The multi-task scenario from earlier can also be solved with a semaphore; try it yourself.
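dispatch_semaphore is a counting semaphore: wait decrements the count (blocking when it would go below zero), signal increments it. The same throttling pattern with POSIX semaphores, which behave analogously (note that unnamed POSIX semaphores work on Linux but are deprecated on macOS):

```c
#include <semaphore.h>

// With an initial value of 2, two "tasks" get in immediately;
// the third would block, which sem_trywait reports as a failure here.
static int try_three(sem_t *s) {
    int admitted = 0;
    if (sem_trywait(s) == 0) admitted++;   // task 1 enters
    if (sem_trywait(s) == 0) admitted++;   // task 2 enters
    if (sem_trywait(s) == 0) admitted++;   // task 3: count is 0, denied
    return admitted;
}
```

Once a running task signals (sem_post), the waiting task can proceed, which is exactly the "run task 3 only after a slot frees up" behavior in the log above.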
---
The sections above covered GCD's everyday API, though neither exhaustively nor deeply. Next, let's dig into how GCD works underneath. Ready?
V. Under the Hood
1. Creating a queue: dispatch_queue_create()
To explore the internals we need to see how queue creation is implemented, but stepping into dispatch_queue_create in Xcode shows no source. Setting a breakpoint and inspecting the stack, however, reveals calls into libdispatch.dylib, so we grabbed a copy of the libdispatch source (available from Apple's open source site).
Some people can reconstruct a function's behavior from registers and memory, but that is painful, so let's just read the source.
The first thing we see:
dispatch_queue_t
dispatch_queue_create(const char *label, dispatch_queue_attr_t attr)
{
	return _dispatch_lane_create_with_target(label, attr,
			DISPATCH_TARGET_QUEUE_DEFAULT, true);
}
It simply returns _dispatch_lane_create_with_target(label, attr, DISPATCH_TARGET_QUEUE_DEFAULT, true), so let's look at that function's implementation:
DISPATCH_NOINLINE
static dispatch_queue_t
_dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
dispatch_queue_t tq, bool legacy)
{
dispatch_queue_attr_info_t dqai = _dispatch_queue_attr_to_info(dqa);
//
// Step 1: Normalize arguments (qos, overcommit, tq)
//
dispatch_qos_t qos = dqai.dqai_qos;
#if !HAVE_PTHREAD_WORKQUEUE_QOS
if (qos == DISPATCH_QOS_USER_INTERACTIVE) {
dqai.dqai_qos = qos = DISPATCH_QOS_USER_INITIATED;
}
if (qos == DISPATCH_QOS_MAINTENANCE) {
dqai.dqai_qos = qos = DISPATCH_QOS_BACKGROUND;
}
#endif // !HAVE_PTHREAD_WORKQUEUE_QOS
_dispatch_queue_attr_overcommit_t overcommit = dqai.dqai_overcommit;
if (overcommit != _dispatch_queue_attr_overcommit_unspecified && tq) {
if (tq->do_targetq) {
DISPATCH_CLIENT_CRASH(tq, "Cannot specify both overcommit and "
"a non-global target queue");
}
}
if (tq && dx_type(tq) == DISPATCH_QUEUE_GLOBAL_ROOT_TYPE) {
// Handle discrepancies between attr and target queue, attributes win
if (overcommit == _dispatch_queue_attr_overcommit_unspecified) {
if (tq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT) {
overcommit = _dispatch_queue_attr_overcommit_enabled;
} else {
overcommit = _dispatch_queue_attr_overcommit_disabled;
}
}
if (qos == DISPATCH_QOS_UNSPECIFIED) {
qos = _dispatch_priority_qos(tq->dq_priority);
}
tq = NULL;
} else if (tq && !tq->do_targetq) {
// target is a pthread or runloop root queue, setting QoS or overcommit
// is disallowed
if (overcommit != _dispatch_queue_attr_overcommit_unspecified) {
DISPATCH_CLIENT_CRASH(tq, "Cannot specify an overcommit attribute "
"and use this kind of target queue");
}
} else {
if (overcommit == _dispatch_queue_attr_overcommit_unspecified) {
// Serial queues default to overcommit!
overcommit = dqai.dqai_concurrent ?
_dispatch_queue_attr_overcommit_disabled :
_dispatch_queue_attr_overcommit_enabled;
}
}
if (!tq) {
tq = _dispatch_get_root_queue(
qos == DISPATCH_QOS_UNSPECIFIED ? DISPATCH_QOS_DEFAULT : qos,
overcommit == _dispatch_queue_attr_overcommit_enabled)->_as_dq;
if (unlikely(!tq)) {
DISPATCH_CLIENT_CRASH(qos, "Invalid queue attribute");
}
}
//
// Step 2: Initialize the queue
//
if (legacy) {
// if any of these attributes is specified, use non legacy classes
if (dqai.dqai_inactive || dqai.dqai_autorelease_frequency) {
legacy = false;
}
}
const void *vtable;
dispatch_queue_flags_t dqf = legacy ? DQF_MUTABLE : 0;
if (dqai.dqai_concurrent) {
vtable = DISPATCH_VTABLE(queue_concurrent);
} else {
vtable = DISPATCH_VTABLE(queue_serial);
}
switch (dqai.dqai_autorelease_frequency) {
case DISPATCH_AUTORELEASE_FREQUENCY_NEVER:
dqf |= DQF_AUTORELEASE_NEVER;
break;
case DISPATCH_AUTORELEASE_FREQUENCY_WORK_ITEM:
dqf |= DQF_AUTORELEASE_ALWAYS;
break;
}
if (label) {
const char *tmp = _dispatch_strdup_if_mutable(label);
if (tmp != label) {
dqf |= DQF_LABEL_NEEDS_FREE;
label = tmp;
}
}
dispatch_lane_t dq = _dispatch_object_alloc(vtable,
sizeof(struct dispatch_lane_s));
_dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
(dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));
dq->dq_label = label;
dq->dq_priority = _dispatch_priority_make((dispatch_qos_t)dqai.dqai_qos,
dqai.dqai_relpri);
if (overcommit == _dispatch_queue_attr_overcommit_enabled) {
dq->dq_priority |= DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
}
if (!dqai.dqai_inactive) {
_dispatch_queue_priority_inherit_from_target(dq, tq);
_dispatch_lane_inherit_wlh_from_target(dq, tq);
}
_dispatch_retain(tq);
dq->do_targetq = tq;
_dispatch_object_debug(dq, "%s", __func__);
return _dispatch_trace_queue_create(dq)._dq;
}
这段代码看起来挺长的,但是人家写了注释啊:
-
Step 1: Normalize arguments (qos, overcommit, tq),有道词典一搞发现意思是规范化参数
-
Step 2: Initialize the queue,同理,初始化队列 呃,看到这里好多朋友要说了你这不是扯淡吗,哪个对象的创建不是这两步呢?好的那么我们来看看这个函数的细节。
The very first line is dispatch_queue_attr_info_t dqai = _dispatch_queue_attr_to_info(dqa);, so we should see what _dispatch_queue_attr_to_info() does. Its implementation:
dispatch_queue_attr_info_t
_dispatch_queue_attr_to_info(dispatch_queue_attr_t dqa)
{
dispatch_queue_attr_info_t dqai = { };
if (!dqa) return dqai;
#if DISPATCH_VARIANT_STATIC
if (dqa == &_dispatch_queue_attr_concurrent) {
dqai.dqai_concurrent = true;
return dqai;
}
#endif
if (dqa < _dispatch_queue_attrs ||
dqa >= &_dispatch_queue_attrs[DISPATCH_QUEUE_ATTR_COUNT]) {
DISPATCH_CLIENT_CRASH(dqa->do_vtable, "Invalid queue attribute");
}
size_t idx = (size_t)(dqa - _dispatch_queue_attrs);
dqai.dqai_inactive = (idx % DISPATCH_QUEUE_ATTR_INACTIVE_COUNT);
idx /= DISPATCH_QUEUE_ATTR_INACTIVE_COUNT;
dqai.dqai_concurrent = !(idx % DISPATCH_QUEUE_ATTR_CONCURRENCY_COUNT);
idx /= DISPATCH_QUEUE_ATTR_CONCURRENCY_COUNT;
dqai.dqai_relpri = -(idx % DISPATCH_QUEUE_ATTR_PRIO_COUNT);
idx /= DISPATCH_QUEUE_ATTR_PRIO_COUNT;
dqai.dqai_qos = idx % DISPATCH_QUEUE_ATTR_QOS_COUNT;
idx /= DISPATCH_QUEUE_ATTR_QOS_COUNT;
dqai.dqai_autorelease_frequency =
idx % DISPATCH_QUEUE_ATTR_AUTORELEASE_FREQUENCY_COUNT;
idx /= DISPATCH_QUEUE_ATTR_AUTORELEASE_FREQUENCY_COUNT;
dqai.dqai_overcommit = idx % DISPATCH_QUEUE_ATTR_OVERCOMMIT_COUNT;
idx /= DISPATCH_QUEUE_ATTR_OVERCOMMIT_COUNT;
return dqai;
}
Now the construction of dqai is clear:
1. If the dqa argument is NULL, _dispatch_queue_attr_to_info immediately returns an empty dispatch_queue_attr_info_t.
2. Otherwise it computes an index idx from the attribute pointer and unpacks each bit-field of dqai with successive modulo-and-divide steps.
3. dispatch_queue_attr_info_t is defined as:
typedef struct dispatch_queue_attr_info_s {
	dispatch_qos_t dqai_qos : 8;
	int dqai_relpri : 8;
	uint16_t dqai_overcommit:2;
	uint16_t dqai_autorelease_frequency:2;
	uint16_t dqai_concurrent:1;
	uint16_t dqai_inactive:1;
} dispatch_queue_attr_info_t;
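The unpacking in _dispatch_queue_attr_to_info is mixed-radix decoding: each field is recovered as idx % count, then idx /= count moves to the next field. A standalone sketch of the scheme with a reduced field set and illustrative counts (the real DISPATCH_QUEUE_ATTR_*_COUNT values live in libdispatch):

```c
// Mixed-radix decode: the attribute index packs several small fields;
// successive modulo/divide steps recover them in order.
// Field counts here are illustrative, not libdispatch's real values.
#define INACTIVE_COUNT    2
#define CONCURRENCY_COUNT 2
#define PRIO_COUNT        16

typedef struct { int inactive, concurrent, relpri; } attr_info_t;

static attr_info_t decode(unsigned long idx) {
    attr_info_t a;
    a.inactive   =  (int)(idx % INACTIVE_COUNT);     idx /= INACTIVE_COUNT;
    a.concurrent = !(int)(idx % CONCURRENCY_COUNT);  idx /= CONCURRENCY_COUNT;
    a.relpri     = -(int)(idx % PRIO_COUNT);         idx /= PRIO_COUNT;
    return a;
}

// Inverse for testing: pack the fields back into one index.
static unsigned long encode(int inactive, int concurrent, int prio) {
    return (unsigned long)inactive
         + INACTIVE_COUNT * ((unsigned long)(concurrent ? 0 : 1)
         + CONCURRENCY_COUNT * (unsigned long)prio);
}
```

Note the `!` and unary minus on concurrent and relpri, mirroring how the real function derives dqai_concurrent and dqai_relpri from the raw remainders.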
Reading on in _dispatch_lane_create_with_target, we hit this:
if (dqai.dqai_concurrent) {
	// OS_dispatch_queue_concurrent_class
	vtable = DISPATCH_VTABLE(queue_concurrent);
} else {
	vtable = DISPATCH_VTABLE(queue_serial);
}
Here is a familiar smell: queue_concurrent and queue_serial, exactly the distinction we make when creating a queue. Indeed, dqai.dqai_concurrent is what separates serial from concurrent. So what do these macros actually do?
#define DISPATCH_VTABLE(name) DISPATCH_OBJC_CLASS(name)
#define DISPATCH_OBJC_CLASS(name) (&DISPATCH_CLASS_SYMBOL(name))
#if USE_OBJC
#define DISPATCH_CLASS_SYMBOL(name) OS_dispatch_##name##_class
The macros paste the name argument into a class name:
- serial queues get the class OS_dispatch_queue_serial_class
- concurrent queues get the class OS_dispatch_queue_concurrent_class

Moving on:
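DISPATCH_CLASS_SYMBOL relies on the preprocessor's `##` token-pasting operator to build those symbol names. A minimal demonstration of the same trick (the symbols and helper macros here are ours, defined just for illustration):

```c
// Token pasting: CLASS_SYMBOL(queue_serial) expands to the single
// identifier OS_dispatch_queue_serial_class, as in libdispatch's macro.
#define CLASS_SYMBOL(name) OS_dispatch_##name##_class

// Two-level stringify so the argument is macro-expanded first.
#define STR(x)  STR_(x)
#define STR_(x) #x

// Define the pasted identifiers so they really exist as variables:
static int CLASS_SYMBOL(queue_serial)     = 1;
static int CLASS_SYMBOL(queue_concurrent) = 2;

static const char *serial_name(void) { return STR(CLASS_SYMBOL(queue_serial)); }
```

So one macro invocation both *names* and *references* a distinct class per queue kind, which is how libdispatch wires each queue to its vtable at compile time.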
if (label) {
	const char *tmp = _dispatch_strdup_if_mutable(label);
	if (tmp != label) {
		dqf |= DQF_LABEL_NEEDS_FREE;
		label = tmp;
	}
}
// Allocate memory and create the queue object
dispatch_lane_t dq = _dispatch_object_alloc(vtable,
		sizeof(struct dispatch_lane_s));
// Initialize it
_dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
		DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
		(dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));
// Label
dq->dq_label = label;
// Priority
dq->dq_priority = _dispatch_priority_make((dispatch_qos_t)dqai.dqai_qos,
		dqai.dqai_relpri);
if (overcommit == _dispatch_queue_attr_overcommit_enabled) {
	dq->dq_priority |= DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
}
if (!dqai.dqai_inactive) {
	_dispatch_queue_priority_inherit_from_target(dq, tq);
	_dispatch_lane_inherit_wlh_from_target(dq, tq);
}
_dispatch_retain(tq);
dq->do_targetq = tq;
If label is not NULL, _dispatch_strdup_if_mutable(const char *str) is called: when the source string is mutable it allocates new memory and copies the contents, otherwise it returns the string unchanged.
const char *
_dispatch_strdup_if_mutable(const char *str)
{
#if HAVE_DYLD_IS_MEMORY_IMMUTABLE
size_t size = strlen(str) + 1;
if (unlikely(!_dyld_is_memory_immutable(str, size))) {
char *clone = (char *) malloc(size);
if (dispatch_assume(clone)) {
memcpy(clone, str, size);
}
return clone;
}
return str;
#else
return strdup(str);
#endif
}
Afterwards, if a copy was made, the label pointer is swapped for the copy. Then _dispatch_object_alloc allocates the queue object dq, wrapping it as a dispatch_lane_t, and _dispatch_queue_init initializes it:
The initializer call:
_dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
		DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
		(dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));
The macros behind its width argument:
#define DISPATCH_QUEUE_WIDTH_FULL_BIT 0x0020000000000000ull
#define DISPATCH_QUEUE_WIDTH_FULL 0x1000ull
#define DISPATCH_QUEUE_WIDTH_POOL (DISPATCH_QUEUE_WIDTH_FULL - 1)
#define DISPATCH_QUEUE_WIDTH_MAX (DISPATCH_QUEUE_WIDTH_FULL - 2)
#define DISPATCH_QUEUE_USES_REDIRECTION(width) \
({ uint16_t _width = (width); \
_width > 1 && _width < DISPATCH_QUEUE_WIDTH_POOL; })
dqai.dqai_concurrent decides the width: a concurrent queue gets DISPATCH_QUEUE_WIDTH_MAX, 0xffe (4094 in decimal), while a serial queue gets 1. That, concretely, is why a concurrent queue has a wide mouth and a serial queue a narrow one. The remaining fields (dq_label, dq_priority, and so on) are then filled in, and the newly created queue is targeted at a root queue.
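The width arithmetic is easy to verify directly by copying the macro definitions (the `({ ... })` statement expression is a GCC/Clang extension, as in libdispatch itself):

```c
#include <stdint.h>

// Copied from libdispatch's queue-width macros.
#define DISPATCH_QUEUE_WIDTH_FULL 0x1000ull
#define DISPATCH_QUEUE_WIDTH_POOL (DISPATCH_QUEUE_WIDTH_FULL - 1)
#define DISPATCH_QUEUE_WIDTH_MAX  (DISPATCH_QUEUE_WIDTH_FULL - 2)
#define DISPATCH_QUEUE_USES_REDIRECTION(width) \
	({ uint16_t _width = (width); \
	   _width > 1 && _width < DISPATCH_QUEUE_WIDTH_POOL; })

// Serial queues get width 1; concurrent queues get WIDTH_MAX.
static uint16_t queue_width(int concurrent) {
    return concurrent ? (uint16_t)DISPATCH_QUEUE_WIDTH_MAX : 1;
}
```

So DISPATCH_QUEUE_USES_REDIRECTION is true exactly for widths strictly between 1 and the pool width: ordinary concurrent queues redirect, serial queues (width 1) and root pools do not.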
Note the function's last line: return _dispatch_trace_queue_create(dq)._dq;. Following the code, the call chain from there is:
_dispatch_trace_queue_create → _dispatch_introspection_queue_create → _dispatch_introspection_queue_create_hook → dispatch_introspection_queue_get_info

Here is the implementation of the last one:
#define dx_vtable(x) (&(x)->do_vtable->_os_obj_vtable)
#define dx_metatype(x) (dx_vtable(x)->do_type & _DISPATCH_META_TYPE_MASK)
DISPATCH_USED inline
dispatch_introspection_queue_s
dispatch_introspection_queue_get_info(dispatch_queue_t dq)
{
if (dx_metatype(dq) == _DISPATCH_WORKLOOP_TYPE) {
return _dispatch_introspection_workloop_get_info(upcast(dq)._dwl);
}
return _dispatch_introspection_lane_get_info(upcast(dq)._dl);
}
This function is just a branch: dx_metatype checks whether the queue is a workloop (_DISPATCH_WORKLOOP_TYPE).
- Workloops take _dispatch_introspection_workloop_get_info:
static inline dispatch_introspection_queue_s
_dispatch_introspection_workloop_get_info(dispatch_workloop_t dwl)
{
uint64_t dq_state = os_atomic_load2o(dwl, dq_state, relaxed);
dispatch_introspection_queue_s diq = {
.queue = dwl->_as_dq,
.target_queue = dwl->do_targetq,
.label = dwl->dq_label,
.serialnum = dwl->dq_serialnum,
.width = 1,
.suspend_count = 0,
.enqueued = _dq_state_is_enqueued(dq_state),
.barrier = _dq_state_is_in_barrier(dq_state),
.draining = 0,
.global = 0,
.main = 0,
};
return diq;
}
- Ordinary queues (lanes, whether serial or concurrent) take _dispatch_introspection_lane_get_info:
static inline dispatch_introspection_queue_s
_dispatch_introspection_lane_get_info(dispatch_lane_class_t dqu)
{
dispatch_lane_t dq = dqu._dl;
bool global = _dispatch_object_is_global(dq);
uint64_t dq_state = os_atomic_load2o(dq, dq_state, relaxed);
dispatch_introspection_queue_s diq = {
.queue = dq->_as_dq,
.target_queue = dq->do_targetq,
.label = dq->dq_label,
.serialnum = dq->dq_serialnum,
.width = dq->dq_width,
.suspend_count = _dq_state_suspend_cnt(dq_state) + dq->dq_side_suspend_cnt,
.enqueued = _dq_state_is_enqueued(dq_state) && !global,
.barrier = _dq_state_is_in_barrier(dq_state) && !global,
.draining = (dq->dq_items_head == (void*)~0ul) ||
(!dq->dq_items_head && dq->dq_items_tail),
.global = global,
.main = dx_type(dq) == DISPATCH_QUEUE_MAIN_TYPE,
};
return diq;
}
In the end the queue's state is repackaged into a dispatch_introspection_queue_s structure. With that we have walked the entire queue-creation path.
typedef struct dispatch_introspection_queue_s {
	dispatch_queue_t queue;
	dispatch_queue_t target_queue;
	const char *label;
	unsigned long serialnum;
	unsigned int width;
	unsigned int suspend_count;
	unsigned long enqueued:1,
			barrier:1,
			draining:1,
			global:1,
			main:1;
} dispatch_introspection_queue_s;
Few readers make it this far, and admittedly the process is involved, but the lower layers must be both complete and broadly compatible, so some complexity is unavoidable. To summarize the flow:
- The second parameter of dispatch_queue_create (the queue type) decides the queue's width down below: DISPATCH_QUEUE_WIDTH_MAX for concurrent versus 1 for serial.
- A queue is itself an object, created through the usual alloc + init; its class is assembled by macro token pasting, and its isa is set accordingly.
- In the introspection layer, every queue is described by the dispatch_introspection_queue_s template structure.
With the same approach you can trace how the main queue and global queues are created; give it a try!
2. Inside the dispatch functions
Here we analyze only the asynchronous function dispatch_async; dispatch_sync makes a good exercise for the reader. Its entry point in the source looks like this:
void
dispatch_async(dispatch_queue_t dq, dispatch_block_t work)
{
	dispatch_continuation_t dc = _dispatch_continuation_alloc();
	uintptr_t dc_flags = DC_FLAG_CONSUME;
	dispatch_qos_t qos;

	qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
	_dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}
Two functions matter here.
- The continuation setup, _dispatch_continuation_init:
static inline dispatch_qos_t
_dispatch_continuation_init(dispatch_continuation_t dc,
		dispatch_queue_class_t dqu, dispatch_block_t work,
		dispatch_block_flags_t flags, uintptr_t dc_flags)
{
	// Copy the block
	void *ctxt = _dispatch_Block_copy(work);
	// Set the flags
	dc_flags |= DC_FLAG_BLOCK | DC_FLAG_ALLOCATED;
	if (unlikely(_dispatch_block_has_private_data(work))) {
		dc->dc_flags = dc_flags;
		dc->dc_ctxt = ctxt;
		// will initialize all fields but requires dc_flags & dc_ctxt to be set
		return _dispatch_continuation_init_slow(dc, dqu, flags);
	}
	// Wrap work as the function to invoke later (the async callback)
	dispatch_function_t func = _dispatch_Block_invoke(work);
	if (dc_flags & DC_FLAG_CONSUME) {
		func = _dispatch_call_block_and_release;
	}
	return _dispatch_continuation_init_f(dc, dqu, ctxt, func, flags, dc_flags);
}
// The macro behind the async callback
#define _dispatch_Block_invoke(bb) \
		((dispatch_function_t)((struct Block_layout *)bb)->invoke)
- The asynchronous hand-off, _dispatch_continuation_async:
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_continuation_async(dispatch_queue_class_t dqu,
		dispatch_continuation_t dc, dispatch_qos_t qos, uintptr_t dc_flags)
{
#if DISPATCH_INTROSPECTION
	if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
		_dispatch_trace_item_push(dqu, dc);
	}
#else
	(void)dc_flags;
#endif
	return dx_push(dqu._dq, dc, qos);
}
It ends with dx_push(), which on inspection turns out to be a macro:
#define dx_vtable(x) (&(x)->do_vtable->_os_obj_vtable)
#define dx_type(x) dx_vtable(x)->do_type
#define dx_metatype(x) (dx_vtable(x)->do_type & _DISPATCH_META_TYPE_MASK)
#define dx_cluster(x) (dx_vtable(x)->do_type & _DISPATCH_TYPE_CLUSTER_MASK)
#define dx_hastypeflag(x, f) (dx_vtable(x)->do_type & _DISPATCH_##f##_TYPEFLAG)
#define dx_debug(x, y, z) dx_vtable(x)->do_debug((x), (y), (z))
#define dx_dispose(x, y) dx_vtable(x)->do_dispose(x, y)
#define dx_invoke(x, y, z) dx_vtable(x)->do_invoke(x, y, z)
#define dx_push(x, y, z) dx_vtable(x)->dq_push(x, y, z) // just a macro
#define dx_wakeup(x, y, z) dx_vtable(x)->dq_wakeup(x, y, z)
dq_push resolves to a different function depending on the queue type; below are the serial, concurrent, and global vtables:
DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_serial, lane,
.do_type = DISPATCH_QUEUE_SERIAL_TYPE,
.do_dispose = _dispatch_lane_dispose,
.do_debug = _dispatch_queue_debug,
.do_invoke = _dispatch_lane_invoke,
.dq_activate = _dispatch_lane_activate,
.dq_wakeup = _dispatch_lane_wakeup,
.dq_push = _dispatch_lane_push,
);
DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_concurrent, lane,
.do_type = DISPATCH_QUEUE_CONCURRENT_TYPE,
.do_dispose = _dispatch_lane_dispose,
.do_debug = _dispatch_queue_debug,
.do_invoke = _dispatch_lane_invoke,
.dq_activate = _dispatch_lane_activate,
.dq_wakeup = _dispatch_lane_wakeup,
.dq_push = _dispatch_lane_concurrent_push,
);
DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_global, lane,
.do_type = DISPATCH_QUEUE_GLOBAL_ROOT_TYPE,
.do_dispose = _dispatch_object_no_dispose,
.do_debug = _dispatch_queue_debug,
.do_invoke = _dispatch_object_no_invoke,
.dq_activate = _dispatch_queue_no_activate,
.dq_wakeup = _dispatch_root_queue_wakeup,
.dq_push = _dispatch_root_queue_push,
);
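dx_push is classic C "vtable" dispatch: each queue class carries a table of function pointers, and the macro simply indirects through it, so the same call site reaches different code per queue type. A stripped-down sketch (the function names here just echo libdispatch's; the bodies are stand-ins):

```c
// Each queue class supplies its own dq_push; dx_push() indirects
// through the vtable, mimicking libdispatch's dispatch mechanism.
typedef struct vtable { const char *(*dq_push)(void); } vtable_t;
typedef struct queue  { const vtable_t *do_vtable; } queue_t;

#define dx_vtable(x) ((x)->do_vtable)
#define dx_push(x)   dx_vtable(x)->dq_push()

static const char *lane_push(void)            { return "_dispatch_lane_push"; }
static const char *lane_concurrent_push(void) { return "_dispatch_lane_concurrent_push"; }
static const char *root_queue_push(void)      { return "_dispatch_root_queue_push"; }

static const vtable_t queue_serial_vt     = { lane_push };
static const vtable_t queue_concurrent_vt = { lane_concurrent_push };
static const vtable_t queue_global_vt     = { root_queue_push };
```

This is why a symbolic breakpoint on a specific push function tells you which queue type a dispatch actually went through.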
To verify, dispatch an async block onto a concurrent queue we created and set a symbolic breakpoint on _dispatch_lane_concurrent_push. The breakpoint hits, so that really is the path taken; stepping into the function we find:
DISPATCH_NOINLINE
void
_dispatch_lane_concurrent_push(dispatch_lane_t dq, dispatch_object_t dou,
dispatch_qos_t qos)
{
// <rdar://problem/24738102&24743140> reserving non barrier width
// doesn't fail if only the ENQUEUED bit is set (unlike its barrier
// width equivalent), so we have to check that this thread hasn't
// enqueued anything ahead of this call or we can break ordering
if (dq->dq_items_tail == NULL &&
!_dispatch_object_is_waiter(dou) &&
!_dispatch_object_is_barrier(dou) &&
_dispatch_queue_try_acquire_async(dq)) {
return _dispatch_continuation_redirect_push(dq, dou, qos);
}
_dispatch_lane_push(dq, dou, qos);
}
It forwards to _dispatch_continuation_redirect_push, whose implementation is:
DISPATCH_NOINLINE
static void
_dispatch_continuation_redirect_push(dispatch_lane_t dl,
dispatch_object_t dou, dispatch_qos_t qos)
{
if (likely(!_dispatch_object_is_redirection(dou))) {
dou._dc = _dispatch_async_redirect_wrap(dl, dou);
} else if (!dou._dc->dc_ctxt) {
// find first queue in descending target queue order that has
// an autorelease frequency set, and use that as the frequency for
// this continuation.
dou._dc->dc_ctxt = (void *)
(uintptr_t)_dispatch_queue_autorelease_frequency(dl);
}
dispatch_queue_t dq = dl->do_targetq;
if (!qos) qos = _dispatch_priority_qos(dq->dq_priority);
dx_push(dq, dou, qos);
}
And we are back at dx_push: recursion! Recall from queue creation that every queue is an object with a target queue, ultimately a root queue, so the push keeps getting forwarded until it reaches a root queue's implementation. A symbolic breakpoint on the root class's _dispatch_root_queue_push confirms the guess: it hits.
Following _dispatch_root_queue_push, the call chain is:
_dispatch_root_queue_push → _dispatch_root_queue_push_inline → _dispatch_root_queue_poke → _dispatch_root_queue_poke_slow

The last function's implementation (abridged):
DISPATCH_NOINLINE
static void
_dispatch_root_queue_poke_slow(dispatch_queue_global_t dq, int n, int floor)
{
	int remaining = n;
	int r = ENOSYS;
	_dispatch_root_queues_init(); // the key step
	...
	// do-while loop that creates the worker threads
	do {
		_dispatch_retain(dq); // released in _dispatch_worker_thread
		while ((r = pthread_create(pthr, attr, _dispatch_worker_thread, dq))) {
			if (r != EAGAIN) {
				(void)dispatch_assume_zero(r);
			}
			_dispatch_temporary_resource_shortage();
		}
	} while (--remaining);
	...
}
Two key steps:
- _dispatch_root_queues_init registers the worker callback.
- A do-while loop creates the worker threads with pthread_create.

First, _dispatch_root_queues_init:
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_root_queues_init(void)
{
dispatch_once_f(&_dispatch_root_queues_pred, NULL, _dispatch_root_queues_init_once);
}
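dispatch_once_f guarantees the initializer runs exactly once no matter how many threads race to it. pthread_once provides the same semantics in portable C, which makes the guarantee easy to demonstrate:

```c
#include <pthread.h>

// All racing threads funnel through the same once-token, so the
// initializer runs exactly once, as with dispatch_once_f.
static pthread_once_t once = PTHREAD_ONCE_INIT;
static int init_calls = 0;

static void init_once(void) { init_calls++; }

static void *racer(void *arg) {
    (void)arg;
    pthread_once(&once, init_once);
    return NULL;
}

// Spawn n threads (capped at 16) that all try to initialize;
// return how many times the initializer actually ran.
static int race(int n) {
    pthread_t tid[16];
    if (n > 16) n = 16;
    for (int i = 0; i < n; i++) pthread_create(&tid[i], NULL, racer, NULL);
    for (int i = 0; i < n; i++) pthread_join(tid[i], NULL);
    return init_calls;
}
```

Note that init_calls needs no atomics: pthread_once serializes the initializer, exactly the property libdispatch relies on when setting up its root queues.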
Inside _dispatch_root_queues_init_once, the handler registered for each configuration is _dispatch_worker_thread2:
static void
_dispatch_root_queues_init_once(void *context DISPATCH_UNUSED)
{
_dispatch_fork_becomes_unsafe();
#if DISPATCH_USE_INTERNAL_WORKQUEUE
size_t i;
for (i = 0; i < DISPATCH_ROOT_QUEUE_COUNT; i++) {
_dispatch_root_queue_init_pthread_pool(&_dispatch_root_queues[i], 0,
_dispatch_root_queues[i].dq_priority);
}
#else
int wq_supported = _pthread_workqueue_supported();
int r = ENOTSUP;
if (!(wq_supported & WORKQ_FEATURE_MAINTENANCE)) {
DISPATCH_INTERNAL_CRASH(wq_supported,
"QoS Maintenance support required");
}
if (unlikely(!_dispatch_kevent_workqueue_enabled)) {
r = _pthread_workqueue_init(_dispatch_worker_thread2,
offsetof(struct dispatch_queue_s, dq_serialnum), 0);
#if DISPATCH_USE_KEVENT_WORKQUEUE
} else if (wq_supported & WORKQ_FEATURE_KEVENT) {
r = _pthread_workqueue_init_with_kevent(_dispatch_worker_thread2,
(pthread_workqueue_function_kevent_t)
_dispatch_kevent_worker_thread,
offsetof(struct dispatch_queue_s, dq_serialnum), 0);
#endif
} else {
DISPATCH_INTERNAL_CRASH(wq_supported, "Missing Kevent WORKQ support");
}
if (r != 0) {
DISPATCH_INTERNAL_CRASH((r << 16) | wq_supported,
"Root queue initialization failed");
}
#endif // DISPATCH_USE_INTERNAL_WORKQUEUE
}
The block's callback therefore travels this path:
_dispatch_root_queues_init_once → _dispatch_worker_thread2 → _dispatch_root_queue_drain → _dispatch_continuation_pop_inline → _dispatch_continuation_invoke_inline → _dispatch_client_callout → _dispatch_call_block_and_release

A stack trace confirms it:
* thread #3, queue = 'ios', stop reason = breakpoint 1.1
* frame #0: 0x00000001003b189c newTest`__25-[ViewController gcdTest]_block_invoke(.block_descriptor=0x00000001003b4100) at ViewController.m:103:9
frame #1: 0x00000001005e3db8 libdispatch.dylib`_dispatch_call_block_and_release + 24
frame #2: 0x00000001005e55fc libdispatch.dylib`_dispatch_client_callout + 16
frame #3: 0x00000001005e82a0 libdispatch.dylib`_dispatch_continuation_pop + 524
frame #4: 0x00000001005e7708 libdispatch.dylib`_dispatch_async_redirect_invoke + 688
frame #5: 0x00000001005f7120 libdispatch.dylib`_dispatch_root_queue_drain + 376
frame #6: 0x00000001005f7a48 libdispatch.dylib`_dispatch_worker_thread2 + 152
frame #7: 0x00000001d6aac568 libsystem_pthread.dylib`_pthread_wqthread + 212
which matches the call path above.
One caveat: the block callback of dispatch_once differs from that of an async function. For dispatch_once, the callback func is _dispatch_Block_invoke(block); for an async function, func is _dispatch_call_block_and_release.
To summarize:
- Preparation: the async block is copied and wrapped into a continuation, with its callback function func recorded.
- Callback: dx_push recurses down through the target-queue chain to a root queue, a worker thread is created via pthread_create, and the block is finally invoked through dx_invoke (dx_push and dx_invoke come in pairs).