iOS Multithreading - GCD (Part 3)

In the previous two articles we explored GCD's functions, queue scheduling, deadlocks, and singletons. In this one we move on to GCD's other functions.

Barrier Functions

The most direct purpose of a barrier function is to control the order in which tasks execute, i.e. to synchronize them:

  • dispatch_barrier_async: runs only after all tasks submitted to the queue before it have finished.
  • dispatch_barrier_sync: same barrier behavior, but it also blocks the current thread, so the code after it cannot proceed.
  • A barrier function only works within a single custom concurrent queue.

Using Barrier Functions

A barrier function is typically used like this:

 dispatch_queue_t concurrentQueue = dispatch_queue_create("jason", DISPATCH_QUEUE_CONCURRENT);
 dispatch_async(concurrentQueue, ^{
     NSLog(@"123");
 }); 
 dispatch_async(concurrentQueue, ^{
     sleep(1);
     NSLog(@"456");
 });
 dispatch_barrier_async(concurrentQueue, ^{
     NSLog(@"----%@-----",[NSThread currentThread]);
 });
 dispatch_async(concurrentQueue, ^{
     NSLog(@"789");
 });
 NSLog(@"10 11 12");

The output:

123
10 11 12
456
----<NSThread: 0x6000012ca180>{number = 2, name = (null)}-----
789

As you can see, dispatch_barrier_async deferred its own block and the async task enqueued after it: everything enqueued before the barrier must finish before the barrier and subsequent tasks run. Note that it did not block the current thread, so NSLog(@"10 11 12") printed right away.

Now replace it with the synchronous barrier and run again:

 dispatch_barrier_sync(concurrentQueue, ^{
     NSLog(@"----%@-----",[NSThread currentThread]);
 });

Output:

123
456
----<NSThread: 0x6000025a4140>{number = 1, name = main}-----
10 11 12
789

As you can see, dispatch_barrier_sync blocked the NSLog(@"10 11 12"); call: the barrier block ran on the main thread, and everything after it had to wait.

Note: one more point to be aware of is that a barrier function cannot block the global concurrent queue.

Next let's look at how barrier functions are implemented at the source level, and why the global concurrent queue cannot be blocked by one.

How Barrier Functions Work Under the Hood

We'll again use the synchronous variant, dispatch_barrier_sync, as the example.

Debugging setup: the task submitted before the barrier calls sleep(30), which gives us time to step into the barrier call while that earlier task is still running.

Searching the libdispatch source for dispatch_barrier_sync:

void
dispatch_barrier_sync(dispatch_queue_t dq, dispatch_block_t work)
{
  uintptr_t dc_flags = DC_FLAG_BARRIER | DC_FLAG_BLOCK;
  if (unlikely(_dispatch_block_has_private_data(work))) {
    return _dispatch_sync_block_with_privdata(dq, work, dc_flags);
  }
  _dispatch_barrier_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}

It calls _dispatch_barrier_sync_f; stepping in:

static void
_dispatch_barrier_sync_f(dispatch_queue_t dq, void *ctxt,
    dispatch_function_t func, uintptr_t dc_flags)
{
  _dispatch_barrier_sync_f_inline(dq, ctxt, func, dc_flags);
}

which in turn calls _dispatch_barrier_sync_f_inline:

static inline void
_dispatch_barrier_sync_f_inline(dispatch_queue_t dq, void *ctxt,
    dispatch_function_t func, uintptr_t dc_flags)
{
  dispatch_tid tid = _dispatch_tid_self();
​
  if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
    DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
  }
​
  dispatch_lane_t dl = upcast(dq)._dl;
  // The more correct thing to do would be to merge the qos of the thread
  // that just acquired the barrier lock into the queue state.
  //
  // However this is too expensive for the fast path, so skip doing it.
  // The chosen tradeoff is that if an enqueue on a lower priority thread
  // contends with this fast path, this thread may receive a useless override.
  //
  // Global concurrent queues and queues bound to non-dispatch threads
  // always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
  if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(dl, tid))) {
    //breakpoint debugging shows this branch is taken
    return _dispatch_sync_f_slow(dl, ctxt, func, DC_FLAG_BARRIER, dl,
        DC_FLAG_BARRIER | dc_flags);
  }
​
  if (unlikely(dl->do_targetq->do_targetq)) {
    return _dispatch_sync_recurse(dl, ctxt, func,
        DC_FLAG_BARRIER | dc_flags);
  }
  _dispatch_introspection_sync_begin(dl);
  _dispatch_lane_barrier_sync_invoke_and_complete(dl, ctxt, func
      DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
          dq, ctxt, func, dc_flags | DC_FLAG_BARRIER)));
}

Here we add two symbolic breakpoints to see whether _dispatch_sync_f_slow or _dispatch_sync_recurse runs; with the breakpoints in place, _dispatch_sync_f_slow is the one actually hit.

static void
_dispatch_sync_f_slow(dispatch_queue_class_t top_dqu, void *ctxt,
    dispatch_function_t func, uintptr_t top_dc_flags,
    dispatch_queue_class_t dqu, uintptr_t dc_flags)
{
  dispatch_queue_t top_dq = top_dqu._dq;
  dispatch_queue_t dq = dqu._dq;
  if (unlikely(!dq->do_targetq)) {
    return _dispatch_sync_function_invoke(dq, ctxt, func);
  }
​
  pthread_priority_t pp = _dispatch_get_priority();
  struct dispatch_sync_context_s dsc = {
    .dc_flags    = DC_FLAG_SYNC_WAITER | dc_flags,
    .dc_func     = _dispatch_async_and_wait_invoke,
    .dc_ctxt     = &dsc,
    .dc_other    = top_dq,
    .dc_priority = pp | _PTHREAD_PRIORITY_ENFORCE_FLAG,
    .dc_voucher  = _voucher_get(),
    .dsc_func    = func,
    .dsc_ctxt    = ctxt,
    .dsc_waiter  = _dispatch_tid_self(),
  };
​
  _dispatch_trace_item_push(top_dq, &dsc);
  __DISPATCH_WAIT_FOR_QUEUE__(&dsc, dq);
​
  if (dsc.dsc_func == NULL) {
    // dsc_func being cleared means that the block ran on another thread ie.
    // case (2) as listed in _dispatch_async_and_wait_f_slow.
    dispatch_queue_t stop_dq = dsc.dc_other;
    return _dispatch_sync_complete_recurse(top_dq, stop_dq, top_dc_flags);
  }
  _dispatch_introspection_sync_begin(top_dq);
  _dispatch_trace_item_pop(top_dq, &dsc);
  _dispatch_sync_invoke_and_complete_recurse(top_dq, ctxt, func, top_dc_flags
      DISPATCH_TRACE_ARG(&dsc));
}

We're already familiar with _dispatch_sync_f_slow from the previous article, so we keep adding symbolic breakpoints.

One detail here: __DISPATCH_WAIT_FOR_QUEUE__ is called first and waits for the earlier sleep to finish, so the waiting itself is handled by __DISPATCH_WAIT_FOR_QUEUE__. The push side then goes dq_push -> _dispatch_lane_concurrent_push -> _dispatch_lane_push -> _dispatch_lane_push_waiter, which is what blocks the current queue.

  • A custom concurrent queue goes through _dispatch_lane_wakeup, which checks whether a barrier is pending before draining.
  • The global concurrent queue goes through _dispatch_root_queue_wakeup instead, which has no such wait logic, so it cannot be blocked by a barrier.

After the tasks ahead of the barrier complete, the chain _dispatch_lane_class_barrier_complete -> dx_wakeup -> _dispatch_lane_wakeup -> _dispatch_queue_wakeup runs.

That path calls _dispatch_client_callout, and the synchronous barrier block then executes.

Semaphores

Using Semaphores

First, the usage code:

 dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
 dispatch_semaphore_t sem = dispatch_semaphore_create(0);
  //task 1
  dispatch_async(queue, ^{
      dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER); // sem starts at 0, so wait here
      NSLog(@"执行任务1");
      NSLog(@"任务1完成");
  });
  //task 2
  dispatch_async(queue, ^{
      sleep(2);
      NSLog(@"执行任务2");
      NSLog(@"任务2完成");
      dispatch_semaphore_signal(sem); // signal: sem + 1
  });

Semaphores come with three main functions:

  • dispatch_semaphore_create: creates the semaphore.
  • dispatch_semaphore_wait: waits on (decrements) the semaphore.
  • dispatch_semaphore_signal: signals (increments) the semaphore.

A semaphore can be used to cap GCD's maximum concurrency.

How Semaphores Work Under the Hood

The underlying behavior comes down to those three functions; let's explore them one by one.

dispatch_semaphore_wait

Search the source for dispatch_semaphore_wait:

intptr_t
dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout)
{
  // atomic decrement (value--)
  long value = os_atomic_dec2o(dsema, dsema_value, acquire);
  if (likely(value >= 0)) {
    return 0;
  }
  return _dispatch_semaphore_wait_slow(dsema, timeout);
}

In the example above the semaphore was created with a value of 0; the decrement takes it below 0, so _dispatch_semaphore_wait_slow is called:

DISPATCH_NOINLINE
static intptr_t
_dispatch_semaphore_wait_slow(dispatch_semaphore_t dsema,
    dispatch_time_t timeout)
{
  long orig;
​
  _dispatch_sema4_create(&dsema->dsema_sema, _DSEMA4_POLICY_FIFO);
  switch (timeout) {
  default:
    if (!_dispatch_sema4_timedwait(&dsema->dsema_sema, timeout)) {
      break;
    }
    // Fall through and try to undo what the fast path did to
    // dsema->dsema_value
  case DISPATCH_TIME_NOW:
    //timeout handling
    orig = dsema->dsema_value;
    while (orig < 0) {
      if (os_atomic_cmpxchgv2o(dsema, dsema_value, orig, orig + 1,
          &orig, relaxed)) {
        return _DSEMA4_TIMEOUT();
      }
    }
    // Another thread called semaphore_signal().
    // Fall through and drain the wakeup.
  case DISPATCH_TIME_FOREVER:
     //wait forever
    _dispatch_sema4_wait(&dsema->dsema_sema);
    break;
  }
  return 0;
}

So the function that actually does the waiting is _dispatch_sema4_wait:

void
_dispatch_sema4_wait(_dispatch_sema4_t *sema)
{
  int ret = 0;
  do {
    ret = sem_wait(sema);
  } while (ret == -1 && errno == EINTR);
  DISPATCH_SEMAPHORE_VERIFY_RET(ret);
}

sem_wait is a plain C library call; the do-while loop retries the wait whenever it is interrupted by a signal (errno == EINTR), so the thread keeps waiting until the semaphore's value allows it to proceed.

dispatch_semaphore_signal

Search the source for dispatch_semaphore_signal:

intptr_t
dispatch_semaphore_signal(dispatch_semaphore_t dsema)
{
  // atomic increment (value++)
  long value = os_atomic_inc2o(dsema, dsema_value, release);
  if (likely(value > 0)) {
    return 0;
  }
  if (unlikely(value == LONG_MIN)) {
    DISPATCH_CLIENT_CRASH(value,
        "Unbalanced call to dispatch_semaphore_signal()");
  }
  return _dispatch_semaphore_signal_slow(dsema);
}

Here the value is incremented once. If the result is > 0, execution simply continues; if it is still <= 0, some thread is waiting, so _dispatch_semaphore_signal_slow is called to signal the underlying kernel semaphore and wake a waiter.

Summary

  • dispatch_semaphore_create: initializes the semaphore's value.
  • dispatch_semaphore_wait: decrements the semaphore's value (--).
  • dispatch_semaphore_signal: increments the semaphore's value (++).

Dispatch Groups

Using Dispatch Groups

Sometimes we need to wait until several requests have all returned data before taking the next step:

dispatch_group_t group = dispatch_group_create();
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
dispatch_group_async(group, queue, ^{
      //download task 1
      sleep(2);
});
dispatch_group_async(group, queue, ^{
      //download task 2
      sleep(2);
});
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
      //runs after every task in the group has finished
});

Or equivalently:

dispatch_group_t group = dispatch_group_create();
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
dispatch_group_enter(group);
dispatch_async(queue, ^{
      //download task 1
      sleep(2);
      dispatch_group_leave(group);
});
dispatch_group_enter(group);
dispatch_async(queue, ^{
      //download task 2
      sleep(2);
      dispatch_group_leave(group);
});
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
      //runs after every task in the group has finished
});

How Dispatch Groups Work Under the Hood

We'll analyze three things: how the group synchronizes, how dispatch_group_enter and dispatch_group_leave pair up, and how dispatch_group_async works.

The dispatch_group_create function
dispatch_group_t
dispatch_group_create(void)
{
  return _dispatch_group_create_with_count(0);
}

It mainly allocates the group's data structure:

DISPATCH_ALWAYS_INLINE
static inline dispatch_group_t
_dispatch_group_create_with_count(uint32_t n)
{
  dispatch_group_t dg = _dispatch_object_alloc(DISPATCH_VTABLE(group),
      sizeof(struct dispatch_group_s));
  dg->do_next = DISPATCH_OBJECT_LISTLESS;
  dg->do_targetq = _dispatch_get_default_queue(false);
  if (n) {
    os_atomic_store2o(dg, dg_bits,
        (uint32_t)-n * DISPATCH_GROUP_VALUE_INTERVAL, relaxed);
    os_atomic_store2o(dg, do_ref_cnt, 1, relaxed); // <rdar://22318411>
  }
  return dg;
}
The dispatch_group_enter function
void
dispatch_group_enter(dispatch_group_t dg)
{
  // The value is decremented on a 32bits wide atomic so that the carry
  // for the 0 -> -1 transition is not propagated to the upper 32bits.
  /// decrement
  uint32_t old_bits = os_atomic_sub_orig2o(dg, dg_bits,
      DISPATCH_GROUP_VALUE_INTERVAL, acquire);
  uint32_t old_value = old_bits & DISPATCH_GROUP_VALUE_MASK;
  if (unlikely(old_value == 0)) {
    _dispatch_retain(dg); // <rdar://problem/22318411>
  }
  if (unlikely(old_value == DISPATCH_GROUP_VALUE_MAX)) {
    DISPATCH_CLIENT_CRASH(old_bits,
        "Too many nested calls to dispatch_group_enter()");
  }
}

The count starts at 0; the decrement takes it to -1.

The dispatch_group_leave function
void
dispatch_group_leave(dispatch_group_t dg)
{
  // The value is incremented on a 64bits wide atomic so that the carry for
  // the -1 -> 0 transition increments the generation atomically.
  uint64_t new_state, old_state = os_atomic_add_orig2o(dg, dg_state,
      DISPATCH_GROUP_VALUE_INTERVAL, release);
  uint32_t old_value = (uint32_t)(old_state & DISPATCH_GROUP_VALUE_MASK);
​
  if (unlikely(old_value == DISPATCH_GROUP_VALUE_1)) {//old_state == -1
    old_state += DISPATCH_GROUP_VALUE_INTERVAL;
    do {
      new_state = old_state;
      if ((old_state & DISPATCH_GROUP_VALUE_MASK) == 0) {
        new_state &= ~DISPATCH_GROUP_HAS_WAITERS;
        new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
      } else {
        // If the group was entered again since the atomic_add above,
        // we can't clear the waiters bit anymore as we don't know for
        // which generation the waiters are for
        new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
      }
      if (old_state == new_state) break;
    } while (unlikely(!os_atomic_cmpxchgv2o(dg, dg_state,
        old_state, new_state, &old_state, relaxed)));
    return _dispatch_group_wake(dg, old_state, true);/// wake: fire notify
  }
​
  if (unlikely(old_value == 0)) {
    DISPATCH_CLIENT_CRASH((uintptr_t)old_value,
        "Unbalanced call to dispatch_group_leave()");
  }
}

The count goes from -1 back to 0. When old_state != new_state the while loop keeps retrying; once the compare-and-swap succeeds, _dispatch_group_wake runs to wake whatever is blocked on the group.

The dispatch_group_notify function
static inline void
_dispatch_group_notify(dispatch_group_t dg, dispatch_queue_t dq,
    dispatch_continuation_t dsn)
{
  uint64_t old_state, new_state;
  dispatch_continuation_t prev;
​
  dsn->dc_data = dq;
  _dispatch_retain(dq);
  prev = os_mpsc_push_update_tail(os_mpsc(dg, dg_notify), dsn, do_next);
  if (os_mpsc_push_was_empty(prev)) _dispatch_retain(dg);
  os_mpsc_push_update_prev(os_mpsc(dg, dg_notify), prev, dsn, do_next);
  if (os_mpsc_push_was_empty(prev)) {
    os_atomic_rmw_loop2o(dg, dg_state, old_state, new_state, release, {
      new_state = old_state | DISPATCH_GROUP_HAS_NOTIFS;
      if ((uint32_t)old_state == 0) {
        os_atomic_rmw_loop_give_up({ /// call out to the block
          return _dispatch_group_wake(dg, new_state, false);
        });
      }
    });
  }
}

As you can see, the block only runs when old_state == 0, i.e. once the matching dispatch_group_leave calls have completed.

The dispatch_group_async function
void
dispatch_group_async(dispatch_group_t dg, dispatch_queue_t dq,
    dispatch_block_t db)
{
  //wrap the task as a continuation
  dispatch_continuation_t dc = _dispatch_continuation_alloc();
  uintptr_t dc_flags = DC_FLAG_CONSUME | DC_FLAG_GROUP_ASYNC;
  dispatch_qos_t qos;
  qos = _dispatch_continuation_init(dc, dq, db, 0, dc_flags);
  _dispatch_continuation_group_async(dg, dq, dc, qos);
}
static inline void
_dispatch_continuation_group_async(dispatch_group_t dg, dispatch_queue_t dq,
    dispatch_continuation_t dc, dispatch_qos_t qos)
{
  dispatch_group_enter(dg);//enter is called here
  dc->dc_data = dg;
  _dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}
static inline void
_dispatch_continuation_async(dispatch_queue_class_t dqu,
    dispatch_continuation_t dc, dispatch_qos_t qos, uintptr_t dc_flags)
{
#if DISPATCH_INTROSPECTION
  if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
    _dispatch_trace_item_push(dqu, dc);
  }
#else
  (void)dc_flags;
#endif
  return dx_push(dqu._dq, dc, qos);//pushes the task onto the queue
}

So _dispatch_continuation_group_async calls dispatch_group_enter, and then, after dx_push, once the callout has run, dispatch_group_leave is executed:

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_continuation_with_group_invoke(dispatch_continuation_t dc)
{
  struct dispatch_object_s *dou = dc->dc_data;
  unsigned long type = dx_type(dou);
  if (type == DISPATCH_GROUP_TYPE) {
    _dispatch_client_callout(dc->dc_ctxt, dc->dc_func);
    _dispatch_trace_item_complete(dc);
    ///leave is called here
    dispatch_group_leave((dispatch_group_t)dou);
  } else {
    DISPATCH_INTERNAL_CRASH(dx_type(dou), "Unexpected object type");
  }
}

Summary

  • dispatch_group_enter and dispatch_group_leave must appear in pairs.
  • dispatch_group_enter decrements the group's value (0 -> -1).
  • dispatch_group_leave increments the group's value (-1 -> 0).
  • dispatch_group_notify checks whether the group's state is 0; if it is, the notify block is executed.
  • The notify block can be triggered in two ways: (1) the final dispatch_group_leave, (2) dispatch_group_notify itself when the count is already 0.
  • dispatch_group_async is equivalent to enter + leave: its implementation wraps the task in a matching enter/leave pair.

dispatch_source

The scenario where we most often use dispatch_source in practice is a (countdown) timer:

- (void)timeDone{
    //countdown duration
    __block int timeout = 3;
    
    //create the queue
    dispatch_queue_t globalQueue = dispatch_get_global_queue(0, 0);
    
    //create the timer
    dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, globalQueue);
    
    //fire once per second with 0s leeway
    /*
     - source: the dispatch source
     - start: when the timer first fires. Its type is dispatch_time_t, an opaque type we cannot manipulate directly; create one with dispatch_time or dispatch_walltime. The constants DISPATCH_TIME_NOW and DISPATCH_TIME_FOREVER are often useful here.
     - interval: the firing interval
     - leeway: how much the system may defer the timer (its precision)
     */
    dispatch_source_set_timer(timer, dispatch_walltime(NULL, 0), 1.0*NSEC_PER_SEC, 0);
    
     //handler invoked on each tick
    dispatch_source_set_event_handler(timer, ^{
        //countdown finished, shut it down
        if (timeout <= 0) {
            //cancel the dispatch source
            dispatch_source_cancel(timer);
        }else{
            timeout--;
            
            dispatch_async(dispatch_get_main_queue(), ^{
                //update the UI on the main queue
                NSLog(@"countdown - %d", timeout);
            });
        }
    });
    //start the dispatch source
    dispatch_resume(timer);
}