14. Multithreading Principles: GCD


Definitions of Process and Thread

- A thread is the basic execution unit of a process; all of a process's tasks are performed on threads.

  • For a process to perform any task, it must have at least one thread.
  • When a program launches, one thread is started by default, known as the main thread or UI thread.

- A process is an application that is currently running on the system.

  • Processes are independent of one another; each process runs in its own dedicated, protected memory space.
  • Activity Monitor shows the processes currently running on a Mac.

Relationship Between Processes and Threads

Address space: threads of the same process share that process's address space, whereas each process has its own independent address space.

Resource ownership: threads within a process share the process's resources such as memory, I/O, and CPU, but resources between processes are independent.

  • 1: When a process crashes, in protected mode it does not affect other processes, but a crashing thread brings down its whole process. Multi-process designs are therefore more robust than multi-threaded ones.
  • 2: Switching between processes consumes more resources and is less efficient, so when frequent switching is involved, threads are preferable to processes. Likewise, concurrent operations that must run at the same time while sharing variables can only be done with threads, not processes.
  • 3: Execution: every independent process has a program entry point, a sequential execution path, and a program exit. A thread cannot execute on its own; it must live inside an application, and the application provides the execution control for its threads.
  • 4: The thread is the basic unit of processor scheduling; the process is not.
  • 5: A thread has no address space of its own; it is contained within its process's address space.

Why Multithreading Matters

  • Advantages
    • Can moderately improve a program's execution efficiency
    • Can moderately improve resource utilization (CPU, memory)
    • A thread is destroyed automatically once its tasks have finished
  • Disadvantages
    • Starting a thread costs memory (by default, each thread occupies 512 KB)
    • Starting a large number of threads consumes a lot of memory and degrades performance
    • The more threads there are, the more CPU overhead goes into scheduling them
    • Program design becomes more complex, e.g. communication between threads and sharing data across threads

How Multithreading Works

The concept of a time slice: the CPU switches rapidly between multiple tasks, and the interval it spends on each is a time slice.

  • (On a single-core CPU) at any given moment the CPU can only handle one thread, so only one thread is actually executing.
  • "Simultaneous" execution of multiple threads
    • is really the CPU switching quickly between them;
    • when the switching is fast enough, it creates the effect of threads running at the same time.
  • If the number of threads is very large
    • the CPU spends a lot of its resources switching between the N threads;
    • each thread gets scheduled less often, so thread execution efficiency drops.

Multithreading Technology Options


Thread Lifecycle


  • Any task needs a thread in order to run.
  • 1. Check whether an idle thread is available, then check whether the thread count has exceeded the threshold.
  • 2. Create the thread: New.
  • 3. Ready: Runnable.
  • 4. When the CPU schedules this thread, it is Running.
  • 5. When the CPU schedules another thread, this one enters a waiting state: Blocked.
  • 6. Later it re-enters the ready state.
  • 7. When its work is complete, it dies.

Saturation (Rejection) Policies

• AbortPolicy: throws a RejectedExecutionException outright, preventing the system from carrying on normally.

• CallerRunsPolicy: hands the task back to the caller.

• DiscardOldestPolicy: discards the task that has been waiting the longest.

• DiscardPolicy: simply discards the task.

All four rejection policies implement the RejectedExecutionHandler interface (these come from Java's ThreadPoolExecutor and are listed here as a reference model for thread-pool saturation).

  • When multiple threads operate on the same resource, the data can become corrupted, so a lock is needed.
  • Spin lock vs. mutex (the property attributes are sketched below)
    • Mutex: if another thread already holds the lock, the current thread sleeps (goes back to the ready state) and is woken up to run once the lock is released.

    • Spin lock: if another thread already holds the lock, the current thread keeps polling (busy-waits), which is relatively expensive. atomic is the atomic attribute, designed for multithreaded code, and it is the default; it merely adds a lock (a spin lock) around the property's setter, guaranteeing that only one thread writes the property at a time: a "single writer, multiple readers" threading technique.
      nonatomic is the non-atomic attribute: no lock, and therefore better performance.
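A minimal sketch of the two attributes described above (the class and property names are assumptions for illustration):

#import <Foundation/Foundation.h>

@interface KCPerson : NSObject
// atomic (the default): the synthesized setter/getter are wrapped in a lock, so a
// single read or write is safe across threads; compound operations such as
// person.count++ are still unprotected.
@property (atomic, assign) NSInteger count;
// nonatomic: no lock in the accessors, so it is faster; the usual choice on iOS.
@property (nonatomic, copy) NSString *name;
@end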


GCD


- (void)queueTest{
    // How many kinds of queues are there? 3-4: serial, concurrent, main, global concurrent
    // libdispatch.dylib holds the GCD source
    // How queues are created: DISPATCH_QUEUE_SERIAL / DISPATCH_QUEUE_CONCURRENT

    // OS_dispatch_queue_serial
    dispatch_queue_t serial = dispatch_queue_create("kc", DISPATCH_QUEUE_SERIAL); // serial queue
    // OS_dispatch_queue_concurrent
    dispatch_queue_t conque = dispatch_queue_create("cooci", DISPATCH_QUEUE_CONCURRENT); // concurrent queue
    // DISPATCH_QUEUE_SERIAL max && 1
    // queue objects go through alloc/init with a class, like ordinary objects
    dispatch_queue_t mainQueue = dispatch_get_main_queue(); // main queue

    dispatch_queue_t globQueue = dispatch_get_global_queue(0, 0); // global concurrent queue

    NSLog(@"%@-%@-%@-%@", serial, conque, mainQueue, globQueue);
    // .dq_atomic_flags = DQF_WIDTH(1),
    // .dq_serialnum = 1,
    // a serial queue must differ from a concurrent one in fields like these

    // dispatch_queue_create
}

How the Main Queue Is Created

dispatch_queue_t mainQueue = dispatch_get_main_queue(); // main queue
  • In the open-source libdispatch code, search for dispatch_get_main_queue(void).
  • Then search for DISPATCH_GLOBAL_OBJECT to find its #define. The main queue's definition takes the type as the first parameter and the object as the second, so next search for _dispatch_main_q.
  • Searching for "_dispatch_main_q =" finds where it is created.


  • .dq_label = "com.apple.main-thread" is the main queue's label; it is a quick way to identify the queue, and the same label shows up when printing stack data with bt.
  • .dq_atomic_flags = DQF_THREAD_BOUND | DQF_WIDTH(1) together with .dq_serialnum = 1 shows that the main queue is a serial queue. A quick label check follows below.
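A quick way to confirm that label at runtime (a minimal sketch; dispatch_queue_get_label is a public API):

// Logs "com.apple.main-thread", matching the .dq_label seen in the source.
const char *label = dispatch_queue_get_label(dispatch_get_main_queue());
NSLog(@"main queue label: %s", label);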

Analyzing the Serial Queue's Characteristics

  • dispatch_queue_create
  • _dispatch_lane_create_with_target: its return value, return _dispatch_trace_queue_create(dq)._dq;, adds nothing special, so the object to study becomes dq itself.
  • dq is allocated and initialized as follows:

// allocate the memory
dispatch_lane_t dq = _dispatch_object_alloc(vtable,
    sizeof(struct dispatch_lane_s));
// initialize the object; note the DISPATCH_QUEUE_WIDTH_MAX : 1 decision here
_dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
(dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));

  • The width here decides whether the queue is serial or concurrent: dqf |= DQF_WIDTH(width);
  • This verifies from the source that DQF_WIDTH(1) is what makes the main queue a serial queue. A small behavioral check follows below.
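A small behavioral check of what that width implies (the queue labels here are assumptions): a width-1 queue runs its blocks strictly one at a time and in submission order, while a concurrent queue may overlap them.

dispatch_queue_t serialQ = dispatch_queue_create("kc.serial", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t concQ   = dispatch_queue_create("kc.concurrent", DISPATCH_QUEUE_CONCURRENT);
for (int i = 0; i < 5; i++) {
    dispatch_async(serialQ, ^{ NSLog(@"serial %d", i); });     // always logs 0,1,2,3,4 in order
    dispatch_async(concQ,   ^{ NSLog(@"concurrent %d", i); }); // order and interleaving not guaranteed
}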

The Global Concurrent Queue

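dispatch_get_global_queue hands back one of the preallocated root queues rather than creating anything new. Their labels can be inspected the same way as the main queue's (a sketch; the exact label strings are an assumption based on recent libdispatch builds):

dispatch_queue_t defQ = dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0);
dispatch_queue_t bgQ  = dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0);
// Typically logs "com.apple.root.default-qos" and "com.apple.root.background-qos".
NSLog(@"%s / %s", dispatch_queue_get_label(defQ), dispatch_queue_get_label(bgQ));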

Inheritance Chain

typedef struct dispatch_queue_s : public dispatch_object_s {} *dispatch_queue_t. Conclusion: dispatch_queue_t is essentially a dispatch_queue_s, which inherits from dispatch_object_s.


  • Interview questions:

- (void)MTDemo{
    while (self.num < 5) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            self.num++;
        });
    }
    NSLog(@"end : %d",self.num); // result is >= 5
    // Analysis: the while loop only exits once self.num >= 5, but the increments run
    // on asynchronous threads, so several threads may still bump self.num after the
    // check, and the final value can overshoot 5.
}

- (void)KSDemo{

    for (int i = 0; i < 10000; i++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            self.num++;
        });
    }
    NSLog(@"end : %d",self.num); // < 10000
    // Analysis: the loop dispatches 10000 increments but prints the result without
    // waiting for those asynchronous blocks to finish (and the increments also race),
    // so the printed value is usually smaller than 10000.
}
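One way to make KSDemo deterministic is sketched below (the method and queue names are assumptions): serialize the increments so they no longer race, and wait for all of the dispatched blocks before printing.

- (void)KSDemoFixed{
    dispatch_queue_t syncQ = dispatch_queue_create("kc.sync", DISPATCH_QUEUE_SERIAL);
    dispatch_group_t group = dispatch_group_create();
    for (int i = 0; i < 10000; i++) {
        dispatch_group_async(group, dispatch_get_global_queue(0, 0), ^{
            // funnel the read-modify-write through one serial queue
            dispatch_sync(syncQ, ^{
                self.num++;
            });
        });
    }
    // block until every dispatched increment has finished
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    NSLog(@"end : %d", self.num); // now reliably 10000
}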

Synchronous Functions: GCD's Task Execution Stack

  • When does the content of the block actually begin to execute?

- (void)mainSyncTest{
    dispatch_sync(dispatch_get_global_queue(0, 0), ^{
        NSLog(@"1");
    });
}

The block passed in becomes the work parameter, so look at how _dispatch_Block_invoke wraps it; that is the packaging of the block. For the execution itself, go into _dispatch_sync_f and read the underlying code.

  • _dispatch_sync_f_inline: a serial queue has dq_width = 1 and goes into _dispatch_barrier_sync_f; a concurrent queue falls through to the path below.

  • Set a breakpoint and inspect the call stack: execution enters _dispatch_sync_f_slow. Because the test code uses the global concurrent queue, this is the function that gets hit.

  • _dispatch_sync_function_invoke

  • _dispatch_client_callout completes the invocation of the block.

  • Printing the call stack with bt confirms the derivation above. Conclusion: with a synchronous function, the task executes right where the function is called; the function only needs to check and handle the intermediate state. A small check follows below.
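A small check of that conclusion (the method name is an assumption): the block runs on the very thread that called dispatch_sync, which is why the task shows up in the same backtrace as the call.

- (void)syncInlineCheck{
    NSLog(@"before: %@", [NSThread currentThread]);
    dispatch_sync(dispatch_get_global_queue(0, 0), ^{
        // runs inline, on the same thread that issued the dispatch_sync
        NSLog(@"inside: %@", [NSThread currentThread]);
    });
    NSLog(@"after: %@", [NSThread currentThread]);
}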

Asynchronous Function Calls
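Before tracing the source, a quick behavioral sketch (the method name is an assumption) of the two things to watch for: dispatch_async returns immediately, and the block runs later on a worker thread.

- (void)asyncBehaviorCheck{
    NSLog(@"submitting on %@", [NSThread currentThread]);
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        // runs later, on a worker thread that GCD creates or reuses
        NSLog(@"running on %@", [NSThread currentThread]);
    });
    NSLog(@"dispatch_async returned"); // usually logs before the block runs
}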

  • The points of interest are the asynchronous task and the thread that gets created for it. _dispatch_continuation_priority_set is the function that handles priority.

  • _dispatch_continuation_async

  • Find the macro definition and analyze dq_push.

  • dq_push turns out to be a function slot assigned per queue type; different queue types take different strategies.

  • The global concurrent queue type.

  • A custom concurrent queue used asynchronously.

  • Step into _dispatch_lane_push; note the check here for a barrier block, which controls the flow (a barrier usage sketch follows below).
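For reference, a sketch of the barrier behavior that _dispatch_lane_push has to account for (the queue label is an assumption); note that barriers only have this effect on custom concurrent queues, not on the global queues:

dispatch_queue_t barrierQ = dispatch_queue_create("kc.barrier", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(barrierQ, ^{ NSLog(@"read 1"); });
dispatch_async(barrierQ, ^{ NSLog(@"read 2"); });
// waits for the reads above, runs alone, then lets later blocks proceed
dispatch_barrier_async(barrierQ, ^{ NSLog(@"write (exclusive)"); });
dispatch_async(barrierQ, ^{ NSLog(@"read 3"); });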

  • Set symbolic breakpoints on _dispatch_lane_push, _dispatch_lane_push_waiter and dx_wakeup.

  • Step into dx_wakeup: #define dx_wakeup(x, y, z) dx_vtable(x)->dq_wakeup(x, y, z) shows it is a macro.

  • For a custom concurrent queue, the dq_wakeup slot is assigned _dispatch_lane_wakeup.

The code is long; step through it with the debugger, or confirm the path with a bt backtrace:

_dispatch_queue_wakeup(dispatch_queue_class_t dqu, dispatch_qos_t qos,
		dispatch_wakeup_flags_t flags, dispatch_queue_wakeup_target_t target)
{
	dispatch_queue_t dq = dqu._dq;
	uint64_t old_state, new_state, enqueue = DISPATCH_QUEUE_ENQUEUED;
	dispatch_assert(target != DISPATCH_QUEUE_WAKEUP_WAIT_FOR_EVENT);

	if (target && !(flags & DISPATCH_WAKEUP_CONSUME_2)) {
		_dispatch_retain_2(dq);
		flags |= DISPATCH_WAKEUP_CONSUME_2;
	}

	if (unlikely(flags & DISPATCH_WAKEUP_BARRIER_COMPLETE)) {
		//
		// _dispatch_lane_class_barrier_complete() is about what both regular
		// queues and sources needs to evaluate, but the former can have sync
		// handoffs to perform which _dispatch_lane_class_barrier_complete()
		// doesn't handle, only _dispatch_lane_barrier_complete() does.
		//
		// _dispatch_lane_wakeup() is the one for plain queues that calls
		// _dispatch_lane_barrier_complete(), and this is only taken for non
		// queue types.
		//
		dispatch_assert(dx_metatype(dq) == _DISPATCH_SOURCE_TYPE);
		qos = _dispatch_queue_wakeup_qos(dq, qos);
		return _dispatch_lane_class_barrier_complete(upcast(dq)._dl, qos,
				flags, target, DISPATCH_QUEUE_SERIAL_DRAIN_OWNED);
	}

	if (target) {
		if (target == DISPATCH_QUEUE_WAKEUP_MGR) {
			enqueue = DISPATCH_QUEUE_ENQUEUED_ON_MGR;
		}
		qos = _dispatch_queue_wakeup_qos(dq, qos);
		os_atomic_rmw_loop2o(dq, dq_state, old_state, new_state, release, {
			new_state = _dq_state_merge_qos(old_state, qos);
			if (flags & DISPATCH_WAKEUP_CLEAR_ACTIVATING) {
				// When an event is being delivered to a source because its
				// unote was being registered before the ACTIVATING state
				// had a chance to be cleared, we don't want to fail the wakeup
				// which could lead to a priority inversion.
				//
				// Instead, these wakeups are allowed to finish the pending
				// activation.
				if (_dq_state_is_activating(old_state)) {
					new_state &= ~DISPATCH_QUEUE_ACTIVATING;
				}
			}
			if (likely(!_dq_state_is_suspended(new_state) &&
					!_dq_state_is_enqueued(old_state) &&
					(!_dq_state_drain_locked(old_state) ||
					enqueue != DISPATCH_QUEUE_ENQUEUED_ON_MGR))) {
				// Always set the enqueued bit for async enqueues on all queues
				// in the hierachy
				new_state |= enqueue;
			}
			if (flags & DISPATCH_WAKEUP_MAKE_DIRTY) {
				new_state |= DISPATCH_QUEUE_DIRTY;
			} else if (new_state == old_state) {
				os_atomic_rmw_loop_give_up(goto done);
			}
		});
#if HAVE_PTHREAD_WORKQUEUE_QOS
	} else if (qos) {
		//
		// Someone is trying to override the last work item of the queue.
		//
		os_atomic_rmw_loop2o(dq, dq_state, old_state, new_state, relaxed, {
			// Avoid spurious override if the item was drained before we could
			// apply an override
			if (!_dq_state_drain_locked(old_state) &&
				!_dq_state_is_enqueued(old_state)) {
				os_atomic_rmw_loop_give_up(goto done);
			}
			new_state = _dq_state_merge_qos(old_state, qos);
			if (_dq_state_is_base_wlh(old_state) &&
					!_dq_state_is_suspended(old_state) &&
					/* <rdar://problem/63179930> */
					!_dq_state_is_enqueued_on_manager(old_state)) {

				// Always set the enqueued bit for async enqueues on all queues
				// in the hierachy (rdar://62447289)
				//
				// Scenario:
				// - mach channel DM
				// - targetting TQ
				//
				// Thread 1:
				// - has the lock on (TQ), uncontended sync
				// - causes a wakeup at a low QoS on DM, causing it to have:
				//   max_qos = UT, enqueued = 1
				// - the enqueue of DM onto TQ hasn't happened yet.
				//
				// Thread 2:
				// - an incoming IN IPC is being merged on the servicer
				// - DM having qos=UT, enqueud=1, no further enqueue happens,
				//   but we need an extra override and go through this code for
				//   TQ.
				// - this causes TQ to be "stashed", which requires the enqueued
				//   bit set, else try_lock_wlh() will complain and the
				//   wakeup refcounting will be off.
				new_state |= enqueue;
			}

			if (new_state == old_state) {
				os_atomic_rmw_loop_give_up(goto done);
			}
		});

		target = DISPATCH_QUEUE_WAKEUP_TARGET;
#endif // HAVE_PTHREAD_WORKQUEUE_QOS
	} else {
		goto done;
	}

	if (likely((old_state ^ new_state) & enqueue)) {
		dispatch_queue_t tq;
		if (target == DISPATCH_QUEUE_WAKEUP_TARGET) {
			// the rmw_loop above has no acquire barrier, as the last block
			// of a queue asyncing to that queue is not an uncommon pattern
			// and in that case the acquire would be completely useless
			//
			// so instead use depdendency ordering to read
			// the targetq pointer.
			os_atomic_thread_fence(dependency);
			tq = os_atomic_load_with_dependency_on2o(dq, do_targetq,
					(long)new_state);
		} else {
			tq = target;
		}
		dispatch_assert(_dq_state_is_enqueued(new_state));
		return _dispatch_queue_push_queue(tq, dq, new_state);
	}
#if HAVE_PTHREAD_WORKQUEUE_QOS
	if (unlikely((old_state ^ new_state) & DISPATCH_QUEUE_MAX_QOS_MASK)) {
		if (_dq_state_should_override(new_state)) {
			return _dispatch_queue_wakeup_with_override(dq, new_state,
					flags);
		}
	}
#endif // HAVE_PTHREAD_WORKQUEUE_QOS
done:
	if (likely(flags & DISPATCH_WAKEUP_CONSUME_2)) {
		return _dispatch_release_2_tailcall(dq);
	}
}

  • Put breakpoints on each of the returns above; execution then enters _dispatch_lane_class_barrier_complete.
  • From _dispatch_queue_push_queue it proceeds into _dispatch_root_queue_push.

  • From here on, the flow is the same as for the global concurrent queue.

Analyzing the Global Concurrent Queue

  • Look at how the qos is handled.


  • _dispatch_worker_thread2

  • The task ends up registered as cfg.workq_cb = _dispatch_worker_thread2; or via r = _pthread_workqueue_init_with_workloop(_dispatch_worker_thread2, (pthread_workqueue_function_kevent_t)...). In other words, GCD is a wrapper over the underlying pthread/workqueue layer (a bare-bones pthread sketch follows below).
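For comparison, a bare-bones sketch of the primitive this ultimately rests on: creating a worker thread directly with pthread_create.

#include <pthread.h>
#include <stdio.h>

static void *worker(void *ctx) {
    // the kind of work a GCD worker thread would drain from its queue
    printf("running on a worker thread\n");
    return NULL;
}

int main(void) {
    pthread_t tid;
    if (pthread_create(&tid, NULL, worker, NULL) == 0) {
        pthread_join(tid, NULL); // wait for the worker to finish
    }
    return 0;
}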

  • After the wrapping is done:

  • _dispatch_root_queue_drain

  • _dispatch_continuation_pop_inline

  • _dispatch_continuation_invoke_inline

  • Finally, just as in the synchronous-dispatch analysis, the block is invoked and the task's function is called.

  • Now come back to this function:

_dispatch_root_queue_poke_slow(dispatch_queue_global_t dq, int n, int floor)
{
	int remaining = n;
#if !defined(_WIN32)
	int r = ENOSYS;
#endif

	_dispatch_root_queues_init();
	_dispatch_debug_root_queue(dq, __func__);
	_dispatch_trace_runtime_event(worker_request, dq, (uint64_t)n);

#if !DISPATCH_USE_INTERNAL_WORKQUEUE
#if DISPATCH_USE_PTHREAD_ROOT_QUEUES
	if (dx_type(dq) == DISPATCH_QUEUE_GLOBAL_ROOT_TYPE)
#endif
	{
		_dispatch_root_queue_debug("requesting new worker thread for global "
				"queue: %p", dq);
		r = _pthread_workqueue_addthreads(remaining,
				_dispatch_priority_to_pp_prefer_fallback(dq->dq_priority));
		(void)dispatch_assume_zero(r);
		return;
	}
#endif // !DISPATCH_USE_INTERNAL_WORKQUEUE
#if DISPATCH_USE_PTHREAD_POOL
	dispatch_pthread_root_queue_context_t pqc = dq->do_ctxt;
	if (likely(pqc->dpq_thread_mediator.do_vtable)) {
		while (dispatch_semaphore_signal(&pqc->dpq_thread_mediator)) {
			_dispatch_root_queue_debug("signaled sleeping worker for "
					"global queue: %p", dq);
			if (!--remaining) {
				return;
			}
		}
	}

	bool overcommit = dq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
	if (overcommit) {
		os_atomic_add2o(dq, dgq_pending, remaining, relaxed);
	} else {
		if (!os_atomic_cmpxchg2o(dq, dgq_pending, 0, remaining, relaxed)) {
			_dispatch_root_queue_debug("worker thread request still pending for "
					"global queue: %p", dq);
			return;
		}
	}

	int can_request, t_count;
	// seq_cst with atomic store to tail <rdar://problem/16932833>
	t_count = os_atomic_load2o(dq, dgq_thread_pool_size, ordered);
	do {
		can_request = t_count < floor ? 0 : t_count - floor;
		if (remaining > can_request) {
			_dispatch_root_queue_debug("pthread pool reducing request from %d to %d",
					remaining, can_request);
			os_atomic_sub2o(dq, dgq_pending, remaining - can_request, relaxed);
			remaining = can_request;
		}
		if (remaining == 0) {
			_dispatch_root_queue_debug("pthread pool is full for root queue: "
					"%p", dq);
			return;
		}
	} while (!os_atomic_cmpxchgv2o(dq, dgq_thread_pool_size, t_count,
			t_count - remaining, &t_count, acquire));

#if !defined(_WIN32)
	pthread_attr_t *attr = &pqc->dpq_thread_attr;
	pthread_t tid, *pthr = &tid;
#if DISPATCH_USE_MGR_THREAD && DISPATCH_USE_PTHREAD_ROOT_QUEUES
	if (unlikely(dq == &_dispatch_mgr_root_queue)) {
		pthr = _dispatch_mgr_root_queue_init();
	}
#endif
	do {
		_dispatch_retain(dq); // released in _dispatch_worker_thread
		while ((r = pthread_create(pthr, attr, _dispatch_worker_thread, dq))) {
			if (r != EAGAIN) {
				(void)dispatch_assume_zero(r);
			}
			_dispatch_temporary_resource_shortage();
		}
	} while (--remaining);
#else // defined(_WIN32)
#if DISPATCH_USE_MGR_THREAD && DISPATCH_USE_PTHREAD_ROOT_QUEUES
	if (unlikely(dq == &_dispatch_mgr_root_queue)) {
		_dispatch_mgr_root_queue_init();
	}
#endif
	do {
		_dispatch_retain(dq); // released in _dispatch_worker_thread
#if DISPATCH_DEBUG
		unsigned dwStackSize = 0;
#else
		unsigned dwStackSize = 64 * 1024;
#endif
		uintptr_t hThread = 0;
		while (!(hThread = _beginthreadex(NULL, dwStackSize, _dispatch_worker_thread_thunk, dq, STACK_SIZE_PARAM_IS_A_RESERVATION, NULL))) {
			if (errno != EAGAIN) {
				(void)dispatch_assume(hThread);
			}
			_dispatch_temporary_resource_shortage();
		}
#if DISPATCH_USE_PTHREAD_ROOT_QUEUES
		if (_dispatch_mgr_sched.prio > _dispatch_mgr_sched.default_prio) {
			(void)dispatch_assume_zero(SetThreadPriority((HANDLE)hThread, _dispatch_mgr_sched.prio) == TRUE);
		}
#endif
		CloseHandle((HANDLE)hThread);
	} while (--remaining);
#endif // defined(_WIN32)
#else
	(void)floor;
#endif // DISPATCH_USE_PTHREAD_POOL
}

  • For an ordinary global concurrent queue, a worker thread is requested to execute the tasks.
  • Otherwise it falls into the do-while loop that manages the thread pool, deciding how many threads are available and handling the request accordingly (a worked example of that accounting follows below).
  • A bt backtrace confirms this flow.
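A worked example of the pool accounting in that do-while loop (the numbers are made up for illustration):

int t_count   = 64;  // dgq_thread_pool_size: threads the pool can still create
int floor     = 0;   // threads that must be kept in reserve
int remaining = 3;   // worker threads being requested

int can_request = (t_count < floor) ? 0 : t_count - floor; // 64
if (remaining > can_request) {
    remaining = can_request; // clamp the request to what the pool can satisfy
}
if (remaining == 0) {
    // the pool is full: no new thread is created and the function returns
}
// otherwise dgq_thread_pool_size is reduced by `remaining`, and that many
// worker pthreads are created in the pthread_create loop shown above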