iOS Locks


Questions First

1. What does the following code output?

@property (atomic, assign) NSInteger number;
- (void)test {
    for (int i = 0; i<4; i++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            for (int j = 0; j < 10000; j ++) {
                self.number = self.number + 1;
                NSLog(@"%d-self.number is %ld", i, (long)self.number);
            }
        });
    }
}

2. Is there a problem with the following code? Why?

@property (nonatomic, strong) NSMutableArray *array;
- (void)test {
    for (int i = 0; i<4000; i++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            @synchronized (self.array) {
                self.array = [NSMutableArray array];
            }
        });
    }
}

3. How can we guarantee that the output below is in order?

- (void)test {
    for (int i = 0; i < 10; i++) {
        static void (^increaseMethod)(int);
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            increaseMethod = ^(int num) {
                if (num > 0) {
                    increaseMethod(num - 1);
                }
                NSLog(@" num == %d",num);
            };
            increaseMethod(10);
        });
    }
}

4. What does calling method1 below output?

- (void)method1 {
    self.lock = [[NSLock alloc]init];
    [self.lock lock];
    NSLog(@"1");
    [self method2];
    NSLog(@"2");
    [self.lock unlock];
}

- (void)method2 {
    [self.lock lock];
    NSLog(@"3");
    [self.lock unlock];
}

Categories of Locks

Locks roughly fall into the following categories: spin locks, mutexes, recursive locks (a special kind of mutex), read-write locks, semaphores, and condition locks.

Spin Locks

A spin lock is a kind of lock used for multithreaded synchronization in which the thread repeatedly checks whether the lock variable is available. Because the thread keeps executing during this process, it is a form of busy waiting. Once it acquires the spin lock, the thread holds it until it explicitly releases it.

Acquiring and releasing a spin lock amounts to reading and writing the memory or register that stores the lock, so those reads and writes must be atomic. Spin locks are usually implemented with atomic operations such as test-and-set.

The core of a spin lock built on the TSL (test-and-set) instruction looks like this:

function Lock(boolean *lock) {
    while (test_and_set (lock) == 1)
      ;
}

int TestAndSet(int* lockPtr) {
    int oldValue;
     
    // Start of atomic segment
    // The following statements should be interpreted as pseudocode for
    // illustrative purposes only.
    // Traditional compilation of this code will not guarantee atomicity, the
    // use of shared memory (i.e. not-cached values), protection from compiler
    // optimization, or other required properties.
    oldValue = *lockPtr;
    *lockPtr = LOCKED;
    // End of atomic segment

    return oldValue;
}

— from Wikipedia

Spin Locks in iOS

OSSpinLock

It has been deprecated because of a safety problem (priority inversion). See ibireme (YY)'s well-known article on the subject for details.

atomic

The property-related runtime source looks like this:

void objc_setProperty_atomic(id self, SEL _cmd, id newValue, ptrdiff_t offset)
{
    reallySetProperty(self, _cmd, newValue, offset, true, false, false);
}

static inline void reallySetProperty(id self, SEL _cmd, id newValue, ptrdiff_t offset, bool atomic, bool copy, bool mutableCopy)
{
    if (offset == 0) {
        object_setClass(self, newValue);
        return;
    }

    id oldValue;
    id *slot = (id*) ((char*)self + offset);

    if (copy) {
        newValue = [newValue copyWithZone:nil];
    } else if (mutableCopy) {
        newValue = [newValue mutableCopyWithZone:nil];
    } else {
        if (*slot == newValue) return;
        newValue = objc_retain(newValue);
    }

    if (!atomic) {
        oldValue = *slot;
        *slot = newValue;
    } else {
        spinlock_t& slotlock = PropertyLocks[slot];
        slotlock.lock();
        oldValue = *slot;
        *slot = newValue;        
        slotlock.unlock();
    }

    objc_release(oldValue);
}

id objc_getProperty(id self, SEL _cmd, ptrdiff_t offset, BOOL atomic) {
    if (offset == 0) {
        return object_getClass(self);
    }

    // Retain release world
    id *slot = (id*) ((char*)self + offset);
    if (!atomic) return *slot;
        
    // Atomic retain release world
    spinlock_t& slotlock = PropertyLocks[slot];
    slotlock.lock();
    id value = objc_retain(*slot);
    slotlock.unlock();
    
    // for performance, we (safely) issue the autorelease OUTSIDE of the spinlock.
    return objc_autoreleaseReturnValue(value);
}

As you can see, atomic properties are locked via spinlock_t. The spinlock_t source:

using spinlock_t = mutex_tt<LOCKDEBUG>;

class mutex_tt : nocopy_t {
    os_unfair_lock mLock;
 public:
    constexpr mutex_tt() : mLock(OS_UNFAIR_LOCK_INIT) {
        lockdebug_remember_mutex(this);
    }

    constexpr mutex_tt(__unused const fork_unsafe_lock_t unsafe) : mLock(OS_UNFAIR_LOCK_INIT) { }

    void lock() {
        lockdebug_mutex_lock(this);

        // <rdar://problem/50384154>
        uint32_t opts = OS_UNFAIR_LOCK_DATA_SYNCHRONIZATION | OS_UNFAIR_LOCK_ADAPTIVE_SPIN;
        os_unfair_lock_lock_with_options_inline
            (&mLock, (os_unfair_lock_options_t)opts);
    }

    void unlock() {
        lockdebug_mutex_unlock(this);

        os_unfair_lock_unlock_inline(&mLock);
    }
   ......
  }

Digging into the implementation shows it is built on os_unfair_lock; the lock/unlock work is actually done by mutex_tt, i.e. a mutex. Despite the name, spinlock_t is no longer a true spin lock.

In other words, atomic only locks the setter and getter methods.

Pros and Cons

  • Pros: a spin lock never puts the caller to sleep, so there is no thread-scheduling or time-slice-switching overhead. If the lock can be acquired quickly, a spin lock is far more efficient than a mutex. It suits code that holds the lock only briefly.
  • Cons: a spin lock keeps occupying the CPU: while the lock is unavailable, the thread keeps running (busy waiting, polling). If the lock cannot be acquired quickly, this wastes CPU time. A spin lock also cannot be acquired recursively.

Mutexes

A mutex (mutual-exclusion lock) is a mechanism in multithreaded programming that prevents two threads from reading and writing the same shared resource (such as a global variable) at the same time. It achieves this by slicing code into critical sections. A critical section is a piece of code that accesses a shared resource; it is not itself a mechanism or an algorithm. A program, process, or thread can have multiple critical sections, and it does not necessarily apply a mutex to them.

Examples of resources that need this mechanism: flags, queues, counters, interrupt handlers, and other resources used to pass data and synchronize state between concurrently running code paths. Keeping such resources synchronized, consistent, and intact is difficult, because a thread can be suspended (put to sleep) or resumed (woken) at any moment.

— from Wikipedia

Mutexes in iOS

@synchronized

Whether you rewrite the code with clang or step through the assembly, you can see that the core functions are objc_sync_enter and objc_sync_exit. The source:

int objc_sync_enter(id obj)
{
    int result = OBJC_SYNC_SUCCESS;
    if (obj) {
        SyncData* data = id2data(obj, ACQUIRE);
        ASSERT(data);
        data->mutex.lock();
    } else {
        // @synchronized(nil) does nothing
        if (DebugNilSync) {
            _objc_inform("NIL SYNC DEBUG: @synchronized(nil); set a breakpoint on objc_sync_nil to debug");
        }
        objc_sync_nil();
    }
    return result;
}

int objc_sync_exit(id obj)
{
    int result = OBJC_SYNC_SUCCESS;
    if (obj) {
        SyncData* data = id2data(obj, RELEASE); 
        if (!data) {
            result = OBJC_SYNC_NOT_OWNING_THREAD_ERROR;
        } else {
            bool okay = data->mutex.tryUnlock();
            if (!okay) {
                result = OBJC_SYNC_NOT_OWNING_THREAD_ERROR;
            }
        }
    } else {
        // @synchronized(nil) does nothing
    }
    return result;
}
  • obj is of type id, i.e. an object pointer (objc_object *).
  • If obj == nil, objc_sync_nil() is called, which simply returns.
  • objc_sync_enter locks when the task begins, and objc_sync_exit unlocks when it ends. With a nil argument neither locking nor unlocking happens; this pair is how @synchronized implements its locking and unlocking.
  • Both objc_sync_enter and objc_sync_exit call id2data, and the lock/unlock calls are made on its return value.
  • The mutex member is a recursive_mutex_t, ultimately backed by os_unfair_recursive_lock — a recursive mutex.

So both locking and unlocking go through id2data. Before analyzing the id2data source, look at the SyncData, StripedMap, and SyncCache structures.

typedef struct alignas(CacheLineSize) SyncData {
    struct SyncData* nextData;        // next node in a singly linked list
    DisguisedPtr<objc_object> object; // the locked object, wrapped for storage
    int32_t threadCount;              // number of threads locking this same object
    recursive_mutex_t mutex;          // recursive lock
} SyncData;
  • nextData: the next SyncData, forming a singly linked list.
  • object: the object, wrapped so it can be stored and compared cheaply.
  • threadCount: how many threads are locking this same object.
  • mutex: a recursive lock (recursive_mutex_t).

StripedMap structure
#define LOCK_FOR_OBJ(obj) sDataLists[obj].lock
#define LIST_FOR_OBJ(obj) sDataLists[obj].data
static StripedMap<SyncList> sDataLists;

struct SyncList {
    SyncData *data;
    spinlock_t lock;

    constexpr SyncList() : data(nil), lock(fork_unsafe_lock) { }
};

class StripedMap {
#if TARGET_OS_IPHONE && !TARGET_OS_SIMULATOR
    enum { StripeCount = 8 };
#else
    enum { StripeCount = 64 };
#endif
    ......
}
  • StripedMap is a hash table; it stores 8 SyncList entries on a real device and 64 in other environments.
  • SyncList is a struct with two members: SyncData *data and lock.

SyncCache structure
typedef struct {
    SyncData *data;
    unsigned int lockCount;  // number of times THIS THREAD locked this block
} SyncCacheItem;

typedef struct SyncCache {
    unsigned int allocated;
    unsigned int used;
    SyncCacheItem list[0];
} SyncCache;
  • SyncCache is a struct; each thread's cache corresponds to one SyncCache.
  • SyncCacheItem describes one object's lock state: it contains the SyncData plus lockCount, the number of times the current thread has locked that object.

The main event: id2data
static SyncData* id2data(id object, enum usage why)
{
    /// Fetch the lock for this object from the hash table, to guard against concurrent access by multiple threads
    spinlock_t *lockp = &LOCK_FOR_OBJ(object);
    /// Fetch the address of this object's SyncData list from the hash table
    SyncData **listp = &LIST_FOR_OBJ(object);
    SyncData* result = NULL;
#if SUPPORT_DIRECT_THREAD_KEYS
    // Check per-thread single-entry fast cache for matching object
    bool fastCacheOccupied = NO;
    /// Look for the data in thread-local storage
    SyncData *data = (SyncData *)tls_get_direct(SYNC_DATA_DIRECT_KEY);
    if (data) {
        fastCacheOccupied = YES;
        if (data->object == object) {
            // Found a match in fast cache.
            uintptr_t lockCount;
            result = data;
            lockCount = (uintptr_t)tls_get_direct(SYNC_COUNT_DIRECT_KEY);
            if (result->threadCount <= 0  ||  lockCount <= 0) {
                _objc_fatal("id2data fastcache is buggy");
            }
            switch(why) {
            case ACQUIRE: {
                lockCount++;
                tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)lockCount);
                break;
            }
            case RELEASE:
                lockCount--;
                tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)lockCount);
                if (lockCount == 0) {
                    // remove from fast cache
                    tls_set_direct(SYNC_DATA_DIRECT_KEY, NULL);
                    // atomic because may collide with concurrent ACQUIRE
                    OSAtomicDecrement32Barrier(&result->threadCount);
                }
                break;
            case CHECK:
                // do nothing
                break;
            }
            return result;
        }
    }
#endif
    // Check per-thread cache of already-owned locks for matching object
    SyncCache *cache = fetch_cache(NO);
    /// Search the per-thread cache
    if (cache) {
        unsigned int i;
        for (i = 0; i < cache->used; i++) {
            SyncCacheItem *item = &cache->list[i];
            if (item->data->object != object) continue;

            // Found a match.
            result = item->data;
            if (result->threadCount <= 0  ||  item->lockCount <= 0) {
                _objc_fatal("id2data cache is buggy");
            }
            switch(why) {
            case ACQUIRE:
                item->lockCount++;
                break;
            case RELEASE:
                item->lockCount--;
                if (item->lockCount == 0) {
                    // remove from per-thread cache
                    cache->list[i] = cache->list[--cache->used];
                    // atomic because may collide with concurrent ACQUIRE
                    OSAtomicDecrement32Barrier(&result->threadCount);
                }
                break;
            case CHECK:
                // do nothing
                break;
            }
            return result;
        }
    }
    // Thread cache didn't find anything.
    // Walk in-use list looking for matching object
    // Spinlock prevents multiple threads from creating multiple 
    // locks for the same new object.
    // We could keep the nodes in some hash table if we find that there are
    // more than 20 or so distinct locks active, but we don't do that now.
    lockp->lock();
    /// Slow path taken under multithreaded contention
    {
        SyncData* p;
        SyncData* firstUnused = NULL;
        for (p = *listp; p != NULL; p = p->nextData) {
            if ( p->object == object ) {
                result = p;
                // atomic because may collide with concurrent RELEASE
                OSAtomicIncrement32Barrier(&result->threadCount);
                goto done;
            }
            if ( (firstUnused == NULL) && (p->threadCount == 0) )
                firstUnused = p;
        }
        // no SyncData currently associated with object
        if ( (why == RELEASE) || (why == CHECK) )
            goto done;
        // an unused one was found, use it
        if ( firstUnused != NULL ) {
            result = firstUnused;
            result->object = (objc_object *)object;
            result->threadCount = 1;
            goto done;
        }
    }

    // Allocate a new SyncData and add to list.
    // XXX allocating memory with a global lock held is bad practice,
    // might be worth releasing the lock, allocating, and searching again.
    // But since we never free these guys we won't be stuck in allocation very often.
    /// Create a new SyncData
    posix_memalign((void **)&result, alignof(SyncData), sizeof(SyncData));
    /// Initialize it
    result->object = (objc_object *)object;
    result->threadCount = 1;
    new (&result->mutex) recursive_mutex_t(fork_unsafe_lock);
    result->nextData = *listp;
    /// Update the entry in the hash table
    *listp = result;
 done:
    lockp->unlock();
    if (result) {
        // Only new ACQUIRE should get here.
        // All RELEASE and CHECK and recursive ACQUIRE are 
        // handled by the per-thread caches above.
        if (why == RELEASE) {
            // Probably some thread is incorrectly exiting 
            // while the object is held by another thread.
            return nil;
        }
        if (why != ACQUIRE) _objc_fatal("id2data is buggy");
        if (result->object != object) _objc_fatal("id2data is buggy");

#if SUPPORT_DIRECT_THREAD_KEYS
        if (!fastCacheOccupied) {
            // Save in fast thread cache
            tls_set_direct(SYNC_DATA_DIRECT_KEY, result);
            tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)1);
        } else 
#endif
        {
            // Save in thread cache
            if (!cache) cache = fetch_cache(YES);
            cache->list[cache->used].data = result;
            cache->list[cache->used].lockCount = 1;
            cache->used++;
        }
    }
    return result;
}
  • First look up the SyncData via tls_get_direct (thread-local storage).
    • If found, read lockCount, which records how many times the object has been locked (i.e. the nesting depth).
    • Then branch on the operation type: ① ACQUIRE means lock, so lockCount++ and store it back to TLS. ② RELEASE means unlock, so lockCount-- and store it back. ③ If lockCount reaches 0, remove the data from TLS.
  • If TLS has nothing, search the per-thread cache.
    • fetch_cache checks whether this thread has a cache.
    • If so, walk the cache and read out the matching SyncCacheItem.
    • Take data from the SyncCacheItem; the remaining steps match the TLS path.
  • If neither finds anything, this is the first entry for the object: create a SyncData and store it in the appropriate cache.
    • If an in-use-list node for this object is found, reuse it and increment threadCount.
    • Otherwise create a new node with threadCount = 1.

Summary

  1. Every object passed to @synchronized gets a SyncData created and bound to it, stored in a hash table.
  2. Reaching that SyncData involves linked-list walks, cache lookups, and recursion, which costs both memory and performance.
  3. The SyncData holds a recursive lock, so locking and unlocking nest automatically.

Pros and Cons

  • Pros: when the locked resource is contended, the calling thread sleeps and the CPU can schedule other threads. A mutex is therefore recommended for long-running, complex tasks.
  • Cons: a mutex incurs thread-scheduling overhead; if the task is very short, that scheduling cost makes it less CPU-efficient than spinning.

NSLock

NSLock belongs to the Foundation framework, which is not open source, so we take a detour: swift-corelibs-foundation is open source, and we can analyze its NSLock implementation, whose principle should be roughly the same as the Objective-C version.

The core code:

open class NSLock: NSObject, NSLocking {
    internal var mutex = _MutexPointer.allocate(capacity: 1)
#if os(macOS) || os(iOS) || os(Windows)
    private var timeoutCond = _ConditionVariablePointer.allocate(capacity: 1)
    private var timeoutMutex = _MutexPointer.allocate(capacity: 1)
#endif

    public override init() {
#if os(Windows)
        InitializeSRWLock(mutex)
        InitializeConditionVariable(timeoutCond)
        InitializeSRWLock(timeoutMutex)
#else
        pthread_mutex_init(mutex, nil)
#if os(macOS) || os(iOS)
        pthread_cond_init(timeoutCond, nil)
        pthread_mutex_init(timeoutMutex, nil)
#endif
#endif
    }
    
    deinit {
#if os(Windows)
        // SRWLocks do not need to be explicitly destroyed
#else
        pthread_mutex_destroy(mutex)
#endif
        mutex.deinitialize(count: 1)
        mutex.deallocate()
#if os(macOS) || os(iOS) || os(Windows)
        deallocateTimedLockData(cond: timeoutCond, mutex: timeoutMutex)
#endif
    }
    
    open func lock() {
#if os(Windows)
        AcquireSRWLockExclusive(mutex)
#else
        pthread_mutex_lock(mutex)
#endif
    }

    open func unlock() {
#if os(Windows)
        ReleaseSRWLockExclusive(mutex)
        AcquireSRWLockExclusive(timeoutMutex)
        WakeAllConditionVariable(timeoutCond)
        ReleaseSRWLockExclusive(timeoutMutex)
#else
        pthread_mutex_unlock(mutex)
#if os(macOS) || os(iOS)
        // Wakeup any threads waiting in lock(before:)
        pthread_mutex_lock(timeoutMutex)
        pthread_cond_broadcast(timeoutCond)
        pthread_mutex_unlock(timeoutMutex)
#endif
#endif
    }

    open func `try`() -> Bool {
#if os(Windows)
        return TryAcquireSRWLockExclusive(mutex) != 0
#else
        return pthread_mutex_trylock(mutex) == 0
#endif
    }
    
    open func lock(before limit: Date) -> Bool {
#if os(Windows)
        if TryAcquireSRWLockExclusive(mutex) != 0 {
          return true
        }
#else
        if pthread_mutex_trylock(mutex) == 0 {
            return true
        }
#endif

#if os(macOS) || os(iOS) || os(Windows)
        return timedLock(mutex: mutex, endTime: limit, using: timeoutCond, with: timeoutMutex)
#else
        guard var endTime = timeSpecFrom(date: limit) else {
            return false
        }
#if os(WASI)
        return true
#else
        return pthread_mutex_timedlock(mutex, &endTime) == 0
#endif
#endif
    }

    open var name: String?
}

The source shows that the underlying implementation is a pthread_mutex.

pthread_mutex

pthread_mutex, also known as the POSIX mutex, is the C-level way to lock across threads. Its API:

int pthread_mutex_init(pthread_mutex_t * __restrict, const pthread_mutexattr_t * __restrict);

int pthread_mutex_lock(pthread_mutex_t *);

int pthread_mutex_trylock(pthread_mutex_t *);

int pthread_mutex_unlock(pthread_mutex_t *);

int pthread_mutex_destroy(pthread_mutex_t *);

int pthread_mutex_setprioceiling(pthread_mutex_t * __restrict, int, int * __restrict);

int pthread_mutex_getprioceiling(const pthread_mutex_t * __restrict, int * __restrict);

A mutex can be initialized in two ways:
The static way: pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER; for global or static variables.
The dynamic way: pthread_mutex_init(pthread_mutex_t *, const pthread_mutexattr_t *). pthread_mutex_init initializes a lock; passing NULL for the attributes gives the default type. There are four types:

PTHREAD_MUTEX_NORMAL: the plain lock. Once one thread holds it, other requesting threads form a wait queue and acquire the lock first-in-first-out after it is unlocked.

PTHREAD_MUTEX_ERRORCHECK: the error-checking lock. If the same thread requests the same lock again it returns EDEADLK; otherwise it behaves like the plain lock. This guarantees that, when relocking is not allowed, a nested relock does not deadlock.

PTHREAD_MUTEX_RECURSIVE: the recursive lock. The same thread may successfully acquire the same lock multiple times, and must unlock it the same number of times.

PTHREAD_MUTEX_DEFAULT: the implementation's default type; relocking it from the same thread is undefined behavior, and an implementation may map it to one of the other types.

pthread_mutexattr_t is used to set the lock's type, e.g. a recursive lock:

pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)

The difference between pthread_mutex_trylock and pthread_mutex_lock: pthread_mutex_lock blocks until the lock is acquired, while pthread_mutex_trylock returns immediately — 0 on success, or an error code on failure.

A quick recap of the API:

pthread_mutex_init(pthread_mutex_t *mutex, const pthread_mutexattr_t *attr); initializes the lock variable mutex; attr is the attribute set, NULL means defaults.

pthread_mutex_lock(pthread_mutex_t *mutex); locks.

pthread_mutex_trylock(pthread_mutex_t *mutex); locks, but unlike lock, returns EBUSY instead of blocking when the lock is already held.

pthread_mutex_unlock(pthread_mutex_t *mutex); unlocks.

pthread_mutex_destroy(pthread_mutex_t *mutex); destroys the lock when done; commonly needed with recursive locks.

pthread_mutexattr_setpshared sets the mutex's process-shared attribute.

pthread_mutexattr_getpshared gets the mutex's process-shared attribute.

Recursive Locks (a Special Mutex)

A recursive lock lets the same thread lock N times without deadlocking. It is a special mutex: a mutex with recursive semantics.

Recursive Locks in iOS

NSRecursiveLock

NSRecursiveLock, like NSLock, belongs to Foundation, so again we use swift-corelibs-foundation to analyze its underlying implementation:

public override init() {
    super.init()
#if os(Windows)
    InitializeCriticalSection(mutex)
    InitializeConditionVariable(timeoutCond)
    InitializeSRWLock(timeoutMutex)
#else
#if CYGWIN || os(OpenBSD)
    var attrib : pthread_mutexattr_t? = nil
#else
    var attrib = pthread_mutexattr_t()
#endif
    withUnsafeMutablePointer(to: &attrib) { attrs in
        pthread_mutexattr_init(attrs)
#if os(OpenBSD)
        let type = Int32(PTHREAD_MUTEX_RECURSIVE.rawValue)
#else
        let type = Int32(PTHREAD_MUTEX_RECURSIVE)
#endif
        pthread_mutexattr_settype(attrs, type)
        pthread_mutex_init(mutex, attrs)
    }
#if os(macOS) || os(iOS)
    pthread_cond_init(timeoutCond, nil)
    pthread_mutex_init(timeoutMutex, nil)
#endif
#endif
}

Comparing NSLock and NSRecursiveLock, their implementations are almost identical; the difference is in init, where NSRecursiveLock sets the PTHREAD_MUTEX_RECURSIVE type while NSLock uses the default.

When NSRecursiveLock is used across multiple threads, deadlock can still occur if the threads end up waiting on each other to acquire and release the lock.

Read-Write Locks

A read-write lock is a synchronization mechanism for concurrency control, also called a shared-exclusive lock or multiple-readers/single-writer lock, used to solve the readers-writers problem. Read operations can proceed concurrently and reentrantly; write operations are mutually exclusive.

Read-write locks are usually implemented with mutexes, condition variables, or semaphores.

— from Wikipedia

Read-Write Locks in iOS

AFNetworking uses the pattern like this:

self.requestHeaderModificationQueue = dispatch_queue_create("requestHeaderModificationQueue", DISPATCH_QUEUE_CONCURRENT);

/// Reads go through dispatch_sync on the concurrent queue, so multiple readers can proceed at once
- (NSDictionary *)HTTPRequestHeaders {
    NSDictionary __block *value;
    dispatch_sync(self.requestHeaderModificationQueue, ^{
        value = [NSDictionary dictionaryWithDictionary:self.mutableHTTPRequestHeaders];
    });
    return value;
}
/// Writes use dispatch_barrier_sync/dispatch_barrier_async, which guarantees only one thread mutates at a time
- (void)setValue:(NSString *)value
forHTTPHeaderField:(NSString *)field
{
    dispatch_barrier_sync(self.requestHeaderModificationQueue, ^{
        [self.mutableHTTPRequestHeaders setValue:value forKey:field];
    });
}
/// Reads can fetch values concurrently
- (NSString *)valueForHTTPHeaderField:(NSString *)field {
    NSString __block *value;
    dispatch_sync(self.requestHeaderModificationQueue, ^{
        value = [self.mutableHTTPRequestHeaders valueForKey:field];
    });
    return value;
}

A hand-rolled version:

@interface ViewController ()

@property (nonatomic, copy)NSString *text;
@property (nonatomic, strong)dispatch_queue_t currentQueue;

@end

@implementation ViewController

@synthesize text = _text;

- (void)readWriteLock {
    self.currentQueue = dispatch_queue_create("com.andy.test", DISPATCH_QUEUE_CONCURRENT);
    /// Simulate writes
    for (int i = 0; i<50; i++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            self.text = [NSString stringWithFormat:@"%d",i];
        });
    }
    /// Simulate reads
    for (int i = 0; i<50; i++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            NSString *string = self.text;
        });
    }
}


/// Write: the barrier function does not allow concurrency, so writes enter single-threaded
- (void)setText:(NSString *)text {
    __weak typeof(self) weakSelf = self;
    dispatch_barrier_async(self.currentQueue, ^{
        __strong typeof(weakSelf) strongSelf = weakSelf;
        strongSelf->_text = text;
        NSLog(@"write----%@----%@", text, [NSThread currentThread]);
        usleep(500000); // sleep() takes whole seconds; use usleep for fractions
    });
}
}

/// Using the same queue for reads and writes prevents them from overlapping

/// Read: reads may run concurrently, so multiple reader threads can enter
- (NSString *)text {
    __block NSString *string = nil;
    __weak typeof(self) weakSelf = self;
    dispatch_sync(self.currentQueue, ^{
        __strong typeof(weakSelf) strongSelf = weakSelf;
        string = strongSelf->_text;
        NSLog(@"read----%@----%@", string, [NSThread currentThread]);
        usleep(100000);
    });
    return string;
}

@end

pthread_rwlock_t also works:

// Initialization
@property (nonatomic, assign) pthread_rwlock_t rwlock;
pthread_rwlock_init(&_rwlock, NULL);

- (void)write:(int)i {
    pthread_rwlock_wrlock(&_rwlock);
    self.text = [NSString stringWithFormat:@"%d", i];
    NSLog(@"write----%@----%@", self.text, [NSThread currentThread]);
    usleep(600000); // sleep() takes whole seconds; use usleep for fractions
    pthread_rwlock_unlock(&_rwlock);
}

- (void)read {
    pthread_rwlock_rdlock(&_rwlock);
    NSLog(@"read----%@----%@", self.text, [NSThread currentThread]);
    usleep(100000);
    pthread_rwlock_unlock(&_rwlock);
}

- (void)dealloc {
    pthread_rwlock_destroy(&_rwlock);
}

Semaphores

See the analysis in the GCD article for details.

Condition Locks

A condition lock is a lock with a specific condition attached; the condition itself is just an abstract notion. In plain terms, it is a mutex with a condition.
For example, with an NSConditionLock object you can ensure that a thread acquires the lock only when a particular condition is met. Once it has acquired the lock and executed the critical section, the thread can give up the lock and set the associated condition to a new value. The conditions themselves are arbitrary: you define them to suit your application's needs.

Answers to the Opening Questions

1. What does the following code output?
The increment runs 40000 times in total, but the final value can be anywhere up to 40000 (<= 40000). atomic only locks the setter and getter, and the code above runs four concurrent threads. If at some moment thread 1 has just executed the getter and the CPU immediately switches to thread 2, which runs its own getter, both threads read the same value, each add 1, and each run the setter — so both end up with the same number. The read-modify-write as a whole is not protected, the +1 has a thread-safety problem, and the count drifts.

2. Is there a problem with the following code? Why?
It crashes, mainly because array momentarily becomes nil. From the @synchronized internals we know that locking on a nil object locks nothing. The block keeps retaining and releasing arrays as the property is reassigned, and at some instant the previous value has not finished being released while the next one is already about to be released, which produces a dangling pointer.

3. Adding a mutex with NSLock or @synchronized is enough:

With NSLock:
- (void)test {
    NSLock *lock = [[NSLock alloc]init];
    for (int i = 0; i < 10; i++) {
        static void (^increaseMethod)(int);
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            [lock lock];
            increaseMethod = ^(int num) {
                if (num > 0) {
                    increaseMethod(num - 1);
                }
                NSLog(@" num == %d",num);
            };
            increaseMethod(10);
            [lock unlock];
        });
    }
}
With @synchronized:
- (void)test {
    for (int i = 0; i < 10; i++) {
        static void (^increaseMethod)(int);
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            increaseMethod = ^(int num) {
                @synchronized (self) {
                    if (num > 0) {
                        increaseMethod(num - 1);
                    }
                    NSLog(@" num == %d",num);
                }
            };
            increaseMethod(10);
        });
    }
}

4. What does calling method1 output?
Output: 1, followed by a deadlock. The current thread has already acquired the lock at the first lock call; when it reaches lock on the same NSLock again in method2, it must wait for the lock to be released, but the thread that would release it is now blocked, so it can never unlock. A recursive lock solves this:

- (void)method1 {
    self.lock = [[NSRecursiveLock alloc]init];
    [self.lock lock];
    NSLog(@"1");
    [self method2];
    NSLog(@"2");
    [self.lock unlock];
}

- (void)method2 {
    [self.lock lock];
    NSLog(@"3");
    [self.lock unlock];
}