Binder Source Code Analysis: How ServiceManager Looks Up a Service


In the earlier analyses of getIServiceManager and addService we started from the Java-layer code and worked downward. getService is similar to them in several places, so to avoid repeating the same flow, this time we start from a native-layer use case: obtaining ICameraService.

Obtaining defaultServiceManager

Our starting point is frameworks/av/camera/ndk/impl/ACameraManager.cpp, where the calling code looks like this:

const char*  kCameraServiceName  = "media.camera";
....

sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder;  
do {  
    binder = sm->getService(String16(kCameraServiceName));  
    if (binder != nullptr) {  
        break;  
    }  
    usleep(kCameraServicePollDelay);  
} while(true);

Here defaultServiceManager() is used to obtain the ServiceManager. Its source is as follows:

//frameworks/native/libs/binder/IServiceManager.cpp
sp<IServiceManager> defaultServiceManager()  
{  
    std::call_once(gSmOnce, []() {  
        sp<AidlServiceManager> sm = nullptr;  
        while (sm == nullptr) {  
            sm = interface_cast<AidlServiceManager>(ProcessState::self()->getContextObject(nullptr));  
            if (sm == nullptr) {  
                sleep(1); 
            }  
        }  
  
        gDefaultServiceManager = sp<ServiceManagerShim>::make(sm);  
    });  
  
    return gDefaultServiceManager;  
}

This code is quite similar to the Java-layer code we saw earlier: it also first fetches the context object via getContextObject, and ServiceManagerShim serves as the native-layer proxy for ServiceManager. Because the native code does not pay the cost of converting objects into Java ones, it is actually a little simpler. Now that we have the ServiceManagerShim, we can move on to its getService method.
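As a side note, the shim pattern itself is easy to model. Below is a minimal standalone sketch (all Fake* names are hypothetical, not the real AOSP classes, and std::wstring stands in for String16): the shim holds the AIDL-generated interface and adapts the older libbinder-style API onto it, converting the string type at the boundary just as ServiceManagerShim converts String16 to String8.

```cpp
#include <map>
#include <memory>
#include <string>

// Stand-in for an IBinder handle.
struct FakeBinder {};

// Stand-in for the AIDL-generated AidlServiceManager interface,
// which speaks std::string.
struct FakeAidlServiceManager {
    std::map<std::string, std::shared_ptr<FakeBinder>> services;
    std::shared_ptr<FakeBinder> checkService(const std::string& name) {
        auto it = services.find(name);
        return it == services.end() ? nullptr : it->second;
    }
};

// The shim: forwards calls to the held AIDL interface, converting the
// name type at the boundary (here wstring -> string, mimicking
// String16 -> String8 in the real ServiceManagerShim).
class FakeServiceManagerShim {
public:
    explicit FakeServiceManagerShim(std::shared_ptr<FakeAidlServiceManager> sm)
        : mTheRealServiceManager(std::move(sm)) {}

    std::shared_ptr<FakeBinder> checkService(const std::wstring& name16) {
        std::string name8(name16.begin(), name16.end()); // narrow at the boundary
        return mTheRealServiceManager->checkService(name8);
    }

private:
    std::shared_ptr<FakeAidlServiceManager> mTheRealServiceManager;
};
```

The real shim additionally converts the AIDL Status result into the legacy return conventions, which this sketch omits.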

Requesting getService

sp<IBinder> ServiceManagerShim::getService(const String16& name) const  
{  
    static bool gSystemBootCompleted = false;  
  
    sp<IBinder> svc = checkService(name);  
    if (svc != nullptr) return svc;  
  
    const bool isVendorService =  
        strcmp(ProcessState::self()->getDriverName().c_str(), "/dev/vndbinder") == 0;  
    constexpr int64_t timeout = 5000;  
    int64_t startTime = uptimeMillis();  
    // Vendor services cannot access system properties
    if (!gSystemBootCompleted && !isVendorService) {  
#ifdef __ANDROID__  
        char bootCompleted[PROPERTY_VALUE_MAX];  
        property_get("sys.boot_completed", bootCompleted, "0");  
        gSystemBootCompleted = strcmp(bootCompleted, "1") == 0 ? true : false;  
#else  
        gSystemBootCompleted = true;  
#endif  
    }  
    // If the binder service is not available yet, keep polling; system and vendor services use different intervals, and we stop only on timeout
    const useconds_t sleepTime = gSystemBootCompleted ? 1000 : 100;  
    int n = 0;  
    while (uptimeMillis() - startTime < timeout) {  
        n++;  
        usleep(1000*sleepTime);  
  
        sp<IBinder> svc = checkService(name);  
        if (svc != nullptr) {  
            return svc;  
        }  
    }  
    return nullptr;  
}
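The structure of this loop, one immediate check followed by polling at an interval until success or timeout, is the same pattern the ACameraManager snippet at the top uses. A generic standalone sketch (hypothetical pollFor helper; std::chrono replaces uptimeMillis/usleep):

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Hypothetical helper mirroring ServiceManagerShim::getService's loop:
// call `check` repeatedly, sleeping `interval` between attempts, until it
// returns non-null or `timeout` elapses. Not an AOSP API.
template <typename T>
T* pollFor(const std::function<T*()>& check,
           std::chrono::milliseconds interval,
           std::chrono::milliseconds timeout) {
    auto start = std::chrono::steady_clock::now();
    if (T* v = check()) return v;          // first attempt, like checkService()
    while (std::chrono::steady_clock::now() - start < timeout) {
        std::this_thread::sleep_for(interval);
        if (T* v = check()) return v;      // retry until found
    }
    return nullptr;                        // timed out, like getService()
}
```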

getService just calls checkService to fetch the service; its source is:

sp<IBinder> ServiceManagerShim::checkService(const String16& name) const  
{  
    sp<IBinder> ret;  
    if (!mTheRealServiceManager->checkService(String8(name).c_str(), &ret).isOk()) {  
        return nullptr;  
    }  
    return ret;  
}

Here we call mTheRealServiceManager's checkService method. This member holds the AIDL-generated client proxy for ServiceManager (BpServiceManager, which wraps a BpBinder); its generated code is essentially:

Parcel data, reply;
data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
data.writeString16(name);
remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
return reply.readStrongBinder();

This is similar to the addService path we analyzed before; the only addition is the trailing readStrongBinder: addService used writeStrongBinder to put the binder into data, whereas here we read the binder back from the reply of the binder call. The rest of the flow also mirrors addService, so we will not walk through it again. What interests us is how the binder itself is obtained, which means looking at three places: how ServiceManager finds the binder and writes it back to us through the driver, how IPCThreadState handles the received data, and finally how readStrongBinder turns that data into a binder.

The client-side flow for requesting a binder service looks roughly like this:

sequenceDiagram
ServiceManagerShim->>ServiceManagerShim: defaultServiceManager()
ServiceManagerShim->>ServiceManagerShim: getService()
ServiceManagerShim->>ServiceManagerShim: checkService()
ServiceManagerShim->>+BpServiceManager:checkService()
BpServiceManager->>+BpBinder: transact()
BpBinder->>+IPCThreadState: transact()
IPCThreadState->>IPCThreadState: writeTransactionData()
IPCThreadState->>IPCThreadState: waitForResponse()
IPCThreadState->>IPCThreadState: talkWithDriver()
IPCThreadState->>+BinderDriver: ioctl(BC_TRANSACTION)
BinderDriver-->>-IPCThreadState: reply:BR_REPLY
IPCThreadState-->>-BpBinder: return result
BpBinder-->>-BpServiceManager: return result
BpServiceManager->>BpServiceManager: readStrongBinder()
BpServiceManager-->>-ServiceManagerShim: return binder

ServiceManager Server-Side getService

From the earlier addService analysis we already know the call path is BBinder::transact --> BnServiceManager::onTransact --> ServiceManager::addService. The server side here is analogous; see the flow diagram below.

sequenceDiagram
BinderDriver-->>IPCThreadState: handlePolledCommands
loop mIn.dataPosition < mIn.dataSize (while input data remains unprocessed)
IPCThreadState->>+IPCThreadState: getAndExecuteCommand
IPCThreadState->>IPCThreadState: executeCommand: BR_TRANSACTION
IPCThreadState->>BBinder: transact()
BBinder->>+BnServiceManager: onTransact()
BnServiceManager->>+ServiceManager: getService()
ServiceManager-->>-BnServiceManager: return Binder
BnServiceManager->>BnServiceManager: writeStrongBinder()
BnServiceManager-->>-BBinder: return reply Parcel
BBinder-->>IPCThreadState: return reply
IPCThreadState->>+IPCThreadState: sendReply
IPCThreadState->>IPCThreadState: writeTransactionData
IPCThreadState->>+IPCThreadState: waitForResponse(null, null)
IPCThreadState->>IPCThreadState: talkWithDriver
IPCThreadState->>BinderDriver: ioctl:BC_REPLY
IPCThreadState-->>-IPCThreadState: return
IPCThreadState-->>-IPCThreadState: finishSendReply
end

Skipping most of the binder-interaction code, we can go straight to getService:

Status ServiceManager::getService(const std::string& name, sp<IBinder>* outBinder) {  
    *outBinder = tryGetService(name, true);  
    return Status::ok();  
}

sp<IBinder> ServiceManager::tryGetService(const std::string& name, bool startIfNotFound) {  
    auto ctx = mAccess->getCallingContext();  
  
    sp<IBinder> out;  
    Service* service = nullptr;  
    if (auto it = mNameToService.find(name); it != mNameToService.end()) {  
        service = &(it->second);  
  
        if (!service->allowIsolated) {  // may isolated (sandboxed) processes use this service?
            uid_t appid = multiuser_get_app_id(ctx.uid);  
            bool isIsolated = appid >= AID_ISOLATED_START && appid <= AID_ISOLATED_END;  
  
            if (isIsolated) {  
                return nullptr;  
            }  
        }  
        out = service->binder;  
    }  
  
    if (!mAccess->canFind(ctx, name)) {  // SELinux permission check
        return nullptr;  
    }  
  
    if (!out && startIfNotFound) { 
        tryStartService(name);  
    }  
  
    return out;  
}

Fetching a binder in ServiceManager is simply a lookup in the same service map that addService inserted into, followed by some permission checks. When the service is not found, since we passed startIfNotFound as true, tryStartService is called to try to start the corresponding service:

void ServiceManager::tryStartService(const std::string& name) {  
    std::thread([=] {  
        if (!base::SetProperty("ctl.interface_start", "aidl/" + name)) {  
            ...
        }  
    }).detach();  
}

The code is simple: it spawns a thread that sets a system property, whereupon the system (init) will try to start the service; we will not go into that here.
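The fire-and-forget structure here, a detached thread so the binder thread never blocks waiting on service startup, can be sketched standalone (fakeSetProperty is a stand-in for base::SetProperty, not the real API):

```cpp
#include <atomic>
#include <chrono>
#include <string>
#include <thread>

std::atomic<bool> gStarted{false};

// Stand-in for base::SetProperty("ctl.interface_start", ...): records
// that the start request was issued instead of talking to init.
bool fakeSetProperty(const std::string& key, const std::string& value) {
    gStarted = (key == "ctl.interface_start" && !value.empty());
    return gStarted;
}

// Mirrors ServiceManager::tryStartService: capture `name` by value,
// because the calling (binder) thread may return before the lambda runs;
// detach so nobody has to join the worker.
void tryStartServiceSketch(const std::string& name) {
    std::thread([name] {
        fakeSetProperty("ctl.interface_start", "aidl/" + name);
    }).detach();
}
```

Capturing by value rather than reference is essential with a detached thread: a reference to the caller's `name` would dangle once the caller returns.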

IPCThreadState Receive-Side Handling

When the server sends its reply it drives the binder with BC_REPLY, and the client then receives the BR_REPLY command, which is handled by this part of waitForResponse:

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)  
{
....
case BR_REPLY:  
    {  
        binder_transaction_data tr;  
        err = mIn.read(&tr, sizeof(tr));  
        ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");  
        if (err != NO_ERROR) goto finish;  
  
        if (reply) {  
            if ((tr.flags & TF_STATUS_CODE) == 0) {  
                reply->ipcSetDataReference(  
                    reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),  
                    tr.data_size,  
                    reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),  
                    tr.offsets_size/sizeof(binder_size_t),  
                    freeBuffer);  
            } else {  
	            ...
            }  
        } else {  
            ...
            continue;  
        }  
    }  
    goto finish;
.....
}

That is, ipcSetDataReference above gets executed; its source is:

void Parcel::ipcSetDataReference(const uint8_t* data, size_t dataSize,  
    const binder_size_t* objects, size_t objectsCount, release_func relFunc)  
{  

    freeData();  // reset any previous Parcel state
  
    mData = const_cast<uint8_t*>(data);  
    mDataSize = mDataCapacity = dataSize;  
    mObjects = const_cast<binder_size_t*>(objects);  
    mObjectsSize = mObjectsCapacity = objectsCount;  
    mOwner = relFunc;  
  
    binder_size_t minOffset = 0;  
    for (size_t i = 0; i < mObjectsSize; i++) {  
        binder_size_t offset = mObjects[i];  
        if (offset < minOffset) {  
            // object offsets must be in ascending order; otherwise drop all objects
            mObjectsSize = 0;  
            break;  
        }  
        const flat_binder_object* flat  
            = reinterpret_cast<const flat_binder_object*>(mData + offset);  
        uint32_t type = flat->hdr.type;  
        if (!(type == BINDER_TYPE_BINDER || type == BINDER_TYPE_HANDLE ||  
              type == BINDER_TYPE_FD)) {  
            ....  
            break;  
        }  
        minOffset = offset + sizeof(flat_binder_object);  
    }  
    scanForFds();  
}

The code is fairly simple: it mainly hands the received data over to the Parcel. Beyond that, note the relFunc argument: the function passed in is freeBuffer, which runs the next time freeData is called. Its implementation is:

void IPCThreadState::freeBuffer(Parcel* parcel, const uint8_t* data,  
                                size_t /*dataSize*/,  
                                const binder_size_t* /*objects*/,  
                                size_t /*objectsSize*/)  
{  
    ALOG_ASSERT(data != NULL, "Called with NULL data");  
    if (parcel != nullptr) parcel->closeFileDescriptors();  
    IPCThreadState* state = self();  
    state->mOut.writeInt32(BC_FREE_BUFFER);  
    state->mOut.writePointer((uintptr_t)data);  
    state->flushIfNeeded();  
}

So the next freeData sends the BC_FREE_BUFFER command to the binder driver, which releases the kernel-side buffer; this completes the cleanup of both the Parcel and the shared memory buffer.
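The ownership handoff here, where the Parcel merely borrows the driver's buffer and invokes a release callback exactly once when the data reference is dropped, can be modeled with a minimal sketch (hypothetical FakeParcel, not the real Parcel):

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>

// Models the mData/mOwner relationship in Parcel::ipcSetDataReference:
// the object does not own the buffer, it stores a release callback
// (like freeBuffer) and calls it exactly once when the reference drops.
class FakeParcel {
public:
    using ReleaseFunc = std::function<void(const uint8_t*, size_t)>;

    void ipcSetDataReference(const uint8_t* data, size_t size, ReleaseFunc rel) {
        freeData();                // release any previously held buffer first
        mData = data;
        mSize = size;
        mOwner = std::move(rel);
    }

    void freeData() {
        if (mOwner) {
            mOwner(mData, mSize);  // e.g. queue BC_FREE_BUFFER to the driver
            mOwner = nullptr;      // one-shot: never release twice
        }
        mData = nullptr;
        mSize = 0;
    }

    ~FakeParcel() { freeData(); }  // destruction also drops the reference

private:
    const uint8_t* mData = nullptr;
    size_t mSize = 0;
    ReleaseFunc mOwner;
};
```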

readStrongBinder

readStrongBinder should be the inverse of the writeStrongBinder we examined before; straight to the code:

status_t Parcel::readStrongBinder(sp<IBinder>* val) const  
{  
    status_t status = readNullableStrongBinder(val);  
    return status;  
}

The code above calls readNullableStrongBinder, which in turn calls unflattenBinder:

status_t Parcel::unflattenBinder(sp<IBinder>* out) const  
{  
    if (isForRpc()) {  
        ...  
        return finishUnflattenBinder(binder, out);  
    }  
  
    const flat_binder_object* flat = readObject(false);  
  
    if (flat) {  
        switch (flat->hdr.type) {  
            case BINDER_TYPE_BINDER: {  
                sp<IBinder> binder =  
                        sp<IBinder>::fromExisting(reinterpret_cast<IBinder*>(flat->cookie));  
                return finishUnflattenBinder(binder, out);  
            }  
            case BINDER_TYPE_HANDLE: {  
                sp<IBinder> binder =  
                    ProcessState::self()->getStrongProxyForHandle(flat->handle);  
                return finishUnflattenBinder(binder, out);  
            }  
        }  
    }  
    return BAD_TYPE;  
}

Here readObject reads a flat_binder_object from the Parcel. When the requesting process and the service live in the same process, its type is BINDER_TYPE_BINDER; when they live in different processes it is BINDER_TYPE_HANDLE, which is our case here. We already analyzed getStrongProxyForHandle when looking at how ServiceManager itself is obtained; the only difference is that there the handle was the fixed value 0, while here it is the value handed to us by the driver. In the end we obtain a BpBinder, which completes the lookup.
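The type dispatch in unflattenBinder can be modeled in a few lines (all Fake* types are illustrative stand-ins for flat_binder_object, IBinder, and ProcessState, not the real AOSP definitions):

```cpp
#include <cstdint>

// Simplified model: same-process objects carry a usable raw pointer in
// `cookie`; cross-process objects carry only a driver-assigned `handle`.
enum FakeType : uint32_t { TYPE_BINDER = 1, TYPE_HANDLE = 2 };

struct FakeIBinder { bool isProxy; uint32_t handle; };
struct FakeFlatObject { FakeType type; void* cookie; uint32_t handle; };

FakeIBinder gLocal{false, 0};

// Stand-in for ProcessState::getStrongProxyForHandle: produces a
// BpBinder-like proxy bound to the given handle.
FakeIBinder* fakeGetProxyForHandle(uint32_t h) {
    static FakeIBinder proxy{true, 0};
    proxy.handle = h;
    return &proxy;
}

FakeIBinder* unflatten(const FakeFlatObject& flat) {
    switch (flat.type) {
        case TYPE_BINDER:   // same process: the pointer is directly usable
            return static_cast<FakeIBinder*>(flat.cookie);
        case TYPE_HANDLE:   // different process: build a proxy for the handle
            return fakeGetProxyForHandle(flat.handle);
    }
    return nullptr;         // unknown type, like BAD_TYPE
}
```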