A Deep Dive into Binder Internals (Part 2, Section 2)

Continuing from the previous section, A Deep Dive into Binder Internals (Part 2, Section 1).

Creating the ServiceManagerProxy

Let's return to the getService flow from the very beginning.

/frameworks/base/core/java/android/os/ServiceManager.java

public static IBinder getService(String name) {
    try {
        IBinder service = sCache.get(name);
        if (service != null) {
            return service;
        } else {
            return Binder.allowBlocking(getIServiceManager().getService(name));
        }
    } catch (RemoteException e) {
        Log.e(TAG, "error in getService", e);
    }
    return null;
}

private static IServiceManager getIServiceManager() {
    if (sServiceManager != null) {
        return sServiceManager;
    }

    // Find the service manager
    sServiceManager = ServiceManagerNative
        .asInterface(Binder.allowBlocking(BinderInternal.getContextObject()));
    return sServiceManager;
}

We have already seen the two things the native method BinderInternal.getContextObject() does: it sends a PING_TRANSACTION to ServiceManager, and it creates a BpBinder with handle 0. The next step is creating the Java-layer ServiceManager proxy, so let's look at how asInterface is implemented.

/frameworks/base/core/java/android/os/ServiceManagerNative.java

static public IServiceManager asInterface(IBinder obj)
{
    if (obj == null) {
        return null;
    }
    IServiceManager in =
        (IServiceManager)obj.queryLocalInterface(descriptor);
    if (in != null) {
        return in;
    }
    
    return new ServiceManagerProxy(obj);
}

As you can see, asInterface takes the IBinder as input and creates a ServiceManagerProxy around it. At this point the flow of obtaining ServiceManager and creating the ServiceManagerProxy is complete. With this ServiceManagerProxy, the application process can talk to the ServiceManager process directly, because the proxy wraps ServiceManager's Binder handle.
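As a quick orientation, here is a minimal sketch of that wiring from the caller's side. It only restates the calls quoted above; the helper name lookUpActivityManagerBinder and the "activity" lookup are illustrative ("activity" is the name AMS registers under, as we will see later).

// Sketch only: restates the framework calls shown above.
static IBinder lookUpActivityManagerBinder() throws RemoteException {
    // getContextObject() returns a BinderProxy wrapping BpBinder(0)
    IBinder contextObject = BinderInternal.getContextObject();
    // asInterface() wraps that handle-0 proxy in a ServiceManagerProxy
    IServiceManager sm = ServiceManagerNative.asInterface(contextObject);
    // every call on sm is now a remote transaction into the servicemanager process
    return sm.getService("activity");
}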

Obtaining the ActivityManagerService

Having obtained ServiceManager's Binder handle and created a ServiceManagerProxy around ServiceManager's BpBinder, we can now use that proxy to obtain ActivityManagerService's Binder handle and create the corresponding ActivityManagerService proxy in the application process. Calling ServiceManager.getService(Context.ACTIVITY_SERVICE) gives us ActivityManagerService's IBinder, from which the proxy is built.

/frameworks/base/core/java/android/os/ServiceManager.java

public static IBinder getService(String name) {
    try {
        IBinder service = sCache.get(name);
        if (service != null) {
            return service;
        } else {
            return Binder.allowBlocking(getIServiceManager().getService(name));
        }
    } catch (RemoteException e) {
        Log.e(TAG, "error in getService", e);
    }
    return null;
}

The getIServiceManager() call here returns the ServiceManagerProxy we just created, so let's go straight into the ServiceManagerProxy source and see how getService is implemented.

/frameworks/base/core/java/android/os/ServiceManagerNative.java

int GET_SERVICE_TRANSACTION = IBinder.FIRST_CALL_TRANSACTION;
class ServiceManagerProxy implements IServiceManager {
    ……
    public IBinder getService(String name) throws RemoteException {
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        data.writeInterfaceToken(IServiceManager.descriptor);
        data.writeString(name);
        mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
        IBinder binder = reply.readStrongBinder();
        reply.recycle();
        data.recycle();
        return binder;
    }
    ……
}

This calls mRemote's transact function with the business protocol code GET_SERVICE_TRANSACTION. mRemote corresponds to the BpBinder created earlier, so let's look at how BpBinder's transact function transmits the data.

/frameworks/native/libs/binder/BpBinder.cpp

status_t BpBinder::transact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        // delegate the data transmission to IPCThreadState
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

As you can see, BpBinder still delegates to IPCThreadState's transact method to send the data. The flow of IPCThreadState sending data through transact was covered earlier, so rather than repeating it, here is a brief recap of the steps.

IPCThreadState's transact function packages the data in writeTransactionData, goes through waitForResponse and talkWithDriver, and finally calls ioctl to hand the data over to the Binder driver.

After the Binder driver receives the data, it goes through binder_thread_write, looks up the target process's binder, and then runs binder_transaction to copy the data into the target process's binder buffer.

Here the target process is ServiceManager, which keeps looping, reading the buffer, and parsing and handling the data. Next let's see how ServiceManager handles the GET_SERVICE_TRANSACTION protocol; as we saw earlier, its handler function is svcmgr_handler.

ServiceManager Handles GET_SERVICE_TRANSACTION

/frameworks/native/cmds/servicemanager/service_manager.c

int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;


    if (txn->target.ptr != BINDER_SERVICE_MANAGER)
        return -1;

    if (txn->code == PING_TRANSACTION)
        return 0;

    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &len);
    if (s == NULL) {
        return -1;
    }

    if ((len != (sizeof(svcmgr_id) / 2)) ||
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        return -1;
    }

    if (sehandle && selinux_status_updated() > 0) {
        struct selabel_handle *tmp_sehandle = selinux_android_service_context_handle();
        if (tmp_sehandle) {
            selabel_close(sehandle);
            sehandle = tmp_sehandle;
        }
    }

    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = do_find_service(s, len, txn->sender_euid, txn->sender_pid);
        if (!handle)
            break;
        bio_put_ref(reply, handle);
        return 0;
    case SVC_MGR_ADD_SERVICE:……

    case SVC_MGR_LIST_SERVICES: ……
    default:
        ALOGE("unknown code %d\n", txn->code);
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}

svcmgr_handler calls do_find_service to look up the target service; the target service here is "activity".

uint32_t do_find_service(const uint16_t *s, size_t len, uid_t uid, pid_t spid)
{
    struct svcinfo *si = find_svc(s, len);

    if (!si || !si->handle) {
        return 0;
    }

    if (!si->allow_isolated) {
        uid_t appid = uid % AID_USER;
        if (appid >= AID_ISOLATED_START && appid <= AID_ISOLATED_END) {
            return 0;
        }
    }

    if (!svc_can_find(s, len, spid, uid)) {
        return 0;
    }

    return si->handle;
}

struct svcinfo *svclist = NULL;

struct svcinfo *find_svc(const uint16_t *s16, size_t len)
{
    struct svcinfo *si;

    for (si = svclist; si; si = si->next) {
        if ((len == si->len) &&
            !memcmp(s16, si->name, len * sizeof(uint16_t))) {
            return si;
        }
    }
    return NULL;
}

Once the target service is found, its handle is returned; this handle is the binder address. ServiceManager's ioctl then sends the reply back to the Client through the Binder driver. This again goes through binder_thread_write and then binder_transaction, which inserts the binder reference into the Client's binder_proc and writes the data into the Client's binder_proc buffer.

Creating the ActivityManager Proxy

After obtaining AMS's binder information through ServiceManager, calling IActivityManager.Stub.asInterface(b) creates the AMS proxy in the application process. IActivityManager is an AIDL file; at compile time it automatically generates a class containing a Proxy and a Stub. The Proxy is used by the Client, and the Stub is used by the Server.
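Before looking at the generated code, here is a hedged sketch of the client-side conversion just described; it only combines calls that appear elsewhere in this article (the helper name obtainActivityManagerProxy is illustrative).

// Sketch only: in an application process asInterface() returns a Stub.Proxy.
static IActivityManager obtainActivityManagerProxy() {
    IBinder b = ServiceManager.getService(Context.ACTIVITY_SERVICE); // the name "activity"
    return IActivityManager.Stub.asInterface(b);                     // -> new Stub.Proxy(b)
}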

/frameworks/base/core/java/android/app/IActivityManager.aidl

interface IActivityManager {
    ……
    int startActivity(in IApplicationThread caller, in String callingPackage, in Intent intent,
                      in String resolvedType, in IBinder resultTo, in String resultWho, int requestCode,
                          int flags, in ProfilerInfo profilerInfo, in Bundle options);
    ……
}

The Proxy generated by the compiler looks like this.

public interface IActivityManager extends android.os.IInterface {
    /**
     * Local-side IPC implementation stub class.
     */
    public static abstract class Stub extends android.os.Binder implements android.app.IActivityManager {
        private static final java.lang.String DESCRIPTOR = "android.app.IActivityManager";

        /**
         * Construct the stub at attach it to the interface.
         */
        public Stub() {
            this.attachInterface(this, DESCRIPTOR);
        }

        /**
         * Cast an IBinder object into an android.app.IActivityManager interface,
         * generating a proxy if needed.
         */
        public static android.app.IActivityManager asInterface(android.os.IBinder obj) {
            if ((obj == null)) {
                return null;
            }
            android.os.IInterface iin = obj.queryLocalInterface(DESCRIPTOR);
            if (((iin != null) && (iin instanceof android.app.IActivityManager))) {
                return ((android.app.IActivityManager) iin);
            }
            return new android.app.IActivityManager.Stub.Proxy(obj);
        }

        @Override
        public android.os.IBinder asBinder() {
            return this;
        }

        @Override
        public boolean onTransact(int code, android.os.Parcel data, android.os.Parcel reply, int flags) throws android.os.RemoteException {
            switch (code) {
                case INTERFACE_TRANSACTION: ……
                case TRANSACTION_openContentUri: ……
                case TRANSACTION_handleApplicationCrash: ……
                case TRANSACTION_startActivity: {
                    data.enforceInterface(DESCRIPTOR);
                    android.app.IApplicationThread _arg0;
                    _arg0 = android.app.IApplicationThread.Stub.asInterface(data.readStrongBinder());
                    java.lang.String _arg1;
                    _arg1 = data.readString();
                    android.content.Intent _arg2;
                    if ((0 != data.readInt())) {
                        _arg2 = android.content.Intent.CREATOR.createFromParcel(data);
                    } else {
                        _arg2 = null;
                    }
                    java.lang.String _arg3;
                    _arg3 = data.readString();
                    android.os.IBinder _arg4;
                    _arg4 = data.readStrongBinder();
                    java.lang.String _arg5;
                    _arg5 = data.readString();
                    int _arg6;
                    _arg6 = data.readInt();
                    int _arg7;
                    _arg7 = data.readInt();
                    android.app.ProfilerInfo _arg8;
                    if ((0 != data.readInt())) {
                        _arg8 = android.app.ProfilerInfo.CREATOR.createFromParcel(data);
                    } else {
                        _arg8 = null;
                    }
                    android.os.Bundle _arg9;
                    if ((0 != data.readInt())) {
                        _arg9 = android.os.Bundle.CREATOR.createFromParcel(data);
                    } else {
                        _arg9 = null;
                    }
                    int _result = this.startActivity(_arg0, _arg1, _arg2, _arg3, _arg4, _arg5, _arg6, _arg7, _arg8, _arg9);
                    reply.writeNoException();
                    reply.writeInt(_result);
                    return true;
                }
                ……
            }
        }

        private static class Proxy implements android.app.IActivityManager {
            private android.os.IBinder mRemote;

            Proxy(android.os.IBinder remote) {
                mRemote = remote;
            }

            @Override
            public int startActivity(android.app.IApplicationThread caller, java.lang.String callingPackage, android.content.Intent intent, java.lang.String resolvedType, android.os.IBinder resultTo, java.lang.String resultWho, int requestCode, int flags, android.app.ProfilerInfo profilerInfo, android.os.Bundle options) throws android.os.RemoteException {
                android.os.Parcel _data = android.os.Parcel.obtain();
                android.os.Parcel _reply = android.os.Parcel.obtain();
                int _result;
                try {
                    _data.writeInterfaceToken(DESCRIPTOR);
                    _data.writeStrongBinder((((caller != null)) ? (caller.asBinder()) : (null)));
                    _data.writeString(callingPackage);
                    if ((intent != null)) {
                        _data.writeInt(1);
                        intent.writeToParcel(_data, 0);
                    } else {
                        _data.writeInt(0);
                    }
                    _data.writeString(resolvedType);
                    _data.writeStrongBinder(resultTo);
                    _data.writeString(resultWho);
                    _data.writeInt(requestCode);
                    _data.writeInt(flags);
                    if ((profilerInfo != null)) {
                        _data.writeInt(1);
                        profilerInfo.writeToParcel(_data, 0);
                    } else {
                        _data.writeInt(0);
                    }
                    if ((options != null)) {
                        _data.writeInt(1);
                        options.writeToParcel(_data, 0);
                    } else {
                        _data.writeInt(0);
                    }
                    mRemote.transact(Stub.TRANSACTION_startActivity, _data, _reply, 0);
                    _reply.readException();
                    _result = _reply.readInt();
                } finally {
                    _reply.recycle();
                    _data.recycle();
                }
                return _result;
            }
            ……
        }
    }
}

As you can see, when the AMS proxy's startActivity is called, it packs the data into a Parcel and then calls mRemote.transact; as mentioned before, mRemote is the BpBinder created from the Server's binder handle. The rest of the flow is the same as before: IPCThreadState goes through its layers of calls and finally passes the data to the Binder driver via ioctl; in binder_transaction the driver copies the data into ActivityManagerService's binder_proc buffer; AMS's binder threads keep polling the buffer in a loop, and once data arrives they parse and respond to it.
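From the caller's point of view all of this marshalling is invisible: invoking the proxy is an ordinary method call. Here is a hedged usage sketch; the helper name and argument values are placeholders for illustration only.

// Sketch only: the caller just invokes the proxy; the Parcel framing, transact()
// and the driver round trip all happen inside Stub.Proxy.startActivity() above.
static int startViaProxy(IActivityManager am, IApplicationThread caller, Intent intent)
        throws RemoteException {
    return am.startActivity(caller, "com.example.app", intent,
            null /* resolvedType */, null /* resultTo */, null /* resultWho */,
            0 /* requestCode */, 0 /* flags */, null /* profilerInfo */, null /* options */);
}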

ActivityManagerService Starts the Activity

After all of the preceding steps, we arrive at the final stage: ActivityManagerService starting the Activity. As mentioned earlier, AIDL generates a Proxy and a Stub, and the Proxy is used by the Client. On the Client side we saw that the Proxy essentially wraps AMS's binder handle; when the Proxy's startActivity is called, the data is wrapped layer by layer and finally written to AMS through the binder driver via ioctl. So how does AMS use the Stub? AMS simply extends IActivityManager.Stub.

public class ActivityManagerService extends IActivityManager.Stub
        implements Watchdog.Monitor, BatteryStatsImpl.BatteryCallback {
    ……
    @Override
    public final int startActivity(IApplicationThread caller, String callingPackage,
            Intent intent, String resolvedType, IBinder resultTo, String resultWho, int requestCode,
            int startFlags, ProfilerInfo profilerInfo, Bundle bOptions) {
        return startActivityAsUser(caller, callingPackage, intent, resolvedType, resultTo,
                resultWho, requestCode, startFlags, profilerInfo, bOptions,
                UserHandle.getCallingUserId());
    }
    ……
}

Let's look again at the implementation of IActivityManager.Stub.

public interface IActivityManager extends android.os.IInterface {
    /**
     * Local-side IPC implementation stub class.
     */
    public static abstract class Stub extends android.os.Binder implements android.app.IActivityManager {
        private static final java.lang.String DESCRIPTOR = "android.app.IActivityManager";

        /**
         * Construct the stub at attach it to the interface.
         */
        public Stub() {
            this.attachInterface(this, DESCRIPTOR);
        }

        /**
         * Cast an IBinder object into an android.app.IActivityManager interface,
         * generating a proxy if needed.
         */
        public static android.app.IActivityManager asInterface(android.os.IBinder obj) {
            if ((obj == null)) {
                return null;
            }
            android.os.IInterface iin = obj.queryLocalInterface(DESCRIPTOR);
            if (((iin != null) && (iin instanceof android.app.IActivityManager))) {
                return ((android.app.IActivityManager) iin);
            }
            return new android.app.IActivityManager.Stub.Proxy(obj);
        }

        @Override
        public android.os.IBinder asBinder() {
            return this;
        }

        @Override
        public boolean onTransact(int code, android.os.Parcel data, android.os.Parcel reply, int flags) throws android.os.RemoteException {
            switch (code) {
                case INTERFACE_TRANSACTION: ……
                case TRANSACTION_openContentUri: ……
                case TRANSACTION_handleApplicationCrash: ……
                case TRANSACTION_startActivity: {
                    data.enforceInterface(DESCRIPTOR);
                    android.app.IApplicationThread _arg0;
                    _arg0 = android.app.IApplicationThread.Stub.asInterface(data.readStrongBinder());
                    java.lang.String _arg1;
                    _arg1 = data.readString();
                    android.content.Intent _arg2;
                    if ((0 != data.readInt())) {
                        _arg2 = android.content.Intent.CREATOR.createFromParcel(data);
                    } else {
                        _arg2 = null;
                    }
                    java.lang.String _arg3;
                    _arg3 = data.readString();
                    android.os.IBinder _arg4;
                    _arg4 = data.readStrongBinder();
                    java.lang.String _arg5;
                    _arg5 = data.readString();
                    int _arg6;
                    _arg6 = data.readInt();
                    int _arg7;
                    _arg7 = data.readInt();
                    android.app.ProfilerInfo _arg8;
                    if ((0 != data.readInt())) {
                        _arg8 = android.app.ProfilerInfo.CREATOR.createFromParcel(data);
                    } else {
                        _arg8 = null;
                    }
                    android.os.Bundle _arg9;
                    if ((0 != data.readInt())) {
                        _arg9 = android.os.Bundle.CREATOR.createFromParcel(data);
                    } else {
                        _arg9 = null;
                    }
                    int _result = this.startActivity(_arg0, _arg1, _arg2, _arg3, _arg4, _arg5, _arg6, _arg7, _arg8, _arg9);
                    reply.writeNoException();
                    reply.writeInt(_result);
                    return true;
                }
				……
            }
        }
    }
}

As mentioned earlier, once the Client starts binder in onZygoteInit it falls into an endless loop, parsing the data sent by other processes in talkWithDriver and handing it to executeCommand. AMS follows the same flow: its binder thread receives the data from the other process in talkWithDriver, passes it through executeCommand, and finally hands it to the Stub's onTransact function for processing.

When onTransact handles TRANSACTION_startActivity, it directly calls this.startActivity(), and at that point AMS officially begins the Activity launch flow.
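The same Stub-side pattern applies to any AIDL-based service, not just AMS. Here is a minimal hedged sketch; IMyService is a hypothetical AIDL interface used purely for illustration.

// Sketch only: the service extends the generated Stub and implements the business
// methods; a binder thread delivers each incoming transaction to onTransact(),
// which dispatches to the matching method. IMyService is hypothetical.
final class MyService extends IMyService.Stub {
    @Override
    public int doWork(String arg) { // reached via onTransact(TRANSACTION_doWork, ...)
        return arg == null ? -1 : arg.length();
    }
}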

Server

Having covered ServiceManager and the Client in detail, we already know most of what the Server does; the flows of all three are similar. Each needs to open the binder driver, mmap memory, and then keep looping on ioctl to read and write data. Whether it is AMS in the system_server process or an ordinary app in a user process, the steps for opening binder and sending messages are almost identical: both go through ProcessState and IPCThreadState.

Opening Binder

Let's first look at how the system_server process that hosts AMS opens binder. After Zygote starts, it launches the system_server process from its main function.

/frameworks/base/core/java/com/android/internal/os/ZygoteInit.java

public static void main(String argv[]) {
    ……
    if (startSystemServer) {
        startSystemServer(abiList, socketName, zygoteServer);
    }
    ……
}

Zygote's main function calls startSystemServer to launch the SystemServer.

/frameworks/base/core/java/com/android/internal/os/ZygoteInit.java

private static boolean startSystemServer(String abiList, String socketName) throws MethodAndArgsCaller, RuntimeException {
    ...
    // prepare the arguments
    String args[] = {
        "--setuid=1000",
        "--setgid=1000",
        "--setgroups=1001,1002,1003,1004,1005,1006,1007,1008,1009,1010,1018,1021,1032,3001,3002,3003,3006,3007",
        "--capabilities=" + capabilities + "," + capabilities,
        "--nice-name=system_server",
        "--runtime-args",
        "com.android.server.SystemServer",
    };

    ZygoteConnection.Arguments parsedArgs = null;
    int pid;
    try {
        // parse the arguments into the target format
        parsedArgs = new ZygoteConnection.Arguments(args);
        ZygoteConnection.applyDebuggerSystemProperty(parsedArgs);
        ZygoteConnection.applyInvokeWithSystemProperty(parsedArgs);

        // fork the child process: this is the system_server process
        pid = Zygote.forkSystemServer(
                parsedArgs.uid, parsedArgs.gid,
                parsedArgs.gids,
                parsedArgs.debugFlags,
                null,
                parsedArgs.permittedCapabilities,
                parsedArgs.effectiveCapabilities);
    } catch (IllegalArgumentException ex) {
        throw new RuntimeException(ex);
    }

    // now running in the child process, system_server
    if (pid == 0) {
        if (hasSecondZygote(abiList)) {
            waitForSecondaryZygote(socketName);
        }
        // finish the remaining work of the system_server process
        handleSystemServerProcess(parsedArgs);
    }
    return true;
}

Here Zygote forks out the system_server process through its fork call; once the fork completes, handleSystemServerProcess is executed.

/frameworks/base/core/java/com/android/internal/os/ZygoteInit.java

private static void handleSystemServerProcess( ZygoteConnection.Arguments parsedArgs) throws ZygoteInit.MethodAndArgsCaller {

    closeServerSocket(); // close the server socket inherited from the parent zygote process

    Os.umask(S_IRWXG | S_IRWXO);

    if (parsedArgs.niceName != null) {
        Process.setArgV0(parsedArgs.niceName); // set the current process name to "system_server"
    }

    final String systemServerClasspath = Os.getenv("SYSTEMSERVERCLASSPATH");
    if (systemServerClasspath != null) {
        // perform dex optimization
        performSystemServerDexOpt(systemServerClasspath);
    }

    ……
  
    ClassLoader cl = null;
    if (systemServerClasspath != null) {
        // create the class loader and attach it to the current thread
        cl = new PathClassLoader(systemServerClasspath, ClassLoader.getSystemClassLoader());
        Thread.currentThread().setContextClassLoader(cl);
    }

    RuntimeInit.zygoteInit(parsedArgs.targetSdkVersion, parsedArgs.remainingArgs, cl);

}

handleSystemServerProcess mainly performs some initialization for the system_server process, and finally calls RuntimeInit.zygoteInit.

/frameworks/base/core/java/com/android/internal/os/RuntimeInit.java

public static final void zygoteInit(int targetSdkVersion, String[] argv, ClassLoader classLoader) throws ZygoteInit.MethodAndArgsCaller {
    commonInit(); // common initialization
    nativeZygoteInit(); // triggers the onZygoteInit callback
    applicationInit(targetSdkVersion, argv, classLoader); // application initialization
}

Here we only care about the nativeZygoteInit call inside zygoteInit; it is a native function.

/frameworks/base/core/jni/AndroidRuntime.cpp

static void com_android_internal_os_RuntimeInit_nativeZygoteInit(JNIEnv* env, jobject clazz) {
    gCurRuntime->onZygoteInit();
}

gCurRuntime is the runtime environment of the current process, i.e. the system_server process.

/frameworks/base/cmds/app_process/app_main.cpp

virtual void onZygoteInit()
{
    sp<ProcessState> proc = ProcessState::self();
    proc->startThreadPool();
}

As you can see, ProcessState shows up in onZygoteInit. Once onZygoteInit finishes, the binder setup for the system_server process is complete, and the process falls into an endless loop reading and writing binder data.

AMS Requests Service Registration

Next let's see how AMS registers its Service with ServiceManager. After the SystemServer process starts, its main function starts all kinds of services.

/frameworks/base/services/java/com/android/server/SystemServer.java

public static void main(String[] args) {
        new SystemServer().run();
}

private void run() {
    mSystemServiceManager = new SystemServiceManager(mSystemContext);
    mSystemServiceManager.setRuntimeRestarted(mRuntimeRestart);
    ……
    // Start services.
    traceBeginAndSlog("StartServices");
    startBootstrapServices();
    startCoreServices();
    startOtherServices();
    SystemServerInitThreadPool.shutdown();

   	……
}

Our ActivityManagerService is started in startBootstrapServices.

/frameworks/base/services/java/com/android/server/SystemServer.java

private void startBootstrapServices() {
   
	……
    // Activity manager runs the show.
    traceBeginAndSlog("StartActivityManager");
    mActivityManagerService = mSystemServiceManager.startService(
        ActivityManagerService.Lifecycle.class).getService();
    mActivityManagerService.setSystemServiceManager(mSystemServiceManager);
	……
    // Set up the Application instance for the system process and get started.
    mActivityManagerService.setSystemProcess();

    ……
}

After AMS has started, startBootstrapServices calls its setSystemProcess function.

/frameworks/base/services/core/java/com/android/server/am/ActivityManagerService.java

public void setSystemProcess() {
    try {
        ServiceManager.addService(Context.ACTIVITY_SERVICE, this, true);
        ServiceManager.addService(ProcessStats.SERVICE_NAME, mProcessStats);
        ServiceManager.addService("meminfo", new MemBinder(this));
        ServiceManager.addService("gfxinfo", new GraphicsBinder(this));
        ServiceManager.addService("dbinfo", new DbBinder(this));
        if (MONITOR_CPU_USAGE) {
            ServiceManager.addService("cpuinfo", new CpuBinder(this));
        }
        ServiceManager.addService("permission", new PermissionController(this));
        ServiceManager.addService("processinfo", new ProcessInfoService(this));

        ApplicationInfo info = mContext.getPackageManager().getApplicationInfo(
            "android", STOCK_PM_FLAGS | MATCH_SYSTEM_ONLY);
        mSystemThread.installSystemApplicationInfo(info, getClass().getClassLoader());

        synchronized (this) {
            ProcessRecord app = newProcessRecordLocked(info, info.processName, false, 0);
            app.persistent = true;
            app.pid = MY_PID;
            app.maxAdj = ProcessList.SYSTEM_ADJ;
            app.makeActive(mSystemThread.getApplicationThread(), mProcessStats);
            synchronized (mPidsSelfLocked) {
                mPidsSelfLocked.put(app.pid, app);
            }
            updateLruProcessLocked(app, false, null);
            updateOomAdjLocked();
        }
    } catch (PackageManager.NameNotFoundException e) {
        throw new RuntimeException(
            "Unable to find android system package", e);
    }
}

In setSystemProcess we can see calls to ServiceManager's addService, registering various services such as activity, meminfo, gfxinfo, and so on. This ServiceManager is the same class the Client side used earlier, so as the implementation of addService shows, it likewise first obtains the ServiceManager proxy and then calls addService on that proxy.

/frameworks/base/core/java/android/os/ServiceManager.java

public static void addService(String name, IBinder service) {
    try {
        getIServiceManager().addService(name, service, false);
    } catch (RemoteException e) {
        Log.e(TAG, "error in addService", e);
    }
}

getIServiceManager() obtains the ServiceManager proxy.

/frameworks/base/core/java/android/os/ServiceManager.java

private static IServiceManager getIServiceManager() {
    if (sServiceManager != null) {
        return sServiceManager;
    }

    // Find the service manager
    sServiceManager = ServiceManagerNative
        .asInterface(Binder.allowBlocking(BinderInternal.getContextObject()));
    return sServiceManager;
}

Next let's look at the implementation of addService in ServiceManagerNative.

/frameworks/base/core/java/android/os/ServiceManagerNative.java

int ADD_SERVICE_TRANSACTION = IBinder.FIRST_CALL_TRANSACTION+2;

public void addService(String name, IBinder service, boolean allowIsolated)
    throws RemoteException {
    Parcel data = Parcel.obtain();
    Parcel reply = Parcel.obtain();
    data.writeInterfaceToken(IServiceManager.descriptor);
    data.writeString(name);
    data.writeStrongBinder(service);
    data.writeInt(allowIsolated ? 1 : 0);
    mRemote.transact(ADD_SERVICE_TRANSACTION, data, reply, 0);
    reply.recycle();
    data.recycle();
}

As you can see, the business protocol used here is ADD_SERVICE_TRANSACTION. The rest of the flow has already been covered, so I won't repeat it.
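For completeness, here is a hedged sketch of what registration looks like from an arbitrary server process; the name "my_service" and the bare Binder object are illustrative, but the code path is the same one AMS takes above.

// Sketch only: writeStrongBinder() inside addService() is what ultimately creates the
// binder_node in the driver and the handle that servicemanager stores for this name.
static void publishMyService() {
    Binder stub = new Binder();                    // normally a generated AIDL Stub subclass
    ServiceManager.addService("my_service", stub); // ADD_SERVICE_TRANSACTION -> svcmgr_handler
}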

ServiceManager Handles the Service Registration Request

Now let's see how ServiceManager responds to a Service registration; again we go straight to its protocol handler, svcmgr_handler.

/frameworks/native/cmds/servicemanager/service_manager.c

enum {
    /* Must match definitions in IBinder.h and IServiceManager.h */
    PING_TRANSACTION  = B_PACK_CHARS('_','P','N','G'),
    SVC_MGR_GET_SERVICE = 1,
    SVC_MGR_CHECK_SERVICE,
    SVC_MGR_ADD_SERVICE,
    SVC_MGR_LIST_SERVICES,
};


int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    ……

    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:……
    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = bio_get_ref(msg);
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        if (do_add_service(bs, s, len, handle, txn->sender_euid,
            allow_isolated, txn->sender_pid))
            return -1;
        break;

    case SVC_MGR_LIST_SERVICES:……
    default:
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}

The SVC_MGR_ADD_SERVICE case in svcmgr_handler corresponds to the ADD_SERVICE_TRANSACTION protocol used by AMS. Let's look at the implementation of do_add_service.

/frameworks/native/cmds/servicemanager/service_manager.c

int do_add_service(struct binder_state *bs,
                   const uint16_t *s, size_t len,
                   uint32_t handle, uid_t uid, int allow_isolated,
                   pid_t spid)
{
    struct svcinfo *si;

    if (!handle || (len == 0) || (len > 127))
        return -1;

    if (!svc_can_register(s, len, spid, uid)) {
        return -1;
    }

    si = find_svc(s, len);
    if (si) {
        if (si->handle) {
            ALOGE("add_service('%s',%x) uid=%d - ALREADY REGISTERED, OVERRIDE\n",
                 str8(s, len), handle, uid);
            svcinfo_death(bs, si);
        }
        si->handle = handle;
    } else {
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        if (!si) {
            ALOGE("add_service('%s',%x) uid=%d - OUT OF MEMORY\n",
                 str8(s, len), handle, uid);
            return -1;
        }
        si->handle = handle;
        si->len = len;
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
        si->name[len] = '\0';
        si->death.func = (void*) svcinfo_death;
        si->death.ptr = si;
        si->allow_isolated = allow_isolated;
        si->next = svclist;
        svclist = si;
    }

    binder_acquire(bs, handle);
    binder_link_to_death(bs, handle, &si->death);
    return 0;
}

do_add_service mainly does the following (a simplified sketch follows the list):

  1. It checks whether a service with that name has already been registered; if so, the old entry's death notification is cleared via svcinfo_death and its handle is overridden with the new one.
  2. If it does not exist, a new svcinfo structure is allocated for the service, its binder handle is recorded (this handle is created inside the binder driver; as mentioned in the discussion of binder_transaction, if the target process's binder_node does not exist, one is created for it), and the entry is added to the svclist.
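Below is a hedged, much simplified Java model of that bookkeeping (illustration only; the real servicemanager is C code and additionally performs SELinux permission checks and death-notification setup).

// Sketch only: a name -> handle registry where re-registering a name overrides the
// stored handle, mirroring the svclist logic in do_add_service/do_find_service.
final class ServiceRegistryModel {
    private final java.util.Map<String, Integer> registry = new java.util.HashMap<>();

    int addService(String name, int handle) {
        if (handle == 0 || name.isEmpty() || name.length() > 127) return -1; // sanity checks
        registry.put(name, handle);                                          // add or override
        return 0;
    }

    int findService(String name) {
        Integer handle = registry.get(name); // do_find_service analogue
        return handle == null ? 0 : handle;
    }
}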

Conclusion

That wraps up the whole walkthrough of Binder's internals. As I said at the very beginning, studying Binder in depth is not just about knowing how to use it; it is about learning its design ideas: how it is architected, how it decouples components, and how it guarantees security and performance. In Binder's design we can also see ideas borrowed from elsewhere: ServiceManager plays a role similar to DNS in networking, and the layering of the Binder protocol and the business protocol resembles the layered protocol encapsulation of the network stack.

Summarizing what different technologies have in common, distilling what is unique and excellent about each, and then borrowing, consolidating, and applying those ideas: that is the path of growth in learning architecture.