Binder Internals: Understanding getService() (Part 5)


Background

The previous article analyzed the addService() flow from the server's perspective. This time we continue from the client's perspective and analyze the getService() flow.

Here the client plays the client role and ServiceManager plays the server role.
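For orientation, here is a minimal sketch of what a native client's getService() call looks like. This is not code from the article's sources; "media.player" is only an example service name, and real callers usually wrap the returned IBinder with interface_cast into a typed proxy.

#include <binder/IServiceManager.h>  // defaultServiceManager(), getService()
#include <binder/IBinder.h>
#include <utils/String16.h>

using namespace android;

int main() {
    // Ask ServiceManager (handle 0) for a registered service by name.
    // "media.player" is just an example; any registered name works.
    sp<IBinder> binder = defaultServiceManager()->getService(String16("media.player"));
    if (binder == nullptr) {
        // Service not registered (or ServiceManager unreachable).
        return 1;
    }
    // From here a caller typically does interface_cast<IXxx>(binder) to get a
    // typed proxy; every call on that proxy goes through the flow analyzed below.
    return 0;
}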

1. The client initiates the getService() request

1.1 getService() in the Java layer

From the previous article's analysis, we know this ends up calling the getService() method of the ServiceManagerProxy class:

   public IBinder getService(String name) throws RemoteException {
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        data.writeInterfaceToken(IServiceManager.descriptor);
        data.writeString(name); // 1. write the service name
        mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);

        // 2. read the IBinder object from reply via readStrongBinder()
        IBinder binder = reply.readStrongBinder();
        reply.recycle(); // recycle the parcel
        data.recycle();  // recycle the parcel
        return binder;
    }
  1. Write the service name and send the request
  2. Read the IBinder object from reply

mRemote is actually a BinderProxy object; its transact() eventually calls transactNative().

transactNative() eventually reaches BpBinder::transact() in the native layer.
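A trimmed sketch of that JNI bridge, condensed from android_util_Binder.cpp; the real function also handles error signalling, tracing and work-source bookkeeping, which are omitted here:

// Simplified from frameworks/base/core/jni/android_util_Binder.cpp.
// BinderProxy.transactNative() lands here; `obj` is the Java BinderProxy and its
// BinderProxyNativeData holds the native BpBinder in mObject.
static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
        jint code, jobject dataObj, jobject replyObj, jint flags)
{
    Parcel* data = parcelForJavaObject(env, dataObj);   // native Parcel behind the Java Parcel
    Parcel* reply = parcelForJavaObject(env, replyObj);

    IBinder* target = getBPNativeData(env, obj)->mObject.get();

    // This is where the call crosses into BpBinder::transact().
    status_t err = target->transact(code, *data, reply, flags);

    return err == NO_ERROR ? JNI_TRUE : JNI_FALSE;
}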

1.2 BpBinder::transact()

/frameworks/native/libs/binder/BpBinder.cpp

status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

Internally it calls IPCThreadState::transact().

1.3 IPCThreadState::self()

IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) {
restart:
        const pthread_key_t k = gTLS;
        // If an instance already exists in TLS, return it; otherwise create one
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        if (st) return st;
        return new IPCThreadState; // create a new instance
    }

    if (gShutdown) {
        ALOGW("Calling IPCThreadState::self() during shutdown is dangerous, expect a crash.\n");
        return nullptr;
    }

    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS) {
        int key_create_value = pthread_key_create(&gTLS, threadDestructor);
        if (key_create_value != 0) {
            pthread_mutex_unlock(&gTLSMutex);
            ALOGW("IPCThreadState::self() unable to create TLS key, expect a crash: %s\n",
                    strerror(key_create_value));
            return nullptr;
        }
        gHaveTLS = true;
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}

IPCThreadState stores itself in TLS (thread-local storage), so each thread gets its own private instance.
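The pattern is ordinary pthread TLS. A minimal standalone sketch of the same idea (this is illustrative code, not libbinder source):

#include <pthread.h>
#include <cstdio>

// One key shared by all threads; each thread gets its own value slot under it.
static pthread_key_t gKey;
static pthread_once_t gOnce = PTHREAD_ONCE_INIT;

struct ThreadState { int callCount = 0; };

static void destroyState(void* p) { delete static_cast<ThreadState*>(p); }
static void makeKey() { pthread_key_create(&gKey, destroyState); }

// Analogous to IPCThreadState::self(): return this thread's instance, creating it lazily.
static ThreadState* self() {
    pthread_once(&gOnce, makeKey);
    auto* st = static_cast<ThreadState*>(pthread_getspecific(gKey));
    if (st) return st;
    st = new ThreadState();
    pthread_setspecific(gKey, st);   // like the constructor's pthread_setspecific(gTLS, this)
    return st;
}

int main() {
    printf("calls on this thread: %d\n", ++self()->callCount);
}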

Now look at the IPCThreadState constructor:

IPCThreadState::IPCThreadState()
    : mProcess(ProcessState::self()),
      mWorkSource(kUnsetWorkSource),
      mPropagateWorkSource(false),
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0),
      mCallRestriction(mProcess->mCallRestriction)
{
    // store this instance in TLS
    pthread_setspecific(gTLS, this);
    clearCaller();
    mIn.setDataCapacity(256);  // input parcel, 256-byte initial capacity
    mOut.setDataCapacity(256); // output parcel, 256-byte initial capacity
    mIPCThreadStateBase = IPCThreadStateBase::self();
}

1.4 ProcessState::self()

ProcessState::self() returns the per-process singleton. Let's look at the ProcessState constructor:

ProcessState::ProcessState(const char *driver)
    : mDriverName(String8(driver))
    , mDriverFD(open_driver(driver)) // open the binder driver
    , mVMStart(MAP_FAILED)
    , mThreadCountLock(PTHREAD_MUTEX_INITIALIZER)
    , mThreadCountDecrement(PTHREAD_COND_INITIALIZER)
    , mExecutingThreadsCount(0)
    , mMaxThreads(DEFAULT_MAX_BINDER_THREADS)
    , mStarvationStartTimeMs(0)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(nullptr)
    , mBinderContextUserData(nullptr)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
    , mCallRestriction(CallRestriction::NONE)
{
    if (mDriverFD >= 0) {
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        // BINDER_VM_SIZE = 1 MB - 8 KB
        mVMStart = mmap(nullptr, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            // *sigh*
            ALOGE("Using %s failed: unable to mmap transaction memory.\n", mDriverName.c_str());
            close(mDriverFD);
            mDriverFD = -1;
            mDriverName.clear();
        }
    }

    LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened.  Terminating.");
}

Note that BINDER_VM_SIZE is defined as ((1 * 1024 * 1024) - sysconf(_SC_PAGE_SIZE) * 2), i.e. 1 MB minus 8 KB (with 4 KB pages) of transaction buffer. A cross-process call therefore cannot carry more data than this limit.
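A quick way to see the number, as a standalone sketch (assuming the usual 4 KB page size, so the result is 1 MB - 8 KB = 1,040,384 bytes):

#include <unistd.h>   // sysconf
#include <cstdio>

int main() {
    // Same expression ProcessState.cpp uses for BINDER_VM_SIZE.
    const size_t binderVmSize = (1 * 1024 * 1024) - sysconf(_SC_PAGE_SIZE) * 2;
    printf("BINDER_VM_SIZE = %zu bytes (%zu KB)\n", binderVmSize, binderVmSize / 1024);
    // With 4 KB pages this prints 1040384 bytes (1016 KB); a single transaction larger
    // than the free space in this buffer fails (TransactionTooLargeException in Java).
    return 0;
}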

At this point, IPCThreadState holds a reference to ProcessState.

Now back to IPCThreadState::transact():

1.5 IPCThreadState::transact()

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err;

    flags |= TF_ACCEPT_FDS;

    LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
        (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
    // package the request into mOut
    err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, nullptr);

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        if (UNLIKELY(mCallRestriction != ProcessState::CallRestriction::NONE)) {
            if (mCallRestriction == ProcessState::CallRestriction::ERROR_IF_NOT_ONEWAY) {
                ALOGE("Process making non-oneway call but is restricted.");
                CallStack::logStack("non-oneway call", CallStack::getCurrent(10).get(),
                    ANDROID_LOG_ERROR);
            } else /* FATAL_IF_NOT_ONEWAY */ {
                LOG_ALWAYS_FATAL("Process may not make oneway calls.");
            }
        }

        #if 0
        if (code == 4) { // relayout
            ALOGI(">>>>>> CALLING transaction 4");
        } else {
            ALOGI(">>>>>> CALLING transaction %d", code);
        }
        #endif
        if (reply) {
            // wait for the reply
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }

    } else {
        err = waitForResponse(nullptr, nullptr);
    }
    return err;
}
  1. writeTransactionData() writes the request into the mOut parcel.
  2. waitForResponse() waits for the reply.

This is the same flow as in addService.
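For reference, a simplified sketch of what writeTransactionData() puts into mOut, condensed from IPCThreadState.cpp (the real function also handles the error-reply path, which is omitted here):

#include <binder/Parcel.h>
#include <linux/android/binder.h>   // binder_transaction_data, BC_TRANSACTION

using namespace android;

// Condensed sketch: wrap the request Parcel in a binder_transaction_data and queue
// "BC_TRANSACTION + descriptor" into mOut; talkWithDriver() ships mOut to the driver later.
static void sketchWriteTransactionData(Parcel& mOut, int32_t handle, uint32_t code,
                                       uint32_t flags, const Parcel& data)
{
    binder_transaction_data tr = {};
    tr.target.handle = handle;                    // 0 == ServiceManager
    tr.code = code;                               // e.g. GET_SERVICE_TRANSACTION
    tr.flags = flags;
    tr.data_size = data.ipcDataSize();            // payload written by the Java layer
    tr.data.ptr.buffer = (binder_uintptr_t)data.ipcData();
    tr.offsets_size = data.ipcObjectsCount() * sizeof(binder_size_t);
    tr.data.ptr.offsets = (binder_uintptr_t)data.ipcObjects();

    mOut.writeInt32(BC_TRANSACTION);              // command first...
    mOut.write(&tr, sizeof(tr));                  // ...then the transaction descriptor
}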

1.6 IPCThreadState::waitForResponse()

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;
    // loop
    while (1) {
        // keep reading from / writing to the driver
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = (uint32_t)mIn.readInt32();

        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:  ...
        case BR_DEAD_REPLY:   ...
        case BR_FAILED_REPLY:  ...
        case BR_ACQUIRE_RESULT: ...
        case BR_REPLY:
            {
                // the struct that receives the reply
                binder_transaction_data tr;
                // read it out of the mIn parcel
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(nullptr,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                    }
                } else {
                    freeBuffer(nullptr,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}

1.7 IPCThreadState::talkWithDriver()

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }

    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();

    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }
    ...
    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        // ioctl: exchange data with the driver
#if defined(__ANDROID__)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);

    ...

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else {
                mOut.setDataSize(0);
                processPostWriteDerefs();
            }
        }
        if (bwr.read_consumed > 0) {
            // the driver returned data
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }

        return NO_ERROR;
    }

    return err;
}

In the end, getService() keeps reading from and writing to the driver through talkWithDriver().
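Stripped of the Parcel bookkeeping, the driver interaction is a single ioctl on the binder fd with a binder_write_read descriptor. A minimal sketch of just that call (assumptions: the fd comes from ProcessState's open_driver(), and the buffers are the mOut/mIn data shown above):

#include <sys/ioctl.h>
#include <errno.h>
#include <linux/android/binder.h>   // struct binder_write_read, BINDER_WRITE_READ

// Minimal sketch of the talkWithDriver() core: one BINDER_WRITE_READ ioctl that both
// hands outgoing commands to the driver and receives incoming ones.
static int binderWriteRead(int driverFd,
                           void* writeBuf, size_t writeSize,
                           void* readBuf, size_t readSize,
                           binder_write_read* outBwr)
{
    binder_write_read bwr = {};
    bwr.write_buffer = (binder_uintptr_t)writeBuf;   // e.g. mOut.data()
    bwr.write_size = writeSize;
    bwr.read_buffer = (binder_uintptr_t)readBuf;     // e.g. mIn.data()
    bwr.read_size = readSize;

    int err;
    do {
        // The driver fills write_consumed/read_consumed to report how much it took/returned.
        err = ioctl(driverFd, BINDER_WRITE_READ, &bwr) >= 0 ? 0 : -errno;
    } while (err == -EINTR);                          // retry if interrupted by a signal

    *outBwr = bwr;
    return err;
}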

2. Into the driver layer

A few key points to summarize:

  1. With handle = 0, the driver finds the binder "steward", the ServiceManager process, appends the transaction to the todo list of its binder_proc, wakes ServiceManager up, and puts the client to sleep.
  2. ServiceManager fetches the data from the kernel into user space and starts processing it.

3. ServiceManager processes the data and replies

3.1 Overall flow

After the client's getService() writes its data into the binder driver, the request eventually reaches the ServiceManager process.

  1. In SM's user space, the desc of the requested service is looked up by name in the svcinfo list
  2. In the kernel, that desc locates the corresponding binder_ref and its node (the service we are looking for); the node's proc field points at the server process
  3. A new binder_ref node is created for the client, its node field pointing at the node found in the previous step, and a new desc is allocated for it
  4. The client process is woken up and SM goes to sleep
  5. The awakened client's user space receives that desc, i.e. the handle, which identifies the requested service
  6. Subsequent requests to that service build a BpBinder proxy from the handle and complete the communication through IPCThreadState.

3.2 binder_parse()

int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
{
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;

    while (ptr < end) {
        uint32_t cmd = *(uint32_t *) ptr;
        ptr += sizeof(uint32_t);
#if TRACE
        fprintf(stderr,"%s:\n", cmd_name(cmd));
#endif
        switch(cmd) {
        case BR_NOOP:
            break;
        case BR_TRANSACTION_COMPLETE:
            break;
        case BR_INCREFS:
        case BR_ACQUIRE:
        case BR_RELEASE:
        case BR_DECREFS: //...
        case BR_TRANSACTION_SEC_CTX:
        case BR_TRANSACTION: {
            struct binder_transaction_data_secctx txn;
            if (cmd == BR_TRANSACTION_SEC_CTX) {
                if ((end - ptr) < sizeof(struct binder_transaction_data_secctx)) {
                    ALOGE("parse: txn too small (binder_transaction_data_secctx)!\n");
                    return -1;
                }
                memcpy(&txn, (void*) ptr, sizeof(struct binder_transaction_data_secctx));
                ptr += sizeof(struct binder_transaction_data_secctx);
            } else /* BR_TRANSACTION */ {
                if ((end - ptr) < sizeof(struct binder_transaction_data)) {
                    ALOGE("parse: txn too small (binder_transaction_data)!\n");
                    return -1;
                }
                memcpy(&txn.transaction_data, (void*) ptr, sizeof(struct binder_transaction_data));
                ptr += sizeof(struct binder_transaction_data);

                txn.secctx = 0;
            }

            binder_dump_txn(&txn.transaction_data);
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;

                bio_init(&reply, rdata, sizeof(rdata), 4);
                bio_init_from_txn(&msg, &txn.transaction_data);
                // invoke func: the svcmgr_handler callback
                res = func(bs, &txn, &msg, &reply);
                if (txn.transaction_data.flags & TF_ONE_WAY) {
                    binder_free_buffer(bs, txn.transaction_data.data.ptr.buffer);
                } else {
                    // send the reply data back to the client
                    binder_send_reply(bs, &reply, txn.transaction_data.data.ptr.buffer, res);
                }
            }
            break;
        }
        // ...
            
  1. Invoke the svcmgr_handler() callback to handle the request (see the startup sketch after this list for how ServiceManager reaches this loop in the first place)
  2. Send the reply data back to the client
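For context, a trimmed sketch of service_manager.c's startup, matching the Android 10-era sources quoted in this article; selinux setup and error logging are omitted, and binder_open()/binder_become_context_manager()/binder_loop() are the helpers in servicemanager's own binder.c:

// Trimmed sketch of frameworks/native/cmds/servicemanager/service_manager.c main().
int main(int argc, char** argv)
{
    const char* driver = (argc > 1) ? argv[1] : "/dev/binder";

    struct binder_state* bs = binder_open(driver, 128 * 1024); // open the driver + mmap 128K
    if (!bs) return -1;

    if (binder_become_context_manager(bs)) {   // claim handle 0 with the driver
        return -1;
    }

    // Loop forever: read commands from the driver and let binder_parse()
    // dispatch BR_TRANSACTION(_SEC_CTX) to svcmgr_handler.
    binder_loop(bs, svcmgr_handler);

    return 0;
}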

3.3 svcmgr_handler()

ServiceManager sits in a loop, so as soon as data arrives, the svcmgr_handler callback is invoked.

int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data_secctx *txn_secctx,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si; // every registered service is described by a svcinfo struct
    ...
    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:   // getService
    case SVC_MGR_CHECK_SERVICE: // checkService takes the same path
        s = bio_get_string16(msg, &len); // first read the service name
        if (s == NULL) {
            return -1;
        }
        // look up the handle (reference) by name
        handle = do_find_service(s, len, txn->sender_euid, txn->sender_pid,
                                 (const char*) txn_secctx->secctx);
        if (!handle)
            break;
        // build the reply data
        bio_put_ref(reply, handle);
        return 0;

    case SVC_MGR_ADD_SERVICE: // addService
        ...
        return -1;

    default:
        ALOGE("unknown code %d\n", txn->code);
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}

  1. First read the service name
  2. Use the name to look up the handle that references the server process (a lookup sketch follows this list)
  3. Build the reply data
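The name lookup in step 2 is essentially a linear walk over the registered-service list. A simplified sketch, based on servicemanager.c's find_svc() helper (the death-notification and priority fields are omitted, and the flexible-array name is replaced by a plain pointer here):

#include <cstdint>
#include <cstring>

// Simplified svcinfo: each successful addService() appends one of these to svclist.
struct svcinfo {
    svcinfo* next;
    uint32_t handle;        // driver-assigned reference to the service's binder node
    size_t len;             // name length in UTF-16 code units
    const uint16_t* name;   // UTF-16 service name (the real struct uses a flexible array)
};

static svcinfo* svclist = nullptr;   // head of the registered-service list

// Sketch of find_svc(): compare the requested name against every entry.
static svcinfo* find_svc(const uint16_t* s16, size_t len)
{
    for (svcinfo* si = svclist; si; si = si->next) {
        if (len == si->len && memcmp(s16, si->name, len * sizeof(uint16_t)) == 0) {
            return si;       // si->handle is what bio_put_ref() flattens into the reply
        }
    }
    return nullptr;          // not registered -> getService() returns null to the caller
}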

3.3.1 bio_put_ref()

void bio_put_ref(struct binder_io *bio, uint32_t handle)
{
    struct flat_binder_object *obj; // build a flat_binder_object

    if (handle)
        obj = bio_alloc_obj(bio);
    else
        obj = bio_alloc(bio, sizeof(*obj));

    if (!obj)
        return;

    obj->flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    obj->hdr.type = BINDER_TYPE_HANDLE; // mark it as a handle-type object
    obj->handle = handle;               // store the handle
    obj->cookie = 0;
}

This builds a flat_binder_object, stores the handle in obj->handle, and sets its type to BINDER_TYPE_HANDLE.
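For reference, a slightly simplified version of that struct; the real definition lives in the kernel's uapi binder header, and standard integer types are used here instead of __u32/binder_uintptr_t:

#include <cstdint>

// Simplified flat_binder_object: how a Binder object travels inside a Parcel.
struct flat_binder_object_sketch {
    uint32_t type;        // BINDER_TYPE_BINDER for a local object, BINDER_TYPE_HANDLE for a remote one
                          // (in the sources quoted above this sits inside hdr.type)
    uint32_t flags;       // e.g. FLAT_BINDER_FLAG_ACCEPTS_FDS
    union {
        uintptr_t binder; // BINDER_TYPE_BINDER: weak reference to the local BBinder
        uint32_t handle;  // BINDER_TYPE_HANDLE: driver-assigned reference, used by getStrongProxyForHandle()
    };
    uintptr_t cookie;     // BINDER_TYPE_BINDER: pointer to the local BBinder (see unflatten_binder below)
};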

3.4 binder_send_reply()

This function was already analyzed in the previous article.

It ultimately writes the reply data through ioctl() and sends it back to the client.

4. The client reads the reply

   public IBinder getService(String name) throws RemoteException {
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        data.writeInterfaceToken(IServiceManager.descriptor);
        data.writeString(name); // 1. write the service name
        mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);

        // 2. read the IBinder object from reply via readStrongBinder()
        IBinder binder = reply.readStrongBinder();
        reply.recycle(); // recycle the parcel
        data.recycle();  // recycle the parcel
        return binder;
    }

Back where we started: after going through BpBinder::transact(), waitForResponse() waits for the reply.

4.1 waitForResponse()

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;
    // loop
    while (1) {
        // keep reading from / writing to the driver
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = (uint32_t)mIn.readInt32();

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:  ...
        case BR_DEAD_REPLY:   ...
        case BR_FAILED_REPLY:  ...
        case BR_ACQUIRE_RESULT: ...
        case BR_REPLY:
            {
                // the struct that receives the reply
                binder_transaction_data tr;
                // read it out of the mIn parcel
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        // hand the reply buffer over to the reply Parcel
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        //...
                    }
                } else {
                    //...
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    //...
    return err;
}

Once the reply data has been read, it is written into the reply Parcel:

4.2 ipcSetDataReference()

void Parcel::ipcSetDataReference(const uint8_t* data, size_t dataSize,
    const binder_size_t* objects, size_t objectsCount, release_func relFunc, void* relCookie)
{
    binder_size_t minOffset = 0;
    freeDataNoInit();
    mError = NO_ERROR;
    mData = const_cast<uint8_t*>(data);
    mDataSize = mDataCapacity = dataSize;
    //ALOGI("setDataReference Setting data size of %p to %lu (pid=%d)", this, mDataSize, getpid());
    mDataPos = 0;
    ALOGV("setDataReference Setting data pos of %p to %zu", this, mDataPos);
    mObjects = const_cast<binder_size_t*>(objects);
    mObjectsSize = mObjectsCapacity = objectsCount;
    mNextObjectHint = 0;
    mObjectsSorted = false;
    mOwner = relFunc;
    mOwnerCookie = relCookie;
    for (size_t i = 0; i < mObjectsSize; i++) {
        binder_size_t offset = mObjects[i];
        if (offset < minOffset) {
            ALOGE("%s: bad object offset %" PRIu64 " < %" PRIu64 "\n",
                  __func__, (uint64_t)offset, (uint64_t)minOffset);
            mObjectsSize = 0;
            break;
        }
        minOffset = offset + sizeof(flat_binder_object);
    }
    scanForFds();
}

At this point the reply Parcel contains the data. Back to the Java layer, where the Binder object is extracted:

4.3 readStrongBinder() in the Java layer

The Java layer eventually calls into the JNI layer:

static jobject android_os_Parcel_readStrongBinder(JNIEnv* env, jclass clazz, jlong nativePtr)
{
    Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
    if (parcel != NULL) {
        // convert the BpBinder into a Java BinderProxy object
        return javaObjectForIBinder(env, parcel->readStrongBinder());
    }
    return NULL;
}

4.4 readStrongBinder() in the native layer

sp<IBinder> Parcel::readStrongBinder() const
{
    sp<IBinder> val;
    // Note that a lot of code in Android reads binders by hand with this
    // method, and that code has historically been ok with getting nullptr
    // back (while ignoring error codes).
    readNullableStrongBinder(&val);
    return val;
}

status_t Parcel::readStrongBinder(sp<IBinder>* val) const
{
    status_t status = readNullableStrongBinder(val);
    if (status == OK && !val->get()) {
        status = UNEXPECTED_NULL;
    }
    return status;
}

4.4.1 readNullableStrongBinder()

status_t Parcel::readNullableStrongBinder(sp<IBinder>* val) const
{
    return unflatten_binder(ProcessState::self(), *this, val);
}

4.4.2 unflatten_binder()

status_t unflatten_binder(const sp<ProcessState>& proc,
    const Parcel& in, sp<IBinder>* out)
{
    const flat_binder_object* flat = in.readObject(false);

    if (flat) {
        switch (flat->hdr.type) {
            case BINDER_TYPE_BINDER: // same process: recover the BBinder directly
                *out = reinterpret_cast<IBinder*>(flat->cookie);
                return finish_unflatten_binder(nullptr, *flat, in);
            case BINDER_TYPE_HANDLE:
                // different process: get a BpBinder proxy
                *out = proc->getStrongProxyForHandle(flat->handle);
                return finish_unflatten_binder(
                    static_cast<BpBinder*>(out->get()), *flat, in);
        }
    }
    return BAD_TYPE;
}

SM returned a BINDER_TYPE_HANDLE object, so the second branch is taken.

4.4.3 getStrongProxyForHandle()

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != nullptr) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one.  See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == nullptr || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
                // Special case for context manager...
                // The context manager is the only object for which we create
                // a BpBinder proxy without already holding a reference.
                // Perform a dummy transaction to ensure the context manager
                // is registered before we create the first local reference
                // to it (which will occur when creating the BpBinder).
                // If a local reference is created for the BpBinder when the
                // context manager is not present, the driver will fail to
                // provide a reference to the context manager, but the
                // driver API does not return status.
                //
                // Note that this is not race-free if the context manager
                // dies while this code runs.
                //
                // TODO: add a driver API to wait for context manager, or
                // stop special casing handle 0 for context manager and add
                // a driver API to get a handle to the context manager with
                // proper reference counting.

                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, nullptr, 0);
                if (status == DEAD_OBJECT)
                   return nullptr;
            }
            // create the BpBinder proxy from the handle
            b = BpBinder::create(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}

A BpBinder proxy is created from the handle.

4.4.4 finish_unflatten_binder()

inline static status_t finish_unflatten_binder(
    BpBinder* /*proxy*/, const flat_binder_object& /*flat*/,
    const Parcel& /*in*/)
{
    return NO_ERROR;
}

An empty implementation that simply returns a status code.

Finally, readStrongBinder() returns a BpBinder object.

4.5 javaObjectForIBinder()

With the BpBinder in hand, it must be converted into a Java BinderProxy object:

jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
{
    if (val == NULL) return NULL;

    if (val->checkSubclass(&gBinderOffsets)) {
        // It's a JavaBBinder created by ibinderForJavaObject. Already has Java object.
        jobject object = static_cast<JavaBBinder*>(val.get())->object();
        LOGDEATH("objectForBinder %p: it's our own %p!\n", val.get(), object);
        return object;
    }

    BinderProxyNativeData* nativeData = new BinderProxyNativeData();
    nativeData->mOrgue = new DeathRecipientList;
    nativeData->mObject = val;
    // Call BinderProxy.getInstance() through JNI to construct a BinderProxy.java
    // object whose nativeData points at the native BpBinder
    jobject object = env->CallStaticObjectMethod(gBinderProxyOffsets.mClass,
            gBinderProxyOffsets.mGetInstance, (jlong) nativeData, (jlong) val.get());
    if (env->ExceptionCheck()) {
        // In the exception case, getInstance still took ownership of nativeData.
        return NULL;
    }
    BinderProxyNativeData* actualNativeData = getBPNativeData(env, object);
    if (actualNativeData == nativeData) {
        // Created a new Proxy
        uint32_t numProxies = gNumProxies.fetch_add(1, std::memory_order_relaxed);
        uint32_t numLastWarned = gProxiesWarned.load(std::memory_order_relaxed);
        if (numProxies >= numLastWarned + PROXY_WARN_INTERVAL) {
            // Multiple threads can get here, make sure only one of them gets to
            // update the warn counter.
            if (gProxiesWarned.compare_exchange_strong(numLastWarned,
                        numLastWarned + PROXY_WARN_INTERVAL, std::memory_order_relaxed)) {
                ALOGW("Unexpectedly many live BinderProxies: %d\n", numProxies);
            }
        }
    } else {
        delete nativeData;
    }

    return object;
}

Through JNI, a BinderProxy.java object is constructed whose nativeData member points at the native BpBinder object.

At this point, the client's Java layer holds a proxy object for the server process.
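What client code does next is wrap the proxy in a typed interface via interface_cast<>. In fact, defaultServiceManager() itself is built roughly this way; below is a condensed sketch that omits the retry loop the real implementation has:

#include <binder/IServiceManager.h>
#include <binder/IInterface.h>       // interface_cast<>
#include <binder/ProcessState.h>

using namespace android;

// Condensed sketch: handle 0 -> BpBinder(0) -> typed BpServiceManager proxy.
static sp<IServiceManager> sketchDefaultServiceManager()
{
    // getContextObject() boils down to getStrongProxyForHandle(0), shown in 4.4.3.
    sp<IBinder> context = ProcessState::self()->getContextObject(nullptr);

    // interface_cast<> wraps the BpBinder in a BpServiceManager, whose methods
    // (getService, addService, ...) each call remote()->transact(...).
    return interface_cast<IServiceManager>(context);
}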

5. Summary

From section 4.4.2, we can see:

  • If the request comes from a different process, a BpBinder proxy is created from the handle.
  • If the request stays within the same process, the cookie field of the flat_binder_object is cast directly to the BBinder object.
