一、Binder机制分层解析(Native层→Framework层→应用层)
1.Native层:libbinder库的实现
源码路径:frameworks/native/libs/binder
核心类:
BpBinder:客户端代理,通过IPCThreadState::transact()发送请求
```cpp
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        bool privateVendor = flags & FLAG_PRIVATE_VENDOR;
        // don't send userspace flags to the kernel
        flags = flags & ~static_cast<uint32_t>(FLAG_PRIVATE_VENDOR);

        // user transactions require a given stability level
        if (code >= FIRST_CALL_TRANSACTION && code <= LAST_CALL_TRANSACTION) {
            using android::internal::Stability;

            int16_t stability = Stability::getRepr(this);
            Stability::Level required = privateVendor ? Stability::VENDOR
                                                      : Stability::getLocalLevel();

            if (!Stability::check(stability, required)) [[unlikely]] {
                ALOGE("Cannot do a user transaction on a %s binder (%s) in a %s context.",
                      Stability::levelString(stability).c_str(),
                      String8(getInterfaceDescriptor()).c_str(),
                      Stability::levelString(required).c_str());
                return BAD_TYPE;
            }
        }

        status_t status;
        if (isRpcBinder()) [[unlikely]] {
            status = rpcSession()->transact(sp<IBinder>::fromExisting(this), code, data, reply,
                                            flags);
        } else {
            if constexpr (!kEnableKernelIpc) {
                LOG_ALWAYS_FATAL("Binder kernel driver disabled at build time");
                return INVALID_OPERATION;
            }

            status = IPCThreadState::self()->transact(binderHandle(), code, data, reply, flags);
        }

        if (data.dataSize() > LOG_TRANSACTIONS_OVER_SIZE) {
            RpcMutexUniqueLock _l(mLock);
            ALOGW("Large outgoing transaction of %zu bytes, interface descriptor %s, code %d",
                  data.dataSize(), String8(mDescriptorCache).c_str(), code);
        }

        if (status == DEAD_OBJECT) mAlive = 0;

        return status;
    }

    return DEAD_OBJECT;
}
```
BBinder:服务端基类,onTransact()方法处理请求
IPCThreadState:每个线程一个实例(保存在线程本地存储TLS中),通过talkWithDriver()与Binder驱动交互,负责事务数据的读写
线程池机制
默认配置:ProcessState::startThreadPool()启动线程池(池内线程默认上限为DEFAULT_MAX_BINDER_THREADS = 15,连同主线程通常按16个计)
```cpp
void ProcessState::startThreadPool()
{
    std::unique_lock<std::mutex> _l(mLock);
    if (!mThreadPoolStarted) {
        if (mMaxThreads == 0) {
            // see also getThreadPoolMaxTotalThreadCount
            ALOGW("Extra binder thread started, but 0 threads requested. Do not use "
                  "*startThreadPool when zero threads are requested.");
        }

        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}
```
线程阻塞逻辑:池内线程在IPCThreadState::joinThreadPool()中循环调用getAndExecuteCommand(),通过talkWithDriver()阻塞等待驱动派发的事务并逐个处理
2.Framework层:Java层的封装
核心类
Binder:Java层服务端基类,JNI映射到Native层BBinder
BinderProxy:Java层客户端代理,JNI映射到Native层BpBinder
ServiceManager:服务注册中心,通过getService/addService管理Binder服务
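以一个假设的AIDL接口IMyService为例,通过ServiceManager注册与查询服务的最小示意如下(ServiceManager.addService/getService为系统进程、平台代码可用的隐藏API,普通应用一般通过Context.bindService间接获得Binder对象):
```java
// 服务端(通常为系统进程):注册服务,"my_service"为示例服务名
ServiceManager.addService("my_service", new IMyService.Stub() {
    @Override
    public void doSomething() {
        // 服务端业务逻辑
    }
});

// 客户端:按名称查询,得到BinderProxy,再转换为接口代理
IBinder binder = ServiceManager.getService("my_service");
IMyService service = IMyService.Stub.asInterface(binder);
service.doSomething(); // 经BinderProxy -> BpBinder::transact() 完成一次IPC
```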
AIDL自动生成代码
Stub类:服务端实现,继承Binder并重写onTransact()
```java
public static abstract class Stub extends Binder implements IMyService {
    @Override
    protected boolean onTransact(int code, Parcel data, Parcel reply, int flags)
            throws RemoteException {
        switch (code) {
            case TRANSACTION_doSomething:
                // 实际生成代码会先校验接口token:data.enforceInterface(DESCRIPTOR);
                this.doSomething();
                reply.writeNoException();
                return true;
            default:
                // 其余code交由父类处理
                return super.onTransact(code, data, reply, flags);
        }
    }
}
```
Proxy类:客户端代理,封装transact()调用
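与Stub配套的Proxy类大致如下(简化示意:省略了flags与异常细节,DESCRIPTOR、TRANSACTION_doSomething为生成代码中的常量,实际代码以AIDL工具生成为准):
```java
private static class Proxy implements IMyService {
    private final IBinder mRemote; // Java层BinderProxy,Native层对应BpBinder

    Proxy(IBinder remote) { mRemote = remote; }

    @Override
    public void doSomething() throws RemoteException {
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        try {
            data.writeInterfaceToken(DESCRIPTOR);
            // 走BinderProxy.transact() -> BpBinder::transact() -> Binder驱动
            mRemote.transact(Stub.TRANSACTION_doSomething, data, reply, 0);
            reply.readException();
        } finally {
            reply.recycle();
            data.recycle();
        }
    }

    @Override
    public IBinder asBinder() { return mRemote; }
}
```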
3.应用层:AIDL的使用(以SystemServer注册AMS为例)
服务注册流程
SystemServer启动AMS:
mActivityManagerService = new ActivityManagerService(...);
ServiceManager.addService(Context.ACTIVITY_SERVICE, mActivityManagerService);
驱动与servicemanager处理:注册请求经Binder驱动转发到servicemanager进程,由后者在svclist(svcinfo链表)中记录服务名与句柄的映射,内核侧同时为该服务创建对应的binder_node/binder_ref
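注册完成后,其他进程即可按服务名查询并拿到AMS的代理对象;framework内部的做法大致如下(简化示意,ServiceManager与IActivityManager均为隐藏API,实际封装在ActivityManager.getService()中):
```java
// 客户端进程:按名称"activity"查询AMS,得到BinderProxy后转换为接口代理
IBinder b = ServiceManager.getService(Context.ACTIVITY_SERVICE);
IActivityManager am = IActivityManager.Stub.asInterface(b);
```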
二、各类IPC通信机制对比
三、Binder与AIDL的典型问题及解决方案
1. Binder层问题
TransactionTooLargeException
原因:事务数据超过进程Binder内核缓冲区限制(mmap映射区默认约为1MB - 8KB,且由该进程所有进行中的事务共享,实际单次可用空间往往更小)
解决:
分片传输:将大数据拆分为多个Parcel发送
使用共享内存:通过ParcelFileDescriptor传递文件描述符
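共享内存/文件描述符方案的简化示意如下(假设AIDL中声明了 void sendLargeData(in ParcelFileDescriptor pfd);,文件名与变量名均为示例):
```java
// 发送方:把大数据写入文件,跨进程只传递文件描述符,不占用Binder事务缓冲区
File file = new File(context.getCacheDir(), "payload.bin");
try (FileOutputStream fos = new FileOutputStream(file)) {
    fos.write(largeBytes);
}
ParcelFileDescriptor pfd =
        ParcelFileDescriptor.open(file, ParcelFileDescriptor.MODE_READ_ONLY);
service.sendLargeData(pfd);

// 接收方:从文件描述符读回数据
try (InputStream in = new ParcelFileDescriptor.AutoCloseInputStream(pfd)) {
    byte[] buf = new byte[8192];
    int n;
    while ((n = in.read(buf)) != -1) {
        // 按需处理数据……
    }
}
```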
线程池耗尽
现象:服务端Binder线程全部被占用,无法及时处理新请求(池内线程数达到默认上限15)
解决:
// 调整线程池上限(需系统/平台代码权限,BinderInternal为隐藏API)
BinderInternal.setMaxThreads(32);
Binder风暴导致ANR
原因:短时间内发起大量Binder调用(尤其是主线程上的同步IPC),主线程长时间阻塞在transact()上,从而触发ANR
解决:
合并Binder通信、减少IPC次数,从而缩短线程阻塞时间。系统侧的典型做法是减少AMS与应用进程(如Launcher)间的往返次数(例如把bindApplication与scheduleLaunchActivity合并到一次事务);应用侧则可把多次小请求合并为一次批量请求,见下方示意
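批量合并的简化示意(IMyService的reportEvent/reportEvents以及Event均为假设的接口、方法与Parcelable类型):
```java
// 反例:循环内逐条同步调用,每次循环都是一次完整的Binder事务
for (Event e : events) {
    service.reportEvent(e); // N 次IPC
}

// 改进:AIDL中提供批量方法 void reportEvents(in List<Event> events);
// 一次事务传输整批数据,只发生1次IPC(注意总大小仍受事务缓冲区限制)
service.reportEvents(events);
```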
死亡通知丢失
原因:未正确处理linkToDeath/unlinkToDeath
解决:
```java
IBinder.DeathRecipient recipient = new IBinder.DeathRecipient() {
    @Override
    public void binderDied() {
        // 服务端进程已死亡:先解除注册,再触发重连
        mService.unlinkToDeath(this, 0); // 必须手动解除绑定,避免DeathRecipient泄漏
        rebindService();
    }
};
try {
    mService.linkToDeath(recipient, 0); // 对端已死时会抛出RemoteException
} catch (RemoteException e) {
    rebindService();
}
```
2.AIDL遇到的问题
跨进程回调泄漏
现象:服务端持有已销毁客户端的回调对象,导致内存泄漏
解决:使用RemoteCallbackList代替普通集合,见下方示意
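RemoteCallbackList的典型用法如下(ICallback及其onEvent()为假设的AIDL回调接口):
```java
private final RemoteCallbackList<ICallback> mCallbacks = new RemoteCallbackList<>();

public void registerCallback(ICallback cb) {
    mCallbacks.register(cb); // 内部自动linkToDeath,客户端进程死亡时自动清理
}

public void unregisterCallback(ICallback cb) {
    mCallbacks.unregister(cb);
}

private void notifyClients(String event) {
    final int n = mCallbacks.beginBroadcast();
    for (int i = 0; i < n; i++) {
        try {
            mCallbacks.getBroadcastItem(i).onEvent(event);
        } catch (RemoteException e) {
            // 对端已死亡,RemoteCallbackList会在内部清理
        }
    }
    mCallbacks.finishBroadcast();
}
```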
异步调用阻塞主线程
现象:客户端在主线程同步调用AIDL方法,服务端在Binder线程上执行耗时操作,导致调用方主线程被阻塞而卡顿甚至ANR(真正声明为oneway的方法不会阻塞调用方)
解决:将接口方法声明为oneway,或在服务端把耗时逻辑切换到工作线程执行,如下所示
```java
public void asyncMethod() {
    // 必须post到工作线程的Looper(mWorkerThread为已start()的HandlerThread,假设的成员变量);
    // 若post到主线程Looper,耗时操作仍会阻塞主线程
    new Handler(mWorkerThread.getLooper()).post(() -> {
        // 耗时操作在工作线程执行
    });
}
```
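需要注意:oneway调用虽然不会阻塞调用方,但同一个Binder对象上的oneway事务在接收端是串行分发的,且内核为异步事务预留的缓冲区更小(约为同步缓冲区的一半),短时间内大量oneway调用同样可能造成积压。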