We all know that an app's drawing starts with the main thread's measure/layout/draw sequence. But what happens to that drawing data afterwards? What is it drawn onto? And how does it finally reach SurfaceFlinger to be composited and shown on screen?
The app process renders into a buffer, and SurfaceFlinger also needs to access that buffer when sending it to the display, so two processes need access to the same buffer. Android therefore uses the ion memory allocator so that a buffer can be shared across processes. If there were only one buffer, drawing into it while it is being displayed would corrupt the picture, so Android uses the triple-buffering scheme we often hear about, and a BufferQueue is needed to manage those buffers.
BufferQueue follows the producer/consumer pattern and manages buffers with a buffer queue. The four key operations are listed below (a conceptual sketch of the cycle follows the list):
- dequeueBuffer: the app process requests a buffer to draw into
- queueBuffer: when drawing finishes, queueBuffer puts the drawn buffer into the queue
- acquireBuffer: SurfaceFlinger consumes a drawn buffer, taking it off the queue
- releaseBuffer: after the buffer has been displayed it is released, and the app process can dequeue and reuse it
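Before diving into the source, here is a compact conceptual model of that cycle in plain C++. The class and member names are purely illustrative, not the real BufferQueue API; it only shows how the four operations move a slot between a free list and a queued list.
#include <condition_variable>
#include <deque>
#include <mutex>
// Conceptual model of the BufferQueue cycle: slot indices stand in for
// GraphicBuffers, and the mutex stands in for the binder boundary.
class TinyBufferQueue {
public:
    explicit TinyBufferQueue(int slots) {
        for (int i = 0; i < slots; ++i) mFree.push_back(i);
    }
    // Producer side
    int dequeueBuffer() {                      // the app asks for a slot to draw into
        std::unique_lock<std::mutex> lock(mMutex);
        mCond.wait(lock, [this] { return !mFree.empty(); });
        int slot = mFree.front();
        mFree.pop_front();
        return slot;
    }
    void queueBuffer(int slot) {               // drawing done: hand the slot to the consumer
        std::lock_guard<std::mutex> lock(mMutex);
        mQueued.push_back(slot);
        mCond.notify_all();
    }
    // Consumer side
    int acquireBuffer() {                      // SurfaceFlinger takes the oldest queued frame
        std::unique_lock<std::mutex> lock(mMutex);
        mCond.wait(lock, [this] { return !mQueued.empty(); });
        int slot = mQueued.front();
        mQueued.pop_front();
        return slot;
    }
    void releaseBuffer(int slot) {             // displayed: the app may dequeue it again
        std::lock_guard<std::mutex> lock(mMutex);
        mFree.push_back(slot);
        mCond.notify_all();
    }
private:
    std::mutex mMutex;
    std::condition_variable mCond;
    std::deque<int> mFree;     // slots available to the producer
    std::deque<int> mQueued;   // drawn frames waiting for composition
};
With three slots this is exactly the triple-buffering setup described above: while one frame is being displayed, a second can be queued and the app can already be drawing into the third.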
This article is based on the Android 11 code and covers the BufferQueue from two angles:
- The BufferQueue workflow
- The BlastBufferQueue mechanism
4.1 BufferQueue workflow
4.1.1 Creating the BufferQueue
In sections 3.1-3.2 on Layer creation and destruction we saw that when a Layer is created, a standard surface layer creates a BufferQueueLayer instance, so let's start from BufferQueueLayer::onFirstRef():
frameworks/native/services/surfaceflinger/BufferQueueLayer.cpp
void BufferQueueLayer::onFirstRef() {
BufferLayer::onFirstRef();
// Creates a custom BufferQueue for SurfaceFlingerConsumer to use
sp<IGraphicBufferProducer> producer;
sp<IGraphicBufferConsumer> consumer;
mFlinger->getFactory().createBufferQueue(&producer, &consumer, true);
//The producer is wrapped as a MonitoredProducer
mProducer = mFlinger->getFactory().createMonitoredProducer(producer, mFlinger, this);
//The consumer is wrapped as a BufferLayerConsumer
mConsumer =
mFlinger->getFactory().createBufferLayerConsumer(consumer, mFlinger->getRenderEngine(),
mTextureName, this);
mConsumer->setConsumerUsageBits(getEffectiveUsage(0));
mContentsChangedListener = new ContentsChangedListener(this);
mConsumer->setContentsChangedListener(mContentsChangedListener);
mConsumer->setName(String8(mName.data(), mName.size()));
}
BufferQueue::createBufferQueue is called to create a buffer queue; each buffer queue has one producer and one consumer.
frameworks/native/libs/gui/BufferQueue.cpp
void BufferQueue::createBufferQueue(sp<IGraphicBufferProducer>* outProducer,
sp<IGraphicBufferConsumer>* outConsumer,
bool consumerIsSurfaceFlinger) {
sp<BufferQueueCore> core(new BufferQueueCore());
LOG_ALWAYS_FATAL_IF(core == nullptr,
"BufferQueue: failed to create BufferQueueCore");
sp<IGraphicBufferProducer> producer(new BufferQueueProducer(core, consumerIsSurfaceFlinger));
LOG_ALWAYS_FATAL_IF(producer == nullptr,
"BufferQueue: failed to create BufferQueueProducer");
sp<IGraphicBufferConsumer> consumer(new BufferQueueConsumer(core));
LOG_ALWAYS_FATAL_IF(consumer == nullptr,
"BufferQueue: failed to create BufferQueueConsumer");
*outProducer = producer;
*outConsumer = consumer;
}
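For comparison with the factory calls above, here is a minimal sketch of how a buffer queue can be wired up directly with libgui (assuming an AOSP build environment; the consumer side is left out here, whereas SurfaceFlinger attaches a BufferLayerConsumer to it as shown above):
#include <gui/BufferQueue.h>
#include <gui/Surface.h>
// Create a producer/consumer pair and wrap the producer in a Surface,
// which is the object applications normally draw into.
void createQueueSketch() {
    android::sp<android::IGraphicBufferProducer> producer;
    android::sp<android::IGraphicBufferConsumer> consumer;
    android::BufferQueue::createBufferQueue(&producer, &consumer);
    // The producer end would travel (over binder) to the app process and be
    // wrapped in a Surface; the consumer end stays with the consuming process.
    android::sp<android::Surface> surface(new android::Surface(producer));
    (void)surface;
}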
The class diagram below shows the classes involved in creating a BufferQueue:
The relationships between the objects are:
- One Layer corresponds to one BufferQueue, and one BufferQueue holds multiple buffers, usually 2 or 3.
- One Layer has one producer and one consumer.
- A Surface maps one-to-one to a Layer, and also one-to-one to a window.
Combining app-side drawing into a Surface with the system's control of Surface properties, the relationships look like this:
- After the app creates a window, it makes a binder call to WMS relayoutWindow, which creates a SurfaceControl object
- WMS creates the SurfaceControl object together with a native counterpart; the SurfaceControl holds the IGraphicBufferProducer proxy BpGraphicBufferProducer as well as a SurfaceComposerClient, which it uses to talk to SurfaceFlinger and control the layer's properties
- WMS passes the SurfaceControl back to the app process, which uses it to create a Surface; the native Surface also holds the BpGraphicBufferProducer, which the app uses for queueBuffer/dequeueBuffer when drawing
- The SurfaceControl held in system_server can be passed across processes, but its Handle and IGraphicBufferProducer stay the same, which guarantees that the app process and the system process operate on the same Layer
With these relationships in mind, let's look at how buffers flow through the BufferQueue.
4.1.2 Buffer management
Let's refine the BufferQueue data-flow diagram from the official documentation:
- Producer: the Surface, which wraps the buffer operations and data in the app process
- Consumer: the Layer, which wraps the buffer operations and data in the SurfaceFlinger process
- Data: GraphicBuffer, the buffer object that the BufferQueue manages. A GraphicBuffer only holds a pointer to the buffer, not the buffer itself; the memory layout below is only 256 bytes, so it clearly cannot contain the pixel data
*** Dumping AST Record Layout
0 | class android::GraphicBuffer
0 | class android::ANativeObjectBase<struct ANativeWindowBuffer, class android::GraphicBuffer, class android::RefBase> (primary base)
0 | class android::RefBase (primary base)
0 | (RefBase vtable pointer)
8 | weakref_impl *const mRefs
//indirectly inherits ANativeWindowBuffer
16 | struct ANativeWindowBuffer (base)
16 | struct android_native_base_t common
16 | int magic
20 | int version
24 | void *[4] reserved
56 | void (*)(struct android_native_base_t *) incRef
64 | void (*)(struct android_native_base_t *) decRef
72 | int width
76 | int height
80 | int stride
84 | int format
88 | int usage_deprecated
96 | uintptr_t layerCount
104 | void *[1] reserved
//pointer to the actual buffer
112 | const native_handle_t * handle
120 | uint64_t usage
128 | void *[7] reserved_proc
0 | class android::Flattenable<class android::GraphicBuffer> (base) (empty)
184 | uint8_t mOwner
192 | GraphicBufferMapper & mBufferMapper
200 | ssize_t mInitCheck
208 | uint32_t mTransportNumFds
212 | uint32_t mTransportNumInts
216 | uint64_t mId
224 | int32_t mBufferId
228 | uint32_t mGenerationNumber
232 | class std::vector<struct std::pair<class std::function<void (void *, unsigned long)>, void *> > mDeathCallbacks
232 | class std::__vector_base<struct std::pair<class std::function<void (void *, unsigned long)>, void *>, class std::allocator<struct std::pair<class std::function<void (void *, unsigned long)>, void *> > > (base)
232 | class std::__vector_base_common<true> (base) (empty)
232 | pointer __begin_
240 | pointer __end_
248 | class std::__compressed_pair<struct std::pair<class std::function<void (void *, unsigned long)>, void *> *, class std::allocator<struct std::pair<class std::function<void (void *, unsigned long)>, void *> > > __end_cap_
248 | struct std::__compressed_pair_elem<struct std::pair<class std::function<void (void *, unsigned long)>, void *> *, 0, false> (base)
248 | struct std::pair<class std::function<void (void *, unsigned long)>, void *> * __value_
248 | struct std::__compressed_pair_elem<class std::allocator<struct std::pair<class std::function<void (void *, unsigned long)>, void *> >, 1, true> (base) (empty)
248 | class std::allocator<struct std::pair<class std::function<void (void *, unsigned long)>, void *> > (base) (empty)
//256 bytes in total, 8-byte aligned
[sizeof=256, dsize=256, align=8,
 nvsize=256, nvalign=8]
GraphicBuffer inherits from ANativeObjectBase, a template class, so ultimately GraphicBuffer inherits ANativeWindowBuffer. The handle pointer in ANativeWindowBuffer points to a native_handle_t, a struct that describes the buffer. So a GraphicBuffer only contains a pointer to the real buffer (to reach the actual memory, GraphicBufferMapper still has to map the handle in each process, and those mappings resolve to the same physical memory); it is not the buffer itself.
The GraphicBuffer inheritance hierarchy is shown below:
The diagram above comes from the article: Android中native_handle private_handle_t ANativeWindowBuffer ANativeWindow GraphicBuffer Surface的关系_na
- mSlots: an array of BufferSlot
In the BufferQueue data-flow diagram above, both the Surface in the app process and the BufferQueueCore in the SurfaceFlinger process hold a BufferSlot array named mSlots; a BufferSlot wraps a GraphicBuffer. Here is the comment for mSlots in BufferQueueCore:
frameworks/native/libs/gui/include/gui/BufferQueueCore.h
// mSlots is an array of buffer slots that must be mirrored on the producer
// side. This allows buffer ownership to be transferred between the producer
// and consumer without sending a GraphicBuffer over Binder. The entire
// array is initialized to NULL at construction time, and buffers are
// allocated for a slot when requestBuffer is called with that slot's index.
BufferQueueDefs::SlotsType mSlots;
// mQueue is a FIFO of queued buffers used in synchronous mode.
Fifo mQueue;
Roughly: mSlots is an array of buffer slots that is mirrored on the producer side, which avoids sending a GraphicBuffer over binder between producer and consumer. The whole array is initialized to NULL at construction time; when requestBuffer is called for a slot, the GraphicBuffer at that index is allocated and assigned.
The most eye-catching part of that comment is that buffer ownership can be transferred "without sending a GraphicBuffer over Binder". In fact GraphicBuffers are still transferred over binder between producer and consumer (before Android 12); what mSlots buys is that a GraphicBuffer only has to cross binder when a buffer actually needs to be (re)allocated. When no reallocation is needed, passing the slot index alone is enough, which avoids sending the GraphicBuffer on every frame and improves efficiency.
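To make the slot-mirroring idea concrete, here is a small conceptual model in plain C++ (illustrative names only, not the real AOSP classes): the producer keeps a local mirror of the 64 slots and only re-fetches a buffer when the dequeue result says the slot needs (re)allocation.
#include <array>
#include <cstdint>
#include <memory>
// "RemoteQueue" stands in for the binder interface to BufferQueueProducer,
// and "FakeBuffer" stands in for GraphicBuffer.
struct FakeBuffer { uint32_t width = 0, height = 0; };
constexpr int kNumSlots = 64;
struct RemoteQueue {
    // Returns a slot index and whether its buffer must be (re)fetched,
    // the analogue of the BUFFER_NEEDS_REALLOCATION flag.
    virtual int dequeue(bool* needsRealloc) = 0;
    // The only call that actually carries a buffer across the boundary.
    virtual std::shared_ptr<FakeBuffer> requestBuffer(int slot) = 0;
    virtual ~RemoteQueue() = default;
};
class ProducerMirror {
public:
    explicit ProducerMirror(RemoteQueue* remote) : mRemote(remote) {}
    std::shared_ptr<FakeBuffer> dequeueBuffer() {
        bool needsRealloc = false;
        int slot = mRemote->dequeue(&needsRealloc);
        // Cross the "binder boundary" again only when the cached entry is
        // missing or stale; otherwise the local mirror already has the buffer.
        if (needsRealloc || mSlots[slot] == nullptr) {
            mSlots[slot] = mRemote->requestBuffer(slot);
        }
        return mSlots[slot];
    }
private:
    RemoteQueue* mRemote;
    std::array<std::shared_ptr<FakeBuffer>, kNumSlots> mSlots{};  // local mirror of the slots
};
This is exactly the pattern visible in Surface::dequeueBuffer below: dequeueBuffer returns an index, and requestBuffer is only called when BUFFER_NEEDS_REALLOCATION is set or the local slot is still empty.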
Below is the concrete BufferQueue flow.
When the upper layers obtain a canvas via Surface.lockCanvas, the call eventually reaches the native Surface::dequeueBuffer, so let's start from Surface::dequeueBuffer.
dequeueBuffer
frameworks/native/libs/gui/Surface.cpp
int Surface::dequeueBuffer(android_native_buffer_t** buffer, int* fenceFd) {
... ...
int buf = -1;
sp<Fence> fence;
nsecs_t startTime = systemTime();
FrameEventHistoryDelta frameTimestamps;
status_t result = mGraphicBufferProducer->dequeueBuffer(&buf, &fence, reqWidth, reqHeight,
reqFormat, reqUsage, &mBufferAge,
enableFrameTimestamps ? &frameTimestamps
: nullptr);
... ...
Mutex::Autolock lock(mMutex);
// Write this while holding the mutex
mLastDequeueStartTime = startTime;
sp<GraphicBuffer>& gbuf(mSlots[buf].buffer);
if ((result & IGraphicBufferProducer::BUFFER_NEEDS_REALLOCATION) || gbuf == nullptr) {
if (mReportRemovedBuffers && (gbuf != nullptr)) {
mRemovedBuffers.push_back(gbuf);
}
result = mGraphicBufferProducer->requestBuffer(buf, &gbuf);
if (result != NO_ERROR) {
ALOGE("dequeueBuffer: IGraphicBufferProducer::requestBuffer failed: %d", result);
mGraphicBufferProducer->cancelBuffer(buf, fence);
return result;
}
}
... ...
mDequeuedSlots.insert(buf);
return OK;
}
- It mainly calls BnGraphicBufferProducer's dequeueBuffer to obtain the buffer's index in mSlots
- Depending on whether the dequeueBuffer result contains BUFFER_NEEDS_REALLOCATION, it calls BnGraphicBufferProducer::requestBuffer to fetch the (re)allocated buffer
frameworks/native/libs/gui/BufferQueueProducer.cpp
status_t BufferQueueProducer::dequeueBuffer(int* outSlot, sp<android::Fence>* outFence,
uint32_t width, uint32_t height, PixelFormat format,
uint64_t usage, uint64_t* outBufferAge,
FrameEventHistoryDelta* outTimestamps) {
... ...
int found = BufferItem::INVALID_BUFFER_SLOT;
while (found == BufferItem::INVALID_BUFFER_SLOT) {
status_t status = waitForFreeSlotThenRelock(FreeSlotCaller::Dequeue, lock, &found);
if (status != NO_ERROR) {
return status;
}
... ...
waitForFreeSlotThenRelock is called to find a usable BufferSlot; the function is long but fairly easy to follow.
status_t BufferQueueProducer::waitForFreeSlotThenRelock(FreeSlotCaller caller,
std::unique_lock<std::mutex>& lock, int* found) const {
bool tryAgain = true;
while (tryAgain) {
int dequeuedCount = 0;
int acquiredCount = 0;
for (int s : mCore->mActiveBuffers) {
if (mSlots[s].mBufferState.isDequeued()) {
++dequeuedCount;
}
if (mSlots[s].mBufferState.isAcquired()) {
++acquiredCount;
}
}
// Producers are not allowed to dequeue more than
// mMaxDequeuedBufferCount buffers.
// This check is only done if a buffer has already been queued
if (mCore->mBufferHasBeenQueued &&
dequeuedCount >= mCore->mMaxDequeuedBufferCount) {
// Supress error logs when timeout is non-negative.
if (mDequeueTimeout < 0) {
BQ_LOGE("%s: attempting to exceed the max dequeued buffer "
"count (%d)", callerString,
mCore->mMaxDequeuedBufferCount);
}
return INVALID_OPERATION;
}
*found = BufferQueueCore::INVALID_BUFFER_SLOT;
// If we disconnect and reconnect quickly, we can be in a state where
// our slots are empty but we have many buffers in the queue. This can
// cause us to run out of memory if we outrun the consumer. Wait here if
// it looks like we have too many buffers queued up.
const int maxBufferCount = mCore->getMaxBufferCountLocked();
bool tooManyBuffers = mCore->mQueue.size()
> static_cast<size_t>(maxBufferCount);
if (tooManyBuffers) {
BQ_LOGV("%s: queue size is %zu, waiting", callerString,
mCore->mQueue.size());
} else {
// If in shared buffer mode and a shared buffer exists, always
// return it.
if (mCore->mSharedBufferMode && mCore->mSharedBufferSlot !=
BufferQueueCore::INVALID_BUFFER_SLOT) {
*found = mCore->mSharedBufferSlot;
} else {
if (caller == FreeSlotCaller::Dequeue) {
// If we're calling this from dequeue, prefer free buffers
int slot = getFreeBufferLocked();
if (slot != BufferQueueCore::INVALID_BUFFER_SLOT) {
*found = slot;
} else if (mCore->mAllowAllocation) {
*found = getFreeSlotLocked();
}
} else {
// If we're calling this from attach, prefer free slots
int slot = getFreeSlotLocked();
if (slot != BufferQueueCore::INVALID_BUFFER_SLOT) {
*found = slot;
} else {
*found = getFreeBufferLocked();
}
}
}
}
// If no buffer is found, or if the queue has too many buffers
// outstanding, wait for a buffer to be acquired or released, or for the
// max buffer count to change.
tryAgain = (*found == BufferQueueCore::INVALID_BUFFER_SLOT) ||
tooManyBuffers;
if (tryAgain) {
// Return an error if we're in non-blocking mode (producer and
// consumer are controlled by the application).
// However, the consumer is allowed to briefly acquire an extra
// buffer (which could cause us to have to wait here), which is
// okay, since it is only used to implement an atomic acquire +
// release (e.g., in GLConsumer::updateTexImage())
if ((mCore->mDequeueBufferCannotBlock || mCore->mAsyncMode) &&
(acquiredCount <= mCore->mMaxAcquiredBufferCount)) {
return WOULD_BLOCK;
}
if (mDequeueTimeout >= 0) {
std::cv_status result = mCore->mDequeueCondition.wait_for(lock,
std::chrono::nanoseconds(mDequeueTimeout));
if (result == std::cv_status::timeout) {
return TIMED_OUT;
}
} else {
mCore->mDequeueCondition.wait(lock);
}
}
} // while (tryAgain)
return NO_ERROR;
}
- It walks mCore->mActiveBuffers and counts how many slots are already in the DEQUEUED or ACQUIRED state
- mSlots is a BufferSlot array holding all buffers; the "buffer" here is a GraphicBuffer, which is not the real memory the app draws into but stores the buffer's handle. mSlots has 64 entries
- If the number of dequeued buffers has already reached the maximum dequeuable count (3 by default), no more buffers are dequeued; this means the app has dequeued many buffers to draw into but has not finished them
- tooManyBuffers means too many buffers in mCore->mQueue have not been consumed yet, so dequeueBuffer also stops here; the mQueue queue in BufferQueueCore is the core data structure of the BufferQueue, and the buffers in it have been drawn by the app and are waiting to be composited
- getFreeBufferLocked takes a slot from mCore->mFreeBuffers; mFreeBuffers holds FREE slots whose buffer has already been allocated but is currently unused
- If that fails, getFreeSlotLocked() picks from mCore->mFreeSlots; mFreeSlots holds FREE slot indices that have never been used, i.e. no buffer has been allocated for them yet (a small sketch of this preference follows the list)
- Finally, if no buffer was found, it decides whether to block and keep waiting for one
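A tiny standalone sketch of the selection preference above (plain C++; the set names mirror the BufferQueueCore fields, everything else is illustrative): a dequeue caller prefers slots that already own a buffer and only falls back to empty slots when allocation is allowed.
#include <set>
constexpr int kInvalidSlot = -1;
// freeBuffers: FREE slots that already own a buffer.
// freeSlots:   FREE slots that never had a buffer allocated.
int pickSlotForDequeue(std::set<int>& freeBuffers, std::set<int>& freeSlots,
                       bool allowAllocation) {
    if (!freeBuffers.empty()) {                  // reuse an existing buffer first
        int slot = *freeBuffers.begin();
        freeBuffers.erase(freeBuffers.begin());
        return slot;
    }
    if (allowAllocation && !freeSlots.empty()) { // otherwise allocate into a fresh slot
        int slot = *freeSlots.begin();
        freeSlots.erase(freeSlots.begin());
        return slot;
    }
    return kInvalidSlot;                         // caller blocks or returns WOULD_BLOCK
}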
Back to BufferQueueProducer::dequeueBuffer:
status_t BufferQueueProducer::dequeueBuffer(int* outSlot, sp<android::Fence>* outFence,
uint32_t width, uint32_t height, PixelFormat format,
uint64_t usage, uint64_t* outBufferAge,
FrameEventHistoryDelta* outTimestamps) {
... ...
const sp<GraphicBuffer>& buffer(mSlots[found].mGraphicBuffer);
if (mCore->mSharedBufferSlot == found &&
buffer->needsReallocation(width, height, format, BQ_LAYER_COUNT, usage)) {
BQ_LOGE("dequeueBuffer: cannot re-allocate a shared"
"buffer");
return BAD_VALUE;
}
if (mCore->mSharedBufferSlot != found) {
mCore->mActiveBuffers.insert(found);
}
*outSlot = found;
ATRACE_BUFFER_INDEX(found);
attachedByConsumer = mSlots[found].mNeedsReallocation;
mSlots[found].mNeedsReallocation = false;
mSlots[found].mBufferState.dequeue();
if ((buffer == nullptr) ||
buffer->needsReallocation(width, height, format, BQ_LAYER_COUNT, usage))
{
mSlots[found].mAcquireCalled = false;
mSlots[found].mGraphicBuffer = nullptr;
mSlots[found].mRequestBufferCalled = false;
mSlots[found].mEglDisplay = EGL_NO_DISPLAY;
mSlots[found].mEglFence = EGL_NO_SYNC_KHR;
mSlots[found].mFence = Fence::NO_FENCE;
mCore->mBufferAge = 0;
mCore->mIsAllocating = true;
returnFlags |= BUFFER_NEEDS_REALLOCATION;
} else {
// We add 1 because that will be the frame number when this buffer
// is queued
mCore->mBufferAge = mCore->mFrameCounter + 1 - mSlots[found].mFrameNumber;
}
... ...
if (returnFlags & BUFFER_NEEDS_REALLOCATION) {
BQ_LOGV("dequeueBuffer: allocating a new buffer for slot %d", *outSlot);
sp<GraphicBuffer> graphicBuffer = new GraphicBuffer(
width, height, format, BQ_LAYER_COUNT, usage,
{mConsumerName.string(), mConsumerName.size()});
status_t error = graphicBuffer->initCheck();
{ // Autolock scope
std::lock_guard<std::mutex> lock(mCore->mMutex);
if (error == NO_ERROR && !mCore->mIsAbandoned) {
graphicBuffer->setGenerationNumber(mCore->mGenerationNumber);
mSlots[*outSlot].mGraphicBuffer = graphicBuffer;
if (mCore->mConsumerListener != nullptr) {
mCore->mConsumerListener->onFrameDequeued(
mSlots[*outSlot].mGraphicBuffer->getId());
}
}
mCore->mIsAllocating = false;
mCore->mIsAllocatingCondition.notify_all();
if (error != NO_ERROR) {
mCore->mFreeSlots.insert(*outSlot);
mCore->clearBufferSlotLocked(*outSlot);
BQ_LOGE("dequeueBuffer: createGraphicBuffer failed");
return error;
}
if (mCore->mIsAbandoned) {
mCore->mFreeSlots.insert(*outSlot);
mCore->clearBufferSlotLocked(*outSlot);
BQ_LOGE("dequeueBuffer: BufferQueue has been abandoned");
return NO_INIT;
}
VALIDATE_CONSISTENCY();
} // Autolock scope
}
... ...
return returnFlags;
}
- After finding the usable slot index found, the corresponding mGraphicBuffer is fetched and the slot is added to mActiveBuffers, the set of buffers currently in use
- If the buffer is null (or needs reallocation), a new buffer must be allocated, so the members of mSlots[found] are first reset to their initial values
- mSlots[*outSlot].mGraphicBuffer is then assigned the newly allocated buffer
- If allocation fails, the slot is put back into mFreeSlots
Finally, back to the second step of Surface::dequeueBuffer above, requestBuffer, which is likewise implemented in the SurfaceFlinger process:
frameworks/native/libs/gui/BufferQueueProducer.cpp
status_t BufferQueueProducer::requestBuffer(int slot, sp<GraphicBuffer>* buf) {
... ...
mSlots[slot].mRequestBufferCalled = true;
*buf = mSlots[slot].mGraphicBuffer;
return NO_ERROR;
}
The buffers allocated so far all live in the SurfaceFlinger process; they have not yet been handed to the app process for drawing. So how does mGraphicBuffer reach the app process? BpGraphicBufferProducer makes the remote REQUEST_BUFFER call, and the server writes the GraphicBuffer into the reply parcel for the proxy side to read back:
class BpGraphicBufferProducer : public BpInterface<IGraphicBufferProducer>
{
public:
explicit BpGraphicBufferProducer(const sp<IBinder>& impl)
: BpInterface<IGraphicBufferProducer>(impl)
{
}
~BpGraphicBufferProducer() override;
virtual status_t requestBuffer(int bufferIdx, sp<GraphicBuffer>* buf) {
Parcel data, reply;
data.writeInterfaceToken(IGraphicBufferProducer::getInterfaceDescriptor());
data.writeInt32(bufferIdx);
status_t result =remote()->transact(REQUEST_BUFFER, data, &reply);
if (result != NO_ERROR) {
return result;
}
bool nonNull = reply.readInt32();
if (nonNull) {
*buf = new GraphicBuffer();
result = reply.read(**buf);
if(result != NO_ERROR) {
(*buf).clear();
return result;
}
}
result = reply.readInt32();
return result;
}
When the server receives the REQUEST_BUFFER request, it writes the GraphicBuffer it created into the reply, so the Bp side can obtain the Bn side's GraphicBuffer:
status_t BnGraphicBufferProducer::onTransact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
switch(code) {
case REQUEST_BUFFER: {
CHECK_INTERFACE(IGraphicBufferProducer, data, reply);
int bufferIdx = data.readInt32();
sp<GraphicBuffer> buffer;
int result = requestBuffer(bufferIdx, &buffer);
reply->writeInt32(buffer != nullptr);
if (buffer != nullptr) {
reply->write(*buffer);
}
reply->writeInt32(result);
return NO_ERROR;
}
Summary:
The BufferQueueCore arrays and queues used in the flow above are:
- mSlots: a BufferSlot array, 64 entries by default; this array is mirrored into the BufferQueueProducer/BufferQueueConsumer classes
- mQueue: a FIFO of BufferItem; when the producer calls queueBuffer, the item is queued here
- mFreeSlots: the set of BufferSlots in the FREE state with no GraphicBuffer bound
- mFreeBuffers: the set of BufferSlots in the FREE state with a GraphicBuffer bound
- mActiveBuffers: the set of BufferSlots in a non-FREE state with a GraphicBuffer bound
dequeueBuffer flow summary:
- Surface.lockCanvas at the app layer eventually triggers dequeueBuffer (an NDK-level sketch of the app side follows this list)
- BufferQueueProducer finds a usable slot index, searching mFreeBuffers first and then mFreeSlots
- If a buffer needs to be allocated, a GraphicBuffer is created and passed back to the app side. Note that this does not transfer the real pixel buffer; only a struct wrapping the buffer handle and its properties is passed
- The buffer is marked DEQUEUED and the slot is added to mActiveBuffers
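To see this cycle from the app side, here is a small NDK-level sketch (assuming a valid Java Surface is passed in as a jobject; error handling trimmed). ANativeWindow_lock triggers dequeueBuffer/requestBuffer under the hood, and ANativeWindow_unlockAndPost triggers queueBuffer:
#include <android/native_window.h>
#include <android/native_window_jni.h>
#include <jni.h>
#include <cstdint>
// Draw one solid-color frame into a Surface via the NDK. Locking the window
// dequeues a buffer from the BufferQueue; unlockAndPost queues it back.
void drawOneFrame(JNIEnv* env, jobject javaSurface) {
    ANativeWindow* window = ANativeWindow_fromSurface(env, javaSurface);
    if (window == nullptr) return;
    ANativeWindow_setBuffersGeometry(window, 0, 0, WINDOW_FORMAT_RGBA_8888);
    ANativeWindow_Buffer buffer;
    if (ANativeWindow_lock(window, &buffer, nullptr) == 0) {  // dequeueBuffer (+ requestBuffer)
        // buffer.bits points at the mapped graphics memory of the dequeued slot.
        uint32_t* pixels = static_cast<uint32_t*>(buffer.bits);
        for (int y = 0; y < buffer.height; ++y) {
            for (int x = 0; x < buffer.width; ++x) {
                pixels[y * buffer.stride + x] = 0xFF2266AAu;  // one RGBA_8888 pixel
            }
        }
        ANativeWindow_unlockAndPost(window);                  // queueBuffer
    }
    ANativeWindow_release(window);
}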
queueBuffer
After dequeueBuffer the app can start drawing. Once drawing finishes, the drawn buffer is submitted via queueBuffer into BufferQueueCore's mQueue to wait for composition.
When the render thread finishes drawing it calls Surface.queueBuffer() -> mGraphicBufferProducer->queueBuffer() -> BufferQueueProducer::queueBuffer; let's follow the buffer from there:
frameworks/native/libs/gui/BufferQueueProducer.cpp
status_t BufferQueueProducer::queueBuffer(int slot,
const QueueBufferInput &input, QueueBufferOutput *output) {
ATRACE_CALL();
ATRACE_BUFFER_INDEX(slot);
... ...
sp<IConsumerListener> frameAvailableListener;
sp<IConsumerListener> frameReplacedListener;
int callbackTicket = 0;
uint64_t currentFrameNumber = 0;
BufferItem item;
{ // Autolock scope
... ...
// Increment the frame counter and store a local version of it
// for use outside the lock on mCore->mMutex.
++mCore->mFrameCounter;
currentFrameNumber = mCore->mFrameCounter;
mSlots[slot].mFrameNumber = currentFrameNumber;
item.mAcquireCalled = mSlots[slot].mAcquireCalled;
item.mGraphicBuffer = mSlots[slot].mGraphicBuffer;
item.mCrop = crop;
item.mTransform = transform &
~static_cast<uint32_t>(NATIVE_WINDOW_TRANSFORM_INVERSE_DISPLAY);
item.mTransformToDisplayInverse =
(transform & NATIVE_WINDOW_TRANSFORM_INVERSE_DISPLAY) != 0;
item.mScalingMode = static_cast<uint32_t>(scalingMode);
item.mTimestamp = requestedPresentTimestamp;
item.mIsAutoTimestamp = isAutoTimestamp;
item.mDataSpace = dataSpace;
item.mHdrMetadata = hdrMetadata;
item.mFrameNumber = currentFrameNumber;
item.mSlot = slot;
item.mFence = acquireFence;
item.mFenceTime = acquireFenceTime;
item.mIsDroppable = mCore->mAsyncMode ||
(mConsumerIsSurfaceFlinger && mCore->mQueueBufferCanDrop) ||
(mCore->mLegacyBufferDrop && mCore->mQueueBufferCanDrop) ||
(mCore->mSharedBufferMode && mCore->mSharedBufferSlot == slot);
item.mSurfaceDamage = surfaceDamage;
item.mQueuedBuffer = true;
item.mAutoRefresh = mCore->mSharedBufferMode && mCore->mAutoRefresh;
item.mApi = mCore->mConnectedApi;
... ...
output->bufferReplaced = false;
if (mCore->mQueue.empty()) {
// When the queue is empty, we can ignore mDequeueBufferCannotBlock
// and simply queue this buffer
mCore->mQueue.push_back(item);
frameAvailableListener = mCore->mConsumerListener;
} else {
// When the queue is not empty, we need to look at the last buffer
// in the queue to see if we need to replace it
const BufferItem& last = mCore->mQueue.itemAt(
mCore->mQueue.size() - 1);
if (last.mIsDroppable) {
... ...
} else {
mCore->mQueue.push_back(item);
frameAvailableListener = mCore->mConsumerListener;
}
}
mCore->mBufferHasBeenQueued = true;
mCore->mDequeueCondition.notify_all();
mCore->mLastQueuedSlot = slot;
... ...
int connectedApi;
sp<Fence> lastQueuedFence;
{ // scope for the lock
std::unique_lock<std::mutex> lock(mCallbackMutex);
while (callbackTicket != mCurrentCallbackTicket) {
mCallbackCondition.wait(lock);
}
if (frameAvailableListener != nullptr) {
frameAvailableListener->onFrameAvailable(item);
} else if (frameReplacedListener != nullptr) {
frameReplacedListener->onFrameReplaced(item);
}
connectedApi = mCore->mConnectedApi;
lastQueuedFence = std::move(mLastQueueBufferFence);
mLastQueueBufferFence = std::move(acquireFence);
mLastQueuedCrop = item.mCrop;
mLastQueuedTransform = item.mTransform;
++mCurrentCallbackTicket;
mCallbackCondition.notify_all();
}
return NO_ERROR;
}
- The mGraphicBuffer is looked up in mSlots[slot] by the slot index
- A BufferItem is built; the elements of the BufferQueue are BufferItems, and a BufferItem wraps the GraphicBuffer together with all the other per-buffer information
- The BufferItem is pushed onto mQueue; the entries in mQueue are buffers the app has finished drawing that are waiting for composition (a conceptual sketch of the droppable-tail case follows this list)
- The consumer is notified via frameAvailableListener to consume the buffer
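A conceptual sketch of the droppable-tail branch above (plain C++; QueuedFrame and the deque are illustrative, not the real BufferItem/Fifo types): in synchronous mode frames accumulate, while in the droppable cases the stale tail is simply replaced so the consumer only sees the newest frame. The real code additionally frees the replaced slot and uses frameReplacedListener instead of frameAvailableListener.
#include <deque>
// Illustrative stand-in for BufferItem.
struct QueuedFrame {
    int slot;
    bool isDroppable;   // true in async mode or when the queue is allowed to drop
};
// Mirrors the queueBuffer branch: push when the tail must be kept,
// otherwise overwrite the droppable tail with the new frame.
void enqueueFrame(std::deque<QueuedFrame>& queue, const QueuedFrame& item) {
    if (queue.empty() || !queue.back().isDroppable) {
        queue.push_back(item);   // synchronous mode: every frame is kept for the consumer
    } else {
        queue.back() = item;     // droppable: the newest frame replaces the stale tail
    }
}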
Next, the acquireBuffer flow starts when the consumer receives the frameAvailable callback.
acquireBuffer
ConsumerBase::onFrameAvailable kicks off a composition request:
-> SurfaceFlinger::handleMessageInvalidate()
-> SurfaceFlinger::handlePageFlip()
-> Layer::latchBuffer
-> BufferQueueConsumer::acquireBuffer
The main logic of acquireBuffer lives in BufferQueueConsumer::acquireBuffer:
frameworks/native/libs/gui/BufferQueueConsumer.cpp
status_t BufferQueueConsumer::acquireBuffer(BufferItem* outBuffer,
nsecs_t expectedPresent, uint64_t maxFrameNumber) {
ATRACE_CALL();
int numDroppedBuffers = 0;
sp<IProducerListener> listener;
{
std::unique_lock<std::mutex> lock(mCore->mMutex);
// Check that the consumer doesn't currently have the maximum number of
// buffers acquired. We allow the max buffer count to be exceeded by one
// buffer so that the consumer can successfully set up the newly acquired
// buffer before releasing the old one.
int numAcquiredBuffers = 0;
for (int s : mCore->mActiveBuffers) {
if (mSlots[s].mBufferState.isAcquired()) {
++numAcquiredBuffers;
}
}
//If the number of acquired buffers already exceeds the max acquirable count by one, give up this acquire
if (numAcquiredBuffers >= mCore->mMaxAcquiredBufferCount + 1) {
BQ_LOGE("acquireBuffer: max acquired buffer count reached: %d (max %d)",
numAcquiredBuffers, mCore->mMaxAcquiredBufferCount);
return INVALID_OPERATION;
}
bool sharedBufferAvailable = mCore->mSharedBufferMode &&
mCore->mAutoRefresh && mCore->mSharedBufferSlot !=
BufferQueueCore::INVALID_BUFFER_SLOT;
// In asynchronous mode the list is guaranteed to be one buffer deep,
// while in synchronous mode we use the oldest buffer.
if (mCore->mQueue.empty() && !sharedBufferAvailable) {
return NO_BUFFER_AVAILABLE;
}
//Take the first buffer in mQueue
BufferQueueCore::Fifo::iterator front(mCore->mQueue.begin());
// If expectedPresent is specified, we may not want to return a buffer yet.
// If it's specified and there's more than one buffer queued, we may want
// to drop a buffer.
// Skip this if we're in shared buffer mode and the queue is empty,
// since in that case we'll just return the shared buffer.
if (expectedPresent != 0 && !mCore->mQueue.empty()) {
//If mQueue holds more than one buffer, drop the earlier ones and acquire the head of the queue
while (mCore->mQueue.size() > 1 && !mCore->mQueue[0].mIsAutoTimestamp) {
const BufferItem& bufferItem(mCore->mQueue[1]);
// If dropping entry[0] would leave us with a buffer that the
// consumer is not yet ready for, don't drop it.
if (maxFrameNumber && bufferItem.mFrameNumber > maxFrameNumber) {
break;
}
// If entry[1] is timely, drop entry[0] (and repeat). We apply an
// additional criterion here: we only drop the earlier buffer if our
// desiredPresent falls within +/- 1 second of the expected present.
// Otherwise, bogus desiredPresent times (e.g., 0 or a small
// relative timestamp), which normally mean "ignore the timestamp
// and acquire immediately", would cause us to drop frames.
//
// We may want to add an additional criterion: don't drop the
// earlier buffer if entry[1]'s fence hasn't signaled yet.
nsecs_t desiredPresent = bufferItem.mTimestamp;
if (desiredPresent < expectedPresent - MAX_REASONABLE_NSEC ||
desiredPresent > expectedPresent) {
// This buffer is set to display in the near future, or
// desiredPresent is garbage. Either way we don't want to drop
// the previous buffer just to get this on the screen sooner.
BQ_LOGV("acquireBuffer: nodrop desire=%" PRId64 " expect=%"
PRId64 " (%" PRId64 ") now=%" PRId64,
desiredPresent, expectedPresent,
desiredPresent - expectedPresent,
systemTime(CLOCK_MONOTONIC));
break;
}
BQ_LOGV("acquireBuffer: drop desire=%" PRId64 " expect=%" PRId64
" size=%zu",
desiredPresent, expectedPresent, mCore->mQueue.size());
if (!front->mIsStale) {
// Front buffer is still in mSlots, so mark the slot as free
mSlots[front->mSlot].mBufferState.freeQueued();
// After leaving shared buffer mode, the shared buffer will
// still be around. Mark it as no longer shared if this
// operation causes it to be free.
if (!mCore->mSharedBufferMode &&
mSlots[front->mSlot].mBufferState.isFree()) {
mSlots[front->mSlot].mBufferState.mShared = false;
}
// Don't put the shared buffer on the free list
if (!mSlots[front->mSlot].mBufferState.isShared()) {
mCore->mActiveBuffers.erase(front->mSlot);
mCore->mFreeBuffers.push_back(front->mSlot);
}
if (mCore->mBufferReleasedCbEnabled) {
listener = mCore->mConnectedProducerListener;
}
++numDroppedBuffers;
}
mCore->mQueue.erase(front);
front = mCore->mQueue.begin();
}
... ...
int slot = BufferQueueCore::INVALID_BUFFER_SLOT;
// In shared buffer mode, return the shared buffer mSharedBufferSlot even if mQueue is empty
if (sharedBufferAvailable && mCore->mQueue.empty()) {
// make sure the buffer has finished allocating before acquiring it
mCore->waitWhileAllocatingLocked(lock);
slot = mCore->mSharedBufferSlot;
// Recreate the BufferItem for the shared buffer from the data that
// was cached when it was last queued.
outBuffer->mGraphicBuffer = mSlots[slot].mGraphicBuffer;
outBuffer->mFence = Fence::NO_FENCE;
outBuffer->mFenceTime = FenceTime::NO_FENCE;
outBuffer->mCrop = mCore->mSharedBufferCache.crop;
outBuffer->mTransform = mCore->mSharedBufferCache.transform &
~static_cast<uint32_t>(
NATIVE_WINDOW_TRANSFORM_INVERSE_DISPLAY);
outBuffer->mScalingMode = mCore->mSharedBufferCache.scalingMode;
outBuffer->mDataSpace = mCore->mSharedBufferCache.dataspace;
outBuffer->mFrameNumber = mCore->mFrameCounter;
outBuffer->mSlot = slot;
outBuffer->mAcquireCalled = mSlots[slot].mAcquireCalled;
outBuffer->mTransformToDisplayInverse =
(mCore->mSharedBufferCache.transform &
NATIVE_WINDOW_TRANSFORM_INVERSE_DISPLAY) != 0;
outBuffer->mSurfaceDamage = Region::INVALID_REGION;
outBuffer->mQueuedBuffer = false;
outBuffer->mIsStale = false;
outBuffer->mAutoRefresh = mCore->mSharedBufferMode &&
mCore->mAutoRefresh;
//Otherwise return the buffer at the front of mQueue
} else {
slot = front->mSlot;
*outBuffer = *front;
}
ATRACE_BUFFER_INDEX(slot);
BQ_LOGV("acquireBuffer: acquiring { slot=%d/%" PRIu64 " buffer=%p }",
slot, outBuffer->mFrameNumber, outBuffer->mGraphicBuffer->handle);
// If the buffer has previously been acquired by the consumer, set
// mGraphicBuffer to NULL to avoid unnecessarily remapping this buffer
// on the consumer side
if (outBuffer->mAcquireCalled) {
outBuffer->mGraphicBuffer = nullptr;
}
//Remove front from the queue
mCore->mQueue.erase(front);
return NO_ERROR;
}
acquireBuffer is fairly long; essentially it returns the qualifying buffer from mQueue:
- If mQueue holds more than one entry, i.e. the app has more than one buffer waiting for composition, it decides whether to drop buffers based on the expected present time expectedPresent and whether the buffer can be released (a sketch of this timing decision follows the list)
- The first buffer in mQueue is taken as the result
- Shared-buffer mode is handled separately
- The BufferState is set to ACQUIRED and the buffer is removed from mQueue
acquireBuffer contains plenty of comments and logs; whenever a buffer is dropped a log line is emitted, so when analyzing a concrete issue you can simply enable those logs.
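A small sketch of the drop decision above (plain C++; the function and parameter names are illustrative). The older frame is only dropped when the next frame's desired present time falls within the one second before the expected present time, so garbage timestamps do not cause frames to be dropped:
#include <cstdint>
using nsecs_t = int64_t;
constexpr nsecs_t kMaxReasonableNsec = 1000000000LL;  // 1 second, mirrors MAX_REASONABLE_NSEC
// Returns true if the front (older) queued frame should be dropped in favor
// of the next one; desiredPresent is the next frame's requested present time.
bool shouldDropFront(nsecs_t desiredPresent, nsecs_t expectedPresent) {
    // The next frame is meant for the future, or its timestamp is bogus
    // (e.g. 0 or a small relative value): keep the front frame on screen.
    if (desiredPresent < expectedPresent - kMaxReasonableNsec ||
        desiredPresent > expectedPresent) {
        return false;
    }
    // The next frame is due now (or overdue): the older frame is stale, drop it.
    return true;
}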
releaseBuffer
After the consumer acquires a buffer, the releaseBuffer flow is triggered:
BufferLayerConsumer::updateTexImage
-> BufferLayerConsumer::updateAndReleaseLocked
-> ConsumerBase::releaseBufferLocked
-> BufferQueueConsumer::releaseBuffer
Let's first take a quick look at updateTexImage:
frameworks/native/services/surfaceflinger/BufferLayerConsumer.cpp
status_t BufferLayerConsumer::updateTexImage(BufferRejecter* rejecter, nsecs_t expectedPresentTime,
bool* autoRefresh, bool* queuedBuffer,
uint64_t maxFrameNumber) {
ATRACE_CALL();
BLC_LOGV("updateTexImage");
Mutex::Autolock lock(mMutex);
if (mAbandoned) {
BLC_LOGE("updateTexImage: BufferLayerConsumer is abandoned!");
return NO_INIT;
}
BufferItem item;
// Acquire the next buffer.
// In asynchronous mode the list is guaranteed to be one buffer
// deep, while in synchronous mode we use the oldest buffer.
//Acquire a buffer from the BufferQueue
status_t err = acquireBufferLocked(&item, expectedPresentTime, maxFrameNumber);
if (autoRefresh) {
*autoRefresh = item.mAutoRefresh;
}
if (queuedBuffer) {
*queuedBuffer = item.mQueuedBuffer;
}
// We call the rejecter here, in case the caller has a reason to
// not accept this buffer. This is used by SurfaceFlinger to
// reject buffers which have the wrong size
int slot = item.mSlot;
//Use the LayerRejecter to check whether the buffer qualifies; if not, release it directly
if (rejecter && rejecter->reject(mSlots[slot].mGraphicBuffer, item)) {
releaseBufferLocked(slot, mSlots[slot].mGraphicBuffer);
return BUFFER_REJECTED;
}
// Release the previous buffer.
//Update to the new buffer and release the previous one
err = updateAndReleaseLocked(item, &mPendingRelease);
if (err != NO_ERROR) {
return err;
}
return err;
}
status_t BufferLayerConsumer::updateAndReleaseLocked(const BufferItem& item,
PendingRelease* pendingRelease) {
... ...
// release old buffer
if (mCurrentTexture != BufferQueue::INVALID_BUFFER_SLOT) {
if (pendingRelease == nullptr) {
status_t status =
releaseBufferLocked(mCurrentTexture, mCurrentTextureBuffer->graphicBuffer());
if (status < NO_ERROR) {
BLC_LOGE("updateAndRelease: failed to release buffer: %s (%d)", strerror(-status),
status);
err = status;
// keep going, with error raised [?]
}
} else {
pendingRelease->currentTexture = mCurrentTexture;
pendingRelease->graphicBuffer = mCurrentTextureBuffer->graphicBuffer();
pendingRelease->isPending = true;
}
}
// Update the BufferLayerConsumer state.
mCurrentTexture = slot;
mCurrentTextureBuffer = nextTextureBuffer;
mCurrentCrop = item.mCrop;
mCurrentTransform = item.mTransform;
mCurrentScalingMode = item.mScalingMode;
mCurrentTimestamp = item.mTimestamp;
mCurrentDataSpace = static_cast<ui::Dataspace>(item.mDataSpace);
mCurrentHdrMetadata = item.mHdrMetadata;
mCurrentFence = item.mFence;
mCurrentFenceTime = item.mFenceTime;
mCurrentFrameNumber = item.mFrameNumber;
mCurrentTransformToDisplayInverse = item.mTransformToDisplayInverse;
mCurrentSurfaceDamage = item.mSurfaceDamage;
mCurrentApi = item.mApi;
computeCurrentTransformMatrixLocked();
return err;
}
updateAndReleaseLocked mainly releases the previous buffer and then updates the BufferLayerConsumer state.
Finally the call reaches BufferQueueConsumer::releaseBuffer:
frameworks/native/libs/gui/BufferQueueConsumer.cpp
status_t BufferQueueConsumer::releaseBuffer(int slot, uint64_t frameNumber,
const sp<Fence>& releaseFence, EGLDisplay eglDisplay,
EGLSyncKHR eglFence) {
... ...
sp<IProducerListener> listener;
{ // Autolock scope
std::lock_guard<std::mutex> lock(mCore->mMutex);
// If the frame number has changed because the buffer has been reallocated,
// we can ignore this releaseBuffer for the old buffer.
// Ignore this for the shared buffer where the frame number can easily
// get out of sync due to the buffer being queued and acquired at the
// same time.
if (frameNumber != mSlots[slot].mFrameNumber &&
!mSlots[slot].mBufferState.isShared()) {
return STALE_BUFFER_SLOT;
}
if (!mSlots[slot].mBufferState.isAcquired()) {
BQ_LOGE("releaseBuffer: attempted to release buffer slot %d "
"but its state was %s", slot,
mSlots[slot].mBufferState.string());
return BAD_VALUE;
}
mSlots[slot].mEglDisplay = eglDisplay;
mSlots[slot].mEglFence = eglFence;
mSlots[slot].mFence = releaseFence;
mSlots[slot].mBufferState.release();
// After leaving shared buffer mode, the shared buffer will
// still be around. Mark it as no longer shared if this
// operation causes it to be free.
if (!mCore->mSharedBufferMode && mSlots[slot].mBufferState.isFree()) {
mSlots[slot].mBufferState.mShared = false;
}
// Don't put the shared buffer on the free list.
if (!mSlots[slot].mBufferState.isShared()) {
mCore->mActiveBuffers.erase(slot);
mCore->mFreeBuffers.push_back(slot);
}
if (mCore->mBufferReleasedCbEnabled) {
listener = mCore->mConnectedProducerListener;
}
BQ_LOGV("releaseBuffer: releasing slot %d", slot);
mCore->mDequeueCondition.notify_all();
VALIDATE_CONSISTENCY();
} // Autolock scope
// Call back without lock held
if (listener != nullptr) {
listener->onBufferReleased();
}
return NO_ERROR;
}
releaseBuffer first moves the slot into the released (FREE) state and then puts it back into mFreeBuffers. The producer may dequeue a buffer from mFreeBuffers again, but it must wait for the releaseFence to signal before writing into it; a fence-wait sketch follows.
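The release fence travels back to the producer as a file descriptor. A minimal sketch of waiting for it to signal before writing into the dequeued buffer, using plain POSIX poll() rather than the libui Fence class (which does essentially the same thing internally):
#include <cerrno>
#include <poll.h>
// Wait for a sync fence fd to signal; a fence fd reports POLLIN once the
// GPU/display is done with the buffer. Returns true when it has signaled.
bool waitForFence(int fenceFd, int timeoutMs) {
    if (fenceFd < 0) {
        return true;  // "no fence" means the buffer is already safe to write
    }
    struct pollfd pfd = { fenceFd, POLLIN, 0 };
    int ret;
    do {
        ret = poll(&pfd, 1, timeoutMs);
    } while (ret == -1 && errno == EINTR);
    return ret > 0;   // the caller closes the fd once it is done with it
}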
Summary
The buffer flow above involves the following containers that track buffer state (a small state-machine sketch follows the list):
- mSlots: the array holding all buffer slots. It starts out empty; when the requestBuffer flow runs, a real buffer is allocated for the corresponding slot index.
- mQueue: a first-in-first-out Vector used in synchronous mode; it holds the buffers in the QUEUED state.
- mFreeSlots: the set of slot indices that are FREE and have no buffer allocated yet. Initially mFreeSlots is seeded with MaxBufferCount slot indices, and dequeueBuffer picks from here when no already-allocated free buffer is available. Note that when the consumer finishes with a buffer and releases it, the slot does not return here but goes to mFreeBuffers.
- mFreeBuffers: the set of slot indices that are FREE and already have a buffer allocated.
- mUnusedSlots: similar to mFreeSlots, except that the indices in mFreeSlots will eventually be used while those in mUnusedSlots will not; in other words, out of the NUM_BUFFER_SLOTS total indices, it is what remains after the MaxBufferCount indices in mFreeSlots are taken out.
- mActiveBuffers: all slots in a non-FREE state, i.e. DEQUEUED, QUEUED, ACQUIRED, or SHARED.
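For reference, a compact model of the slot state machine driven by the four operations (plain C++; the enum and transition function are illustrative, the real BufferState class tracks more detail such as shared-buffer handling):
#include <cstdio>
// Illustrative slot states, matching the names used above.
enum class SlotState { FREE, DEQUEUED, QUEUED, ACQUIRED };
enum class Op { Dequeue, Queue, Acquire, Release };
// Returns the next state, or the same state if the transition is not legal
// (the real BufferQueue returns an error such as BAD_VALUE in that case).
SlotState transition(SlotState s, Op op) {
    switch (op) {
        case Op::Dequeue: return s == SlotState::FREE     ? SlotState::DEQUEUED : s;
        case Op::Queue:   return s == SlotState::DEQUEUED ? SlotState::QUEUED   : s;
        case Op::Acquire: return s == SlotState::QUEUED   ? SlotState::ACQUIRED : s;
        case Op::Release: return s == SlotState::ACQUIRED ? SlotState::FREE     : s;
    }
    return s;
}
int main() {
    SlotState s = SlotState::FREE;
    for (Op op : {Op::Dequeue, Op::Queue, Op::Acquire, Op::Release}) {
        s = transition(s, op);
    }
    std::printf("back to FREE: %s\n", s == SlotState::FREE ? "yes" : "no");
    return 0;
}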
4.2 The BlastBufferQueue mechanism
4.2.1 What is BlastBufferQueue?
BlastBufferQueue was added in Android 11 as an extension of BufferQueue. Its main purpose is to composite the drawing buffers of multiple app processes in sync with the system's changes to window properties.
In Android 11 its main use case is split screen: SystemUI can have the drawn buffers of the app processes, the system_server transaction, and SystemUI's own transaction composited together.
4.2.2 How applySyncTransaction implements synchronized control
A SystemUI usage example
When entering or exiting split screen, SystemUI calls applySyncTransaction to manipulate Tasks (reparent, setPosition, and so on). Starting from the implementation of applySyncTransaction, let's look at how SystemUI synchronizes the drawing of multiple apps and the task property changes into one final composition.
frameworks/base/services/core/java/com/android/server/wm/WindowOrganizerController.java
@Override
public int applySyncTransaction(WindowContainerTransaction t,
IWindowContainerTransactionCallback callback) {
try {
synchronized (mGlobalLock) {
int effects = 0;
if (callback != null) {
syncId = startSyncWithOrganizer(callback);
}
mService.deferWindowLayout();
try {
// Hierarchy changes
final List<WindowContainerTransaction.HierarchyOp> hops = t.getHierarchyOps();
for (int i = 0, n = hops.size(); i < n; ++i) {
final WindowContainerTransaction.HierarchyOp hop = hops.get(i);
final WindowContainer wc = WindowContainer.fromBinder(hop.getContainer());
if (!wc.isAttached()) {
Slog.e(TAG, "Attempt to operate on detached container: " + wc);
continue;
}
if (syncId >= 0) {
addToSyncSet(syncId, wc);
}
effects |= sanitizeAndApplyHierarchyOp(wc, hop);
}
... ...
return syncId;
}
- startSyncWithOrganizer obtains a syncId
- It iterates over every WindowContainer (wc) in the WindowContainerTransaction passed in by SystemUI and calls addToSyncSet, which in turn calls prepareForSync on each wc
- As with a non-synchronized transaction, sanitizeAndApplyHierarchyOp is called so that the task properties passed from SystemUI (reparent, position, and so on) take effect in WMS; the difference is that a normal transaction is sent to SurfaceFlinger right away, while the synchronized one waits until the apps finish drawing, is sent back to SystemUI as a whole, and only then goes to SurfaceFlinger for composition
The key part is WindowState's prepareForSync:
frameworks/base/services/core/java/com/android/server/wm/WindowState.java
@Override
boolean prepareForSync(BLASTSyncEngine.TransactionReadyListener waitingListener,
int waitingId) {
boolean willSync = setPendingListener(waitingListener, waitingId);
if (!willSync) {
return false;
}
requestRedrawForSync();
mLocalSyncId = mBLASTSyncEngine.startSyncSet(this);
addChildrenToSyncSet(mLocalSyncId);
// In the WindowContainer implementation we immediately mark ready
// since a generic WindowContainer only needs to wait for its
// children to finish and is immediately ready from its own
// perspective but at the WindowState level we need to wait for ourselves
// to draw even if the children draw first our don't need to sync, so we omit
// the set ready call until later in finishDrawing()
mWmService.mH.removeMessages(WINDOW_STATE_BLAST_SYNC_TIMEOUT, this);
mWmService.mH.sendNewMessageDelayed(WINDOW_STATE_BLAST_SYNC_TIMEOUT, this,
BLAST_TIMEOUT_DURATION);
return true;
}
setPendingListener sets mUsingBLASTSyncTransaction to true, so that after drawing is triggered useBLASTSync returns true.
When the app draws, it first calls relayoutWindow in WMS; if useBLASTSync is true, the RELAYOUT_RES_BLAST_SYNC flag is added to the returned result.
Back to the drawing on the app side:
frameworks/base/core/java/android/view/ViewRootImpl.java
private void performTraversals() {
... ...
if ((relayoutResult & WindowManagerGlobal.RELAYOUT_RES_BLAST_SYNC) != 0) {
reportNextDraw();
setUseBLASTSyncTransaction();
mSendNextFrameToWm = true;
}
This mainly sets mNextDrawUseBLASTSyncTransaction to true.
private void performDraw() {
... ...
boolean usingAsyncReport = false;
boolean reportNextDraw = mReportNextDraw; // Capture the original value
if (mAttachInfo.mThreadedRenderer != null && mAttachInfo.mThreadedRenderer.isEnabled()) {
ArrayList<Runnable> commitCallbacks = mAttachInfo.mTreeObserver
.captureFrameCommitCallbacks();
final boolean needFrameCompleteCallback = mNextDrawUseBLASTSyncTransaction ||
(commitCallbacks != null && commitCallbacks.size() > 0) ||
mReportNextDraw;
usingAsyncReport = mReportNextDraw;
if (needFrameCompleteCallback) {
final Handler handler = mAttachInfo.mHandler;
//This fires after the render thread swaps buffers, at which point
//mRtBLASTSyncTransaction already contains all of the next frame's app-side updates.
//On this callback, ViewRootImpl merges everything in mRtBLASTSyncTransaction
//into mSurfaceChangedTransaction.
mAttachInfo.mThreadedRenderer.setFrameCompleteCallback((long frameNr) -> {
finishBLASTSync(!mSendNextFrameToWm);
handler.postAtFrontOfQueue(() -> {
if (reportNextDraw) {
// TODO: Use the frame number
pendingDrawFinished();
}
if (commitCallbacks != null) {
for (int i = 0; i < commitCallbacks.size(); i++) {
commitCallbacks.get(i).run();
}
}
});});
}
}
if (mNextDrawUseBLASTSyncTransaction) {
Slog.e(TAG, "lzh_sync performDraw 让mRtBLASTSyncTransaction准备接管BBQ下一帧提交");
// TODO(b/149747443)
// We aren't prepared to handle overlapping use of mRtBLASTSyncTransaction
// so if we are BLAST syncing we make sure the previous draw has
// totally finished.
if (mAttachInfo.mThreadedRenderer != null) {
mAttachInfo.mThreadedRenderer.pause();
}
mNextReportConsumeBLAST = true;
mNextDrawUseBLASTSyncTransaction = false;
if (mBlastBufferQueue != null) {
mBlastBufferQueue.setNextTransaction(mRtBLASTSyncTransaction);
}
}
... ...
The key line is mBlastBufferQueue.setNextTransaction(mRtBLASTSyncTransaction):
frameworks/native/libs/gui/BLASTBufferQueue.cpp
void BLASTBufferQueue::setNextTransaction(SurfaceComposerClient::Transaction* t) {
std::lock_guard _lock{mMutex};
mNextTransaction = t;
}
After this frame is drawn, queueBuffer notifies that a buffer is waiting for composition and onFrameAvailable is called back. It calls mCallbackCV.wait(_lock) to block, which prevents another new buffer from being drawn and submitted for composition in the meantime.
void BLASTBufferQueue::onFrameAvailable(const BufferItem& /*item*/) {
ATRACE_CALL();
std::unique_lock _lock{mMutex};
if (mNextTransaction != nullptr) {
while (mNumFrameAvailable > 0 || mNumAcquired == MAX_ACQUIRED_BUFFERS + 1) {
mCallbackCV.wait(_lock);
}
}
// add to shadow queue
mNumFrameAvailable++;
processNextBufferLocked(true);
}
After SurfaceFlinger has composited the current frame, transactionCallback is invoked:
void BLASTBufferQueue::transactionCallback(nsecs_t /*latchTime*/, const sp<Fence>& /*presentFence*/,
const std::vector<SurfaceControlStats>& stats) {
std::unique_lock _lock{mMutex};
ATRACE_CALL();
... ...
mCallbackCV.notify_all();
decStrong((void*)transactionCallbackThunk);
}
After being woken up, execution continues in processNextBufferLocked:
void BLASTBufferQueue::processNextBufferLocked(bool useNextTransaction) {
SurfaceComposerClient::Transaction localTransaction;
bool applyTransaction = true;
SurfaceComposerClient::Transaction* t = &localTransaction;
if (mNextTransaction != nullptr && useNextTransaction) {
t = mNextTransaction;
mNextTransaction = nullptr;
applyTransaction = false;
}
BufferItem bufferItem;
status_t status = mBufferItemConsumer->acquireBuffer(&bufferItem, -1, false);
if (status != OK) {
return;
}
auto buffer = bufferItem.mGraphicBuffer;
mNumFrameAvailable--;
... ...
t->setBuffer(mSurfaceControl, buffer);
t->setAcquireFence(mSurfaceControl,
bufferItem.mFence ? new Fence(bufferItem.mFence->dup()) : Fence::NO_FENCE);
t->addTransactionCompletedCallback(transactionCallbackThunk, static_cast<void*>(this));
t->setFrame(mSurfaceControl, {0, 0, mWidth, mHeight});
t->setCrop(mSurfaceControl, computeCrop(bufferItem));
t->setTransform(mSurfaceControl, bufferItem.mTransform);
t->setTransformToDisplayInverse(mSurfaceControl, bufferItem.mTransformToDisplayInverse);
t->setDesiredPresentTime(bufferItem.mTimestamp);
if (applyTransaction) {
t->apply();
}
}
- The app's drawn buffer is acquired and set on mNextTransaction via t->setBuffer
- Since mNextTransaction is not null, applyTransaction is false, so the transaction is not sent to SurfaceFlinger for composition here
Back again to setFrameCompleteCallback in performDraw. It fires after the RenderThread swaps buffers, at which point mRtBLASTSyncTransaction already contains the buffer the app just drew:
mAttachInfo.mThreadedRenderer.setFrameCompleteCallback((long frameNr) -> {
finishBLASTSync(!mSendNextFrameToWm);
handler.postAtFrontOfQueue(() -> {
if (reportNextDraw) {
// TODO: Use the frame number
pendingDrawFinished();
}
if (commitCallbacks != null) {
for (int i = 0; i < commitCallbacks.size(); i++) {
commitCallbacks.get(i).run();
}
}
});});
Continuing with finishBLASTSync:
private void finishBLASTSync(boolean apply) {
mSendNextFrameToWm = false;
if (mNextReportConsumeBLAST) {
mNextReportConsumeBLAST = false;
if (apply) {
mRtBLASTSyncTransaction.apply();
} else {
mSurfaceChangedTransaction.merge(mRtBLASTSyncTransaction);
}
}
}
Here apply is false, so mRtBLASTSyncTransaction is merged into mSurfaceChangedTransaction.
Finally, the binder call mWindowSession.finishDrawing(mWindow, mSurfaceChangedTransaction) passes mSurfaceChangedTransaction, which now contains the app's drawn buffer, to system_server.
Back in system_server once more.
After WMS receives the transaction containing the app's drawing, it merges it into mBLASTSyncTransaction; every WindowContainer has its own mBLASTSyncTransaction object:
boolean finishDrawing(SurfaceControl.Transaction postDrawTransaction) {
if (!mUsingBLASTSyncTransaction) {
return mWinAnimator.finishDrawingLocked(postDrawTransaction);
}
if (postDrawTransaction != null) {
Slog.e(TAG, "lzh_sync finishDrawing " + this);
mBLASTSyncTransaction.merge(postDrawTransaction);
}
mNotifyBlastOnSurfacePlacement = true;
return mWinAnimator.finishDrawingLocked(null);
}
Note that the task properties were already placed into mBLASTSyncTransaction earlier, so at this point mBLASTSyncTransaction contains both the app's drawn buffer and the task properties set by SystemUI, but it has not yet been sent to SurfaceFlinger for composition.
So next, let's see how mBLASTSyncTransaction is handed over to the SystemUI process to be processed in one place.
After the window has drawn, the following callback runs:
@Override
void prepareSurfaces() {
mIsDimming = false;
applyDims();
updateSurfacePosition();
// Send information to SufaceFlinger about the priority of the current window.
updateFrameRateSelectionPriorityIfNeeded();
mWinAnimator.prepareSurfaceLocked(true);
notifyBlastSyncTransaction();
super.prepareSurfaces();
}
Finally, mergeBlastSyncTransaction is called to merge the mBLASTSyncTransaction of every WindowContainer belonging to the current mSyncId into a single transaction:
frameworks/base/services/core/java/com/android/server/wm/WindowOrganizerController.java
@Override
public void onTransactionReady(int mSyncId, Set<WindowContainer> windowContainersReady) {
final IWindowContainerTransactionCallback callback =
mTransactionCallbacksByPendingSyncId.get(mSyncId);
SurfaceControl.Transaction mergedTransaction = new SurfaceControl.Transaction();
for (WindowContainer container : windowContainersReady) {
container.mergeBlastSyncTransaction(mergedTransaction);
}
try {
callback.onTransactionReady(mSyncId, mergedTransaction);
} catch (RemoteException e) {
// If there's an exception when trying to send the mergedTransaction to the client, we
// should immediately apply it here so the transactions aren't lost.
mergedTransaction.apply();
}
mTransactionCallbacksByPendingSyncId.remove(mSyncId);
}
frameworks/base/services/core/java/com/android/server/wm/WindowContainer.java
void mergeBlastSyncTransaction(Transaction t) {
Slog.e("lzh_sync", "lzh_sync mergeBlastSyncTransaction merge 各个应用窗口的 mBLASTSyncTransaction" + this, new RuntimeException());
t.merge(mBLASTSyncTransaction);
mUsingBLASTSyncTransaction = false;
}
The final merged transaction is handed to SystemUI via onTransactionReady. At this point the SystemUI process holds every transaction it needs for the tasks it cares about, and it can call SurfaceControl.Transaction.apply to commit them all at once; before applying it can also adjust its own surfaces, such as the divider, so that the divider and the split-screen layout change appear in sync.
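Conceptually, what SystemUI does at that point is just a merge-then-apply of SurfaceComposerClient transactions. A minimal native-side sketch (assuming an existing sp<SurfaceControl> for the divider; the function and the geometry are illustrative) of committing its own surface change together with the merged transaction in one atomic apply:
#include <gui/SurfaceComposerClient.h>
#include <gui/SurfaceControl.h>
using android::sp;
using android::SurfaceControl;
using Transaction = android::SurfaceComposerClient::Transaction;
// "merged" corresponds to the T3 transaction delivered via onTransactionReady;
// "divider" is SystemUI's own surface.
void commitSplitScreenFrame(Transaction merged, const sp<SurfaceControl>& divider,
                            float dividerX, float dividerY) {
    Transaction local;
    local.setPosition(divider, dividerX, dividerY);  // SystemUI's own change
    local.merge(std::move(merged));                  // fold in app buffers + task ops
    local.apply();                                   // one atomic commit to SurfaceFlinger
}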
Below is one of the operations SystemUI performs after receiving the synchronized transaction:
service/stackdivider/Divider.java
mWindowManagerProxy.runInSync(
t -> mView.setMinimizedDockStack(mMinimized, mHomeStackResizable, t));
In the home-minimized scenario this re-applies the split-screen stack size and the DividerView position.
After the hand-offs and interactions between these processes, the overall flow is roughly:
- SystemUI submits a synchronized Task-operation request to the system server; the operations include changes to the windows of processes 1 and 2
- system_server writes the window properties into each WindowContainer.mBLASTSyncTransaction (WC.syncT in the diagram)
- system_server triggers the apps to redraw
- Once app processes 1 and 2 finish drawing their content, the BBQ mechanism takes over the next frame submission of each app's BBQ, UI-thread drawing is paused, and the taken-over transactions T1 and T2 are reported back to the system server; at this point T1 and T2 already contain the apps' drawn content
- system_server merges the app transactions into WindowContainer.mBLASTSyncTransaction, which since step 2 already contains the window properties set by SystemUI
- system_server merges every window's mBLASTSyncTransaction into the synchronized transaction (T3 in the diagram)
- Through BLASTSyncEngine, the system server sends T3, the synchronized transaction for the window containers of processes 1 and 2, to SystemUI; the merged transaction contains the consumer-side configuration changes for the next frame, the drawn content of processes 1 and 2, and the Task operations submitted by SystemUI
- After SystemUI receives the merged transaction, it can add its own divider surface operations and finally call T3.apply() to submit everything to SurfaceFlinger for composition
The flow is shown in the diagram below.
In the end everything is submitted to SurfaceFlinger by SystemUI in one go. Without the BBQ mechanism, the task property changes and the app's drawing could not be composited in the same frame, as the second diagram shows; compare the green arrows in the two diagrams.