A Detailed Look at SurfaceFlinger's Layer Composition Process


Preface

This article analyzes how SurfaceFlinger (SF), the core of the Android GUI system, composes layers. Composition is SF's most central task: it runs through SF's entire business logic, and everything SF does ultimately serves the final composited display. Understanding the composition process is therefore essential for a deeper understanding of the Android GUI system.

VSYNC drives composition

In the article on SF's VSYNC-based synchronization we analyzed the role VSYNC plays in drawing and composition. In SF's init method a composition (SF-phase) VSYNC source is created and managed by an EventThread; SF's MessageQueue then creates a Connection listening to that source and registers it with the EventThread's set of listeners. When the composition VSYNC source receives a VSYNC signal it notifies SF's MessageQueue, which in turn notifies SF, and composition can begin. Throughout this, the MessageQueue acts as SF's "secretary", receiving SF's message events on its behalf.

void SurfaceFlinger::init() {
    ...
    sp<VSyncSource> sfVsyncSrc = new DispSyncSource(&mPrimaryDispSync,
            sfVsyncPhaseOffsetNs, false);//create the composition (SF-phase) VSYNC source
    mSFEventThread = new EventThread(sfVsyncSrc);
    mEventQueue.setEventThread(mSFEventThread);//associate the composition VSYNC source with SF's MessageQueue, since SF is responsible for composition
    ...
}

Here mSFEventThread is the EventThread that manages the composition VSYNC source, and mEventQueue is SF's MessageQueue. setEventThread registers the MessageQueue as a listener of that EventThread so it can receive VSYNC events from the composition source.

//frameworks/native/services/surfaceflinger/MessageQueue.cpp
//the EventThread of the composition VSYNC source is associated with SF's MessageQueue
void MessageQueue::setEventThread(const sp<EventThread>& eventThread)
{
    mEventThread = eventThread;
    //create a Connection to the VSYNC source for SF
    mEvents = eventThread->createEventConnection();
    mEventTube = mEvents->getDataChannel();//get the BitTube
    //add its fd to the Looper; when the VSYNC source delivers an event, cb_eventReceiver is triggered
    mLooper->addFd(mEventTube->getFd(), 0, ALOOPER_EVENT_INPUT,
            MessageQueue::cb_eventReceiver, this);
}

EventThread's createEventConnection creates a Connection; on its first reference the Connection registers itself with the EventThread's listener list. The Connection's BitTube is then obtained and its file descriptor is added to the MessageQueue's Looper, with MessageQueue::cb_eventReceiver as the callback, so that when the EventThread signals a VSYNC through the Connection the callback notifies the MessageQueue.
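
For reference, that first-reference registration looks roughly like the following; this is a sketch based on the EventThread of this AOSP generation, so details may differ slightly in your tree.

//frameworks/native/services/surfaceflinger/EventThread.cpp (sketch)
void EventThread::Connection::onFirstRef() {
    // NOTE: the EventThread does not hold a strong reference on the Connection
    mEventThread->registerDisplayEventConnection(this);
}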

//the Looper invokes this callback when the fd of the Connection's BitTube has data available
int MessageQueue::cb_eventReceiver(int fd, int events, void* data) {
    MessageQueue* queue = reinterpret_cast<MessageQueue *>(data);
    return queue->eventReceiver(fd, events);
}

//if the MessageQueue requested a VSYNC, this is called when the composition source receives it; here SF is notified to start composition
int MessageQueue::eventReceiver(int fd, int events) {
    ssize_t n;
    DisplayEventReceiver::Event buffer[8];
    while ((n = DisplayEventReceiver::getEvents(mEventTube, buffer, 8)) > 0) {
        for (int i=0 ; i<n ; i++) {
            if (buffer[i].header.type == DisplayEventReceiver::DISPLAY_EVENT_VSYNC) {
#if INVALIDATE_ON_VSYNC 
                mHandler->dispatchInvalidate();
#else
                mHandler->dispatchRefresh();
#endif
                break;
            }
        }
    }
    return 1;
}

After cb_eventReceiver fires, eventReceiver handles the VSYNC event: it first reads the event records with DisplayEventReceiver::getEvents and then processes any VSYNC event. Since INVALIDATE_ON_VSYNC is defined as 1, the event is handled through mHandler's dispatchInvalidate; mHandler is the Handler used internally by the MessageQueue.
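
DisplayEventReceiver::getEvents is only a thin wrapper that drains Event records from the BitTube; roughly (a sketch of the AOSP implementation of this era):

//frameworks/native/libs/gui/DisplayEventReceiver.cpp (sketch)
ssize_t DisplayEventReceiver::getEvents(const sp<BitTube>& dataChannel,
        Event* events, size_t count) {
    // reads up to `count` Event objects from the socket pair wrapped by the BitTube
    return BitTube::recvObjects(dataChannel, events, count);
}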

void MessageQueue::Handler::dispatchRefresh() {
    if ((android_atomic_or(eventMaskRefresh, &mEventMask) & eventMaskRefresh) == 0) {
        mQueue.mLooper->sendMessage(this, Message(MessageQueue::REFRESH));
    }
}

void MessageQueue::Handler::dispatchInvalidate() {
    if ((android_atomic_or(eventMaskInvalidate, &mEventMask) & eventMaskInvalidate) == 0) {
        mQueue.mLooper->sendMessage(this, Message(MessageQueue::INVALIDATE));
    }
}

void MessageQueue::Handler::dispatchTransaction() {
    if ((android_atomic_or(eventMaskTransaction, &mEventMask) & eventMaskTransaction) == 0) {
        mQueue.mLooper->sendMessage(this, Message(MessageQueue::TRANSACTION));
    }
}

void MessageQueue::Handler::handleMessage(const Message& message) {
    switch (message.what) {
        case INVALIDATE:
            android_atomic_and(~eventMaskInvalidate, &mEventMask);
            mQueue.mFlinger->onMessageReceived(message.what);
            break;
        case REFRESH:
            android_atomic_and(~eventMaskRefresh, &mEventMask);
            mQueue.mFlinger->onMessageReceived(message.what);
            break;
        case TRANSACTION:
            android_atomic_and(~eventMaskTransaction, &mEventMask);
            mQueue.mFlinger->onMessageReceived(message.what);
            break;
    }
}

MessageQueue's internal Handler defines three dispatch methods: dispatchRefresh, dispatchInvalidate and dispatchTransaction. On the SF side these three events are handled in onMessageReceived, which is also where the composition work is driven.

void SurfaceFlinger::onMessageReceived(int32_t what) {
    ATRACE_CALL();
    switch (what) {
    case MessageQueue::TRANSACTION:
        //handles Layer or Display property changes, which may affect the computation of visible and dirty regions
        handleMessageTransaction();
        break;
    case MessageQueue::INVALIDATE:
        handleMessageTransaction();
        //mainly calls handlePageFlip: latch the newest buffer from each Layer's BufferQueue and update the dirty regions accordingly
        handleMessageInvalidate();
        signalRefresh();//triggers handleMessageRefresh
        break;
    case MessageQueue::REFRESH:
        handleMessageRefresh();//composite and render the output
        break;
    }
}

SF's onMessageReceived handles the TRANSACTION, INVALIDATE and REFRESH events. TRANSACTION deals with Layer and Display property changes, which may affect the computation of visible and dirty regions. INVALIDATE, in addition to doing the TRANSACTION work, latches each layer's latest frame data and updates the dirty regions accordingly. REFRESH performs the actual composition and rendering output. As the code shows, INVALIDATE effectively covers both TRANSACTION and REFRESH and drives a complete composite-and-render pass; most of the time SF's work is triggered by INVALIDATE, so our analysis starts from that event.
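
For completeness, signalRefresh schedules the REFRESH message through the same MessageQueue; a minimal sketch of that path (assuming the MessageQueue::refresh of this AOSP version, with INVALIDATE_ON_VSYNC defined as 1):

//frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp (sketch)
void SurfaceFlinger::signalRefresh() {
    mEventQueue.refresh();
}

//frameworks/native/services/surfaceflinger/MessageQueue.cpp (sketch)
void MessageQueue::refresh() {
#if INVALIDATE_ON_VSYNC
    mHandler->dispatchRefresh();//posts MessageQueue::REFRESH to the Looper
#else
    mEvents->requestNextVsync();
#endif
}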

When composition is triggered

SF's composition is driven by VSYNC, but the display does not compose on every signal: when none of its layers has changed there is nothing to compose and SF should not be asked to do so. Only when a layer's content changes — most commonly when an application has finished drawing a new frame — does the Layer's BufferQueue consumer callback onFrameAvailable notify SF that a new frame is available. SF then needs to composite and render, so it requests a VSYNC; the EventThread delivers the composition-source VSYNC to its listener, the MessageQueue described above, which finally tells SF to perform the composition.

//frameworks/native/services/surfaceflinger/Layer.cpp
void Layer::onFrameAvailable() {
    android_atomic_inc(&mQueuedFrames);//increment mQueuedFrames
    mFlinger->signalLayerUpdate();//schedule a composition pass
}

When a new frame is ready, the Layer asks SF to have the MessageQueue schedule a composition pass.

//
void SurfaceFlinger::signalLayerUpdate() {
    mEventQueue.invalidate();
}

signalLayerUpdate requests a VSYNC through the MessageQueue's invalidate method.

//frameworks/native/services/surfaceflinger/MessageQueue.cpp
void MessageQueue::invalidate() {
#if INVALIDATE_ON_VSYNC
    mEvents->requestNextVsync();//request the next VSYNC
#else
    mHandler->dispatchInvalidate();
#endif
}

invalidate requests a VSYNC via requestNextVsync, which goes through the composition source's EventThread. When the VSYNC arrives it is delivered to the listener the MessageQueue registered, i.e. the MessageQueue itself, which finally tells SF to perform the composition.
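
On the EventThread side, requestNextVsync essentially marks the Connection as wanting one more event and wakes the thread loop up; roughly (a sketch of the EventThread of this era):

//frameworks/native/services/surfaceflinger/EventThread.cpp (sketch)
void EventThread::requestNextVsync(const sp<EventThread::Connection>& connection) {
    Mutex::Autolock _l(mLock);
    if (connection->count < 0) {
        // count < 0 means "no event requested"; 0 means "deliver exactly one event"
        connection->count = 0;
        mCondition.broadcast();//wake the EventThread loop so it forwards the next VSYNC
    }
}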

Handling Layer and Display property changes

handleMessageTransaction's job is to process Layer and Display property changes. What does that mean? A Layer actually keeps two State objects, mCurrentState and mDrawingState, both holding the layer's state information. When the user changes a layer property such as its size or alpha, the new value is stored in mCurrentState, while mDrawingState is the state currently in use; this way user changes never disturb a composition pass that is in progress. handleMessageTransaction therefore evaluates the pending property changes of the layers, since they may affect the subsequent visible-region computation.

//frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
void SurfaceFlinger::handleMessageTransaction() {
    uint32_t transactionFlags = peekTransactionFlags(eTransactionMask);
    if (transactionFlags) {
        handleTransaction(transactionFlags);
    }
}
void SurfaceFlinger::handleTransaction(uint32_t transactionFlags)
{
    Mutex::Autolock _l(mStateLock);
    ...
    transactionFlags = getTransactionFlags(eTransactionMask);
    //further processing
    handleTransactionLocked(transactionFlags);
    ...
}

handleMessageTransaction peeks at the transaction flags with peekTransactionFlags; these flags decide whether a transaction is needed, and if so handleTransaction is called, which in turn hands the work to handleTransactionLocked.

//frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
void SurfaceFlinger::handleTransactionLocked(uint32_t transactionFlags)
{
    //the layer list of the current state
    const LayerVector& currentLayers(mCurrentState.layersSortedByZ);
    const size_t count = currentLayers.size();

    if (transactionFlags & eTraversalNeeded) {
        for (size_t i=0 ; i<count ; i++) {//iterate over all layers
            const sp<Layer>& layer(currentLayers[i]);
            //get the layer's transaction flags to see whether a transaction is needed
            uint32_t trFlags = layer->getTransactionFlags(eTransactionNeeded);
            if (!trFlags) continue;
            //let the layer handle its changes in doTransaction; the changes may affect the
            //visible-region computation, and if any layer does, mVisibleRegionsDirty is set to true
            const uint32_t flags = layer->doTransaction(0);
            if (flags & Layer::eVisibleRegion)
                mVisibleRegionsDirty = true;
        }
    }

    //handle display device changes
    if (transactionFlags & eDisplayTransactionNeeded) {
        //the current display devices
        const KeyedVector<  wp<IBinder>, DisplayDeviceState>& curr(mCurrentState.displays);
        //the display devices of the previous (drawing) state
        const KeyedVector<  wp<IBinder>, DisplayDeviceState>& draw(mDrawingState.displays);
        ....
    }

    //handle the transform hint
    if (transactionFlags & (eTraversalNeeded|eDisplayTransactionNeeded)) {
        ...
    }

    const LayerVector& layers(mDrawingState.layersSortedByZ);
    //layers have been added, so mark mVisibleRegionsDirty
    if (currentLayers.size() > layers.size()) {
        // layers have been added
        mVisibleRegionsDirty = true;
    }

    // some layers might have been removed, so
    // we need to update the regions they're exposing.
    //layers have been removed
    if (mLayersRemoved) {
        mLayersRemoved = false;
        mVisibleRegionsDirty = true;
        const size_t count = layers.size();
        for (size_t i=0 ; i<count ; i++) {
            const sp<Layer>& layer(layers[i]);
            if (currentLayers.indexOf(layer) < 0) {//this layer no longer exists
                // this layer is not visible anymore
                // TODO: we could traverse the tree from front to back and
                //       compute the actual visible region
                // TODO: we could cache the transformed region
                const Layer::State& s(layer->getDrawingState());
                Region visibleReg = s.transform.transform(
                        Region(Rect(s.active.w, s.active.h)));
                //invalidate the removed layer's visible region so it gets updated later
                invalidateLayerStack(s.layerStack, visibleReg);
            }
        }
    }
    //commit the changes
    commitTransaction();
}

SF itself also maintains two states, mCurrentState and mDrawingState, but their State type differs from the Layer-internal one: mDrawingState is the drawing state SF used for the previous composition, while mCurrentState is SF's latest drawing state.

struct State {
    LayerVector layersSortedByZ;
    DefaultKeyedVector< wp<IBinder>, DisplayDeviceState> displays;
};

State contains a LayerVector named layersSortedByZ which, as the name suggests, is the set of layers sorted by Z order; these are the layers SF is about to compose. The other member, displays, describes the state of the display devices; it is effectively a map whose key is the device token (an IBinder) and whose value is a DisplayDeviceState describing the device.

In handleTransactionLocked we first take the layer list from the current state and call doTransaction on each layer, which processes the layer's property changes (analyzed below). From its return value we know whether the visible region is affected, and if so mVisibleRegionsDirty is set to true. Display device changes are handled next, presumably here to support hot-plugging of displays: by comparing the previous and current device states we can tell whether a device was removed, added, or otherwise changed, and handle each case accordingly. Added and removed devices are inserted into or erased from SF's mDisplays, a DefaultKeyedVector<wp<IBinder>, sp<DisplayDevice>> that holds the display devices SF uses.

Next comes the transform hint update; transform hints are used to improve layer performance and are not covered in this article.

The last part handles layer changes. Between two composition passes new layers may have been added, in which case mVisibleRegionsDirty is again set to true to indicate that the visible regions changed; layers may also have been removed, in which case the region they used to cover becomes dirty and must be updated, which invalidateLayerStack takes care of. Finally commitTransaction commits the transaction, assigning mCurrentState to mDrawingState.

void SurfaceFlinger::commitTransaction()
{
    ...
    mDrawingState = mCurrentState;
    mTransactionPending = false;
    mAnimTransactionPending = false;
    mTransactionCV.broadcast();
}
Layer property changes
//handle the layer's property changes
uint32_t Layer::doTransaction(uint32_t flags) {
    const Layer::State& s(getDrawingState());
    const Layer::State& c(getCurrentState());
    //has the size changed?
    const bool sizeChanged = (c.requested.w != s.requested.w) ||
                             (c.requested.h != s.requested.h);
    if (sizeChanged) {
        // record the new size, form this point on, when the client request
        // a buffer, it'll get the new size.
        //the layer's size changed; record the new default buffer size
        mSurfaceFlingerConsumer->setDefaultBufferSize(
                c.requested.w, c.requested.h);
    }

    if (!isFixedSize()) {

        const bool resizePending = (c.requested.w != c.active.w) ||
                                   (c.requested.h != c.active.h);
        if (resizePending) {
            // don't let Layer::doTransaction update the drawing state
            // if we have a pending resize, unless we are in fixed-size mode.
            // the drawing state will be updated only once we receive a buffer
            // with the correct size.
            //
            // in particular, we want to make sure the clip (which is part
            // of the geometry state) is latched together with the size but is
            // latched immediately when no resizing is involved.

            flags |= eDontUpdateGeometryState;
        }
    }

    // always set active to requested, unless we're asked not to
    // this is used by Layer, which special cases resizes.
    if (flags & eDontUpdateGeometryState)  {
    } else {
        Layer::State& editCurrentState(getCurrentState());
        editCurrentState.active = c.requested;
    }
    
    //the active size changed as well
    if (s.active != c.active) {
        // invalidate and recompute the visible regions if needed
        flags |= Layer::eVisibleRegion;
    }
    
    //sequence is incremented whenever the layer's position, Z order, alpha, matrix, transparent region, flags, crop, etc. change; a mismatch here means properties changed
    if (c.sequence != s.sequence) {
        // invalidate and recompute the visible regions if needed
        flags |= eVisibleRegion;
        this->contentDirty = true;

        // we may use linear filtering, if the matrix scales us
        const uint8_t type = c.transform.getType();
        mNeedsFiltering = (!c.transform.preserveRects() ||
                (type >= Transform::SCALE));
    }
    // Commit the transaction
    commitTransaction();//update the layer's drawing state
    return flags;
}

A layer's property changes are likewise computed by comparing mDrawingState with mCurrentState, and at the end commitTransaction commits them. Properties that affect the visible-region computation are flagged with Layer::eVisibleRegion.
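
The layer-side commitTransaction is the same double-state swap we saw at the SF level; roughly (a sketch of the Layer code of this era):

//frameworks/native/services/surfaceflinger/Layer.cpp (sketch)
void Layer::commitTransaction() {
    // promote the pending state to the state used for drawing/composition
    mDrawingState = mCurrentState;
}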

Latching layer frame data and computing the display's dirty region

void SurfaceFlinger::handleMessageInvalidate() {
    ATRACE_CALL();
    handlePageFlip();
}

void SurfaceFlinger::handlePageFlip()
{
    Region dirtyRegion;

    bool visibleRegions = false;
    //the layer list
    const LayerVector& layers(mDrawingState.layersSortedByZ);
    const size_t count = layers.size();
    for (size_t i=0 ; i<count ; i++) {
        const sp<Layer>& layer(layers[i]);
        //latchBuffer updates the layer's image and latches its newest buffer into mActiveBuffer
        const Region dirty(layer->latchBuffer(visibleRegions));
        const Layer::State& s(layer->getDrawingState());
        invalidateLayerStack(s.layerStack, dirty);//mark the update region on the displays this layer belongs to
    }

    mVisibleRegionsDirty |= visibleRegions;
}

After the transaction has been handled, the next step is to fetch each layer's display data via its latchBuffer method, which also yields the region that changed; that region is the dirty region the display needs to update, and it is accumulated through invalidateLayerStack.
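
invalidateLayerStack simply ORs the dirty region into every display whose layerStack matches the layer's; roughly (a sketch of the SurfaceFlinger code of this era):

//frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp (sketch)
void SurfaceFlinger::invalidateLayerStack(uint32_t layerStack, const Region& dirty) {
    for (size_t dpy = 0; dpy < mDisplays.size(); dpy++) {
        const sp<DisplayDevice>& hw(mDisplays[dpy]);
        if (hw->getLayerStack() == layerStack) {
            // accumulate the layer's dirty area into this display's dirty region
            hw->dirtyRegion.orSelf(dirty);
        }
    }
}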

Analysis of the composition process

The main composition flow

void SurfaceFlinger::handleMessageRefresh() {
    ...
    preComposition();
    rebuildLayerStacks();
    setUpHWComposer();
    ...
    doComposition();
    postComposition();
}
preComposition

The composition work happens in handleMessageRefresh; we walk through each step in turn, starting with preComposition.

//pre-composition processing
void SurfaceFlinger::preComposition()
{
    bool needExtraInvalidate = false;
    const LayerVector& layers(mDrawingState.layersSortedByZ);
    const size_t count = layers.size();
    //for every layer in the current drawing state, call onPreComposition to check whether it still has
    //unprocessed frames; if so, set needExtraInvalidate to true, meaning an extra composition/render pass is needed
    for (size_t i=0 ; i<count ; i++) {
        if (layers[i]->onPreComposition()) {
            needExtraInvalidate = true;
        }
    }
    //if any layer still has pending frames, schedule another composition/render pass
    if (needExtraInvalidate) {
        signalLayerUpdate();
    }
}

//pre-composition callback: reports whether the layer still has queued frames
bool Layer::onPreComposition() {
    mRefreshPending = false;
    return mQueuedFrames > 0;
}

preComposition is the pre-processing step of composition: it checks whether any layer still has new frame data (mQueuedFrames > 0), and if so schedules another composition pass via signalLayerUpdate.

rebuildLayerStacks
//rebuild each display's visible-layer list and compute every layer's visible and dirty regions
void SurfaceFlinger::rebuildLayerStacks() {
    // rebuild the visible layer list per screen
    if (CC_UNLIKELY(mVisibleRegionsDirty)) {
        ATRACE_CALL();
        mVisibleRegionsDirty = false;
        invalidateHwcGeometry();
        const LayerVector& layers(mDrawingState.layersSortedByZ);
        //the visible-layer list must be rebuilt for every display
        for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
            Region opaqueRegion;//opaque region
            Region dirtyRegion;//dirty region
            Vector< sp<Layer> > layersSortedByZ;//visible-layer list
            const sp<DisplayDevice>& hw(mDisplays[dpy]);
            const Transform& tr(hw->getTransform());//the display's transform
            const Rect bounds(hw->getBounds());//the display's bounds
            if (hw->canDraw()) {
                //compute the visible regions of the layers for this display
                SurfaceFlinger::computeVisibleRegions(layers,
                        hw->getLayerStack(), dirtyRegion, opaqueRegion);
                //rebuild the visible-layer list
                const size_t count = layers.size();
                for (size_t i=0 ; i<count ; i++) {
                    const sp<Layer>& layer(layers[i]);
                    const Layer::State& s(layer->getDrawingState());
                    //a layer can only be shown on a display whose layerStack matches its own
                    if (s.layerStack == hw->getLayerStack()) {
                        //the draw region is the layer's visible non-transparent region
                        Region drawRegion(tr.transform(
                                layer->visibleNonTransparentRegion));
                        drawRegion.andSelf(bounds);//intersect it with the display's bounds
                        //if the intersection is non-empty the layer must be shown on this display, so add it to the visible-layer list
                        if (!drawRegion.isEmpty()) {
                            layersSortedByZ.add(layer);
                        }
                    }
                }
            }
            //hand the visible-layer list to the display; for example the primary display may have layers
            //for the status bar, the app and the navigation bar, all stored in layersSortedByZ
            hw->setVisibleLayersSortedByZ(layersSortedByZ);
            hw->undefinedRegion.set(bounds);//the undefined region starts as the display's bounds
            hw->undefinedRegion.subtractSelf(tr.transform(opaqueRegion));//undefined region = current undefined region - opaque region
            hw->dirtyRegion.orSelf(dirtyRegion);//accumulate the dirty region
        }
    }
}

rebuildLayerStacks rebuilds each display's visible-layer list and computes every layer's visible and dirty regions. Each display has its own list of visible layers, and those are the layers that will eventually be composited onto it. rebuildLayerStacks processes every display: computeVisibleRegions computes the layers' visible and dirty regions for the display; when a layer's visible region intersects the display's bounds the layer can be shown there and is added to layersSortedByZ; finally setVisibleLayersSortedByZ hands that list to the display and the display's dirtyRegion is updated.
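
On the DisplayDevice side, setVisibleLayersSortedByZ does little more than store that list and remember whether any secure layer is visible; roughly (a sketch of the DisplayDevice code of this era):

//frameworks/native/services/surfaceflinger/DisplayDevice.cpp (sketch)
void DisplayDevice::setVisibleLayersSortedByZ(const Vector< sp<Layer> >& layers) {
    mVisibleLayersSortedByZ = layers;   // the list later returned by getVisibleLayersSortedByZ()
    mSecureLayerVisible = false;
    const size_t count = layers.size();
    for (size_t i = 0; i < count; i++) {
        if (layers[i]->isSecure()) {
            mSecureLayerVisible = true; // remember that a secure layer is visible on this display
        }
    }
}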

setUpHWComposer
void SurfaceFlinger::setUpHWComposer() {
    for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
        mDisplays[dpy]->beginFrame();
    }

    HWComposer& hwc(getHwComposer());
    if (hwc.initCheck() == NO_ERROR) {
        // build the h/w work list
        if (CC_UNLIKELY(mHwWorkListDirty)) {
            mHwWorkListDirty = false;
            //build a work list for every display
            for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
                sp<const DisplayDevice> hw(mDisplays[dpy]);
                const int32_t id = hw->getHwcDisplayId();
                if (id >= 0) {
                    const Vector< sp<Layer> >& currentLayers(
                        hw->getVisibleLayersSortedByZ());//the display's visible-layer list
                    const size_t count = currentLayers.size();
                    //create the work list for the display
                    if (hwc.createWorkList(id, count) == NO_ERROR) {
                        HWComposer::LayerListIterator cur = hwc.begin(id);
                        const HWComposer::LayerListIterator end = hwc.end(id);
                        //LayerListIterator iterates over the work list just created, i.e. over DisplayData.list->hwLayers
                        for (size_t i=0 ; cur!=end && i<count ; ++i, ++cur) {
                            const sp<Layer>& layer(currentLayers[i]);
                            layer->setGeometry(hw, *cur);
                            if (mDebugDisableHWC || mDebugRegion || mDaltonize) {
                                cur->setSkip(true);
                            }
                        }
                    }
                }
            }
        }

        // set the per-frame data
        //iterate over the displays to be shown
        for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
            sp<const DisplayDevice> hw(mDisplays[dpy]);
            const int32_t id = hw->getHwcDisplayId();
            if (id >= 0) {
               //the display's visible-layer list, set earlier in rebuildLayerStacks
                const Vector< sp<Layer> >& currentLayers(
                    hw->getVisibleLayersSortedByZ());
                const size_t count = currentLayers.size();
                HWComposer::LayerListIterator cur = hwc.begin(id);
                const HWComposer::LayerListIterator end = hwc.end(id);
                //set the current frame data for each visible layer
                for (size_t i=0 ; cur!=end && i<count ; ++i, ++cur) {
                    /*
                     * update the per-frame h/w composer data for each layer
                     * and build the transparent region of the FB
                     */
                    //note that LayerListIterator's ++ walks DisplayData.list->hwLayers, and *cur actually
                    //returns the iterator's internal HWCLayer (HWCLayerVersion1 implements the abstract HWCLayer)
                    const sp<Layer>& layer(currentLayers[i]);
                    layer->setPerFrameData(hw, *cur);
                }
            }
        }
        //HWC's prepare decides the composition type
        status_t err = hwc.prepare();
        ALOGE_IF(err, "HWComposer::prepare failed (%s)", strerror(-err));

        for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
            sp<const DisplayDevice> hw(mDisplays[dpy]);
            hw->prepareFrame(hwc);
        }
    }
}

setUpHWComposer builds a work list for each display, sets the frame data for every layer the display will output, and finally lets HWC's prepare decide how each display's visible layers will be composited. Let's look at each part in detail.

Creating the work list
//frameworks/native/services/surfaceflinger/DisplayHardware/HWComposer.cpp
//create a display's work list; id is the display's id and numLayers is the size of its visible-layer list
status_t HWComposer::createWorkList(int32_t id, size_t numLayers) {
    if (uint32_t(id)>31 || !mAllocatedDisplayIDs.hasBit(id)) {
        return BAD_INDEX;
    }

    if (mHwc) {
        //every display has a DisplayData describing it
        DisplayData& disp(mDisplayData[id]);
        if (hwcHasApiVersion(mHwc, HWC_DEVICE_API_VERSION_1_1)) {
            // we need space for the HWC_FRAMEBUFFER_TARGET
            //with HWC_DEVICE_API_VERSION_1_1 one extra hwc_layer_1_t is needed to hold the GLES-composited output
            numLayers++;
        }
        //initializing the work list mainly means allocating the hwc_display_contents_1 in DisplayData
        if (disp.capacity < numLayers || disp.list == NULL) {
            size_t size = sizeof(hwc_display_contents_1_t)
                    + numLayers * sizeof(hwc_layer_1_t);
            free(disp.list);
            //allocate the hwc_display_contents_1_t, which holds the layers to be displayed
            disp.list = (hwc_display_contents_1_t*)malloc(size);
            disp.capacity = numLayers;
        }
        if (hwcHasApiVersion(mHwc, HWC_DEVICE_API_VERSION_1_1)) {
        	//the last entry of hwc_display_contents_1's hwLayers is the framebuffer target that will hold
        	//the GLES-composited output; take it out and keep it in framebufferTarget
            disp.framebufferTarget = &disp.list->hwLayers[numLayers - 1];
            memset(disp.framebufferTarget, 0, sizeof(hwc_layer_1_t));
            const hwc_rect_t r = { 0, 0, (int) disp.width, (int) disp.height };
            //initialize this hwc_layer_1_t that will hold the composited output;
            //its type is HWC_FRAMEBUFFER_TARGET, meaning it is produced by GLES (GPU) composition
            disp.framebufferTarget->compositionType = HWC_FRAMEBUFFER_TARGET;
            disp.framebufferTarget->hints = 0;
            disp.framebufferTarget->flags = 0;
            disp.framebufferTarget->handle = disp.fbTargetHandle;
            disp.framebufferTarget->transform = 0;
            disp.framebufferTarget->blending = HWC_BLENDING_PREMULT;
            if (hwcHasApiVersion(mHwc, HWC_DEVICE_API_VERSION_1_3)) {
                disp.framebufferTarget->sourceCropf.left = 0;
                disp.framebufferTarget->sourceCropf.top = 0;
                disp.framebufferTarget->sourceCropf.right = disp.width;
                disp.framebufferTarget->sourceCropf.bottom = disp.height;
            } else {
                disp.framebufferTarget->sourceCrop = r;
            }
            disp.framebufferTarget->displayFrame = r;
            disp.framebufferTarget->visibleRegionScreen.numRects = 1;
            disp.framebufferTarget->visibleRegionScreen.rects =
                &disp.framebufferTarget->displayFrame;
            disp.framebufferTarget->acquireFenceFd = -1;
            disp.framebufferTarget->releaseFenceFd = -1;
            disp.framebufferTarget->planeAlpha = 0xFF;
        }
        disp.list->retireFenceFd = -1;
        disp.list->flags = HWC_GEOMETRY_CHANGED;
        disp.list->numHwLayers = numLayers;
    }
    return NO_ERROR;
}

Each display has a DisplayData object describing its display data, defined as follows:

struct DisplayData {
        DisplayData();
        ~DisplayData();
        uint32_t width;
        uint32_t height;
        uint32_t format;    // pixel format from FB hal, for pre-hwc-1.1
        float xdpi;
        float ydpi;
        nsecs_t refresh;
        bool connected;
        bool hasFbComp;//set when there are GLES-composited layers
        bool hasOvComp;//set when there are HWC (overlay) composited layers
        size_t capacity;
        //list holds all the layer data for this display (in hwLayers); hwc_display_contents_1 describes
        //the content to be output to the display, see its definition in hwcomposer.h.
        //Note that the last entry of list->hwLayers holds the composited (framebuffer target) layer.
        hwc_display_contents_1* list;
        hwc_layer_1* framebufferTarget;//the GLES-composited output goes into framebufferTarget
        buffer_handle_t fbTargetHandle;
        sp<Fence> lastRetireFence;  // signals when the last set op retires
        sp<Fence> lastDisplayFence; // signals when the last set op takes
                                    // effect on screen
        buffer_handle_t outbufHandle;
        sp<Fence> outbufAcquireFence;

        // protected by mEventControlLock
        int32_t events;
};

DisplayData's hwc_display_contents_1 holds the layers about to be shown on the display; the structure is defined as follows:

 //describes the content to be output to a display
typedef struct hwc_display_contents_1 {
    ...
    uint32_t flags;
    size_t numHwLayers;//number of layers
    hwc_layer_1_t hwLayers[0];//the layers to be composited onto the display, each described by an hwc_layer_1_t

} hwc_display_contents_1_t;

hwc_display_contents_1 records the number of layers to display, and the layers themselves are described by the hwc_layer_1_t array. Creating the work list means allocating memory for hwc_display_contents_1 and its hwc_layer_1_t array to hold the information of the layers about to be displayed. In createWorkList, if the device version is HWC_DEVICE_API_VERSION_1_1 the framebuffer target is supported, so one extra hwc_layer_1_t is allocated to hold the GLES-composited output; its composition type is set to HWC_FRAMEBUFFER_TARGET, and DisplayData refers to it as framebufferTarget. It is in fact the last hwc_layer_1_t of hwc_display_contents_1's hwLayers, so hwLayers contains both the layers to be composited and the composited result.

Setting per-frame data for the layers

Once the work list is created the display only has the structures to hold its layers; we still have to tell it about each layer's frame data so it knows where to fetch the pixels for composition and display. This is done by Layer's setPerFrameData. Recall that latchBuffer earlier latched the layer's newest frame into mActiveBuffer; now the handle of that buffer can be set into the display's layer entry.

//set the layer's current frame data
void Layer::setPerFrameData(const sp<const DisplayDevice>& hw,
        HWComposer::HWCLayerInterface& layer) {
    // we have to set the visible region on every frame because
    // we currently free it during onLayerDisplayed(), which is called
    // after HWComposer::commit() -- every frame.
    // Apply this display's projection's viewport to the visible region
    // before giving it to the HWC HAL.
    const Transform& tr = hw->getTransform();
    Region visible = tr.transform(visibleRegion.intersect(hw->getViewport()));
    layer.setVisibleRegionScreen(visible);

    // NOTE: buffer can be NULL if the client never drew into this
    // layer yet, or if we ran out of memory
    //store the layer's current buffer via the HWCLayerInterface; see HWCLayerVersion1
    layer.setBuffer(mActiveBuffer);
}
//HWCLayerVersion1 implementation:
//store the GraphicBuffer in the corresponding hwc_layer_1_t
virtual void setBuffer(const sp<GraphicBuffer>& buffer) {
    if (buffer == 0 || buffer->handle == 0) {
        getLayer()->compositionType = HWC_FRAMEBUFFER;
        getLayer()->flags |= HWC_SKIP_LAYER;
        getLayer()->handle = 0;
    } else {
        getLayer()->handle = buffer->handle;//just record the buffer's handle
    }
}
Determining the layers' composition type

The final step of setUpHWComposer is HWComposer's prepare, which decides how the display's layers will be composited.

status_t HWComposer::prepare() {
    for (size_t i=0 ; i<mNumDisplays ; i++) {
        DisplayData& disp(mDisplayData[i]);//the display's DisplayData
        if (disp.framebufferTarget) {//it has an hwc_layer_1_t reserved for the composited output
            // make sure to reset the type to HWC_FRAMEBUFFER_TARGET
            // DO NOT reset the handle field to NULL, because it's possible
            // that we have nothing to redraw (eg: eglSwapBuffers() not called)
            // in which case, we should continue to use the same buffer.
            LOG_FATAL_IF(disp.list == NULL);
            //make sure framebufferTarget's composition type is HWC_FRAMEBUFFER_TARGET
            disp.framebufferTarget->compositionType = HWC_FRAMEBUFFER_TARGET;
        }
        if (!disp.connected && disp.list != NULL) {
            ALOGW("WARNING: disp %d: connected, non-null list, layers=%d",
                  i, disp.list->numHwLayers);
        }
        mLists[i] = disp.list;//keep DisplayData's list in mLists, an array of hwc_display_contents_1 pointers
        if (mLists[i]) {
            if (hwcHasApiVersion(mHwc, HWC_DEVICE_API_VERSION_1_3)) {
                mLists[i]->outbuf = disp.outbufHandle;
                mLists[i]->outbufAcquireFenceFd = -1;
            } else if (hwcHasApiVersion(mHwc, HWC_DEVICE_API_VERSION_1_1)) {
                // garbage data to catch improper use
                mLists[i]->dpy = (hwc_display_t)0xDEADBEEF;
                mLists[i]->sur = (hwc_surface_t)0xDEADBEEF;
            } else {
                mLists[i]->dpy = EGL_NO_DISPLAY;
                mLists[i]->sur = EGL_NO_SURFACE;
            }
        }
    }
    //hand the lists to the HWC module, which decides which layers it can composite itself and marks them
    //HWC_OVERLAY; the default composition type is HWC_FRAMEBUFFER
    int err = mHwc->prepare(mHwc, mNumDisplays, mLists);
    ALOGE_IF(err, "HWComposer: prepare failed (%s)", strerror(-err));

    if (err == NO_ERROR) {
        // here we're just making sure that "skip" layers are set
        // to HWC_FRAMEBUFFER and we're also counting how many layers
        // we have of each type.
        //
        // If there are no window layers, we treat the display has having FB
        // composition, because SurfaceFlinger will use GLES to draw the
        // wormhole region.
        //for every display
        for (size_t i=0 ; i<mNumDisplays ; i++) {
            //the display's DisplayData
            DisplayData& disp(mDisplayData[i]);
            disp.hasFbComp = false;
            disp.hasOvComp = false;
            //its hwc_display_contents_1
            if (disp.list) {
                //inspect the composition type of every hwc_layer_1_t of the display
                for (size_t i=0 ; i<disp.list->numHwLayers ; i++) {
                    hwc_layer_1_t& l = disp.list->hwLayers[i];

                    //ALOGD("prepare: %d, type=%d, handle=%p",
                    //        i, l.compositionType, l.handle);
					
                    if (l.flags & HWC_SKIP_LAYER) {//skipped layers are composited by OpenGL (GLES)
                        l.compositionType = HWC_FRAMEBUFFER;
                    }
                    if (l.compositionType == HWC_FRAMEBUFFER) {//HWC_FRAMEBUFFER means GLES composition
                        disp.hasFbComp = true;
                    }
                    if (l.compositionType == HWC_OVERLAY) {//HWC_OVERLAY means hardware (HWC) composition
                        disp.hasOvComp = true;
                    }
                }
                if (disp.list->numHwLayers == (disp.framebufferTarget ? 1 : 0)) {
                    disp.hasFbComp = true;
                }
            } else {
                disp.hasFbComp = true;//no list at all: fall back to GLES composition
            }
        }
    }
    return (status_t)err;
}

prepare places every display's hwc_display_contents_1 into the mLists array, then the HWC module's prepare decides for each layer whether it can be composited in hardware; if so its compositionType is marked HWC_OVERLAY, otherwise it stays at the default HWC_FRAMEBUFFER, meaning GLES composition. The results are then used to update DisplayData's hasFbComp and hasOvComp, which record whether the display has GLES-composited and hardware-composited layers respectively.
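
These two flags are what SF queries later when deciding whether to run the GLES pass for a display (see hwc.hasGlesComposition(id) in doComposeSurfaces below); roughly (a sketch of the HWComposer accessors of this era):

//frameworks/native/services/surfaceflinger/DisplayHardware/HWComposer.cpp (sketch)
bool HWComposer::hasGlesComposition(int32_t id) const {
    if (!mHwc || uint32_t(id) > 31 || !mAllocatedDisplayIDs.hasBit(id))
        return true;                      // no usable HWC: everything is GLES-composited
    return mDisplayData[id].hasFbComp;
}

bool HWComposer::hasHwcComposition(int32_t id) const {
    if (!mHwc || uint32_t(id) > 31 || !mAllocatedDisplayIDs.hasBit(id))
        return false;
    return mDisplayData[id].hasOvComp;    // at least one HWC_OVERLAY layer
}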

doComposition
void SurfaceFlinger::doComposition() {
    ATRACE_CALL();
    const bool repaintEverything = android_atomic_and(0, &mRepaintEverything);
    //again, process every display
    for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
        const sp<DisplayDevice>& hw(mDisplays[dpy]);
        if (hw->canDraw()) {
            // transform the dirty region into this screen's coordinate space
            const Region dirtyRegion(hw->getDirtyRegion(repaintEverything));

            // repaint the framebuffer (if needed)
            //handle the layers that need GLES composition (there may be none)
            doDisplayComposition(hw, dirtyRegion);

            hw->dirtyRegion.clear();
            hw->flip(hw->swapRegion);
            hw->swapRegion.clear();
        }
        // inform the h/w that we're done compositing
        hw->compositionComplete();
    }
    //send both the overlay layers and the GLES-composited output to the HWC module for final composition
    postFramebuffer();
}

doComposition handles the layers that need GLES composition, and finally postFramebuffer hands everything to the HWC module for the final composition and display. The GLES composition is done through doDisplayComposition, so let's look at that first:

void SurfaceFlinger::doDisplayComposition(const sp<const DisplayDevice>& hw,
        const Region& inDirtyRegion)
{
    Region dirtyRegion(inDirtyRegion);

    // compute the invalid region
    hw->swapRegion.orSelf(dirtyRegion);

    uint32_t flags = hw->getFlags();
    if (flags & DisplayDevice::SWAP_RECTANGLE) {
        // we can redraw only what's dirty, but since SWAP_RECTANGLE only
        // takes a rectangle, we must make sure to update that whole
        // rectangle in that case
        dirtyRegion.set(hw->swapRegion.bounds());
    } else {
        if (flags & DisplayDevice::PARTIAL_UPDATES) {
            // We need to redraw the rectangle that will be updated
            // (pushed to the framebuffer).
            // This is needed because PARTIAL_UPDATES only takes one
            // rectangle instead of a region (see DisplayDevice::flip())
            dirtyRegion.set(hw->swapRegion.bounds());
        } else {
            // we need to redraw everything (the whole screen)
            dirtyRegion.set(hw->bounds());
            hw->swapRegion = dirtyRegion;
        }
    }

    if (CC_LIKELY(!mDaltonize)) {
        //key point 1: composite the layers
        doComposeSurfaces(hw, dirtyRegion);
    } else {
        RenderEngine& engine(getRenderEngine());
        engine.beginGroup(mDaltonizer());
        doComposeSurfaces(hw, dirtyRegion);
        engine.endGroup();
    }

    // update the swap region and clear the dirty region
    hw->swapRegion.orSelf(dirtyRegion);

    // swap buffers (presentation)
    //key point 2: push the composited output into the EGL native window; this queues the buffer into the window's BufferQueue and notifies its consumer, FramebufferSurface
    hw->swapBuffers(getHwComposer());
}

doDisplayComposition in turn composites the display's layers through doComposeSurfaces. We created the work list earlier and know that the composited result ends up in the hwc_layer_1_t pointed to by framebufferTarget, but how exactly does that happen? Let's look at the implementation of doComposeSurfaces.

void SurfaceFlinger::doComposeSurfaces(const sp<const DisplayDevice>& hw, const Region& dirty)
{
    RenderEngine& engine(getRenderEngine());
    const int32_t id = hw->getHwcDisplayId();
    HWComposer& hwc(getHwComposer());
    
    HWComposer::LayerListIterator cur = hwc.begin(id);
    const HWComposer::LayerListIterator end = hwc.end(id);
    //returns true if any layer of this display needs GLES composition
    bool hasGlesComposition = hwc.hasGlesComposition(id);
    if (hasGlesComposition) {//GLES composition is needed
        //bind the EGL display and context; the display's native window is the render target, so the composited output is rendered into that window
        if (!hw->makeCurrent(mEGLDisplay, mEGLContext)) {
            ALOGW("DisplayDevice::makeCurrent failed. Aborting surface composition for display %s",
                  hw->getDisplayName().string());
            return;
        }
        ...
    }

    /*
     * and then, render the layers targeted at the framebuffer
     */
    //the display's visible-layer list
    const Vector< sp<Layer> >& layers(hw->getVisibleLayersSortedByZ());
    const size_t count = layers.size();
    const Transform& tr = hw->getTransform();
    if (cur != end) {
        // we're using h/w composer
        //iterate over the layers
        for (size_t i=0 ; i<count && cur!=end ; ++i, ++cur) {
            const sp<Layer>& layer(layers[i]);
            //the layer's clip region
            const Region clip(dirty.intersect(tr.transform(layer->visibleRegion)));
            if (!clip.isEmpty()) {
                switch (cur->getCompositionType()) {
                    case HWC_OVERLAY: {//this layer is composited by the HWC; nothing to do here, the hardware handles it
                        ...
                        break;
                    }
                    //this layer is composited in software: draw renders it with GLES.
                    //Note that all GLES-composited layers are drawn into the same buffer, which is later handed to the display's framebufferTarget via swapBuffers
                    case HWC_FRAMEBUFFER: {
                        layer->draw(hw, clip);
                        break;
                    }
                    ...
                }
            }
            layer->setAcquireFence(hw, *cur);
        }
    } else { ... }
}

In doComposeSurfaces we first configure the EGL display and context with DisplayDevice's makeCurrent; the composited output ends up rendered into the DisplayDevice's native window, which we analyze later. We then take the display's visible-layer list and iterate over it; here we only care about the layers composited by GLES, which are rendered via their draw method.

//GLES (software) composition of a layer
void Layer::draw(const sp<const DisplayDevice>& hw, const Region& clip) const {
    onDraw(hw, clip);
}

void Layer::draw(const sp<const DisplayDevice>& hw) {
    onDraw( hw, Region(hw->bounds()) );
}

void Layer::onDraw(const sp<const DisplayDevice>& hw, const Region& clip) const
{
    ATRACE_CALL();
    ……
    // Bind the current buffer to the GL texture, and wait for it to be
    // ready for us to draw into.
    //bind the current buffer to the GL texture and wait until it is ready
    status_t err = mSurfaceFlingerConsumer->bindTextureImage();
    if (err != NO_ERROR) {
        ALOGW("onDraw: bindTextureImage failed (err=%d)", err);
        // Go ahead and draw the buffer anyway; no matter what we do the screen
        // is probably going to have something visibly wrong.
    }

    bool blackOutLayer = isProtected() || (isSecure() && !hw->isSecure());

    RenderEngine& engine(mFlinger->getRenderEngine());

    if (!blackOutLayer) {
        // TODO: we could be more subtle with isFixedSize()
        const bool useFiltering = getFiltering() || needsFiltering(hw) || isFixedSize();
        // Query the texture matrix given our current filtering mode.
        float textureMatrix[16];
        mSurfaceFlingerConsumer->setFilteringEnabled(useFiltering);
        mSurfaceFlingerConsumer->getTransformMatrix(textureMatrix);
        ……
        // Set things up for texturing.
        mTexture.setDimensions(mActiveBuffer->getWidth(), mActiveBuffer->getHeight());
        mTexture.setFiltering(useFiltering);
        mTexture.setMatrix(textureMatrix);
        //hand the layer texture to the render engine
        engine.setupLayerTexturing(mTexture);
    } else {
        engine.setupLayerBlackedOut();
    }
    //render the texture with OpenGL; this computes the layer's draw region, texture coordinates, etc.
    drawWithOpenGL(hw, clip);
    engine.disableTexturing();
}

Before a layer is composited via draw, EGL has already been configured by DisplayDevice's makeCurrent. In onDraw, bindTextureImage binds the layer's current buffer to a GL texture; this is implemented by GLConsumer::bindTextureImageLocked, because mSurfaceFlingerConsumer is a SurfaceFlingerConsumer, which inherits from GLConsumer.

status_t GLConsumer::bindTextureImageLocked() {
    ...
    glBindTexture(mTexTarget, mTexName);//bind the texture used for rendering
    if (mCurrentTexture == BufferQueue::INVALID_BUFFER_SLOT) {
        ...
    } else {
        EGLImageKHR image = mEglSlots[mCurrentTexture].mEglImage;

        glEGLImageTargetTexture2DOES(mTexTarget, (GLeglImageOES)image);

        while ((error = glGetError()) != GL_NO_ERROR) {
            ST_LOGE("bindTextureImage: error binding external texture image %p"
                    ": %#04x", image, error);
            return UNKNOWN_ERROR;
        }
    }

    // Wait for the new buffer to be ready.
    return doGLFenceWaitLocked();
}

bindTextureImageLocked binds the rendering texture with glBindTexture; mTexName is the texture ID, which was generated when the Layer was constructed. The texture mTexture rendered to is also initialized in the Layer's constructor: the texture ID mTextureName is created through SF's RenderEngine and then passed on to the layer's consumer, the GLConsumer.

Layer(...){
    ...
    mFlinger->getRenderEngine().genTextures(1, &mTextureName);
    mTexture.init(Texture::TEXTURE_EXTERNAL, mTextureName);
    ...
}
void Layer::onFirstRef() {
    // Creates a custom BufferQueue for SurfaceFlingerConsumer to use
    mBufferQueue = new SurfaceTextureLayer(mFlinger);//create a BufferQueue
    //the BufferQueue's consumer
    mSurfaceFlingerConsumer = new SurfaceFlingerConsumer(mBufferQueue, mTextureName);
    ...
}
class SurfaceFlingerConsumer : public GLConsumer {
public:
    SurfaceFlingerConsumer(const sp<BufferQueue>& bq, uint32_t tex)
        : GLConsumer(bq, tex, GLConsumer::TEXTURE_EXTERNAL, false)
    {}
    ...
}

The SurfaceFlingerConsumer constructor passes the texture ID to GLConsumer, which stores it in mTexName, so the texture the layer renders in onDraw and the one used by the consumer SurfaceFlingerConsumer are one and the same.

We saw that before rendering, the Layer configures EGL through DisplayDevice's makeCurrent; now let's see how DisplayDevice creates the native window for EGL.

DisplayDevice::DisplayDevice(
        const sp<SurfaceFlinger>& flinger,
        DisplayType type,
        int32_t hwcId,
        bool isSecure,
        const wp<IBinder>& displayToken,
        const sp<DisplaySurface>& displaySurface,
        const sp<IGraphicBufferProducer>& producer,
        EGLConfig config)
    : mFlinger(flinger),
      mType(type), mHwcDisplayId(hwcId),
      mDisplayToken(displayToken),
      mDisplaySurface(displaySurface),//this is the FramebufferSurface
      mDisplay(EGL_NO_DISPLAY),
      mSurface(EGL_NO_SURFACE),
      mDisplayWidth(), mDisplayHeight(), mFormat(),
      mFlags(),
      mPageFlipCount(),
      mIsSecure(isSecure),
      mSecureLayerVisible(false),
      mScreenAcquired(false),
      mLayerStack(NO_LAYER_STACK),
      mOrientation()
{
	/**
	* Create a Surface native window on top of the BufferQueue. The Surface is the producer side and the
	* FrameBufferSurface is the consumer side; they share the same BufferQueue. The Surface is also the
	* native window EGL uses to create its window surface. Once the layers have been composited with EGL,
	* eglSwapBuffers queues the rendered buffer through the ANativeWindow's queueBuffer, which triggers
	* FrameBufferSurface's onFrameAvailable callback on the consumer side; the consumer acquires the
	* GraphicBuffer and, through HWC's fbPost, places it into the display's DisplayData framebufferTarget,
	* which is then submitted to the display via HWC's commit.
	*/
    mNativeWindow = new Surface(producer, false);
    ANativeWindow* const window = mNativeWindow.get();

    int format;
    window->query(window, NATIVE_WINDOW_FORMAT, &format);

    // Make sure that composition can never be stalled by a virtual display
    // consumer that isn't processing buffers fast enough. We have to do this
    // in two places:
    // * Here, in case the display is composed entirely by HWC.
    // * In makeCurrent(), using eglSwapInterval. Some EGL drivers set the
    //   window's swap interval in eglMakeCurrent, so they'll override the
    //   interval we set here.
    if (mType >= DisplayDevice::DISPLAY_VIRTUAL)
        window->setSwapInterval(window, 0);

    /*
     * Create our display's surface
     */

    EGLSurface surface;
    EGLint w, h;
    EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    //create the EGLSurface from the Surface's ANativeWindow; the EGL-composited output ends up in the Surface's BufferQueue
    surface = eglCreateWindowSurface(display, config, window, NULL);
    eglQuerySurface(display, surface, EGL_WIDTH,  &mDisplayWidth);
    eglQuerySurface(display, surface, EGL_HEIGHT, &mDisplayHeight);

    mDisplay = display;//keep the EGLDisplay we created
    mSurface = surface;//keep the EGLSurface we created
    mFormat  = format;
    mPageFlipCount = 0;
    ...
}

When SF initializes a display it creates a DisplayDevice for it, and the DisplayDevice creates the EGL native window (EGLSurface mSurface). The BufferQueue used by that window is the same one used by the FrameBufferSurface passed in as mDisplaySurface, so we again have a producer-consumer pair: the native window Surface produces buffers and the FrameBufferSurface consumes them. Concretely, the texture data rendered for the layers ends up in the EGL native window, and since the window and FrameBufferSurface share a BufferQueue, FrameBufferSurface as the consumer eventually receives the buffer produced by the DisplayDevice's native window. This is triggered by DisplayDevice's swapBuffers, which queues the window's buffer via BufferQueue's queueBuffer and thereby fires FrameBufferSurface's onFrameAvailable callback.
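
swapBuffers itself boils down to an eglSwapBuffers on the display's EGLSurface whenever GLES composition happened this frame; a simplified sketch (the real code also checks framebuffer-target support and virtual displays, omitted here):

//frameworks/native/services/surfaceflinger/DisplayDevice.cpp (simplified sketch)
void DisplayDevice::swapBuffers(HWComposer& hwc) const {
    // swap only when there is no usable HWC, or when GLES composited something this frame
    if (hwc.initCheck() != NO_ERROR || hwc.hasGlesComposition(mHwcDisplayId)) {
        // eglSwapBuffers() queues the rendered buffer into the native window's BufferQueue,
        // which is what triggers FramebufferSurface::onFrameAvailable on the consumer side
        eglSwapBuffers(mDisplay, mSurface);
    }
}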

//frameworks/native/services/surfaceflinger/DisplayHardware/FramebufferSurface.cpp
//the EGL-composited buffer is eventually delivered to FramebufferSurface through this callback
void FramebufferSurface::onFrameAvailable() {
    sp<GraphicBuffer> buf;
    sp<Fence> acquireFence;
    status_t err = nextBuffer(buf, acquireFence);//acquire the composited buffer
    if (err != NO_ERROR) {
        ALOGE("error latching nnext FramebufferSurface buffer: %s (%d)",
                strerror(-err), err);
        return;
    }
    //hand the buffer to the display via the HWC, more precisely into DisplayData's framebufferTarget
    err = mHwc.fbPost(mDisplayType, acquireFence, buf);
    if (err != NO_ERROR) {
        ALOGE("error posting framebuffer: %d", err);
    }
}

The EGL-composited buffer is consumed in FramebufferSurface's onFrameAvailable callback: nextBuffer acquires the composited GraphicBuffer, and HWComposer's fbPost stores it in the display's framebufferTarget.

status_t FramebufferSurface::nextBuffer(sp<GraphicBuffer>& outBuffer, sp<Fence>& outFence) {
    Mutex::Autolock lock(mMutex);

    BufferQueue::BufferItem item;
    status_t err = acquireBufferLocked(&item, 0);//acquire a BufferItem
    ...
    //use the acquired BufferItem to find the corresponding GraphicBuffer
    mCurrentBufferSlot = item.mBuf;
    //take the buffer from that slot
    mCurrentBuffer = mSlots[mCurrentBufferSlot].mGraphicBuffer;
    outFence = item.mFence;
    outBuffer = mCurrentBuffer;
    return NO_ERROR;
}

nextBuffer acquires a BufferItem via acquireBufferLocked, reads the slot index into mCurrentBufferSlot, and then fetches the corresponding GraphicBuffer from that slot.

//store the composited buffer in DisplayData's framebufferTarget member
int HWComposer::fbPost(int32_t id,
        const sp<Fence>& acquireFence, const sp<GraphicBuffer>& buffer) {
    //the framebuffer target is only supported from HWC API version 1.1 on
    if (mHwc && hwcHasApiVersion(mHwc, HWC_DEVICE_API_VERSION_1_1)) {
        return setFramebufferTarget(id, acquireFence, buffer);
    } else {
        acquireFence->waitForever("HWComposer::fbPost");
        return mFbDev->post(mFbDev, buffer->handle);
    }
}

//set the EGL-composited buffer into the display's DisplayData framebufferTarget
status_t HWComposer::setFramebufferTarget(int32_t id,
        const sp<Fence>& acquireFence, const sp<GraphicBuffer>& buf) {
    if (uint32_t(id)>31 || !mAllocatedDisplayIDs.hasBit(id)) {
        return BAD_INDEX;
    }
    //the DisplayData of the target display
    DisplayData& disp(mDisplayData[id]);
    if (!disp.framebufferTarget) {
        // this should never happen, but apparently eglCreateWindowSurface()
        // triggers a Surface::queueBuffer()  on some
        // devices (!?) -- log and ignore.
        ALOGE("HWComposer: framebufferTarget is null");
        return NO_ERROR;
    }

    int acquireFenceFd = -1;
    if (acquireFence->isValid()) {
        acquireFenceFd = acquireFence->dup();
    }

    // ALOGD("fbPost: handle=%p, fence=%d", buf->handle, acquireFenceFd);
    //set the target handle to the buffer's handle
    disp.fbTargetHandle = buf->handle;
    disp.framebufferTarget->handle = disp.fbTargetHandle;
    //set the acquire fence
    disp.framebufferTarget->acquireFenceFd = acquireFenceFd;
    return NO_ERROR;
}

fbPost stores the acquired GraphicBuffer into the corresponding display's framebufferTarget via setFramebufferTarget, which completes the GLES composition of the display's layers.

postFramebuffer

The last step of doComposition is to submit the GLES-composited result together with the layers to be hardware-composited to the HWC module for the final display; this is done by postFramebuffer.

//ask the HWC module to perform the final composition
void SurfaceFlinger::postFramebuffer()
{
    ...
    HWComposer& hwc(getHwComposer());
    if (hwc.initCheck() == NO_ERROR) {
        if (!hwc.supportsFramebufferTarget()) {
            // EGL spec says:
            //   "surface must be bound to the calling thread's current context,
            //    for the current rendering API."
            getDefaultDisplayDevice()->makeCurrent(mEGLDisplay, mEGLContext);
        }
        //submit to the HWC for composition and display
        hwc.commit();
    }
    ……
}
//submit the layer data to the HWC module
status_t HWComposer::commit() {
    int err = NO_ERROR;
    if (mHwc) {
        //hand mLists to the HWC; mLists is an array of hwc_display_contents_1 pointers holding the layer data each display will finally show
        err = mHwc->set(mHwc, mNumDisplays, mLists);
        ...
    }
    return (status_t)err;
}

Finally, HWComposer's commit passes the number of displays and the hwc_display_contents_1 pointer array to the hardware composer module. Each hwc_display_contents_1 contains the GLES-composited result, stored in the last hwc_layer_1_t of hwLayers, and possibly layers to be composited in hardware; either way, they are all submitted together and the hardware performs the final composition and display.
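
As a reference, the two HAL entry points used above, mHwc->prepare and mHwc->set, are declared roughly as follows in hwcomposer.h (excerpt; other fields elided):

//hardware/libhardware/include/hardware/hwcomposer.h (excerpt, sketch)
typedef struct hwc_composer_device_1 {
    struct hw_device_t common;

    // prepare: the HWC inspects each layer and chooses HWC_OVERLAY or HWC_FRAMEBUFFER for it
    int (*prepare)(struct hwc_composer_device_1 *dev,
                   size_t numDisplays, hwc_display_contents_1_t** displays);

    // set: the HWC takes the prepared lists (overlay layers plus the GLES framebuffer target) and puts them on screen
    int (*set)(struct hwc_composer_device_1 *dev,
               size_t numDisplays, hwc_display_contents_1_t** displays);

    // ... other callbacks elided ...
} hwc_composer_device_1_t;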