Android Hardware Acceleration Flow and Source Code Analysis (Part 4)
4. Sync and Draw: syncAndDrawFrame
4.1 The Sync-and-Draw Flow
After the DisplayList is updated on the main thread, syncing and drawing happen on the RenderThread while the main thread blocks and waits. The overall flow is as follows; the actual "sync and draw" work ends up calling CanvasContext's prepareTree() and draw() methods.
ThreadedRenderer calls from the Java layer through JNI into android_view_ThreadedRenderer.cpp, which bridges the Java layer and the native layer.
android_view_ThreadedRenderer_syncAndDrawFrame
746 static int android_view_ThreadedRenderer_syncAndDrawFrame(JNIEnv* env, jobject clazz,
747 jlong proxyPtr, jlongArray frameInfo, jint frameInfoSize) {
...
751 RenderProxy* proxy = reinterpret_cast<RenderProxy*>(proxyPtr);
752 env->GetLongArrayRegion(frameInfo, 0, frameInfoSize, proxy->frameInfo());
753 return proxy->syncAndDrawFrame();
754}
DrawFrameTask's job is to submit a drawing task to the RenderThread and block the UI thread; once the drawing task completes, the UI thread is unblocked.
DrawFrameTask::postAndWait()
78 void DrawFrameTask::postAndWait() {
79 AutoMutex _lock(mLock);
//Switch to the RenderThread
80 mRenderThread->queue().post([this]() { run(); });
//After posting the task to the RenderThread, the main thread blocks and waits
81 mSignal.wait(mLock);
82}
void DrawFrameTask::run() {
86
87 bool canUnblockUiThread;
88 bool canDrawThisFrame;
89 {
//TreeInfo records the result of syncing the window's RenderNode tree; TreeInfo::MODE_FULL means a sync initiated by DrawFrameTask, with the main thread waiting
90 TreeInfo info(TreeInfo::MODE_FULL, *mContext);
// 1. Sync the application window's drawing info: syncFrameState
// canUnblockUiThread indicates whether the UI thread can be unblocked; if true, the UI thread is woken early. It is determined by whether every bitmap in the current window's RenderNode tree has already been uploaded to the GPU as a texture
91 canUnblockUiThread = syncFrameState(info);
// canDrawThisFrame: whether this frame can be drawn; it is tied to vsync,
// and is assigned on TreeInfo during syncFrameState(info)
92 canDrawThisFrame = info.out.canDrawThisFrame;
93
94 if (mFrameCompleteCallback) {
95 mContext->addFrameCompleteListener(std::move(mFrameCompleteCallback));
96 mFrameCompleteCallback = nullptr;
97 }
98 }
99
100 // Grab a copy of everything we need
101 CanvasContext* context = mContext;
102 std::function<void(int64_t)> callback = std::move(mFrameCallback);
103 mFrameCallback = nullptr;
104
105 // From this point on anything in "this" is *UNSAFE TO ACCESS*
106 if (canUnblockUiThread) {
107 unblockUiThread(); //calls mSignal.signal();
108 }
109
110 // Even if we aren't drawing this vsync pulse the next frame number will still be accurate
111 if (CC_UNLIKELY(callback)) {
112 context->enqueueFrameWork([callback, frameNr = context->getFrameNumber()]() {
113 callback(frameNr);
114 });
115 }
116 // 2. Draw
117 if (CC_LIKELY(canDrawThisFrame)) {
118 context->draw();
119 } else {
120 // wait on fences so tasks don't overlap next frame
//Skip drawing to avoid overlapping with the next frame's time window
121 context->waitOnFences();
122 }
123
// If the UI thread was kept blocked, unblock it once the hardware-accelerated draw completes
124 if (!canUnblockUiThread) {
125 unblockUiThread();
126 }
127}
The two most important steps in DrawFrameTask::run() are syncing the frame state, syncFrameState(info), and drawing, context->draw(). canUnblockUiThread is determined by TreeInfo.prepareTextures: if the texture caches for the bitmaps that must be uploaded to the GPU are still valid, the main thread can be unblocked early; otherwise it must wait until the bitmaps have been uploaded to the GPU, so that the UI thread and the render thread never see inconsistent bitmaps.
info.out.canDrawThisFrame is recorded in TreeInfo and is assigned while syncFrameState(info) syncs level by level downward from the top RenderNode.
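The wait/signal handshake between postAndWait() and unblockUiThread() is a standard mutex-plus-condition-variable pattern. Below is a minimal sketch of the same idea in portable C++; the class and member names (FrameTask, mDone) are invented for illustration, not HWUI's real ones, and a worker thread is spawned per call for simplicity where HWUI reuses one long-lived RenderThread.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <thread>

// Sketch of DrawFrameTask's post-and-wait handshake: the UI thread
// hands work to a render thread and blocks on a condition variable
// until the worker signals completion.
class FrameTask {
public:
    // Called on the UI thread: post `work`, then block until signaled
    // (mirrors postAndWait() + mSignal.wait(mLock)).
    void postAndWait(std::function<void()> work) {
        std::thread renderThread([this, work] {
            work();    // "sync and draw" happens here
            signal();  // mirrors unblockUiThread()
        });
        wait();
        renderThread.join();
    }

private:
    void wait() {
        std::unique_lock<std::mutex> lock(mLock);
        mCv.wait(lock, [this] { return mDone; });
    }
    void signal() {
        std::lock_guard<std::mutex> lock(mLock);
        mDone = true;
        mCv.notify_one();
    }
    std::mutex mLock;
    std::condition_variable mCv;
    bool mDone = false;
};
```

The predicate on the wait guards against the render thread signaling before the UI thread starts waiting, the same reason the real code takes mLock before posting.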
4.2 Syncing the Frame State: syncFrameState
Why does the FrameState need to be synced?
Syncing the FrameState mainly means syncing the DisplayList. So that DisplayList updates and drawing do not interfere with each other under hardware acceleration, the main thread (Main Thread) and the render thread (Render Thread) each maintain their own copy of the window's view information. On the main thread, after ThreadedRenderer#updateRootDisplayList, each View's staged drawing info is kept in the RenderNode's mStagingDisplayList and mStagingProperties, while the RenderThread renders from mDisplayList and mProperties; the distinguishing mark is the mStaging prefix. Syncing means that, after the relevant checks, RenderNode::syncProperties(..) and RenderNode::syncDisplayList(..) are called to propagate the recorded draw commands and property changes to the RenderThread for the real hardware-accelerated drawing that follows.
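The staging/active split works like a small double buffer: the UI thread only ever writes the staging copy, and the sync step publishes it to the copy the render thread reads. A simplified sketch under that assumption, with the DisplayList reduced to a string stand-in and only a few invented properties kept:

```cpp
#include <string>
#include <utility>

// Sketch of RenderNode's staging/active split: the UI thread records
// into mStagingDisplayList / mStagingProperties, and the sync step
// publishes them to mDisplayList / mProperties for the render thread.
struct Properties { int left = 0, top = 0; float alpha = 1.0f; };
using DisplayList = std::string;  // stand-in for recorded draw ops

class RenderNode {
public:
    // UI thread: stage new state; the active copy is never touched here.
    void setStagingDisplayList(DisplayList dl) { mStagingDisplayList = std::move(dl); }
    Properties& mutateStagingProperties() { return mStagingProperties; }

    // Sync step (render thread, with the UI thread blocked): publish.
    void syncProperties() { mProperties = mStagingProperties; }
    void syncDisplayList() {
        mDisplayList = std::move(mStagingDisplayList);
        mStagingDisplayList.clear();  // mirrors mStagingDisplayList = nullptr
    }

    // Render thread reads only the active copies.
    const DisplayList& displayList() const { return mDisplayList; }
    const Properties& properties() const { return mProperties; }

private:
    DisplayList mStagingDisplayList, mDisplayList;
    Properties mStagingProperties, mProperties;
};
```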
4.2.1 Frame-State Sync Sequence
Starting from DrawFrameTask::syncFrameState(TreeInfo& info), the frame-state sync flows as follows.
All the steps of syncing the frame state live in DrawFrameTask::syncFrameState(TreeInfo& info).
DrawFrameTask::syncFrameState(TreeInfo& info)
//Sync the application window's drawing info
bool DrawFrameTask::syncFrameState(TreeInfo& info) { //info was just initialized and passed in; it records the sync result
//1. makeCurrent() binds the EGL rendering context (EGL is the interface connecting the Android Surface to drawing APIs such as OpenGL ES)
133 bool canDraw = mContext->makeCurrent();
//2. Invalidate hardware-rendered bitmaps: calls the render pipeline's mRenderPipeline->unpinImages(), which calls TextureCache::resetMarkInUse(void* ownerToken)
134 mContext->unpinImages();
135
136 for (size_t i = 0; i < mLayers.size(); i++) {
//3. Handle TextureView textures; mLayers records the TextureViews that need rendering
137 mLayers[i]->apply();
138 }
...
//Set the content draw bounds
140 mContext->setContentDrawBounds(mContentDrawBounds);
//4. Prepare the tree for drawing
141 mContext->prepareTree(info, mFrameInfo, mSyncQueued, mTargetNode);
...
...
163 // If prepareTextures is false, we ran out of texture cache space
//This checks whether the texture caches for the hardware-rendered Bitmaps in the RenderNodes are still valid; with hardware acceleration enabled, a bitmap is rendered as a texture
//If this is false, we ran out of texture cache space
164 return info.prepareTextures;
165}
The two most important steps in syncing the frame state are:
- binding the EGL context
- prepareTree

About info.prepareTextures: the following passage is quoted from Luo Shengyang's analysis of the Display List rendering process.
When the TreeInfo object's prepareTextures member is true, every Bitmap referenced by the application window's Display List has been uploaded to the GPU as an OpenGL texture, meaning all referenced Bitmaps are fully synced. In that case the Render Thread can wake the Main Thread before rendering the next frame. Conversely, if prepareTextures is false, some Bitmaps referenced by the Display List could not be uploaded to the GPU as OpenGL textures; the Render Thread must then wait until after rendering the next frame before waking the Main Thread, so that Bitmaps that failed to upload are not being rendered by the Render Thread while the Main Thread modifies them. When can a Bitmap fail to upload as an OpenGL texture? The OpenGL textures an application process may create are limited in size; exceeding this limit causes some Bitmaps to fail to upload to the GPU.
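The prepareTextures semantics can be modeled as pinning each bitmap into a fixed texture budget and AND-ing the per-bitmap results: one failed pin makes the whole frame report false. A sketch under those assumptions (the class name and the byte-budget bookkeeping are invented, not HWUI's):

```cpp
#include <cstddef>
#include <vector>

// Model of the prepareTextures semantics: pin each bitmap into a
// fixed-size texture budget; if any bitmap does not fit, the sync
// reports false and the UI thread must stay blocked longer.
struct Bitmap { size_t byteSize; };

class TexturePinner {
public:
    explicit TexturePinner(size_t maxBytes) : mMaxBytes(maxBytes) {}

    // Mirrors pinImages(): returns true only if every bitmap fits.
    bool pinImages(const std::vector<Bitmap>& images) {
        bool allPinned = true;
        for (const Bitmap& bmp : images) {
            if (mUsedBytes + bmp.byteSize <= mMaxBytes) {
                mUsedBytes += bmp.byteSize;  // pinned until unpinImages()
            } else {
                allPinned = false;           // out of texture cache space
            }
        }
        return allPinned;
    }

    // Mirrors unpinImages() at the start of the next frame's sync.
    void unpinImages() { mUsedBytes = 0; }

private:
    size_t mMaxBytes;
    size_t mUsedBytes = 0;
};
```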
DisplayList::prepareListAndChildren()
107 bool DisplayList::prepareListAndChildren(
108         TreeObserver& observer, TreeInfo& info, bool functorsNeedLayer,
109         std::function<void(RenderNode*, TreeObserver&, TreeInfo&, bool)> childFn) {
110     info.prepareTextures = info.canvasContext.pinImages(bitmapResources);
...
As the DisplayList source shows, prepareTextures is tied to bitmaps.
4.2.2 Binding the EGL Context
As briefly noted earlier, EGL is the interface between the rendering APIs and the native window system. The makeCurrent() step hands the surface that the current native window needs to render into over to EGL.
CanvasContext::makeCurrent()
bool CanvasContext::makeCurrent() {
....
248 auto result = mRenderPipeline->makeCurrent();
...
265}
The hardware-accelerated render pipeline:
OpenGLPipeline::makeCurrent() (SkiaOpenGLPipeline.cpp is similar)
38 MakeCurrentResult OpenGLPipeline::makeCurrent() {
....
42 bool haveNewSurface = mEglManager.makeCurrent(mEglSurface, &error);
....
49}
EglManager::makeCurrent(EGLSurface surface, EGLint* errOut)
bool EglManager::makeCurrent(EGLSurface surface, EGLint* errOut) {
...
374 if (!eglMakeCurrent(mEglDisplay, surface, surface, mEglContext)) {
....
383 }
384 mCurrentSurface = surface;
//eglMakeCurrent() together with mCurrentSurface = surface binds EGL to the current window surface
385 if (Properties::disableVsync) {
386 eglSwapInterval(mEglDisplay, 0);
387 }
388 return true;
389}
137 EGLAPI EGLBoolean EGLAPIENTRY eglMakeCurrent (EGLDisplay dpy, EGLSurface draw, EGLSurface read, EGLContext ctx);
eglMakeCurrent() ultimately binds the surface of the window to be rendered to EGL; in other words, it switches the current context.
4.2.2.1 Creating the EGLSurface
source.android.google.cn/devices/gra…
An EGLSurface can be an off-screen buffer allocated by EGL (called a "pbuffer") or a window allocated by the operating system. Calling the
eglCreateWindowSurface() function creates an EGL window surface. eglCreateWindowSurface() takes a "window object" as an argument, which on Android is a Surface. A Surface is the producer side of a BufferQueue; the consumer (a SurfaceView, SurfaceTexture, TextureView, or ImageReader) creates the Surface. When you call eglCreateWindowSurface(), EGL creates a new EGLSurface object and connects it to the producer interface of the window object's BufferQueue. From that point on, rendering to that EGLSurface causes a buffer to be dequeued, rendered into, and then queued for the consumer to use.
The EGLSurface's role is to give the GLES graphics APIs a place to draw.
When ThreadedRenderer.java is initialized, it calls down step by step to CanvasContext::setSurface.
460 boolean initialize(Surface surface) throws OutOfResourcesException {
...
464 nInitialize(mNativeProxy, surface);
...
466 }
private static native void nInitialize(long nativeProxy, Surface window);
689 static void android_view_ThreadedRenderer_initialize(JNIEnv* env, jobject clazz,
690 jlong proxyPtr, jobject jsurface) {
691 RenderProxy* proxy = reinterpret_cast<RenderProxy*>(proxyPtr);
692 sp<Surface> surface = android_view_Surface_getSurface(env, jsurface);
693 proxy->initialize(surface);
694}
CanvasContext::setSurface(sp<Surface>&& surface)
186 void CanvasContext::setSurface(sp<Surface>&& surface) {
...
189 mNativeSurface = std::move(surface);
190
191 ColorMode colorMode = mWideColorGamut ? ColorMode::WideColorGamut : ColorMode::Srgb;
192 bool hasSurface = mRenderPipeline->setSurface(mNativeSurface.get(), mSwapBehavior, colorMode);
...
203}
SkiaOpenGLPipeline::setSurface
204 bool SkiaOpenGLPipeline::setSurface(Surface* surface, SwapBehavior swapBehavior,
205 ColorMode colorMode) {
...
//Create the surface via EGL
213 mEglSurface = mEglManager.createSurface(surface, wideColorGamut);
...
223}
EglManager::createSurface(EGLNativeWindowType window, bool wideColorGamut)
278 EGLSurface EglManager::createSurface(EGLNativeWindowType window, bool wideColorGamut) {
279 ...
initialize();
281 ....
//EGLNativeWindowType/ANativeWindow/Surface are the same thing
//the EGLSurface is created from the Android-layer Surface passed in
//the EGLSurface is the rendering destination
325 EGLSurface surface = eglCreateWindowSurface(
326 mEglDisplay, wideColorGamut ? mEglConfigWideGamut : mEglConfig, window, attribs);
....
338 return surface;
339}
126 EGLAPI EGLSurface EGLAPIENTRY eglCreateWindowSurface (EGLDisplay dpy, EGLConfig config, EGLNativeWindowType win, const EGLint *attrib_list);
1747 EGLBoolean eglMakeCurrent( EGLDisplay dpy, EGLSurface draw,
1748 EGLSurface read, EGLContext ctx)
1749{...
EGLNativeWindowType/ANativeWindow/Surface are the same thing; EGLNativeWindowType is simply the Android native-layer Surface.
30 ANativeWindow* ANativeWindow_fromSurface(JNIEnv* env, jobject surface) {
31 sp<ANativeWindow> win = android_view_Surface_getNativeWindow(env, surface);
32 if (win != NULL) {
33 win->incStrong((void*)ANativeWindow_fromSurface);
34 }
35 return win.get();
36}
37
38jobject ANativeWindow_toSurface(JNIEnv* env, ANativeWindow* window) {
39 if (window == NULL) {
40 return NULL;
41 }
42 sp<Surface> surface = static_cast<Surface*>(window);
43 return android_view_Surface_createFromSurface(env, surface);
44}
45
The public Surface class is implemented in the Java programming language. The C/C++ equivalent is the ANativeWindow class, semi-exposed by the Android NDK. You can obtain an ANativeWindow from a Surface with the
ANativeWindow_fromSurface() call. Just like its Java-language counterpart, an ANativeWindow can be locked, rendered into in software, and unlocked-and-posted. The basic "native window" type is the producer side of a BufferQueue. To create an EGL window surface from native code, pass an instance of EGLNativeWindowType to
eglCreateWindowSurface(). EGLNativeWindowType is a synonym for ANativeWindow, so the two are interchangeable.
4.2.2.2 Initializing EglManager
34 class EglManager {
35 public:
36 .....
63 private:
64 ...
//The render thread
75 RenderThread& mRenderThread;
76 //An abstraction of the physical display device
77 EGLDisplay mEglDisplay;
78 EGLConfig mEglConfig;
79 EGLConfig mEglConfigWideGamut;
80 EGLContext mEglContext;
81 EGLSurface mPBufferSurface;
82
83 EGLSurface mCurrentSurface;
84
85 enum class SwapBehavior {
86 Discard,
87 Preserved,
88 BufferAge,
89 };
90 SwapBehavior mSwapBehavior = SwapBehavior::Discard;
91};
//Called at initialization time, before createSurface(..)
97 void EglManager::initialize() {
98
//1. Get the Display: the physical display device
102 mEglDisplay = eglGetDisplay(EGL_DEFAULT_DISPLAY);//EGLDisplay
106 EGLint major, minor;
//2. eglInitialize: initialize EGL
107 LOG_ALWAYS_FATAL_IF(eglInitialize(mEglDisplay, &major, &minor) == EGL_FALSE,
108
112 initExtensions();
113
114 // Now that extensions are loaded, pick a swap behavior
115 if (Properties::enablePartialUpdates) {
116 // An Adreno driver bug is causing rendering problems for SkiaGL with
117 // buffer age swap behavior (b/31957043). To temporarily workaround,
118 // we will use preserved swap behavior.
119 if (Properties::useBufferAge && EglExtensions.bufferAge) {
120 mSwapBehavior = SwapBehavior::BufferAge;
121 } else {
122 mSwapBehavior = SwapBehavior::Preserved;
123 }
124 }
125
126 loadConfigs();
127 createContext();
//PixmapSurface and PBufferSurface are both non-displayable surfaces: a PixmapSurface is a bitmap kept in system memory, while a PBuffer is a frame kept in graphics memory
128 createPBufferSurface();
129 makeCurrent(mPBufferSurface);
//Initialize device info
130 DeviceInfo::initialize();
...
149}
150
4.2.3 prepareTree
CanvasContext::prepareTree(TreeInfo& info, int64_t* uiFrameInfo, int64_t syncQueued, RenderNode* target) traverses every RenderNode downward from the root RenderNode, handling bitmaps, TextureViews, and more along the way. Under hardware acceleration, ordinary drawXXX() records and drawBitmap() records are synced differently: a drawBitmap()'s Bitmap is uploaded to the GPU as an OpenGL texture for the RenderThread to use. prepareTree walks the whole view tree, syncing properties and DisplayLists.
4.2.3.1 TreeInfo
TreeInfo records information and results from the sync process: the TextureView layers to update, accumulated dirty-region computation, the result of uploading bitmaps to the GPU, whether any animation needs another frame, and so on.
62 enum TraversalMode {
65 //Both the UI thread and the render thread block and wait
66 MODE_FULL,
67 //Does not block the UI thread
70 MODE_RT_ONLY,
71 };
72
//Traversal mode
76 TraversalMode mode;
77 //Records the result of uploading bitmaps to the GPU as textures
79 bool prepareTextures;
//The canvas context
80 renderthread::CanvasContext& canvasContext;
90 //The set of RenderNodes that must be rendered as separate frame buffer objects (FBOs): TextureViews, plus Views that set LAYER_TYPE_HARDWARE for animation and called buildLayer
91 LayerUpdateQueue* layerUpdateQueue = nullptr;
// Accumulator for the screen regions that need refreshing
89 DamageAccumulator* damageAccumulator = nullptr;
//The results after syncing the root RenderNode
96 struct Out {
//Whether there are functors with complex draw work
97 bool hasFunctors = false;
//Whether there are animations
99 bool hasAnimations = false;
//Whether a UI redraw is required when animating
103 bool requiresUiRedraw = false;
//true: drawing may proceed once the sync completes; false: wait for the next screen refresh
110 bool canDrawThisFrame = true;
117 } out;
layerUpdateQueue is the set of RenderNodes of Views that must be rendered as separate frame buffer objects (FBOs): the update queue for TextureViews and for Views that enabled hardware layers for animation.
A View calls buildLayer(), for example, via ViewPropertyAnimator:
public ViewPropertyAnimator withLayer() {
mPendingSetupAction= new Runnable() {
@Override
public void run() {
mView.setLayerType(View.LAYER_TYPE_HARDWARE, null);
if (mView.isAttachedToWindow()) {
mView.buildLayer();
}
}
};
final int currentLayerType = mView.getLayerType();
mPendingCleanupAction = new Runnable() {
@Override
public void run() {
mView.setLayerType(currentLayerType, null);
}
};
if (mAnimatorSetupMap == null) {
mAnimatorSetupMap = new HashMap<Animator, Runnable>();
}
if (mAnimatorCleanupMap == null) {
mAnimatorCleanupMap = new HashMap<Animator, Runnable>();
}
return this;
}
The View first sets its layer type to a hardware layer with mView.setLayerType(View.LAYER_TYPE_HARDWARE, null), then calls buildLayer(); mView.buildLayer() calls down to the native-layer CanvasContext::buildLayer(RenderNode* node).
584 void CanvasContext::buildLayer(RenderNode* node) {
585 ATRACE_CALL();
586 if (!mRenderPipeline->isContextReady()) return;
587
588 // buildLayer() will leave the tree in an unknown state, so we must stop drawing
589 stopDrawing();
590
591 TreeInfo info(TreeInfo::MODE_FULL, *this);
592 info.damageAccumulator = &mDamageAccumulator;
593 info.layerUpdateQueue = &mLayerUpdateQueue;
594 info.runAnimations = false;
595 node->prepareTree(info);
596 SkRect ignore;
597 mDamageAccumulator.finish(&ignore);
598 // Tickle the GENERIC property on node to mark it as dirty for damaging
599 // purposes when the frame is actually drawn
600 node->setPropertyFieldsDirty(RenderNode::GENERIC);
601
602 mRenderPipeline->renderLayers(mLightGeometry, &mLayerUpdateQueue, mOpaque, mWideColorGamut,
603 mLightInfo);
604
605 node->incStrong(nullptr);
606 mPrefetchedLayers.insert(node);
607}
As the code above shows, the Java-layer buildLayer ultimately calls RenderNode::pushLayerUpdate(TreeInfo& info),
which adds the RenderNode of any View that must be rendered as a separate layer to TreeInfo's LayerUpdateQueue; see LayerUpdateQueue::enqueueLayerWithDamage(..).
47 DamageAccumulator::DamageAccumulator() {
48 mHead = mAllocator.create_trivial<DirtyStack>();
49 memset(mHead, 0, sizeof(DirtyStack));
50 // Create a root that we will not pop off
51 mHead->prev = mHead;
52 mHead->type = TransformNone;
53}
54
91 void DamageAccumulator::pushTransform(const RenderNode* transform) {
92 pushCommon();
93 mHead->type = TransformRenderNode;
94 mHead->renderNode = transform;
95}
96
97 void DamageAccumulator::pushTransform(const Matrix4* transform) {
98 pushCommon();
99 mHead->type = TransformMatrix4;
100 mHead->matrix4 = transform;
101}
....
235 void DamageAccumulator::finish(SkRect* totalDirty) {
236 LOG_ALWAYS_FATAL_IF(mHead->prev != mHead, "Cannot finish, mismatched push/pop calls! %p vs. %p",
237 mHead->prev, mHead);
238 // Root node never has a transform, so this is the fully mapped dirty rect
239 *totalDirty = mHead->pendingDirty;
240 totalDirty->roundOut(totalDirty);
241 mHead->pendingDirty.setEmpty();
242}
As the tree is traversed top-down from the root RenderNode, the synced data and results are recorded along the way. The DamageAccumulator keeps an internal DirtyStack* mHead linked list recording the screen regions that need updating. Calling DamageAccumulator::pushTransform(..) computes the RenderNode's own dirty screen region; calling DamageAccumulator::popTransform() folds the current RenderNode's dirty region into its parent RenderNode's; finally, DamageAccumulator::finish(SkRect* totalDirty) produces the accumulated screen Rect that must be refreshed for the root RenderNode.
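The push/dirty/pop accumulation can be sketched as a stack of frames, each holding a transform and a pending dirty rect, where popping maps the child's rect into the parent's space and unions it there. A simplified version using plain translations instead of the real Matrix4 transforms:

```cpp
#include <algorithm>
#include <vector>

// Simplified DamageAccumulator: a stack of frames, each with a
// translation and a pending dirty rect. popTransform() maps the
// child's dirty rect into its parent's coordinates and unions it.
struct Rect {
    int l = 0, t = 0, r = 0, b = 0;
    bool empty() const { return l == r; }
    void join(const Rect& o) {
        if (o.empty()) return;
        if (empty()) { *this = o; return; }
        l = std::min(l, o.l); t = std::min(t, o.t);
        r = std::max(r, o.r); b = std::max(b, o.b);
    }
};

class DamageAccumulator {
public:
    DamageAccumulator() { mStack.push_back({}); }  // root frame, never popped

    void pushTransform(int dx, int dy) { mStack.push_back({dx, dy, {}}); }

    void dirty(Rect r) { mStack.back().pending.join(r); }

    // Map this node's dirty rect into the parent and union it there.
    void popTransform() {
        Frame f = mStack.back();
        mStack.pop_back();
        Rect mapped{f.pending.l + f.dx, f.pending.t + f.dy,
                    f.pending.r + f.dx, f.pending.b + f.dy};
        if (!f.pending.empty()) mStack.back().pending.join(mapped);
    }

    // Total dirty rect accumulated at the root (mirrors finish()).
    Rect finish() { return mStack.back().pending; }

private:
    struct Frame { int dx = 0, dy = 0; Rect pending; };
    std::vector<Frame> mStack;
};
```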
CanvasContext::prepareTree(..)
void CanvasContext::prepareTree(TreeInfo& info, int64_t* uiFrameInfo, int64_t syncQueued,
312 RenderNode* target) {
313 ...
329 for (const sp<RenderNode>& node : mRenderNodes) {
...
334 node->prepareTree(info);
336 }
...
Normally (leaving aside split-screen and multi-window cases), one window corresponds to one CanvasContext and one root RenderNode. When the CanvasContext is created it calls mRenderNodes.emplace_back(rootRenderNode) to add the current window's root RenderNode to the mRenderNodes collection; the root RenderNode is then synced.
RenderNode::prepareTree(TreeInfo& info)
6 void RenderNode::prepareTree(TreeInfo& info) {
....
195 prepareTreeImpl(observer, info, functorsNeedLayer);
196 }
272 void RenderNode::prepareTreeImpl(TreeObserver& observer, TreeInfo& info, bool functorsNeedLayer) {
//Record the Rect of the region that needs updating
273 info.damageAccumulator->pushTransform(this);
274 //TreeInfo::MODE_FULL sync mode
275 if (info.mode == TreeInfo::MODE_FULL) {
//1. Sync property changes
276 pushStagingPropertiesChanges(info);
277 }
...
297 if (info.mode == TreeInfo::MODE_FULL) {
//2. Sync DisplayList changes
298 pushStagingDisplayListChanges(observer, info);
299 }
300
301 if (mDisplayList) {
...
303 bool isDirty = mDisplayList->prepareListAndChildren(
304 observer, info, childFunctorsNeedLayer,
305 [](RenderNode* child, TreeObserver& observer, TreeInfo& info,
306 bool functorsNeedLayer) {
//3. Recurse into the child RenderNodes of the root RenderNode
307 child->prepareTreeImpl(observer, info, functorsNeedLayer);
308 });
309 if (isDirty) {
//isDirty is actually determined by this RenderNode's VectorDrawables; if the RenderNode is dirty, damageSelf adds the whole RenderNode's bounds to the dirty region
310 damageSelf(info);
311 }
312 }
//4. Handle TextureView textures and Views with hardware render layers
313 pushLayerUpdate(info);
314 //Accumulate the screen's dirty Rect that needs refreshing
315 info.damageAccumulator->popTransform();
316}
The four most important steps in prepareTree are:
- syncing property changes
- syncing the DisplayList
- recursively syncing the children
- pushing TextureView texture updates

Syncing the DisplayList
void RenderNode::pushStagingDisplayListChanges(TreeObserver& observer, TreeInfo& info) {
...
364 syncDisplayList(observer, &info);
365 ...
366 }
void RenderNode::syncDisplayList(TreeObserver& observer, TreeInfo* info) {
345 // Make sure we inc first so that we don't fluctuate between 0 and 1,
346 // which would thrash the layer cache
347 if (mStagingDisplayList) {
//Iterate every child, calling child->incParentRefCount to record whether the child still has a parent; a child without a parent is no longer in the View tree and will be removed
348 mStagingDisplayList->updateChildren([](RenderNode* child) { child->incParentRefCount(); });
349 }
//Delete the old DisplayList
350 deleteDisplayList(observer, info);
//Update the DisplayList
351 mDisplayList = mStagingDisplayList;
//Clear the staging pointer
352 mStagingDisplayList = nullptr;
353 if (mDisplayList) {
//Invoke the draw callbacks of the functors in the DisplayList, and sync VectorDrawable properties
354 mDisplayList->syncContents();
355 }
356}
Syncing property changes
void RenderNode::pushStagingPropertiesChanges(TreeInfo& info) {
323
// mDirtyPropertyFields marks that RenderNode properties have changed; the change types are listed in enum DirtyPropertyMask in RenderNode.h. Each binary bit represents one kind of change; if any bit is set, the logic below runs
329 if (mDirtyPropertyFields) {
//Reset the flags
330 mDirtyPropertyFields = 0;
332 info.damageAccumulator->popTransform();
//Sync
333 syncProperties();
339 info.damageAccumulator->pushTransform(this);
340 damageSelf(info);
341 }
342}
//Sync the RenderNode's property changes
void RenderNode::syncProperties() {
// mStagingProperties
//A RenderNode holds some of its properties in a RenderProperties object. When those properties change, the View's Display List need not be rebuilt; only the corresponding RenderProperties fields are updated, which improves the window's rendering efficiency
319 mProperties = mStagingProperties;
320}
//Calls child->incParentRefCount to update each RenderNode's parent reference count
101 void DisplayList::updateChildren(std::function<void(RenderNode*)> updateFn) {
102 for (auto&& child : children) {
103 updateFn(child->renderNode);
104 }
105}
//Invoke the draw callbacks of the functors in the DisplayList, and sync VectorDrawable properties
92 void DisplayList::syncContents() {
93 for (auto& iter : functors) {
94 (*iter.functor)(DrawGlInfo::kModeSync, nullptr);
95 }
96 for (auto& vectorDrawable : vectorDrawables) {
97 vectorDrawable->syncProperties();
98 }
99}
Recursively syncing the children
DisplayList::prepareListAndChildren
bool DisplayList::prepareListAndChildren(
108 TreeObserver& observer, TreeInfo& info, bool functorsNeedLayer,
109 std::function<void(RenderNode*, TreeObserver&, TreeInfo&, bool)> childFn) {
110
//Check whether the texture caches for the hardware-rendered Bitmaps in this RenderNode are still valid
info.prepareTextures = info.canvasContext.pinImages(bitmapResources);
111
112 for (auto&& op : children) {
113 RenderNode* childNode = op->renderNode;
//pushTransform(&op->localMatrix) records the region that needs updating
114 info.damageAccumulator->pushTransform(&op->localMatrix);
//The function passed in recursively calls renderNode.prepareTreeImpl()
117 childFn(childNode, observer, info, childFunctorsNeedLayer);
//Accumulate the dirty region
118 info.damageAccumulator->popTransform();
119 }
120
121 bool isDirty = false;
122 for (auto& vectorDrawable : vectorDrawables) {
123 // If any vector drawable in the display list needs update, damage the node.
124 if (vectorDrawable->isDirty()) {
125 isDirty = true;
126 }
127 vectorDrawable->setPropertyChangeWillBeConsumed(true);
128 }
129 return isDirty;
130}
childFn(..) in prepareListAndChildren(..) actually iterates the child RenderNodes, calling renderNode.prepareTreeImpl() on each. Besides accumulating and storing each RenderNode's dirty region in TreeInfo, a RenderNode is judged dirty based on whether any of its VectorDrawables isDirty(); if the RenderNode is dirty, its damageSelf() adds the whole RenderNode's bounds to the dirty region.
4.2.3.2 Syncing Bitmaps
When the Java-layer Canvas API draws a bitmap, it calls through JNI into RecordingCanvas::drawBitmap(Bitmap& bitmap, const SkPaint* paint).
RecordingCanvas::drawBitmap(Bitmap& bitmap, const SkPaint* paint)
535 void RecordingCanvas::drawBitmap(Bitmap& bitmap, const SkPaint* paint) {
536 addOp(alloc().create_trivial<BitmapOp>(Rect(bitmap.width(), bitmap.height()),
537 *(mState.currentSnapshot()->transform),
538 getRecordedClip(), refPaint(paint), refBitmap(bitmap))); //note refBitmap(bitmap)
539}
292 inline Bitmap* refBitmap(Bitmap& bitmap) {
293 bitmap.ref();
//Adds a bitmap to mDisplayList's bitmapResources
294 mDisplayList->bitmapResources.emplace_back(&bitmap);
295 return &bitmap;
296 }
After a Canvas API call that draws a bitmap, a bitmap is added to the DisplayList's bitmapResources.
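The effect of recording a bitmap draw is therefore twofold: the op is appended, and a reference is kept in the display list's resource vector so the pixels stay alive until sync. A sketch of that ownership pattern, with shared_ptr standing in for HWUI's manual ref-counting and the types heavily simplified:

```cpp
#include <memory>
#include <utility>
#include <vector>

// Sketch of RecordingCanvas::drawBitmap: recording an op both stores
// the draw command and keeps a reference to the bitmap in the display
// list's bitmapResources, so the pixels outlive the recording pass.
struct Bitmap { int width, height; };

struct BitmapOp {
    std::shared_ptr<Bitmap> bitmap;  // stand-in for refBitmap(bitmap)
};

struct DisplayList {
    std::vector<BitmapOp> ops;
    std::vector<std::shared_ptr<Bitmap>> bitmapResources;
};

class RecordingCanvas {
public:
    void drawBitmap(std::shared_ptr<Bitmap> bitmap) {
        mDisplayList.bitmapResources.push_back(bitmap);   // keep a ref for sync
        mDisplayList.ops.push_back(BitmapOp{std::move(bitmap)});
    }
    const DisplayList& finishRecording() const { return mDisplayList; }

private:
    DisplayList mDisplayList;
};
```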
Then, in the sync that follows the DisplayList's creation:
DisplayList.cpp::prepareListAndChildren(..)
107 bool DisplayList::prepareListAndChildren(
108 TreeObserver& observer, TreeInfo& info, bool functorsNeedLayer,
109 std::function<void(RenderNode*, TreeObserver&, TreeInfo&, bool)> childFn) {
110 info.prepareTextures = info.canvasContext.pinImages(bitmapResources);
95 bool pinImages(LsaVector<sk_sp<Bitmap>>& images) { return mRenderPipeline->pinImages(images); }
pinImages means "pinning" the image pixels into the GPU cache; they remain valid until unpinned.
Depending on configuration, the pipeline can be:
- OpenGLPipeline (default on Android 8)
- SkiaOpenGLPipeline (default on Android 9)
- SkiaVulkanPipeline

OpenGL's handling of bitmaps
bitmapResources holds the bitmaps the current hardware-accelerated draw needs to handle; each bitmap is turned into a texture for the GPU to render.
OpenGLPipeline::pinImages(bitmapResources)
253 bool OpenGLPipeline::pinImages(LsaVector<sk_sp<Bitmap>>& images) {
254 TextureCache& cache = Caches::getInstance().textureCache;
255 bool prefetchSucceeded = true;
256 for (auto& bitmapResource : images) {
257 prefetchSucceeded &= cache.prefetchAndMarkInUse(this, bitmapResource.get());
258 }
259 return prefetchSucceeded;
260}
261
Caches::getInstance().textureCache caches the OpenGL textures used by the application process.
//Prefetch and mark in use
166 bool TextureCache::prefetchAndMarkInUse(void* ownerToken, Bitmap* bitmap) {
// Fetch the bitmap's texture from the render pipeline's texture cache; if it is missing, create and upload it
167 Texture* texture = getCachedTexture(bitmap);
168 if (texture) {
169 texture->isInUse = ownerToken;
170 }
171 return texture;
172}
173
//Get the texture for this bitmap from the cache; if it is not cached, add it
112 Texture* TextureCache::getCachedTexture(Bitmap* bitmap) {
//Hardware bitmaps (Bitmap.Config.HARDWARE), added in Android 8: the image lives only in graphics memory and takes no memory in the app process
113 if (bitmap->isHardware()) {
114 auto textureIterator = mHardwareTextures.find(bitmap->getStableID());
115 if (textureIterator == mHardwareTextures.end()) {
116 Texture* texture = createTexture(bitmap);
117 mHardwareTextures.insert(
118 std::make_pair(bitmap->getStableID(), std::unique_ptr<Texture>
122 return texture;
123 }
124 return textureIterator->second.get();
125 }
126
//Look up the bitmap's texture in the cache
127 Texture* texture = mCache.get(bitmap->getStableID());
128 //texture is null: cache miss, so create the texture and add it to the cache
129 if (!texture) {
//The bitmap cannot back a texture; return null
130 if (!canMakeTextureFromBitmap(bitmap)) {
131 return nullptr;
132 }
133
134 const uint32_t size = bitmap->rowBytes() * bitmap->height();
135 bool canCache = size < mMaxSize;
//Check cache capacity
136 // Don't even try to cache a bitmap that's bigger than the cache
137 while (canCache && mSize + size > mMaxSize) {
138 Texture* oldest = mCache.peekOldestValue();
139 if (oldest && !oldest->isInUse) {
140 mCache.removeOldest();
141 } else {
142 canCache = false;
143 }
144 }
145
146 if (canCache) {
//If the bitmap can be cached, create a texture for it
147 texture = createTexture(bitmap);
148 mSize += size;
154 mCache.put(bitmap->getStableID(), texture);
155 }
156 } else if (!texture->isInUse && bitmap->getGenerationID() != texture->generation) {
157 // Texture was in the cache but is dirty, re-upload
158 // TODO: Re-adjust the cache size if the bitmap's dimensions have changed
//Cached but dirty: re-upload the bitmap as a texture
159 texture->upload(*bitmap);
160 texture->generation = bitmap->getGenerationID();
161 }
162
163 return texture;
164}
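getCachedTexture() combines three policies: LRU eviction under a byte budget, never evicting in-use textures, and generation-ID-based re-upload. A condensed model of that logic follows; this is a sketch, not HWUI's actual TextureCache, and the uploads counter is added only to make the behavior observable.

```cpp
#include <cstdint>
#include <list>
#include <map>
#include <memory>

// Condensed model of TextureCache::getCachedTexture: a byte-budgeted
// LRU cache keyed by bitmap ID, "re-uploading" when the bitmap's
// generation changes, and evicting the oldest unused entries.
struct Bitmap { uint32_t id; uint32_t generation; size_t bytes; };
struct Texture { uint32_t generation; size_t bytes; bool inUse = false; int uploads = 0; };

class TextureCache {
public:
    explicit TextureCache(size_t maxBytes) : mMaxBytes(maxBytes) {}

    Texture* get(const Bitmap& bmp) {
        auto it = mEntries.find(bmp.id);
        if (it == mEntries.end()) {
            if (bmp.bytes > mMaxBytes) return nullptr;  // never fits
            while (mUsed + bmp.bytes > mMaxBytes) {     // evict oldest unused
                if (!evictOldestUnused()) return nullptr;
            }
            auto tex = std::make_unique<Texture>(Texture{bmp.generation, bmp.bytes});
            tex->uploads = 1;                           // initial upload
            mUsed += bmp.bytes;
            mLru.push_back(bmp.id);
            it = mEntries.emplace(bmp.id, std::move(tex)).first;
        } else if (it->second->generation != bmp.generation) {
            it->second->uploads++;                      // dirty: re-upload
            it->second->generation = bmp.generation;
        }
        return it->second.get();
    }

private:
    bool evictOldestUnused() {
        for (auto id = mLru.begin(); id != mLru.end(); ++id) {
            auto& tex = mEntries.at(*id);
            if (!tex->inUse) {
                mUsed -= tex->bytes;
                mEntries.erase(*id);
                mLru.erase(id);
                return true;
            }
        }
        return false;
    }
    size_t mMaxBytes;
    size_t mUsed = 0;
    std::list<uint32_t> mLru;
    std::map<uint32_t, std::unique_ptr<Texture>> mEntries;
};
```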
//Create a texture for the bitmap and upload it
102 Texture* TextureCache::createTexture(Bitmap* bitmap) {
103 Texture* texture = new Texture(Caches::getInstance());
104 texture->bitmapSize = bitmap->rowBytes() * bitmap->height();
//generationId changes whenever the Bitmap is modified, so this value can be used to detect whether the image changed
105 texture->generation = bitmap->getGenerationID();
106 texture->upload(*bitmap); //上传
107 return texture;
108}
void Texture::upload(Bitmap& bitmap) {
...
//Upload to the texture
363 uploadToTexture(needsAlloc, internalFormat, format, type, bitmap.rowBytesAsPixels(),
364 bitmap.info().bytesPerPixel(), bitmap.width(), bitmap.height(),
365 bitmap.pixels());
...
379}
static void uploadToTexture(bool resize, GLint internalFormat, GLenum format, GLenum type,
151 GLsizei stride, GLsizei bpp, GLsizei width, GLsizei height,
152 const GLvoid* data) {
...
160 if (resize) {
//Generate the texture from data (the bitmap)
161 glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, width, height, 0, format, type, data);
162 } else {
//Generate the texture from data (the bitmap)
163 glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, format, type, data);
164 }
...
192}
glTexImage2D and glTexSubImage2D call into the OpenGL implementation for the given platform.
32#include <GLES2/gl2.h>
33#include <GLES3/gl3.h>
631 GL_APICALL void GL_APIENTRY glTexImage2D (GLenum target, GLint level, GLint internalformat, GLsizei width, GLsizei height, GLint border, GLenum format, GLenum type, const void *pixels);
28#include <KHR/khrplatform.h>
31#define GL_APICALL KHRONOS_APICALL
95/*-------------------------------------------------------------------------
96 * Definition of KHRONOS_APICALL
97 *-------------------------------------------------------------------------
98 * This precedes the return type of the function in the function prototype.
99 */
100#if defined(_WIN32) && !defined(__SCITECH_SNAP__)
101# define KHRONOS_APICALL __declspec(dllimport)
102#elif defined (__SYMBIAN32__)
103# define KHRONOS_APICALL IMPORT_C
104#elif defined(ANDROID)
105# define KHRONOS_APICALL __attribute__((visibility("default")))
106#else
107# define KHRONOS_APICALL
108#endif
GLES
OpenGL ES (OpenGL for Embedded Systems) is a subset of the OpenGL 3D graphics API designed for embedded devices such as phones, PDAs, and game consoles. The API is defined and promoted by the Khronos Group, a graphics software and hardware industry consortium focused on open standards for graphics and multimedia.
The Definition of KHRONOS_APICALL block above defines the cross-platform calling convention. I have not studied OpenGL in depth, so this rough overview is as far as I can take it here.
Starting with Android 9 the default render pipeline is Skia; below 9.0 the default is OpenGL. Skia is Android's default 2D graphics library, and cross-platform Flutter also uses Skia.
Skia handles bitmaps a bit differently from OpenGL. As a graphics API, Skia calls lower-level graphics APIs such as Vulkan, OpenGL, and Metal depending on the hardware and platform. On Android, the Vulkan-backed render pipeline is SkiaVulkanPipeline and the pipeline where Skia calls OpenGL is SkiaOpenGLPipeline; both inherit from SkiaPipeline.
When Skia is used, the real type behind a View's native-layer RecordingCanvas is SkiaRecordingCanvas, and the real type of SkiaRecordingCanvas's mDisplayList member is SkiaDisplayList, which inherits from DisplayList. SkiaDisplayList stores bitmaps in std::vector<SkImage*> mMutableImages rather than in DisplayList's bitmapResources.
When Skia draws a bitmap:
SkiaRecordingCanvas::drawBitmap(Bitmap& bitmap, float left, float top, const SkPaint* paint)
152 void SkiaRecordingCanvas::drawBitmap(Bitmap& bitmap, float left, float top, const SkPaint* paint) {
...
160 if (!bitmap.isImmutable() && image.get() && !image->unique()) {
161 mDisplayList->mMutableImages.push_back(image.get());
162 }
163}
Back in the sync process, SkiaDisplayList::prepareListAndChildren(..) pins bitmaps with Skia as follows; the code flow is shown briefly without further analysis.
SkiaPipeline ::pinImages(std::vector<SkImage*>& mutableImages)
bool SkiaPipeline::pinImages(std::vector<SkImage*>& mutableImages) {
61 for (SkImage* image : mutableImages) {
62 if (SkImage_pinAsTexture(image, mRenderThread.getGrContext())) {
63 mPinnedImages.emplace_back(sk_ref_sp(image));
64 } else {
65 return false;
66 }
67 }
68 return true;
69}
SkImage_pinAsTexture(const SkImage* image, GrContext* ctx)
bool SkImage_pinAsTexture(const SkImage* image, GrContext* ctx) {
...
// as_IB(image) is a cast: static_cast<SkImage_Base*>(image)
400 return as_IB(image)->onPinAsTexture(ctx);
401}
402
205 bool SkImage_Raster::onPinAsTexture(GrContext* ctx) const {
206 if (fPinnedProxy) {
..
209 } else {
..
//Returns an sk_sp<GrTextureProxy>
212 fPinnedProxy = GrRefCachedBitmapTextureProxy(ctx, fBitmap, GrSamplerState::ClampNearest(), nullptr);
214 if (!fPinnedProxy) {
215 return false;
216 }
217 fPinnedUniqueID = fBitmap.getGenerationID();
218 }
219 // Note: we only increment if the texture was successfully pinned
220 ++fPinnedCount;
221 return true;
222}
69 //As the method name says: upload the bitmap to a texture proxy
70 sk_sp<GrTextureProxy> GrUploadBitmapToTextureProxy(GrProxyProvider* proxyProvider,
71 const SkBitmap& bitmap,
72 SkColorSpace* dstColorSpace) {
...
//Create an SkImage from the SkBitmap
90 sk_sp<SkImage> image = SkMakeImageFromRasterBitmap(bitmap, cpyMode);
91 //Create a proxy
92 return proxyProvider->createTextureProxy(std::move(image), kNone_GrSurfaceFlags,
93 kTopLeft_GrSurfaceOrigin, 1, SkBudgeted::kYes,
94 SkBackingFit::kExact);
95}
96
SkMakeImageFromRasterBitmap(bitmap, cpyMode) converts a bitmap into an SkImage for the GPU to handle; an SkImage describes a two-dimensional array of pixels to draw.
201sk_sp<GrTextureProxy> GrProxyProvider::createTextureProxy(sk_sp<SkImage> srcImage,
202 GrSurfaceFlags flags,
203 GrSurfaceOrigin origin,
204 int sampleCnt,
205 SkBudgeted budgeted,
206 SkBackingFit fit) {
...
246 sk_sp<GrTextureProxy> proxy = this->createLazyProxy(
247 [desc, budgeted, srcImage, fit]
248 (GrResourceProvider* resourceProvider) {
249 if (!resourceProvider) {
250 // Nothing to clean up here. Once the proxy (and thus lambda) is deleted the ref
251 // on srcImage will be released.
252 return sk_sp<GrTexture>();
253 }
254 SkPixmap pixMap;
255 SkAssertResult(srcImage->peekPixels(&pixMap));
256 GrMipLevel mipLevel = { pixMap.addr(), pixMap.rowBytes() };
257 //Create the texture
258 return resourceProvider->createTexture(desc, budgeted, fit, mipLevel);
259 }, desc, GrMipMapped::kNo, renderTargetFlags, fit, budgeted);
260
.....
268 return proxy;
269}
75 sk_sp<GrTexture> GrResourceProvider::createTexture(const GrSurfaceDesc& desc, SkBudgeted budgeted, const GrMipLevel texels[], int mipLevelCount,SkDestinationSurfaceColorMode mipColorMode) {
.....
// Calls sk_sp<GrTexture> GrGpu::createTexture(...)
90 sk_sp<GrTexture> tex(fGpu->createTexture(desc, budgeted, texels, mipLevelCount));
...
95 return tex;
96}
260 private:
294 .....
295 GrResourceCache* fCache;
296 GrGpu* fGpu;
297 sk_sp<const GrCaps> fCaps;
298 GrUniqueKey fQuadIndexBufferKey;
299
302};
sk_sp<GrTexture> GrGpu::createTexture(const GrSurfaceDesc& origDesc, SkBudgeted budgeted,
76 const GrMipLevel texels[], int mipLevelCount) {
....
//Implemented by subclasses ...
97 sk_sp<GrTexture> tex = this->onCreateTexture(desc, budgeted, texels, mipLevelCount);
.....
109 return tex;
110}
4.2.3.3 Hardware-Accelerated Handling of TextureView
A TextureView object wraps a SurfaceTexture, responding to callbacks and acquiring new buffers. When a TextureView acquires a new buffer, it issues a View invalidate request and draws using the latest buffer's contents as its data source, rendering wherever and however the View's state dictates.
OpenGL ES (GLES) can render onto a TextureView by passing the SurfaceTexture to EGL creation calls, but this raises a problem. When GLES renders onto a TextureView, the BufferQueue producer and consumer are on the same thread, which may cause the buffer-swap call to stall or fail. For example, if the producer submits several buffers in quick succession from the UI thread, the EGL buffer-swap call needs to dequeue a buffer from the BufferQueue; but because the consumer and producer are on the same thread, no buffer is available and the swap call hangs or fails.
To ensure buffer swapping never stalls, the BufferQueue must always have a buffer available to dequeue. To achieve this, the BufferQueue discards the contents of the previously queued buffer when a new buffer is queued, and it places limits on the minimum and maximum buffer counts to keep the consumer from consuming all buffers at once.
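The "discard the previously queued buffer" behavior amounts to a bounded queue that drops its oldest pending entry instead of ever blocking the producer. A single-threaded sketch of that policy, with an invented class rather than the real BufferQueue API:

```cpp
#include <cstdint>
#include <deque>
#include <optional>

// Sketch of BufferQueue's non-blocking policy: when the producer queues
// a new buffer and the pending queue is full, the oldest pending buffer
// is dropped so the queue call never has to wait on the consumer.
class DroppingBufferQueue {
public:
    explicit DroppingBufferQueue(size_t maxPending) : mMaxPending(maxPending) {}

    // Producer side: never blocks; drops the oldest frame when full.
    void queueBuffer(uint64_t frame) {
        if (mPending.size() == mMaxPending) {
            mPending.pop_front();  // discard previously queued contents
            mDropped++;
        }
        mPending.push_back(frame);
    }

    // Consumer side: acquire the oldest pending frame, if any.
    std::optional<uint64_t> acquireBuffer() {
        if (mPending.empty()) return std::nullopt;
        uint64_t frame = mPending.front();
        mPending.pop_front();
        return frame;
    }

    size_t droppedFrames() const { return mDropped; }

private:
    size_t mMaxPending;
    size_t mDropped = 0;
    std::deque<uint64_t> mPending;
};
```

A slow consumer therefore sees frames skipped rather than the producer stalled, which is the trade-off the text describes.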
Hardware-accelerated drawing of a TextureView differs from that of ordinary Views. A TextureView must be used with hardware acceleration enabled, and what it renders is the data received by its SurfaceTexture.
source.android.google.cn/devices/gra…
SurfaceTexture is a combination of a Surface and an OpenGL ES (GLES) texture. SurfaceTexture instances are used to provide surfaces that output to GLES textures.
A SurfaceTexture contains a BufferQueue whose consumer is the app. The onFrameAvailable() callback notifies the app when the producer queues a new buffer. The app then calls updateTexImage(), which releases the previously held buffer, acquires the new buffer from the queue, and makes the EGL calls so GLES can use the buffer as an external texture.
- The SurfaceTexture's BufferQueue
SurfaceTexture::SurfaceTexture_init(..)
static void SurfaceTexture_init(JNIEnv* env, jobject thiz, jboolean isDetached,
259 jint texName, jboolean singleBufferMode, jobject weakThiz)
260{
//graphics producer
261 sp<IGraphicBufferProducer> producer;
//graphics consumer
262 sp<IGraphicBufferConsumer> consumer;
//BufferQueue
263 BufferQueue::createBufferQueue(&producer, &consumer);
264
265 if (singleBufferMode) {
266 consumer->setMaxBufferCount(1);
267 }
268 //create the GLConsumer
269 sp<GLConsumer> surfaceTexture;
270 if (isDetached) {
271 surfaceTexture = new GLConsumer(consumer, GL_TEXTURE_EXTERNAL_OES,
272 true, !singleBufferMode);
273 } else {
274 surfaceTexture = new GLConsumer(consumer, texName,
275 GL_TEXTURE_EXTERNAL_OES, true, !singleBufferMode);
276 }
277
...
290
291 SurfaceTexture_setSurfaceTexture(env, thiz, surfaceTexture);
292 SurfaceTexture_setProducer(env, thiz, producer);
293
...
When the native SurfaceTexture is initialized, it creates an IGraphicBufferProducer and an IGraphicBufferConsumer.
The producer side of SurfaceTexture is the IGraphicBufferProducer; as the image consumer, SurfaceTexture creates a GLConsumer, which hands the image data to the GPU to be processed into a texture.
BufferQueue::createBufferQueue(...)
85 void BufferQueue::createBufferQueue(sp<IGraphicBufferProducer>* outProducer,
86 sp<IGraphicBufferConsumer>* outConsumer,
87 bool consumerIsSurfaceFlinger) {
...
92 //create the BufferQueueCore
93 sp<BufferQueueCore> core(new BufferQueueCore());
...
96 //create the producer
97 sp<IGraphicBufferProducer> producer(new BufferQueueProducer(core, consumerIsSurfaceFlinger));
...
100 //create the consumer
101 sp<IGraphicBufferConsumer> consumer(new BufferQueueConsumer(core));
...
105 *outProducer = producer;
106 *outConsumer = consumer;
107}
108
When the native SurfaceTexture is initialized, it creates the BufferQueue along with the graphics producer and consumer, whose concrete types are BufferQueueProducer and BufferQueueConsumer. The consumerIsSurfaceFlinger parameter of createBufferQueue(..) defaults to false, as the method declaration in BufferQueue.h shows, so here the direct consumer of the graphics buffers is not the SurfaceFlinger process. My understanding is that the texture processed by the GPU is handed directly to the TextureView for rendering, without going through SurfaceFlinger (I'm not certain this understanding is correct).
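The createBufferQueue(..) pattern above, one shared core wrapped by two endpoints, can be sketched as follows (simplified, hypothetical types; the real BufferQueueCore manages slots, fences, and synchronization):

```cpp
#include <memory>
#include <queue>

// Shared state, standing in for BufferQueueCore.
struct QueueCore {
    std::queue<int> buffers;  // stand-in for queued GraphicBuffers
};

// Producer endpoint, standing in for BufferQueueProducer.
class Producer {
public:
    explicit Producer(std::shared_ptr<QueueCore> core) : mCore(std::move(core)) {}
    void queueBuffer(int buf) { mCore->buffers.push(buf); }
private:
    std::shared_ptr<QueueCore> mCore;
};

// Consumer endpoint, standing in for BufferQueueConsumer.
class Consumer {
public:
    explicit Consumer(std::shared_ptr<QueueCore> core) : mCore(std::move(core)) {}
    bool acquireBuffer(int* out) {
        if (mCore->buffers.empty()) return false;
        *out = mCore->buffers.front();
        mCore->buffers.pop();
        return true;
    }
private:
    std::shared_ptr<QueueCore> mCore;
};

// Mirrors the shape of BufferQueue::createBufferQueue(&producer, &consumer):
// both endpoints are handed back through out-parameters and share one core.
void createQueue(std::unique_ptr<Producer>* outProducer,
                 std::unique_ptr<Consumer>* outConsumer) {
    auto core = std::make_shared<QueueCore>();
    *outProducer = std::make_unique<Producer>(core);
    *outConsumer = std::make_unique<Consumer>(core);
}
```

Whatever the producer queues, the consumer sees, because both hold the same core; this is exactly why Surface (producer side) and GLConsumer (consumer side) can live in different objects yet exchange buffers.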
graph LR
BufferQueueProducer --> BufferQueueCore
BufferQueueCore --> BufferQueueConsumer
51 struct fields_t {
52 jfieldID surfaceTexture;
53 jfieldID producer;
...
56 };
//JNI field IDs, including those of the graphics producer and consumer
57 static fields_t fields;
225#define ANDROID_GRAPHICS_SURFACETEXTURE_JNI_ID "mSurfaceTexture"
226#define ANDROID_GRAPHICS_PRODUCER_JNI_ID "mProducer"
227#define ANDROID_GRAPHICS_FRAMEAVAILABLELISTENER_JNI_ID \
228 "mFrameAvailableListener"
229
230 static void SurfaceTexture_classInit(JNIEnv* env, jclass clazz)
231{
232 fields.surfaceTexture = env->GetFieldID(clazz,
233 ANDROID_GRAPHICS_SURFACETEXTURE_JNI_ID, "J");
.
238 fields.producer = env->GetFieldID(clazz,
239 ANDROID_GRAPHICS_PRODUCER_JNI_ID, "J");
...
244 fields.frameAvailableListener = env->GetFieldID(clazz,
245 ANDROID_GRAPHICS_FRAMEAVAILABLELISTENER_JNI_ID, "J");
...
256}
//set the graphics producer and consumer
82 static void SurfaceTexture_setSurfaceTexture(JNIEnv* env, jobject thiz,
83 const sp<GLConsumer>& surfaceTexture)
84{
85 GLConsumer* const p =
86 (GLConsumer*)env->GetLongField(thiz, fields.surfaceTexture);
87 if (surfaceTexture.get()) {
88 surfaceTexture->incStrong((void*)SurfaceTexture_setSurfaceTexture);
89 }
90 if (p) {
91 p->decStrong((void*)SurfaceTexture_setSurfaceTexture);
92 }
93 env->SetLongField(thiz, fields.surfaceTexture, (jlong)surfaceTexture.get());
94}
95
96 static void SurfaceTexture_setProducer(JNIEnv* env, jobject thiz,
97 const sp<IGraphicBufferProducer>& producer)
98{
99 IGraphicBufferProducer* const p =
100 (IGraphicBufferProducer*)env->GetLongField(thiz, fields.producer);
101 if (producer.get()) {
102 producer->incStrong((void*)SurfaceTexture_setProducer);
103 }
104 if (p) {
105 p->decStrong((void*)SurfaceTexture_setProducer);
106 }
107 env->SetLongField(thiz, fields.producer, (jlong)producer.get());
108}
109
//get the graphics producer and consumer
125 sp<GLConsumer> SurfaceTexture_getSurfaceTexture(JNIEnv* env, jobject thiz) {
126 return (GLConsumer*)env->GetLongField(thiz, fields.surfaceTexture);
127}
128
129 sp<IGraphicBufferProducer> SurfaceTexture_getProducer(JNIEnv* env, jobject thiz) {
130 return (IGraphicBufferProducer*)env->GetLongField(thiz, fields.producer);
131}
The native SurfaceTexture holds both the graphics producer and the graphics consumer.
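The incStrong/decStrong handling in the setters above follows a common pattern: a Java long field stores a raw native pointer, and swapping the field in bumps the new object's strong count before dropping the old one's. A minimal model without real JNI (types are illustrative):

```cpp
// Toy refcounted type standing in for GLConsumer/IGraphicBufferProducer.
struct RefCounted {
    int strongCount = 0;
    void incStrong() { ++strongCount; }
    void decStrong() { --strongCount; }
};

// Mirrors the shape of SurfaceTexture_setSurfaceTexture(): inc the new
// object, dec the previously stored one, then overwrite the "field"
// (which in the real code is a Java long read via GetLongField).
void setNativeField(RefCounted** field, RefCounted* newObj) {
    RefCounted* old = *field;
    if (newObj) newObj->incStrong();  // keep the new object alive
    if (old) old->decStrong();        // release the one being replaced
    *field = newObj;
}
```

Inc-before-dec matters: if the new and old pointers are the same object, dec-first could drop the count to zero and free it before the inc runs.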
When TextureView's draw() is called, the graphics producer inside the surfaceTexture is used as the constructor argument for creating the Android native Surface. In Android, content rendered with any rendering API ends up on a Surface, so the Surface can be said to be the graphics producer.
115 static void android_view_TextureView_createNativeWindow(JNIEnv* env, jobject textureView, jobject surface) {
117 //get the graphics producer from the SurfaceTexture
118 sp<IGraphicBufferProducer> producer(SurfaceTexture_getProducer(env, surface));
//create a Surface to receive the image data
119 sp<ANativeWindow> window = new Surface(producer, true);
120
121 window->incStrong((void*)android_view_TextureView_createNativeWindow);
122 SET_LONG(textureView, gTextureViewClassInfo.nativeWindow, jlong(window.get()));
123}
124
The code above shows that TextureView's Surface is the producer, and the producer produces GraphicBuffers. For example, when a TextureView displays camera video, the data the Surface produces comes from the camera. The Surface feeds GraphicBuffers to the BufferQueue inside the SurfaceTexture: the producer puts buffers into the BufferQueue, and the consumer takes them out. The relationship is as follows:

- DeferredLayerUpdater
The std::vector<sp<DeferredLayerUpdater> > mLayers defined in DrawFrameTask.h records all the TextureViews that need rendering within one draw task.
86 private:
98 //records the TextureViews in the View tree that need updating
102 std::vector<sp<DeferredLayerUpdater> > mLayers;
...
When TextureView's draw() method is called:
public class TextureView extends View {
//corresponds to the native DeferredLayerUpdater, handling the TextureView's rendering
private TextureLayer mLayer;
//mSurface is the surface that content is drawn onto (video playback, displayed content, etc.)
//hardware-accelerated rendering also processes the data on this surface
private SurfaceTexture mSurface;
326 */
327 @Override
328 public final void draw(Canvas canvas) {
...
336 if (canvas.isHardwareAccelerated()) {
337 DisplayListCanvas displayListCanvas = (DisplayListCanvas) canvas;
338
339 TextureLayer layer = getTextureLayer();
340 if (layer != null) {
341 applyUpdate();
..
345 displayListCanvas.drawTextureLayer(layer);
346 }
347 }
348 }
TextureView's draw(Canvas canvas) shows that it only draws when hardware acceleration is in effect.
TextureLayer getTextureLayer() {
    if (mLayer == null) {
        //mLayer is a TextureLayer; it actually corresponds to the native DeferredLayerUpdater
        mLayer = mAttachInfo.mThreadedRenderer.createTextureLayer();
        boolean createNewSurface = (mSurface == null);
        if (createNewSurface) {
            // Create a new SurfaceTexture for the layer.
            mSurface = new SurfaceTexture(false);
            //create the native window, linking the Java-layer and native-layer surface
            nCreateNativeWindow(mSurface);
        }
        //the Java-layer TextureLayer corresponds to the native DeferredLayerUpdater
        mLayer.setSurfaceTexture(mSurface);
        ...
        return mLayer;
    }
In TextureView#getTextureLayer(), setSurfaceTexture(mSurface) goes through JNI to the native layer, where layer->setSurfaceTexture(surfaceTexture) sets the GLConsumer graphics consumer on the DeferredLayerUpdater backing this TextureView. The GLConsumer held by the DeferredLayerUpdater is exactly the consumer object inside the SurfaceTexture.
68 static void TextureLayer_setSurfaceTexture(JNIEnv* env, jobject clazz,
69 jlong layerUpdaterPtr, jobject surface) {
70 DeferredLayerUpdater* layer = reinterpret_cast<DeferredLayerUpdater*>(layerUpdaterPtr);
71 sp<GLConsumer> surfaceTexture(SurfaceTexture_getSurfaceTexture(env, surface));
72 layer->setSurfaceTexture(surfaceTexture);
73}
layer->setSurfaceTexture(surfaceTexture) sets the GLConsumer image consumer on the DeferredLayerUpdater:
73 ANDROID_API void setSurfaceTexture(const sp<GLConsumer>& texture) {
74 if (texture.get() != mSurfaceTexture.get()) {
75 mSurfaceTexture = texture;
...
81 }
mAttachInfo.mThreadedRenderer.createTextureLayer() creates a DeferredLayerUpdater in the native layer via JNI.
android_view_ThreadedRenderer.cpp
static jlong android_view_ThreadedRenderer_createTextureLayer(JNIEnv* env, jobject clazz,
785 jlong proxyPtr) {
786 RenderProxy* proxy = reinterpret_cast<RenderProxy*>(proxyPtr);
787 DeferredLayerUpdater* layer = proxy->createTextureLayer();
788 return reinterpret_cast<jlong>(layer);
789}
This ends up in CanvasContext::createTextureLayer(), which returns a DeferredLayerUpdater pointer:
DeferredLayerUpdater* CanvasContext::createTextureLayer() {
660 return mRenderPipeline->createTextureLayer();
661}
SkiaOpenGLPipeline::createTextureLayer()
193 DeferredLayerUpdater* SkiaOpenGLPipeline::createTextureLayer() {
194 mEglManager.initialize();
195 return new DeferredLayerUpdater(mRenderThread.renderState(), createLayer, Layer::Api::OpenGL);
196}
As you can see, the Java TextureView's private TextureLayer mLayer actually corresponds to the native DeferredLayerUpdater:
106 private:
107 RenderState& mRenderState;
108
109 // Generic properties
110 int mWidth = 0;
111 int mHeight = 0;
112 bool mBlend = false;
113 sk_sp<SkColorFilter> mColorFilter;
114 int mAlpha = 255;
115 SkBlendMode mMode = SkBlendMode::kSrcOver;
//the GL graphics consumer
116 sp<GLConsumer> mSurfaceTexture;
117 SkMatrix* mTransform;
118 bool mGLContextAttached;
119 bool mUpdateTexImage;
120
121 Layer* mLayer;
122 Layer::Api mLayerApi;
123 CreateLayerFn mCreateLayerFn;
124
125 void doUpdateTexImage();
126 void doUpdateVkTexImage();
127};
When TextureView is drawn via draw():
TextureView -> TextureView:draw()
TextureView -> TextureLayer: applyUpdate()
TextureLayer -> ThreadedRenderer: updateSurfaceTexture()
TextureLayer#updateSurfaceTexture()
137 public void updateSurfaceTexture() {
138 nUpdateSurfaceTexture(mFinalizer.get());
139 mRenderer.pushLayerUpdate(this);
140 }
nUpdateSurfaceTexture(mFinalizer.get()) goes through JNI to the DeferredLayerUpdater and sets its mUpdateTexImage flag; then a DeferredLayerUpdater is added to DrawFrameTask's mLayers collection.
75 static void TextureLayer_updateSurfaceTexture(JNIEnv* env, jobject clazz,
76 jlong layerUpdaterPtr) {
77 DeferredLayerUpdater* layer = reinterpret_cast<DeferredLayerUpdater*>(layerUpdaterPtr);
78 layer->updateTexImage();
79}
80
83 ANDROID_API void updateTexImage() { mUpdateTexImage = true; }
layer->updateTexImage() merely flips a flag; later, during synchronization, DeferredLayerUpdater::apply() checks this flag and calls doUpdateTexImage().
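This flag-then-apply flow can be sketched as a small deferred-update class (a simplified model, assuming the real GL work happens inside doUpdateTexImage()):

```cpp
// Sketch of the deferred-update pattern used by DeferredLayerUpdater:
// the UI thread only flips a flag; the expensive texture update runs
// later, on the render thread, inside apply(). Illustrative model.
class DeferredUpdater {
public:
    // Called from the UI thread: cheap, just records that work is pending.
    void updateTexImage() { mUpdateTexImage = true; }

    // Called during sync on the render thread.
    void apply() {
        if (mUpdateTexImage) {
            mUpdateTexImage = false;
            doUpdateTexImage();
        }
    }

    int updateCount() const { return mUpdates; }

private:
    void doUpdateTexImage() { ++mUpdates; }  // stands in for the real GL work

    bool mUpdateTexImage = false;
    int mUpdates = 0;
};
```

Note that repeated updateTexImage() requests before the next sync coalesce into a single doUpdateTexImage() call, which keeps the UI thread cheap.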
ThreadedRenderer#pushLayerUpdate(TextureLayer layer)
void pushLayerUpdate(TextureLayer layer){
873 nPushLayerUpdate(mNativeProxy,layer.getDeferredLayerUpdater());
874}
This adds the DeferredLayerUpdater to DrawFrameTask's mLayers:
47 void DrawFrameTask::pushLayerUpdate(DeferredLayerUpdater* layer) {
...
56 mLayers.push_back(layer);
57}
3. Updating the TextureView before drawing
TextureView#applyUpdate()
private void applyUpdate() {
    ...
    mLayer.prepare(getWidth(), getHeight(), mOpaque);
    mLayer.updateSurfaceTexture();
    if (mListener != null) {
        mListener.onSurfaceTextureUpdated(mSurface);
    }
}
TextureLayer calls into native code via JNI:
public boolean prepare(int width, int height, boolean isOpaque) {
    return nPrepare(mFinalizer.get(), width, height, isOpaque);
}
static jboolean TextureLayer_prepare(JNIEnv* env, jobject clazz,
44 jlong layerUpdaterPtr, jint width, jint height, jboolean isOpaque) {
45 DeferredLayerUpdater* layer = reinterpret_cast<DeferredLayerUpdater*>(layerUpdaterPtr);
46 bool changed = false;
47 changed |= layer->setSize(width, height);
48 changed |= layer->setBlend(!isOpaque);
49 return changed;
50}
applyUpdate() has the DeferredLayerUpdater corresponding to this TextureView in DrawFrameTask do some preparation, setting the size and blend mode.
4. TextureView adds a TextureLayerOp to the DisplayList
In draw(), drawTextureLayer(layer) is recorded on the canvas,
which then goes through JNI into android_view_DisplayListCanvas.cpp:
162 static void android_view_DisplayListCanvas_drawTextureLayer(jlong canvasPtr, jlong layerPtr) {
163     Canvas* canvas = reinterpret_cast<Canvas*>(canvasPtr);
164     DeferredLayerUpdater* layer = reinterpret_cast<DeferredLayerUpdater*>(layerPtr);
165     canvas->drawLayer(layer);
166 }
RecordingCanvas::drawLayer(DeferredLayerUpdater* layerHandle) adds a TextureLayerOp to the DisplayList:
561 void RecordingCanvas::drawLayer(DeferredLayerUpdater* layerHandle) {
...
568     addOp(alloc().create_trivial<TextureLayerOp>(
569             Rect(layerHandle->getWidth(), layerHandle->getHeight()),
570             *(mState.currentSnapshot()->transform), getRecordedClip(), layerHandle));
571 }
5. When RenderNode prepareTree synchronizes the render data
void RenderNode::prepareTreeImpl(TreeObserver& observer, TreeInfo& info, bool functorsNeedLayer) {
273 ...
313 pushLayerUpdate(info);
...
316}
233 void RenderNode::pushLayerUpdate(TreeInfo& info) {
234 LayerType layerType = properties().effectiveLayerType();
....
//create or update the offscreen layer for the TextureView
246 if (info.canvasContext.createOrUpdateLayer(this, *info.damageAccumulator, info.errorHandler)) {
247 damageSelf(info);
248 }
...
262}
CanvasContext::createOrUpdateLayer(..) forwards to the pipeline's createOrUpdateLayer(..), for example OpenGLPipeline::createOrUpdateLayer(..):
205 bool OpenGLPipeline::createOrUpdateLayer(RenderNode* node,
206 const DamageAccumulator& damageAccumulator,
207 bool wideColorGamut,
208 ErrorHandler* errorHandler) {
209 RenderState& renderState = mRenderThread.renderState();
210 OffscreenBufferPool& layerPool = renderState.layerPool();
//taken from RenderState's `OffscreenBufferPool* mLayerPool`
213 node->setLayer(layerPool.get(renderState, node->getWidth(), node->getHeight(), wideColorGamut));
...
251}
An OffscreenBuffer* layer is taken from RenderState's OffscreenBufferPool* mLayerPool and set on the RenderNode of the TextureView (or any other View that needs offscreen rendering). The OffscreenBuffer is stored in the RenderNode's mLayer; later, when the render pipeline's draw(..) runs, it operates on this buffer via RenderNode::getLayer().
The OffscreenBuffer is created as follows:
OffscreenBufferPool::get(..)
145 OffscreenBuffer* OffscreenBufferPool::get(RenderState& renderState, const uint32_t width,
146 const uint32_t height, bool wideColorGamut) {
147 OffscreenBuffer* layer = nullptr;
148
149 Entry entry(width, height, wideColorGamut);
150 auto iter = mPool.find(entry);
151
152 if (iter != mPool.end()) {
153 entry = *iter;
154 mPool.erase(iter);
155
156 layer = entry.layer;
157 layer->viewportWidth = width;
158 layer->viewportHeight = height;
159 mSize -= layer->getSizeInBytes();
160 } else {
161 layer = new OffscreenBuffer(renderState, Caches::getInstance(), width, height,
162 wideColorGamut);
163 }
164
165 return layer;
166}
167
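The pool's reuse strategy, look up a free buffer by size and allocate only on a miss, can be sketched like this (simplified, hypothetical types; the real pool also tracks total byte size and wide-gamut flags):

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <utility>

// Stand-in for OffscreenBuffer: only the pooling key matters here.
struct Layer {
    uint32_t width, height;
};

// Sketch of the OffscreenBufferPool pattern: reuse a pooled layer of
// matching size, otherwise allocate. One entry per size for brevity.
class LayerPool {
public:
    std::unique_ptr<Layer> get(uint32_t width, uint32_t height) {
        auto it = mPool.find({width, height});
        if (it != mPool.end()) {
            auto layer = std::move(it->second);  // reuse the pooled buffer
            mPool.erase(it);
            return layer;
        }
        ++mAllocations;
        return std::make_unique<Layer>(Layer{width, height});
    }

    // Return a layer to the pool when its RenderNode no longer needs it.
    void putOrDelete(std::unique_ptr<Layer> layer) {
        auto key = std::make_pair(layer->width, layer->height);
        mPool[key] = std::move(layer);
    }

    int allocations() const { return mAllocations; }

private:
    std::map<std::pair<uint32_t, uint32_t>, std::unique_ptr<Layer>> mPool;
    int mAllocations = 0;
};
```

Pooling matters because an OffscreenBuffer is GPU memory: reusing a same-sized buffer avoids a fresh allocation on every frame.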
- Generating the TextureView texture
When DrawFrameTask::syncFrameState(TreeInfo& info) synchronizes the frame state, it calls
DeferredLayerUpdater::apply() to update the TextureView's texture.
DeferredLayerUpdater::apply()
void DeferredLayerUpdater::apply() {
77 if (!mLayer) {
//create the Layer (Layer.cpp)
78 mLayer = mCreateLayerFn(mRenderState, mWidth, mHeight, mColorFilter, mAlpha, mMode, mBlend);
79 }
...
84 if (mSurfaceTexture.get()) {
85 if (mLayer->getApi() == Layer::Api::Vulkan) {
...
90 } else {
.....
94 if (!mGLContextAttached) {
95 mGLContextAttached = true;
96 mUpdateTexImage = true;
//mSurfaceTexture is of type sp<GLConsumer>
97 mSurfaceTexture->attachToContext(static_cast<GlLayer*>(mLayer)->getTextureId());
98 }
99 if (mUpdateTexImage) {
100 mUpdateTexImage = false;
//update the texture
101 doUpdateTexImage();
102 }
103 GLenum renderTarget = mSurfaceTexture->getCurrentTextureTarget();
//set the texture target on the GlLayer
104 static_cast<GlLayer*>(mLayer)->setRenderTarget(renderTarget);
105 }
...
110 }
111}
//update the texture via mSurfaceTexture->updateTexImage()
void DeferredLayerUpdater::doUpdateTexImage() {
....
//the key call -> GLConsumer::updateTexImage()
117 if (mSurfaceTexture->updateTexImage() == NO_ERROR) {
118 float transform[16];
119
120 int64_t frameNumber = mSurfaceTexture->getFrameNumber();
121 // If the GLConsumer queue is in synchronous mode, need to discard all
122 // but latest frame, using the frame number to tell when we no longer
123 // have newer frames to target. Since we can't tell which mode it is in,
124 // do this unconditionally.
125 int dropCounter = 0;
126 while (mSurfaceTexture->updateTexImage() == NO_ERROR) {
127 int64_t newFrameNumber = mSurfaceTexture->getFrameNumber();
128 if (newFrameNumber == frameNumber) break;
129 frameNumber = newFrameNumber;
130 dropCounter++;
131 }
132
....
149 }
150}
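The drain loop in doUpdateTexImage() can be modeled with a mock consumer: keep calling updateTexImage() until the frame number stops advancing, so a synchronous-mode queue is reduced to its newest frame. An illustrative sketch, not the real GLConsumer:

```cpp
#include <cstdint>
#include <deque>

// Mock consumer standing in for GLConsumer: each successful
// updateTexImage() advances to the next queued frame.
class MockConsumer {
public:
    explicit MockConsumer(std::deque<int64_t> frames) : mFrames(std::move(frames)) {}

    bool updateTexImage() {  // returns true on NO_ERROR
        if (mFrames.empty()) return false;
        mCurrentFrame = mFrames.front();
        mFrames.pop_front();
        return true;
    }
    int64_t getFrameNumber() const { return mCurrentFrame; }

private:
    std::deque<int64_t> mFrames;
    int64_t mCurrentFrame = -1;
};

// Mirrors the unconditional drain in DeferredLayerUpdater::doUpdateTexImage():
// discard all but the latest frame, counting how many were dropped.
int64_t drainToLatest(MockConsumer& consumer, int* dropCounter) {
    if (!consumer.updateTexImage()) return -1;
    int64_t frameNumber = consumer.getFrameNumber();
    *dropCounter = 0;
    while (consumer.updateTexImage()) {
        int64_t newFrameNumber = consumer.getFrameNumber();
        if (newFrameNumber == frameNumber) break;
        frameNumber = newFrameNumber;
        ++(*dropCounter);
    }
    return frameNumber;
}
```

As the original comment notes, the real code cannot tell whether the queue is in synchronous mode, so it drains unconditionally; in asynchronous mode the loop simply exits after one extra check.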
39/*
40 * GLConsumer consumes buffers of graphics data from a BufferQueue,
41 * and makes them available to OpenGL as a texture.
42 *
43 * A typical usage pattern is to set up the GLConsumer with the
44 * desired options, and call updateTexImage() when a new frame is desired.
45 * If a new frame is available, the texture will be updated. If not,
46 * the previous contents are retained.
47 *
48 * By default, the texture is attached to the GL_TEXTURE_EXTERNAL_OES
49 * texture target, in the EGL context of the first thread that calls
50 * updateTexImage().
51 *
52 * This class was previously called SurfaceTexture.
53 */
54class GLConsumer : public ConsumerBase {
GLConsumer consumes graphics data buffers from the BufferQueue and makes them available to OpenGL as textures. The typical usage pattern is to set up the GLConsumer with the desired options and call updateTexImage() when a new frame is wanted: if a new frame is available the texture is updated, otherwise the previous contents are retained. (This paragraph is a direct translation of the comment above.)
This diagram has appeared in many blog posts. As it shows, when a TextureView displays a camera preview, the image producer is the Camera Preview; the buffer data is handed to the native Surface, and inside the TextureView the SurfaceTexture's GLConsumer passes the buffer data on to the OpenGL image stream consumer.
stateDiagram
state SurfaceTexture{
state Surface{
BufferQueueProducer
}
state GLConsumer{
BufferQueueConsumer
}
Surface --> BufferQueueCore
BufferQueueCore --> GLConsumer
}
GLConsumer::updateTexImage()
//update the texture with the image to be consumed
status_t GLConsumer::updateTexImage() {
200 ...
209 // Make sure the EGL state is the same as in previous calls.
210 status_t err = checkAndUpdateEglStateLocked();
211 if (err != NO_ERROR) {
212 return err;
213 }
214
215 BufferItem item;
216
// 1. acquire the frame buffer
217 // Acquire the next buffer.
218 // In asynchronous mode the list is guaranteed to be one buffer
219 // deep, while in synchronous mode we use the oldest buffer.
220 err = acquireBufferLocked(&item, 0);
...
// 2. update the frame buffer
234 // Release the previous buffer.
235 err = updateAndReleaseLocked(item);
....
242 // Bind the new buffer to the GL texture, and wait until it's ready.
243 return bindTextureImageLocked();
244}
GLConsumer updates the texture in three steps:
1. acquireBufferLocked(..) acquires the BufferItem (GraphicBuffer) that the producer queued into the BufferQueue.
2. updateAndReleaseLocked(..) makes the GraphicBuffer acquired in step 1 the GLConsumer's current EglImage (GraphicBuffer).
3. bindTextureImageLocked() binds the new buffer to the GL texture and waits until it's ready.
Step 1: acquireBufferLocked goes down through several layers and finally takes a BufferItem out of the BufferQueue; the code lives in
BufferQueueConsumer::acquireBuffer(BufferItem* outBuffer..)
53 status_t BufferQueueConsumer::acquireBuffer(BufferItem* outBuffer,
54 nsecs_t expectedPresent, uint64_t maxFrameNumber) {
....
Step 2: updateAndReleaseLocked() takes the BufferItem obtained in step 1:
GLConsumer::updateAndReleaseLocked(const BufferItem& item,..)
381status_t GLConsumer::updateAndReleaseLocked(const BufferItem& item,
382 PendingRelease* pendingRelease)
383{
384 status_t err = NO_ERROR;
385
386 int slot = item.mSlot;
...
403
404 // Ensure we have a valid EglImageKHR for the slot, creating an EglImage
405 // if nessessary, for the gralloc buffer currently in the slot in
406 // ConsumerBase.
407 // We may have to do this even when item.mGraphicBuffer == NULL (which
408 // means the buffer was previously acquired).
409 err = mEglSlots[slot].mEglImage->createIfNeeded(mEglDisplay, item.mCrop);
...
437 // Hang onto the pointer so that it isn't freed in the call to
438 // releaseBufferLocked() if we're in shared buffer mode and both buffers are
439 // the same.
440 sp<EglImage> nextTextureImage = mEglSlots[slot].mEglImage;
....
464 // Update the GLConsumer state.
465 mCurrentTexture = slot;
466 mCurrentTextureImage = nextTextureImage;
467 mCurrentCrop = item.mCrop;
468 mCurrentTransform = item.mTransform;
469 mCurrentScalingMode = item.mScalingMode;
470 mCurrentTimestamp = item.mTimestamp;
471 mCurrentDataSpace = item.mDataSpace;
472 mCurrentFence = item.mFence;
473 mCurrentFenceTime = item.mFenceTime;
474 mCurrentFrameNumber = item.mFrameNumber;
475
476 computeCurrentTransformMatrixLocked();
477
478 return err;
479}
480
updateAndReleaseLocked() hands the new GraphicBuffer acquired from the BufferQueue to the GLConsumer.
The BufferItem taken out of the BufferQueue:
54 // mGraphicBuffer points to the buffer allocated for this slot, or is NULL
55 // if the buffer in this slot has been acquired in the past (see
56 // BufferSlot.mAcquireCalled).
57 sp<GraphicBuffer> mGraphicBuffer;
65 // mCrop is the current crop rectangle for this buffer slot.
66 Rect mCrop;
67
97 // mSlot is the slot index of this buffer (default INVALID_BUFFER_SLOT).
98 int mSlot;
The EglImage that GLConsumer uses to hold the BufferItem's GraphicBuffer:
EglImage
301 class EglImage : public LightRefBase<EglImage> {
302 public:
303 EglImage(sp<GraphicBuffer> graphicBuffer);
304
305 // createIfNeeded creates an EGLImage if required (we haven't created
306 // one yet, or the EGLDisplay or crop-rect has changed).
307 status_t createIfNeeded(EGLDisplay display,
308 const Rect& cropRect,
309 bool forceCreate = false);
310
311 // This calls glEGLImageTargetTexture2DOES to bind the image to the
312 // texture in the specified texture target.
313 void bindToTextureTarget(uint32_t texTarget);
314
315 const sp<GraphicBuffer>& graphicBuffer() { return mGraphicBuffer; }
316 const native_handle* graphicBufferHandle() {
317 return mGraphicBuffer == NULL ? NULL : mGraphicBuffer->handle;
318 }
319
320 private:
....
333 // mGraphicBuffer is the buffer that was used to create this image.
334 sp<GraphicBuffer> mGraphicBuffer;
335
336 // mEglImage is the EGLImage created from mGraphicBuffer.
337 EGLImageKHR mEglImage;
338
339 // mEGLDisplay is the EGLDisplay that was used to create mEglImage.
340 EGLDisplay mEglDisplay;
341
342 // mCropRect is the crop rectangle passed to EGL when mEglImage
343 // was created.
344 Rect mCropRect;
345 };
447 // EGLSlot contains the information and object references that
448 // GLConsumer maintains about a BufferQueue buffer slot.
449 struct EglSlot {
450 EglSlot() : mEglFence(EGL_NO_SYNC_KHR) {}
451
452 // mEglImage is the EGLImage created from mGraphicBuffer.
453 sp<EglImage> mEglImage;
454
455 // mFence is the EGL sync object that must signal before the buffer
456 // associated with this buffer slot may be dequeued. It is initialized
457 // to EGL_NO_SYNC_KHR when the buffer is created and (optionally, based
458 // on a compile-time option) set to a new sync object in updateTexImage.
459 EGLSyncKHR mEglFence;
460 };
474 // mEGLSlots stores the buffers that have been allocated by the BufferQueue
475 // for each buffer slot. It is initialized to null pointers, and gets
476 // filled in with the result of BufferQueue::acquire when the
477 // client dequeues a buffer from a
478 // slot that has not yet been used. The buffer allocated to a slot will also
479 // be replaced if the requested buffer usage or geometry differs from that
480 // of the buffer allocated to a slot.
481 EglSlot mEglSlots[BufferQueueDefs::NUM_BUFFER_SLOTS];
EglImage is a helper class for tracking and creating EglImageKHR objects. It holds the GraphicBuffer obtained from the BufferQueue and the EGLImageKHR generated from that GraphicBuffer; the real type behind the EGLImageKHR is ANativeWindowBuffer.
The GraphicBuffer taken out of the BufferQueue is held by the GLConsumer's EglImage.
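The per-slot caching that EglSlot/EglImage implement, create the backing image lazily and rebuild only when the inputs change, can be sketched as follows (placeholder types, no real EGL):

```cpp
#include <array>

// Stand-in for GLConsumer::EglImage: createIfNeeded() only rebuilds
// when we have no image yet, the inputs (here, the crop) changed, or
// a rebuild is forced, mirroring the comment on EglImage::createIfNeeded.
struct SlotImage {
    bool created = false;
    int crop = 0;        // stands in for the EGL crop rect / display
    int createCalls = 0; // counts the expensive "eglCreateImageKHR" work

    void createIfNeeded(int newCrop, bool forceCreate = false) {
        if (!created || crop != newCrop || forceCreate) {
            // real code: eglCreateImageKHR(...) from the slot's GraphicBuffer
            created = true;
            crop = newCrop;
            ++createCalls;
        }
    }
};

// Mirrors mEglSlots[BufferQueueDefs::NUM_BUFFER_SLOTS]: one cached
// image per BufferQueue slot. 64 is an assumed illustrative count.
constexpr int kNumBufferSlots = 64;

struct SlotCache {
    std::array<SlotImage, kNumBufferSlots> slots;
};
```

Because each slot's GraphicBuffer is stable across frames, caching the derived EGLImage per slot means the expensive creation happens once per slot rather than once per frame.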
Step 3: bindTextureImageLocked()
GLConsumer::bindTextureImageLocked()
481 status_t GLConsumer::bindTextureImageLocked() {
482 if (mEglDisplay == EGL_NO_DISPLAY) {
...
499 status_t err = mCurrentTextureImage->createIfNeeded(mEglDisplay,
500 mCurrentCrop);
...
506 mCurrentTextureImage->bindToTextureTarget(mTexTarget);
...
529 // Wait for the new buffer to be ready.
530 return doGLFenceWaitLocked();
531}
GLConsumer::EglImage::createImage(EGLDisplay dpy, const sp<GraphicBuffer>& graphicBuffer, const Rect& crop)
1121 EGLImageKHR GLConsumer::EglImage::createImage(EGLDisplay dpy,
1122 const sp<GraphicBuffer>& graphicBuffer, const Rect& crop) {
1123 EGLClientBuffer cbuf =
1124 static_cast<EGLClientBuffer>(graphicBuffer->getNativeBuffer());
1125 const bool createProtectedImage =
1126 (graphicBuffer->getUsage() & GRALLOC_USAGE_PROTECTED) &&
1127 hasEglProtectedContent();
1128 EGLint attrs[] = {
1129 EGL_IMAGE_PRESERVED_KHR, EGL_TRUE,
1130 EGL_IMAGE_CROP_LEFT_ANDROID, crop.left,
1131 EGL_IMAGE_CROP_TOP_ANDROID, crop.top,
1132 EGL_IMAGE_CROP_RIGHT_ANDROID, crop.right,
1133 EGL_IMAGE_CROP_BOTTOM_ANDROID, crop.bottom,
1134 createProtectedImage ? EGL_PROTECTED_CONTENT_EXT : EGL_NONE,
1135 createProtectedImage ? EGL_TRUE : EGL_NONE,
1136 EGL_NONE,
1137 };
1138 if (!crop.isValid()) {
1139 // No crop rect to set, so leave the crop out of the attrib array. Make
1140 // sure to propagate the protected content attrs if they are set.
1141 attrs[2] = attrs[10];
1142 attrs[3] = attrs[11];
1143 attrs[4] = EGL_NONE;
1144 } else if (!isEglImageCroppable(crop)) {
1145 // The crop rect is not at the origin, so we can't set the crop on the
1146 // EGLImage because that's not allowed by the EGL_ANDROID_image_crop
1147 // extension. In the future we can add a layered extension that
1148 // removes this restriction if there is hardware that can support it.
1149 attrs[2] = attrs[10];
1150 attrs[3] = attrs[11];
1151 attrs[4] = EGL_NONE;
1152 }
1153 eglInitialize(dpy, 0, 0);
1154 EGLImageKHR image = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
1155 EGL_NATIVE_BUFFER_ANDROID, cbuf, attrs);
...
1161 return image;
1162}
Creating the EGLImageKHR:
2087 EGLImageKHR eglCreateImageKHR(EGLDisplay dpy, EGLContext ctx, EGLenum target,
2088 EGLClientBuffer buffer, const EGLint* /*attrib_list*/)
2089{
...
2100 ANativeWindowBuffer* native_buffer = (ANativeWindowBuffer*)buffer;
2101
...
2108 switch (native_buffer->format) {
2109 case HAL_PIXEL_FORMAT_RGBA_8888:
2110 case HAL_PIXEL_FORMAT_RGBX_8888:
2111 case HAL_PIXEL_FORMAT_RGB_888:
2112 case HAL_PIXEL_FORMAT_RGB_565:
2113 case HAL_PIXEL_FORMAT_BGRA_8888:
2114 break;
2115 default:
2116 return setError(EGL_BAD_PARAMETER, EGL_NO_IMAGE_KHR);
2117 }
2118
2119 native_buffer->common.incRef(&native_buffer->common);
2120 return (EGLImageKHR)native_buffer;
2121}
2122
After the EglImage is created, mCurrentTextureImage->bindToTextureTarget(mTexTarget) is called:
GLConsumer::EglImage::bindToTextureTarget()
1116 void GLConsumer::EglImage::bindToTextureTarget(uint32_t texTarget) {
1117 glEGLImageTargetTexture2DOES(texTarget,
1118 static_cast<GLeglImageOES>(mEglImage));
1119}
And that's roughly it: binding the new buffer to the GL texture completes the update.