Mastering the Principles of Android Image Display (Part 2-1)


Preface

In the previous article, 《Android图形渲染原理(上)》, we looked at the image consumers in detail: we saw how image data in Android is consumed by SurfaceFlinger, HWComposer, or OpenGL ES. So how is that image data produced in the first place? This article takes a detailed look at Android's image producers, Skia, OpenGL ES, and Vulkan: the three most important brushes in Android.

Image Producers

OpenGL ES

What is OpenGL? OpenGL is a graphics programming interface. For developers it is essentially a set of C APIs: by calling these functions, you can drive the graphics hardware to do graphics work. Although OpenGL defines the API, it does not implement it; the implementation is provided by the graphics driver. As the previous article explained, the driver is the gateway through which other modules talk to the graphics hardware. A developer issues rendering commands through the OpenGL API (these rendering commands are called draw calls), and the driver translates them into data the GPU can understand, then tells the GPU to read the data and carry out the operations. And what is OpenGL ES? It is a trimmed-down version of OpenGL aimed at embedded and other less powerful devices, and it is largely identical to OpenGL. Since Android 4.0, hardware acceleration has been enabled by default, which means OpenGL ES is used by default to generate and render graphics.

Next, let's look at how OpenGL ES is used.

How do we use OpenGL ES?

To use OpenGL ES on Android, we first need to understand EGL. Although OpenGL is cross-platform, it cannot be used directly on any given platform, because every platform's window system is different. EGL is the bridging layer that adapts OpenGL ES to Android's native window system.

OpenGL ES defines the platform-independent GL drawing commands; EGL defines a uniform, platform-neutral interface for controlling displays, contexts, and surfaces.

So how do we generate graphics with EGL and OpenGL ES? It is fairly straightforward; there are three main steps:

  1. EGL initializes the Display, Context, and Surface
  2. OpenGL ES issues the drawing commands
  3. EGL presents the drawn buffer

Let's walk through each step in detail.

1. EGL initialization: we mainly need to initialize three elements: the Display, the Context, and the Surface.

  • Display (EGLDisplay) is an abstraction of the actual display device
//Create a connection to the native window system
EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
//Initialize the display
eglInitialize(display, NULL, NULL);
  • Context (EGLContext) stores the state information OpenGL ES uses while drawing
/* create an EGL rendering context */
context = eglCreateContext(display, config, EGL_NO_CONTEXT, NULL);
  • Surface (EGLSurface) is the memory area in which drawn images are stored
//Choose a Surface configuration
eglChooseConfig(display, attribute_list, &config, 1, &num_config);
//Create the native window
native_window = createNativeWindow();
//Create the surface
surface = eglCreateWindowSurface(display, config, native_window, NULL);
  • Once initialization is done, the context needs to be bound
//Bind the context
eglMakeCurrent(display, surface, surface, context);

2. OpenGL ES issues the drawing commands: drawing is done through the OpenGL ES API, the gl*() functions. (Note that the immediate-mode glBegin/glEnd calls below come from desktop OpenGL and are shown only to illustrate the idea; OpenGL ES itself draws from vertex arrays and buffers.)

//Draw points
glBegin(GL_POINTS);
    glVertex3f(0.7f, -0.5f, 0.0f); //the arguments are 3D coordinates
    glVertex3f(0.6f, -0.7f, 0.0f);
    glVertex3f(0.6f, -0.8f, 0.0f);
glEnd();
//Draw a line strip
glBegin(GL_LINE_STRIP);
    glVertex3f(-1.0f, 1.0f, 0.0f);
    glVertex3f(-0.5f, 0.5f, 0.0f);
    glVertex3f(-0.7f, 0.5f, 0.0f);
glEnd();
//……

3. EGL presents the drawn buffer: eglSwapBuffers() swaps the two halves of the double buffer

EGLBoolean res = eglSwapBuffers(mDisplay, mSurface);

Once swapBuffers has switched buffers, the graphics hardware processes the image in the buffer, and at that point our image can be shown on screen.
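The snippets above draw a single frame; a real application issues these calls in a loop, one iteration per frame. Here is a minimal sketch of such a render loop, assuming display and surface were initialized as in the steps above; drawScene() is a hypothetical helper holding the frame's gl*() calls:

//Minimal per-frame render loop (sketch; drawScene() is a hypothetical helper)
for (;;) {
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f); //clear color: opaque black
    glClear(GL_COLOR_BUFFER_BIT);         //clear the back buffer
    drawScene();                          //issue this frame's drawing commands
    eglSwapBuffers(display, surface);     //present: swap front and back buffers
}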

Here is a complete demo of the whole flow:

#include <stdlib.h>
#include <unistd.h>
#include <EGL/egl.h>
#include <GLES/gl.h>
typedef ... NativeWindowType;
extern NativeWindowType createNativeWindow(void);
static EGLint const attribute_list[] = {
        EGL_RED_SIZE, 1,
        EGL_GREEN_SIZE, 1,
        EGL_BLUE_SIZE, 1,
        EGL_NONE
};
int main(int argc, char ** argv)
{
        EGLDisplay display;
        EGLConfig config;
        EGLContext context;
        EGLSurface surface;
        NativeWindowType native_window;
        EGLint num_config;

        /* get an EGL display connection */
        display = eglGetDisplay(EGL_DEFAULT_DISPLAY);

        /* initialize the EGL display connection */
        eglInitialize(display, NULL, NULL);

        /* get an appropriate EGL frame buffer configuration */
        eglChooseConfig(display, attribute_list, &config, 1, &num_config);

        /* create an EGL rendering context */
        context = eglCreateContext(display, config, EGL_NO_CONTEXT, NULL);

        /* create a native window */
        native_window = createNativeWindow();

        /* create an EGL window surface */
        surface = eglCreateWindowSurface(display, config, native_window, NULL);

        /* connect the context to the surface */
        eglMakeCurrent(display, surface, surface, context);

        /* clear the color buffer */
        glClearColor(1.0, 1.0, 0.0, 1.0);
        glClear(GL_COLOR_BUFFER_BIT);
        glFlush();

        eglSwapBuffers(display, surface);

        sleep(10);
        return EXIT_SUCCESS;
}

Having covered how EGL and OpenGL are used, we can now look at how Android draws its UI with them. We'll walk through two scenarios, the boot animation and hardware acceleration, to see in detail how OpenGL ES, acting as an image producer, produces (that is, draws) images.

Playing the boot animation with OpenGL ES

When the Android system boots, it starts the init process, and init starts services such as Zygote, ServiceManager, and SurfaceFlinger. As SurfaceFlinger starts, the boot animation starts as well. Let's look first at SurfaceFlinger's initialization function.

//File --> /frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp
void SurfaceFlinger::init() {
    ...
    mStartBootAnimThread = new StartBootAnimThread();
    if (mStartBootAnimThread->Start() != NO_ERROR) {
        ALOGE("Run StartBootAnimThread failed!");
    }
}

//File --> /frameworks/native/services/surfaceflinger/StartBootAnimThread.cpp
status_t StartBootAnimThread::Start() {
    return run("SurfaceFlinger::StartBootAnimThread", PRIORITY_NORMAL);
}

bool StartBootAnimThread::threadLoop() {
    property_set("service.bootanim.exit", "0");
    property_set("ctl.start", "bootanim");
    // Exit immediately
    return false;
}

From the code above, SurfaceFlinger's init function starts the StartBootAnimThread thread, which sends a notification via property_set, a socket-based IPC mechanism. (If you are interested in Android IPC, see my article 《深入理解Android进程间通信机制》; I won't expand on it here.) The init process receives the bootanim notification and then starts the bootanim service, which runs the boot animation logic in the BootAnimation class.
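For reference, the bootanim service that init starts via ctl.start is declared in an init .rc file. The snippet below paraphrases AOSP's bootanim.rc; treat the exact fields as version-dependent:

service bootanim /system/bin/bootanimation
    class core animation
    user graphics
    group graphics audio
    disabled
    oneshot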

With that background, we can look at the BootAnimation class, which contains all of Android's boot-animation logic. Let's start with the constructor and onFirstRef, the first two functions executed when the class is created:

//File --> /frameworks/base/cmds/bootanimation/BootAnimation.cpp
BootAnimation::BootAnimation() : Thread(false), mClockEnabled(true), mTimeIsAccurate(false),
        mTimeFormat12Hour(false), mTimeCheckThread(NULL) {
    //Create the SurfaceComposerClient
    mSession = new SurfaceComposerClient();
    //……
}

void BootAnimation::onFirstRef() {
    status_t err = mSession->linkToComposerDeath(this);
    if (err == NO_ERROR) {
        run("BootAnimation", PRIORITY_DISPLAY);
    }
}

The constructor creates a SurfaceComposerClient, the client-side proxy for SurfaceFlinger, through which we can communicate with SurfaceFlinger. After the constructor runs, onFirstRef() executes and starts the BootAnimation thread.

Next, let's look at readyToRun, the BootAnimation thread's initialization function.

//File --> /frameworks/base/cmds/bootanimation/BootAnimation.cpp
status_t BootAnimation::readyToRun() {
    mAssets.addDefaultAssets();

    sp<IBinder> dtoken(SurfaceComposerClient::getBuiltInDisplay(
            ISurfaceComposer::eDisplayIdMain));
    DisplayInfo dinfo;
    //Get the display information
    status_t status = SurfaceComposerClient::getDisplayInfo(dtoken, &dinfo);
    if (status)
        return -1;

    // Ask SurfaceFlinger to create a Surface; on success a SurfaceControl proxy is returned
    sp<SurfaceControl> control = session()->createSurface(String8("BootAnimation"),
            dinfo.w, dinfo.h, PIXEL_FORMAT_RGB_565);

    SurfaceComposerClient::openGlobalTransaction();
    //Set this layer's z-order within SurfaceFlinger
    control->setLayer(0x40000000);

    //Get the surface
    sp<Surface> s = control->getSurface();

    // What follows is the EGL initialization flow
    const EGLint attribs[] = {
            EGL_RED_SIZE,   8,
            EGL_GREEN_SIZE, 8,
            EGL_BLUE_SIZE,  8,
            EGL_DEPTH_SIZE, 0,
            EGL_NONE
    };
    EGLint w, h;
    EGLint numConfigs;
    EGLConfig config;
    EGLSurface surface;
    EGLContext context;

    //Step 1: get the Display
    EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    //Step 2: initialize EGL
    eglInitialize(display, 0, 0);
    //Step 3: choose a config
    eglChooseConfig(display, attribs, &config, 1, &numConfigs);
    //Step 4: pass in the Surface created by SurfaceFlinger and build an EGLSurface from it
    surface = eglCreateWindowSurface(display, config, s.get(), NULL);
    //Step 5: create the EGL context
    context = eglCreateContext(display, config, NULL, NULL);
    //Step 6: bind the EGL context
    if (eglMakeCurrent(display, surface, surface, context) == EGL_FALSE)
        return NO_INIT;
    //……
}

readyToRun does two main things: initialize the Surface and initialize EGL. The EGL initialization is the same flow described in the OpenGL ES usage section above, so I won't repeat it; here is a brief outline of the Surface initialization (the details belong in the next article, on image buffers). The steps are:

  • Create a SurfaceComposerClient
  • Through the SurfaceComposerClient, ask SurfaceFlinger to create a Surface; a SurfaceControl is returned
  • With the SurfaceControl we can set properties of the Surface, such as its z-order, and obtain the Surface itself
  • Once we have the Surface, bind it into EGL

The Surface is created and EGL is set up, so we can now produce images through OpenGL: the boot animation itself. Let's see how the animation is played in threadLoop, the thread's run method.

//File --> /frameworks/base/cmds/bootanimation/BootAnimation.cpp
bool BootAnimation::threadLoop()
{
    bool r;
    if (mZipFileName.isEmpty()) {
        r = android();   //Default Android animation
    } else {
        r = movie();     //Custom animation
    }
	//Cleanup after the animation finishes
    eglMakeCurrent(mDisplay, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
    eglDestroyContext(mDisplay, mContext);
    eglDestroySurface(mDisplay, mSurface);
    mFlingerSurface.clear();
    mFlingerSurfaceControl.clear();
    eglTerminate(mDisplay);
    eglReleaseThread();
    IPCThreadState::self()->stopProcess();
    return r;
}

The function checks whether a custom boot-animation file exists: if not, it plays the default animation, otherwise the custom one; after playback it releases and cleans everything up. The default and custom animations are played in much the same way, so let's take the custom animation as the example and look at the implementation.

//File --> /frameworks/base/cmds/bootanimation/BootAnimation.cpp
bool BootAnimation::movie()
{
    //Load the animation file from its path
    Animation* animation = loadAnimation(mZipFileName);
    if (animation == NULL)
        return false;

    //……

    
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    // Use OpenGL to clear the screen
    glShadeModel(GL_FLAT);
    glDisable(GL_DITHER);
    glDisable(GL_SCISSOR_TEST);
    glDisable(GL_BLEND);

    glBindTexture(GL_TEXTURE_2D, 0);
    glEnable(GL_TEXTURE_2D);
    glTexEnvx(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
    glTexParameterx(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameterx(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameterx(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameterx(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    //……

    //Play the animation
    playAnimation(*animation);

    //……
    
    //Release the animation
    releaseAnimation(animation);

    return false;
}

The movie function mainly does the following:

  1. Load the animation from the file path
  2. Clear the screen with OpenGL
  3. Call playAnimation to play the animation
  4. After playback stops, release resources via releaseAnimation

Let's look at playAnimation next.

//File --> /frameworks/base/cmds/bootanimation/BootAnimation.cpp
bool BootAnimation::playAnimation(const Animation& animation)
{
    const size_t pcount = animation.parts.size();
    nsecs_t frameDuration = s2ns(1) / animation.fps;
    const int animationX = (mWidth - animation.width) / 2;
    const int animationY = (mHeight - animation.height) / 2;

    //Iterate over the animation parts
    for (size_t i=0 ; i<pcount ; i++) {
        const Animation::Part& part(animation.parts[i]);
        const size_t fcount = part.frames.size();
        glBindTexture(GL_TEXTURE_2D, 0);

        // Handle animation package
        if (part.animation != NULL) {
            playAnimation(*part.animation);
            if (exitPending())
                break;
            continue; //to next part
        }
		
        //Loop this animation part
        for (int r=0 ; !part.count || r<part.count ; r++) {
            // Exit any non playuntil complete parts immediately
            if(exitPending() && !part.playUntilComplete)
                break;

            
            //Start the audio thread and play the audio clip
            if (r == 0 && part.audioData && playSoundsAllowed()) {               
                if (mInitAudioThread != nullptr) {
                    mInitAudioThread->join();
                }
                audioplay::playClip(part.audioData, part.audioLength);
            }

            glClearColor(
                    part.backgroundColor[0],
                    part.backgroundColor[1],
                    part.backgroundColor[2],
                    1.0f);
			//Draw each frame's image texture in a loop, paced at one frame per frameDuration
            for (size_t j=0 ; j<fcount && (!exitPending() || part.playUntilComplete) ; j++) {
                const Animation::Frame& frame(part.frames[j]);
                nsecs_t lastFrame = systemTime();

                if (r > 0) {
                    glBindTexture(GL_TEXTURE_2D, frame.tid);
                } else {
                    if (part.count != 1) {
                        //Generate a texture
                        glGenTextures(1, &frame.tid);
                        //Bind the texture
                        glBindTexture(GL_TEXTURE_2D, frame.tid);
                        glTexParameterx(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                        glTexParameterx(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
                    }
                    int w, h;
                    initTexture(frame.map, &w, &h);
                }

                const int xc = animationX + frame.trimX;
                const int yc = animationY + frame.trimY;
                Region clearReg(Rect(mWidth, mHeight));
                clearReg.subtractSelf(Rect(xc, yc, xc+frame.trimWidth, yc+frame.trimHeight));
                if (!clearReg.isEmpty()) {
                    Region::const_iterator head(clearReg.begin());
                    Region::const_iterator tail(clearReg.end());
                    glEnable(GL_SCISSOR_TEST);
                    while (head != tail) {
                        const Rect& r2(*head++);
                        glScissor(r2.left, mHeight - r2.bottom, r2.width(), r2.height());
                        glClear(GL_COLOR_BUFFER_BIT);
                    }
                    glDisable(GL_SCISSOR_TEST);
                }
                // Draw the texture
                glDrawTexiOES(xc, mHeight - (yc + frame.trimHeight),
                              0, frame.trimWidth, frame.trimHeight);
                if (mClockEnabled && mTimeIsAccurate && validClock(part)) {
                    drawClock(animation.clockFont, part.clockPosX, part.clockPosY);
                }

                eglSwapBuffers(mDisplay, mSurface);

                nsecs_t now = systemTime();
                nsecs_t delay = frameDuration - (now - lastFrame);
                //ALOGD("%lld, %lld", ns2ms(now - lastFrame), ns2ms(delay));
                lastFrame = now;

                if (delay > 0) {
                    struct timespec spec;
                    spec.tv_sec  = (now + delay) / 1000000000;
                    spec.tv_nsec = (now + delay) % 1000000000;
                    int err;
                    do {
                        err = clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &spec, NULL);
                    } while (err<0 && errno == EINTR);
                }

                checkExit();
            }
			//Sleep
            usleep(part.pause * ns2us(frameDuration));

            // Check the animation's exit condition
            if(exitPending() && !part.count)
                break;
        }

    }

    // Release the textures
    for (const Animation::Part& part : animation.parts) {
        if (part.count != 1) {
            const size_t fcount = part.frames.size();
            for (size_t j = 0; j < fcount; j++) {
                const Animation::Frame& frame(part.frames[j]);
                glDeleteTextures(1, &frame.tid);
            }
        }
    }

    // Stop and tear down audio playback
    audioplay::setPlaying(false);
    audioplay::destroy();

    return true;
}

As the source shows, playAnimation plays the animation by calling glDrawTexiOES in a loop at a fixed rate to draw each frame's image texture, while the audio module plays the accompanying sound.
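The pacing boils down to this: give each frame a budget of 1s/fps and, after swapping buffers, sleep to an absolute deadline so time spent drawing is not added on top of the sleep. A standalone sketch of just that logic, using plain POSIX calls rather than the AOSP utilities:

#include <errno.h>
#include <stdint.h>
#include <time.h>

//Fixed-fps pacing sketch, condensed from playAnimation above
static int64_t nowNs() {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

void runAtFixedFps(int fps) {
    const int64_t frameDuration = 1000000000LL / fps; //per-frame budget in ns
    for (;;) {
        const int64_t deadline = nowNs() + frameDuration;
        //... draw the frame and call eglSwapBuffers here ...
        struct timespec spec;
        spec.tv_sec  = deadline / 1000000000LL;
        spec.tv_nsec = deadline % 1000000000LL;
        //Sleep to the absolute deadline; retry if interrupted by a signal
        while (clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &spec, NULL) == EINTR) {}
    }
}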

That completes the boot-animation case study, and with it one way to play an animation through OpenGL ES. Now for the second case: how an Activity's UI is drawn through OpenGL with hardware acceleration, that is, the hardware drawing path.

Hardware acceleration with OpenGL ES

As we know, displaying an Activity's UI goes through three phases: Measure, Layout, and Draw. The Draw phase comes in two flavors, software drawing and hardware drawing, and hardware drawing is done through OpenGL ES. Let's look directly at how OpenGL ES draws in the hardware path; the entry point is ViewRootImpl's performDraw function.

//File --> /frameworks/base/core/java/android/view/ViewRootImpl.java
private void performDraw() {
    //……
    draw(fullRedrawNeeded);
    //……
}

private void draw(boolean fullRedrawNeeded) {
    Surface surface = mSurface;
    if (!surface.isValid()) {
        return;
    }

    //……

    if (!dirty.isEmpty() || mIsAnimating || accessibilityFocusDirty) {
        if (mAttachInfo.mThreadedRenderer != null && mAttachInfo.mThreadedRenderer.isEnabled()) {

            //……

            //Hardware rendering
            mAttachInfo.mThreadedRenderer.draw(mView, mAttachInfo, this);

        } else {

            //……

            //Software rendering
            if (!drawSoftware(surface, mAttachInfo, xOffset, yOffset, scalingRequired, dirty)) {
                return;
            }
        }

        //……
    }

    //……
}

The code above shows that hardware rendering goes through mThreadedRenderer.draw. Before analyzing that method we need to know what ThreadedRenderer is. It is created before the Measure, Layout, and Draw phases: when setContentView is called from an Activity's onCreate callback, ViewRootImpl's setView method eventually runs, and that is where the ThreadedRenderer is created.

//File --> /frameworks/base/core/java/android/view/ViewRootImpl.java
public void setView(View view, WindowManager.LayoutParams attrs, View panelParentView) {
    synchronized (this) {
        if (mView == null) {
            mView = view;

            //……            
            if (mSurfaceHolder == null) {
                enableHardwareAcceleration(attrs);
            }

            //……
        }
    }
}

private void enableHardwareAcceleration(WindowManager.LayoutParams attrs) {
    mAttachInfo.mHardwareAccelerated = false;
    mAttachInfo.mHardwareAccelerationRequested = false;

    // Hardware acceleration is not enabled in compatibility mode
    if (mTranslator != null) return;

    
    final boolean hardwareAccelerated =
            (attrs.flags & WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED) != 0;

    if (hardwareAccelerated) {
        if (!ThreadedRenderer.isAvailable()) {
            return;
        }

        //……

        if (fakeHwAccelerated) {
            //……
        } else if (!ThreadedRenderer.sRendererDisabled
                || (ThreadedRenderer.sSystemRendererDisabled && forceHwAccelerated)) {
            //……
			//Create the ThreadedRenderer
            mAttachInfo.mThreadedRenderer = ThreadedRenderer.create(mContext, translucent,
                    attrs.getTitle().toString());
            if (mAttachInfo.mThreadedRenderer != null) {
                mAttachInfo.mHardwareAccelerated =
                        mAttachInfo.mHardwareAccelerationRequested = true;
            }
        }
    }
}

So when ViewRootImpl's setView is called, it enables hardware acceleration and creates the ThreadedRenderer via ThreadedRenderer.create.

Let's continue with the implementation of the ThreadedRenderer class.

//File --> /frameworks/base/core/java/android/view/ThreadedRenderer.java
public static ThreadedRenderer create(Context context, boolean translucent, String name) {
    ThreadedRenderer renderer = null;
    if (isAvailable()) {
        renderer = new ThreadedRenderer(context, translucent, name);
    }
    return renderer;
}

ThreadedRenderer(Context context, boolean translucent, String name) {
    //……
	
    //Create the RootRenderNode
    long rootNodePtr = nCreateRootRenderNode();
    mRootNode = RenderNode.adopt(rootNodePtr);
    mRootNode.setClipToBounds(false);
    mIsOpaque = !translucent;
    //Create the RenderProxy
    mNativeProxy = nCreateProxy(translucent, rootNodePtr);
    nSetName(mNativeProxy, name);
	//Start GraphicsStatsService to collect rendering statistics
    ProcessInitializer.sInstance.init(context, mNativeProxy);
	
    loadSystemProperties();
}

ThreadedRenderer's constructor mainly does two things:

  1. It creates a RootRenderNode in the native layer via the JNI method nCreateRootRenderNode. Every View has a corresponding RenderNode, which holds the DisplayList of that View and its child views; a DisplayList contains rendering commands that OpenGL can understand, each wrapped as an Op.
//File --> /frameworks/base/core/jni/android_view_ThreadedRenderer.cpp
static jlong android_view_ThreadedRenderer_createRootRenderNode(JNIEnv* env, jobject clazz) {
    RootRenderNode* node = new RootRenderNode(env);
    node->incStrong(0);
    node->setName("RootRenderNode");
    return reinterpret_cast<jlong>(node);
}
  2. It creates a RenderProxy in the native layer via the JNI method nCreateProxy; this is the handle used to talk to the render thread. Here is the native implementation of nCreateProxy:
//File --> /frameworks/base/core/jni/android_view_ThreadedRenderer.cpp
static jlong android_view_ThreadedRenderer_createProxy(JNIEnv* env, jobject clazz,
        jboolean translucent, jlong rootRenderNodePtr) {
    RootRenderNode* rootRenderNode = reinterpret_cast<RootRenderNode*>(rootRenderNodePtr);
    ContextFactoryImpl factory(rootRenderNode);
    return (jlong) new RenderProxy(translucent, rootRenderNode, &factory);
}

//File --> /frameworks/base/libs/hwui/renderthread/RenderProxy.cpp
RenderProxy::RenderProxy(bool translucent, RenderNode* rootRenderNode, IContextFactory* contextFactory)
        : mRenderThread(RenderThread::getInstance())
        , mContext(nullptr) {
    SETUP_TASK(createContext);
    args->translucent = translucent;
    args->rootRenderNode = rootRenderNode;
    args->thread = &mRenderThread;
    args->contextFactory = contextFactory;
    mContext = (CanvasContext*) postAndWait(task);
    mDrawFrameTask.setContext(&mRenderThread, mContext, rootRenderNode);
}

From the RenderProxy constructor we can see that RenderThread::getInstance() creates the RenderThread, the thread on which hardware-accelerated rendering runs. Unlike software drawing, which happens on the main thread, hardware acceleration spins up a separate thread, which takes load off the main thread.
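The postAndWait call above hints at the pattern used here: the UI thread queues a task for the render thread and blocks until the task has run. Below is a minimal, self-contained sketch of that pattern in plain C++; it mirrors RenderThread::queue plus DrawFrameTask::postAndWait in spirit, but is not the AOSP implementation (shutdown handling is omitted for brevity):

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

class MiniRenderThread {
public:
    MiniRenderThread() : mThread([this] { loop(); }) { mThread.detach(); }

    //Queue a task and block until the render thread has executed it
    void postAndWait(std::function<void()> task) {
        std::mutex doneLock;
        std::condition_variable doneCv;
        bool done = false;
        {
            std::lock_guard<std::mutex> lock(mLock);
            mQueue.push([&] {
                task();
                std::lock_guard<std::mutex> dl(doneLock);
                done = true;
                doneCv.notify_one();
            });
        }
        mCv.notify_one();
        std::unique_lock<std::mutex> lock(doneLock);
        doneCv.wait(lock, [&] { return done; });
    }

private:
    void loop() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mLock);
                mCv.wait(lock, [this] { return !mQueue.empty(); });
                task = std::move(mQueue.front());
                mQueue.pop();
            }
            task(); //e.g. sync the frame state, then issue GL commands
        }
    }

    std::mutex mLock;
    std::condition_variable mCv;
    std::queue<std::function<void()>> mQueue;
    std::thread mThread;
};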

Now that we've covered how ThreadedRenderer is created and initialized, let's return to the rendering flow, the mThreadedRenderer.draw function. Here is its source.

//File --> /frameworks/base/core/java/android/view/ThreadedRenderer.java
void draw(View view, AttachInfo attachInfo, DrawCallbacks callbacks) {
    attachInfo.mIgnoreDirtyState = true;

    final Choreographer choreographer = attachInfo.mViewRootImpl.mChoreographer;
    choreographer.mFrameInfo.markDrawStart();

    //1. Build the root view's DisplayList
    updateRootDisplayList(view, callbacks);

    attachInfo.mIgnoreDirtyState = false;

    //…… window animation handling

    final long[] frameInfo = choreographer.mFrameInfo.mFrameInfo;
    //2. Kick off the rendering
    int syncResult = nSyncAndDrawFrame(mNativeProxy, frameInfo, frameInfo.length);
    
    //…… handling of render failures
}

In this flow we only need to care about two things:

  1. Building the DisplayList
  2. Drawing the DisplayList

Once these two steps are done, the UI is on screen. Let's go through each of them in detail:

Building the DisplayList

1. The root view's DisplayList is built through the updateRootDisplayList function. As mentioned before, the DisplayList contains rendering commands OpenGL can understand. Let's look at the function's implementation first:

//File --> /frameworks/base/core/java/android/view/ThreadedRenderer.java
private void updateRootDisplayList(View view, DrawCallbacks callbacks) {
    Trace.traceBegin(Trace.TRACE_TAG_VIEW, "Record View#draw()");
	//Build the view tree's DisplayList
    updateViewTreeDisplayList(view);

    if (mRootNodeNeedsUpdate || !mRootNode.isValid()) {
        //Obtain a DisplayListCanvas
        DisplayListCanvas canvas = mRootNode.start(mSurfaceWidth, mSurfaceHeight);
        try {
            final int saveCount = canvas.save();
            canvas.translate(mInsetLeft, mInsetTop);
            callbacks.onPreDraw(canvas);

            canvas.insertReorderBarrier();
            //Merge and optimize the DisplayList
            canvas.drawRenderNode(view.updateDisplayListIfDirty());
            canvas.insertInorderBarrier();

            callbacks.onPostDraw(canvas);
            canvas.restoreToCount(saveCount);
            mRootNodeNeedsUpdate = false;
        } finally {
            //Update the RootRenderNode
            mRootNode.end(canvas);
        }
    }
    Trace.traceEnd(Trace.TRACE_TAG_VIEW);
}

updateRootDisplayList has these main steps:

  1. Build the root view's DisplayList
  2. Merge and optimize the DisplayList

Building the root view's DisplayList

Let's start with the source for step one, building the root view's DisplayList.

//File --> /frameworks/base/core/java/android/view/ThreadedRenderer.java
private void updateViewTreeDisplayList(View view) {
    view.mPrivateFlags |= View.PFLAG_DRAWN;
    view.mRecreateDisplayList = (view.mPrivateFlags & View.PFLAG_INVALIDATED)
            == View.PFLAG_INVALIDATED;
    view.mPrivateFlags &= ~View.PFLAG_INVALIDATED;
    view.updateDisplayListIfDirty();
    view.mRecreateDisplayList = false;
}

//File --> /frameworks/base/core/java/android/view/View.java
public RenderNode updateDisplayListIfDirty() {
    final RenderNode renderNode = mRenderNode;
    if (!canHaveDisplayList()) {
        return renderNode;
    }

   	//Rebuild only if the cached display list is invalid or a rebuild was requested
    if ((mPrivateFlags & PFLAG_DRAWING_CACHE_VALID) == 0
        || !renderNode.isValid()
        || (mRecreateDisplayList)) {
        //…… when the DisplayList doesn't need updating, return the renderNode directly
        

        
		//Obtain a DisplayListCanvas
        final DisplayListCanvas canvas = renderNode.start(width, height);       

        try {
            if (layerType == LAYER_TYPE_SOFTWARE) {
                //If software drawing is forced (e.g. components that don't support hardware acceleration, or have it disabled), the view is rendered into a bitmap that is then handed to the hardware renderer
                buildDrawingCache(true);
                Bitmap cache = getDrawingCache(true);
                if (cache != null) {
                    canvas.drawBitmap(cache, 0, 0, mLayerPaint);
                }
            } else {                               
                if ((mPrivateFlags & PFLAG_SKIP_DRAW) == PFLAG_SKIP_DRAW) {
                    //Recurse into child views to build or update their display lists
                    dispatchDraw(canvas);                    
                } else {
                    //Call this view's own draw method
                    draw(canvas);
                }
            }
        } finally {
            //Bind the DisplayListCanvas content to the renderNode
            renderNode.end(canvas);
            setDisplayListProperties(renderNode);
        }
    } else {
        mPrivateFlags |= PFLAG_DRAWN | PFLAG_DRAWING_CACHE_VALID;
        mPrivateFlags &= ~PFLAG_DIRTY_MASK;
    }
    return renderNode;
}

We can see that updateDisplayListIfDirty does the following:

  1. Obtain a DisplayListCanvas
  2. Check whether the component supports hardware acceleration; if not, convert it to a bitmap and hand that to the DisplayListCanvas
  3. Recurse into child views to build their DisplayLists
  4. Call the view's own draw method and let the DisplayListCanvas record the drawing
  5. Return the RenderNode

At this point you may wonder: updateDisplayListIfDirty is the function that builds and updates the DisplayList, yet no DisplayList appears in it, and it returns a RenderNode rather than a DisplayList. Why? The DisplayList is actually created in the native layer. As mentioned earlier, a RenderNode contains a DisplayList, and renderNode.end(canvas) binds the DisplayList to the renderNode. The job of DisplayListCanvas is precisely to create that DisplayList in the native layer. So let's look at the DisplayListCanvas class.

//File --> /frameworks/base/core/java/android/view/RenderNode.java
public DisplayListCanvas start(int width, int height) {
    return DisplayListCanvas.obtain(this, width, height);
}

//File --> /frameworks/base/core/java/android/view/DisplayListCanvas.java
static DisplayListCanvas obtain(@NonNull RenderNode node, int width, int height) {
    if (node == null) throw new IllegalArgumentException("node cannot be null");
    DisplayListCanvas canvas = sPool.acquire();
    if (canvas == null) {
        canvas = new DisplayListCanvas(node, width, height);
    } else {
        nResetDisplayListCanvas(canvas.mNativeCanvasWrapper, node.mNativeRenderNode,
                                width, height);
    }
    canvas.mNode = node;
    canvas.mWidth = width;
    canvas.mHeight = height;
    return canvas;
}

private DisplayListCanvas(@NonNull RenderNode node, int width, int height) {
    super(nCreateDisplayListCanvas(node.mNativeRenderNode, width, height));
    mDensity = 0; // disable bitmap density scaling
}

We obtain a DisplayListCanvas through RenderNode.start, which either creates one or fetches one from a cache via obtain; this is a flyweight-style object pool (a sketch of the idea follows below). In the DisplayListCanvas constructor, the JNI method nCreateDisplayListCanvas creates the native canvas; after the sketch we'll follow that native flow.
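A minimal sketch of the acquire/release pooling pattern used here, written in plain C++ for illustration (the real Java side uses a framework pool class; this shows the idea only, not the AOSP implementation):

#include <cstddef>
#include <mutex>
#include <vector>

template <typename T>
class SimplePool {
public:
    explicit SimplePool(size_t maxSize) : mMaxSize(maxSize) {}

    //Return a cached instance if one exists, otherwise create a new one
    T* acquire() {
        std::lock_guard<std::mutex> lock(mLock);
        if (mPool.empty()) return new T();
        T* obj = mPool.back();
        mPool.pop_back();
        return obj;
    }

    //Put an instance back for reuse; drop it if the pool is already full
    void release(T* obj) {
        std::lock_guard<std::mutex> lock(mLock);
        if (mPool.size() < mMaxSize) {
            mPool.push_back(obj);
        } else {
            delete obj;
        }
    }

private:
    std::mutex mLock;
    std::vector<T*> mPool;
    size_t mMaxSize;
};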

//File --> /frameworks/base/core/jni/android_view_DisplayListCanvas.cpp
static jlong android_view_DisplayListCanvas_createDisplayListCanvas(jlong renderNodePtr,
        jint width, jint height) {
    RenderNode* renderNode = reinterpret_cast<RenderNode*>(renderNodePtr);
    return reinterpret_cast<jlong>(Canvas::create_recording_canvas(width, height, renderNode));
}

//File --> /frameworks/base/libs/hwui/hwui/Canvas.cpp
Canvas* Canvas::create_recording_canvas(int width, int height, uirenderer::RenderNode* renderNode) {
    if (uirenderer::Properties::isSkiaEnabled()) {
        return new uirenderer::skiapipeline::SkiaRecordingCanvas(renderNode, width, height);
    }
    return new uirenderer::RecordingCanvas(width, height);
}

So the Java-level DisplayListCanvas corresponds to a native RecordingCanvas or SkiaRecordingCanvas.

A brief note on the difference between these two canvases. Before Android 8, HWUI wrapped drawing operations with OpenGL and sent them straight to the GPU for rendering. Starting with Android 8.0, HWUI was restructured around the concept of a RenderPipeline; there are three pipeline types, Skia, OpenGL, and Vulkan, each corresponding to a different renderer. Android 8.0 also began strengthening Skia's role: from Android 10 onward, all hardware-accelerated rendering is wrapped by Skia first and then goes through OpenGL or Vulkan before reaching the GPU. The source walked through here is 8.0, where, as the code shows, the Skia pipeline can already be enabled through configuration.
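On an 8.0 device you can flip that configuration yourself with a system property; per the AOSP documentation, the Skia-on-GL pipeline is selected like this (newly started apps pick the property up, so restart the framework or the app afterwards):

adb shell setprop debug.hwui.renderer skiagl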


To keep the explanation of OpenGL-based hardware rendering simple, I'll stick with RecordingCanvas here. Below are a few of its typical operations:

//File --> /frameworks/base/libs/hwui/RecordingCanvas.cpp
//Record points
void RecordingCanvas::drawPoints(const float* points, int floatCount, const SkPaint& paint) {
    if (CC_UNLIKELY(floatCount < 2 || paint.nothingToDraw())) return;
    floatCount &= ~0x1; // round down to nearest two

    addOp(alloc().create_trivial<PointsOp>(
            calcBoundsOfPoints(points, floatCount),
            *mState.currentSnapshot()->transform,
            getRecordedClip(),
            refPaint(&paint), refBuffer<float>(points, floatCount), floatCount));
}

struct PointsOp : RecordedOp {
    PointsOp(BASE_PARAMS, const float* points, const int floatCount)
            : SUPER(PointsOp)
            , points(points)
            , floatCount(floatCount) {}
    const float* points;
    const int floatCount;
};
//Record lines
void RecordingCanvas::drawLines(const float* points, int floatCount, const SkPaint& paint) {
    if (CC_UNLIKELY(floatCount < 4 || paint.nothingToDraw())) return;
    floatCount &= ~0x3; // round down to nearest four

    addOp(alloc().create_trivial<LinesOp>(
            calcBoundsOfPoints(points, floatCount),
            *mState.currentSnapshot()->transform,
            getRecordedClip(),
            refPaint(&paint), refBuffer<float>(points, floatCount), floatCount));
}
struct LinesOp : RecordedOp {
    LinesOp(BASE_PARAMS, const float* points, const int floatCount)
            : SUPER(LinesOp)
            , points(points)
            , floatCount(floatCount) {}
    const float* points;
    const int floatCount;
};

//Record a rectangle
void RecordingCanvas::drawRect(float left, float top, float right, float bottom, const SkPaint& paint) {
    if (CC_UNLIKELY(paint.nothingToDraw())) return;

    addOp(alloc().create_trivial<RectOp>(
            Rect(left, top, right, bottom),
            *(mState.currentSnapshot()->transform),
            getRecordedClip(),
            refPaint(&paint)));
}

struct RectOp : RecordedOp {
    RectOp(BASE_PARAMS)
            : SUPER(RectOp) {}
};

struct RoundRectOp : RecordedOp {
    RoundRectOp(BASE_PARAMS, float rx, float ry)
            : SUPER(RoundRectOp)
            , rx(rx)
            , ry(ry) {}
    const float rx;
    const float ry;
};

int RecordingCanvas::addOp(RecordedOp* op) {
    // skip op with empty clip
    if (op->localClip && op->localClip->rect.isEmpty()) {
        // NOTE: this rejection happens after op construction/content ref-ing, so content ref'd
        // and held by renderthread isn't affected by clip rejection.
        // Could rewind alloc here if desired, but callers would have to not touch op afterwards.
        return -1;
    }

    int insertIndex = mDisplayList->ops.size();
    mDisplayList->ops.push_back(op);
    if (mDeferredBarrierType != DeferredBarrierType::None) {
        // op is first in new chunk
        mDisplayList->chunks.emplace_back();
        DisplayList::Chunk& newChunk = mDisplayList->chunks.back();
        newChunk.beginOpIndex = insertIndex;
        newChunk.endOpIndex = insertIndex + 1;
        newChunk.reorderChildren = (mDeferredBarrierType == DeferredBarrierType::OutOfOrder);
        newChunk.reorderClip = mDeferredBarrierClip;

        int nextChildIndex = mDisplayList->children.size();
        newChunk.beginChildIndex = newChunk.endChildIndex = nextChildIndex;
        mDeferredBarrierType = DeferredBarrierType::None;
    } else {
        // standard case - append to existing chunk
        mDisplayList->chunks.back().endOpIndex = insertIndex + 1;
    }
    return insertIndex;
}

As you can see, each primitive we draw through RecordingCanvas is wrapped as an Op the GPU pipeline can work with, and these Ops are stored in mDisplayList. This answers the earlier question of why no DisplayList appears in updateDisplayListIfDirty: DisplayListCanvas calls into the native RecordingCanvas, which updates the native mDisplayList.

Next, let's see how renderNode.end(canvas) binds the native DisplayList to the renderNode.

//File --> /frameworks/base/core/java/android/view/RenderNode.java
public void end(DisplayListCanvas canvas) {
    long displayList = canvas.finishRecording();
    nSetDisplayList(mNativeRenderNode, displayList);
    canvas.recycle();
}

Here the JNI method nSetDisplayList binds the DisplayList to the RenderNode. Now the earlier statement should make sense: a RenderNode holds the DisplayList of its View and that View's children, and the DisplayList is a list of OpenGL-consumable rendering commands, the Ops, each of which is a basic drawing element the GPU pipeline can execute.
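Conceptually, this record/replay split looks like the sketch below: recording appends ops to a list without touching the GPU, and a later replay pass walks the list and issues the real GL work. This is purely illustrative; the real RecordedOp hierarchy is in the AOSP code above:

#include <memory>
#include <vector>

//Illustrative record/replay model of a display list (not the AOSP types)
struct Op {
    virtual ~Op() = default;
    virtual void replay() const = 0; //issue the actual GL commands
};

struct MiniRectOp : Op {
    float left, top, right, bottom;
    MiniRectOp(float l, float t, float r, float b)
        : left(l), top(t), right(r), bottom(b) {}
    void replay() const override {
        //e.g. bind a shader, upload the quad, glDrawArrays(...)
    }
};

struct MiniDisplayList {
    std::vector<std::unique_ptr<Op>> ops;

    //Recording phase: runs on the UI thread, no GL calls yet
    void drawRect(float l, float t, float r, float b) {
        ops.push_back(std::make_unique<MiniRectOp>(l, t, r, b));
    }

    //Replay phase: runs later on the render thread
    void replay() const {
        for (const auto& op : ops) op->replay();
    }
};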

Merging and optimizing the DisplayList

updateViewTreeDisplayList has already done the heavy lifting: every View's DisplayList, with its tree of draw Ops, has been built. So why call canvas.drawRenderNode(view.updateDisplayListIfDirty()) afterwards? This call's main job is to merge and optimize the DisplayLists built earlier. Let's look at the implementation details.

//File --> /frameworks/base/core/java/android/view/DisplayListCanvas.java
public void drawRenderNode(RenderNode renderNode) {
    nDrawRenderNode(mNativeCanvasWrapper, renderNode.getNativeDisplayList());
}

//File --> /frameworks/base/core/jni/android_view_DisplayListCanvas.cpp
static void android_view_DisplayListCanvas_drawRenderNode(jlong canvasPtr, jlong renderNodePtr) {
    Canvas* canvas = reinterpret_cast<Canvas*>(canvasPtr);
    RenderNode* renderNode = reinterpret_cast<RenderNode*>(renderNodePtr);
    canvas->drawRenderNode(renderNode);
}

//File --> /frameworks/base/libs/hwui/RecordingCanvas.cpp
void RecordingCanvas::drawRenderNode(RenderNode* renderNode) {
    auto&& stagingProps = renderNode->stagingProperties();
    RenderNodeOp* op = alloc().create_trivial<RenderNodeOp>(
            Rect(stagingProps.getWidth(), stagingProps.getHeight()),
            *(mState.currentSnapshot()->transform),
            getRecordedClip(),
            renderNode);
    int opIndex = addOp(op);
    if (CC_LIKELY(opIndex >= 0)) {
        int childIndex = mDisplayList->addChild(op);

        // update the chunk's child indices
        DisplayList::Chunk& chunk = mDisplayList->chunks.back();
        chunk.endChildIndex = childIndex + 1;

        if (renderNode->stagingProperties().isProjectionReceiver()) {
            // use staging property, since recording on UI thread
            mDisplayList->projectionReceiveIndex = opIndex;
        }
    }
}

As you can see, this ends up in RecordingCanvas::drawRenderNode, which records the RenderNode as an op in the parent DisplayList and maintains the chunk bookkeeping that later allows the DisplayLists to be merged and reordered.

Drawing the DisplayList

After that fairly long stretch, we've finished the first part of mThreadedRenderer.draw, building the DisplayList. Now for the second part, nSyncAndDrawFrame, which draws the frame; once this flow completes, our UI appears on screen. nSyncAndDrawFrame is a native method; here is its implementation.

static int android_view_ThreadedRenderer_syncAndDrawFrame(JNIEnv* env, jobject clazz,
        jlong proxyPtr, jlongArray frameInfo, jint frameInfoSize) {
    LOG_ALWAYS_FATAL_IF(frameInfoSize != UI_THREAD_FRAME_INFO_SIZE,
            "Mismatched size expectations, given %d expected %d",
            frameInfoSize, UI_THREAD_FRAME_INFO_SIZE);
    RenderProxy* proxy = reinterpret_cast<RenderProxy*>(proxyPtr);
    env->GetLongArrayRegion(frameInfo, 0, frameInfoSize, proxy->frameInfo());
    return proxy->syncAndDrawFrame();
}

int RenderProxy::syncAndDrawFrame() {
    return mDrawFrameTask.drawFrame();
}


nSyncAndDrawFrame calls RenderProxy's syncAndDrawFrame, which in turn calls DrawFrameTask.drawFrame().

//File --> /frameworks/base/libs/hwui/renderthread/DrawFrameTask.cpp
int DrawFrameTask::drawFrame() {
    LOG_ALWAYS_FATAL_IF(!mContext, "Cannot drawFrame with no CanvasContext!");

    mSyncResult = SyncResult::OK;
    mSyncQueued = systemTime(CLOCK_MONOTONIC);
    postAndWait();

    return mSyncResult;
}

void DrawFrameTask::postAndWait() {
    AutoMutex _lock(mLock);
    mRenderThread->queue(this);
    mSignal.wait(mLock);
}

void DrawFrameTask::run() {
    ATRACE_NAME("DrawFrame");

    bool canUnblockUiThread;
    bool canDrawThisFrame;
    {
        TreeInfo info(TreeInfo::MODE_FULL, *mContext);
        canUnblockUiThread = syncFrameState(info);
        canDrawThisFrame = info.out.canDrawThisFrame;
    }

    // Grab a copy of everything we need
    CanvasContext* context = mContext;

    // From this point on anything in "this" is *UNSAFE TO ACCESS*
    if (canUnblockUiThread) {
        unblockUiThread();
    }

    if (CC_LIKELY(canDrawThisFrame)) {
        context->draw();
    } else {
        // wait on fences so tasks don't overlap next frame
        context->waitOnFences();
    }

    if (!canUnblockUiThread) {
        unblockUiThread();
    }
}

DrawFrameTask does two things:

  1. Call syncFrameState to sync the frame state
  2. Call CanvasContext::draw() to do the actual drawing

Syncing the frame state

First, syncing the frame state. Its main job is to sync the RenderNode data from the main thread over to the render thread. As described above, the first step of mAttachInfo.mThreadedRenderer.draw builds the DisplayList and binds it to the RenderNode, and that happens on the main thread; our DrawFrameTask, however, executes on the native RenderThread, so the data has to be synced across.

//File --> /frameworks/base/libs/hwui/renderthread/DrawFrameTask.cpp
bool DrawFrameTask::syncFrameState(TreeInfo& info) {
    ATRACE_CALL();
    int64_t vsync = mFrameInfo[static_cast<int>(FrameInfoIndex::Vsync)];
    mRenderThread->timeLord().vsyncReceived(vsync);
    bool canDraw = mContext->makeCurrent();
    mContext->unpinImages();

    for (size_t i = 0; i < mLayers.size(); i++) {
        mLayers[i]->apply();
    }
    mLayers.clear();
    mContext->prepareTree(info, mFrameInfo, mSyncQueued, mTargetNode);
    
	//……
   
    // If prepareTextures is false, we ran out of texture cache space
    return info.prepareTextures;
}

Here mContext->prepareTree is called. mContext itself is discussed in detail below; for now let's look at what this method does.

//File --> /frameworks/base/libs/hwui/renderthread/CanvasContext.cpp
void CanvasContext::prepareTree(TreeInfo& info, int64_t* uiFrameInfo,
        int64_t syncQueued, RenderNode* target) {
   //……
    for (const sp<RenderNode>& node : mRenderNodes) {
        // Only the primary target node will be drawn full - all other nodes would get drawn in
        // real time mode. In case of a window, the primary node is the window content and the other
        // node(s) are non client / filler nodes.
        info.mode = (node.get() == target ? TreeInfo::MODE_FULL : TreeInfo::MODE_RT_ONLY);
        node->prepareTree(info);
        GL_CHECKPOINT(MODERATE);
    }
    //……
}

void RenderNode::prepareTree(TreeInfo& info) {
    bool functorsNeedLayer = Properties::debugOverdraw;
    prepareTreeImpl(info, functorsNeedLayer);
}

void RenderNode::prepareTreeImpl(TreeInfo& info, bool functorsNeedLayer) {
    info.damageAccumulator->pushTransform(this);

    if (info.mode == TreeInfo::MODE_FULL) {
        // Sync the properties
        pushStagingPropertiesChanges(info);
    }
     
    // layer
    prepareLayer(info, animatorDirtyMask);
    //Sync the DrawOp tree
    if (info.mode == TreeInfo::MODE_FULL) {
        pushStagingDisplayListChanges(info);
    }
    //Recurse into child views
    prepareSubTree(info, childFunctorsNeedLayer, mDisplayListData);
    // push
    pushLayerUpdate(info);
    info.damageAccumulator->popTransform();
}

With the frame sync done, let's look at the final step, the drawing itself.

Doing the drawing

Hardware rendering of the graphics is done by calling CanvasContext's draw method. So what is CanvasContext?

It is the rendering context. CanvasContext can render through different pipelines, a strategy-pattern design. Looking at CanvasContext::create, we can see that it builds a different render pipeline depending on the render type; there are three pipelines in total: OpenGL, SkiaGL, and SkiaVulkan.

CanvasContext* CanvasContext::create(RenderThread& thread,
        bool translucent, RenderNode* rootRenderNode, IContextFactory* contextFactory) {

    auto renderType = Properties::getRenderPipelineType();

    switch (renderType) {
        case RenderPipelineType::OpenGL:
            return new CanvasContext(thread, translucent, rootRenderNode, contextFactory,
                    std::make_unique<OpenGLPipeline>(thread));
        case RenderPipelineType::SkiaGL:
            return new CanvasContext(thread, translucent, rootRenderNode, contextFactory,
                    std::make_unique<skiapipeline::SkiaOpenGLPipeline>(thread));
        case RenderPipelineType::SkiaVulkan:
            return new CanvasContext(thread, translucent, rootRenderNode, contextFactory,
                                std::make_unique<skiapipeline::SkiaVulkanPipeline>(thread));
        default:
            LOG_ALWAYS_FATAL("canvas context type %d not supported", (int32_t) renderType);
            break;
    }
    return nullptr;
}

Here we'll only look at OpenGLPipeline, which renders through OpenGL.

OpenGLPipeline::OpenGLPipeline(RenderThread& thread)
        :  mEglManager(thread.eglManager())
        , mRenderThread(thread) {
}

OpenGLPipeline's constructor takes hold of the EglManager, which wraps all of the EGL operations for us. Let's look at EglManager's initialize method.

//File --> /frameworks/base/libs/hwui/renderthread/EglManager.cpp
void EglManager::initialize() {
    if (hasEglContext()) return;

    ATRACE_NAME("Creating EGLContext");

    //Get the EGL Display object
    mEglDisplay = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    LOG_ALWAYS_FATAL_IF(mEglDisplay == EGL_NO_DISPLAY,
            "Failed to get EGL_DEFAULT_DISPLAY! err=%s", eglErrorString());

    EGLint major, minor;
    //Initialize the connection to the EGLDisplay
    LOG_ALWAYS_FATAL_IF(eglInitialize(mEglDisplay, &major, &minor) == EGL_FALSE,
            "Failed to initialize display %p! err=%s", mEglDisplay, eglErrorString());
	//……

    //Load the EGL config
    loadConfig();
    //Create the EGL context
    createContext();
    //Create the off-screen render buffer
    createPBufferSurface();
    //Bind the context
    makeCurrent(mPBufferSurface);
    DeviceInfo::initialize();
    mRenderThread.renderState().onGLContextCreated();
}

Here we see a familiar sequence: EglManager's initialization follows exactly the EGL flow described earlier. But notice that no WindowSurface is set up during initialization; only a PBufferSurface, an off-screen render buffer, is created. A quick comparison of WindowSurface and PbufferSurface:

  • WindowSurface: tied to a window, i.e. a wrapper around a displayable region of the screen; once rendered, content shows up on the display.
  • PbufferSurface: allocates space in graphics memory and stores the rendered data (frames) there (see the sketch below).
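Creating a pbuffer surface directly uses the standard eglCreatePbufferSurface call; a tiny buffer is enough when, as here, it mainly serves to make the context current before any window surface exists. A minimal sketch:

//Off-screen pbuffer surface: rendering targets graphics memory, not a window
EGLint pbufferAttribs[] = {
    EGL_WIDTH,  1,
    EGL_HEIGHT, 1,
    EGL_NONE
};
EGLSurface pbuffer = eglCreatePbufferSurface(display, config, pbufferAttribs);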

Without a WindowSurface, nothing OpenGL ES renders can appear on screen. In fact, EglManager already wraps a method for initializing the WindowSurface:

//File --> /frameworks/base/libs/hwui/renderthread/EglManager.cpp
EGLSurface EglManager::createSurface(EGLNativeWindowType window) {
    initialize();

    EGLint attribs[] = {
#ifdef ANDROID_ENABLE_LINEAR_BLENDING
            EGL_GL_COLORSPACE_KHR, EGL_GL_COLORSPACE_SRGB_KHR,
            EGL_COLORSPACE, EGL_COLORSPACE_sRGB,
#endif
            EGL_NONE
    };

    EGLSurface surface = eglCreateWindowSurface(mEglDisplay, mEglConfig, window, attribs);
    LOG_ALWAYS_FATAL_IF(surface == EGL_NO_SURFACE,
            "Failed to create EGLSurface for window %p, eglErr = %s",
            (void*) window, eglErrorString());

    if (mSwapBehavior != SwapBehavior::Preserved) {
        LOG_ALWAYS_FATAL_IF(eglSurfaceAttrib(mEglDisplay, surface, EGL_SWAP_BEHAVIOR, EGL_BUFFER_DESTROYED) == EGL_FALSE,
                            "Failed to set swap behavior to destroyed for window %p, eglErr = %s",
                            (void*) window, eglErrorString());
    }

    return surface;
}

So when does this surface get set? In the Activity display flow, after setView, ViewRootImpl executes performTraversals, which then runs Measure, Layout, and Draw. We covered setView above (it enables hardware acceleration and creates the ThreadedRenderer) and the draw function too; Measure and Layout have nothing to do with OpenGL, so I'll skip them here. The point is that performTraversals is also where EGL's Surface gets set, which shows just how pivotal this function is. Let's take a look.

private void performTraversals() {
    //……
    if (mAttachInfo.mThreadedRenderer != null) {
        try {
            //Call ThreadedRenderer's initialize function
            hwInitialized = mAttachInfo.mThreadedRenderer.initialize(
                    mSurface);
            if (hwInitialized && (host.mPrivateFlags
                    & View.PFLAG_REQUEST_TRANSPARENT_REGIONS) == 0) {
                // Don't pre-allocate if transparent regions
                // are requested as they may not be needed
                mSurface.allocateBuffers();
            }
        } catch (OutOfResourcesException e) {
            handleOutOfResourcesException(e);
            return;
        }
    }
    //……
}

boolean initialize(Surface surface) throws OutOfResourcesException {
    boolean status = !mInitialized;
    mInitialized = true;
    updateEnabledState(surface);
    nInitialize(mNativeProxy, surface);
    return status;
}


ThreadedRenderer's initialize function calls the native initialize method.

static void android_view_ThreadedRenderer_initialize(JNIEnv* env, jobject clazz,
        jlong proxyPtr, jobject jsurface) {
    RenderProxy* proxy = reinterpret_cast<RenderProxy*>(proxyPtr);
    sp<Surface> surface = android_view_Surface_getSurface(env, jsurface);
    proxy->initialize(surface);
}

void RenderProxy::initialize(const sp<Surface>& surface) {
    SETUP_TASK(initialize);
    args->context = mContext;
    args->surface = surface.get();
    post(task);
}

void CanvasContext::initialize(Surface* surface) {
    setSurface(surface);
}

void CanvasContext::setSurface(Surface* surface) {
    ATRACE_CALL();

    mNativeSurface = surface;

    bool hasSurface = mRenderPipeline->setSurface(surface, mSwapBehavior);

    mFrameNumber = -1;

    if (hasSurface) {
         mHaveNewSurface = true;
         mSwapHistory.clear();
    } else {
         mRenderThread.removeFrameCallback(this);
    }
}

From this we can see that EGL's Surface was set up well before drawing begins.

At this point in the flow, all of EGL's initialization is complete and drawing can begin. Let's return to the draw step of DrawFrameTask::run.

void CanvasContext::draw() {
    SkRect dirty;
    mDamageAccumulator.finish(&dirty);

    mCurrentFrameInfo->markIssueDrawCommandsStart();

    Frame frame = mRenderPipeline->getFrame();

    SkRect windowDirty = computeDirtyRect(frame, &dirty);
	//Call the render pipeline's draw function
    bool drew = mRenderPipeline->draw(frame, windowDirty, dirty, mLightGeometry, &mLayerUpdateQueue,
            mContentDrawBounds, mOpaque, mLightInfo, mRenderNodes, &(profiler()));

    waitOnFences();

    bool requireSwap = false;
    //Swap the buffers
    bool didSwap = mRenderPipeline->swapBuffers(frame, drew, windowDirty, mCurrentFrameInfo,
            &requireSwap);

    mIsDirty = false;

	//……

}

Here mRenderPipeline->draw is called, which is effectively OpenGL's draw, and then mRenderPipeline->swapBuffers swaps the buffers.

//File --> /frameworks/base/libs/hwui/renderthread/OpenGLPipeline.cpp
bool OpenGLPipeline::draw(const Frame& frame, const SkRect& screenDirty, const SkRect& dirty,
        const FrameBuilder::LightGeometry& lightGeometry,
        LayerUpdateQueue* layerUpdateQueue,
        const Rect& contentDrawBounds, bool opaque,
        const BakedOpRenderer::LightInfo& lightInfo,
        const std::vector< sp<RenderNode> >& renderNodes,
        FrameInfoVisualizer* profiler) {

    //……
	//BakedOpRenderer replaces the old OpenGLRenderer
    BakedOpRenderer renderer(caches, mRenderThread.renderState(),
            opaque, lightInfo);
    frameBuilder.replayBakedOps<BakedOpDispatcher>(renderer);
    //Replaying the ops issues the GL commands; didDraw() reports whether anything was drawn
    drew = renderer.didDraw();

    //……

    return drew;
}

bool OpenGLPipeline::swapBuffers(const Frame& frame, bool drew, const SkRect& screenDirty,
        FrameInfo* currentFrameInfo, bool* requireSwap) {

    GL_CHECKPOINT(LOW);

    // Even if we decided to cancel the frame, from the perspective of jank
    // metrics the frame was swapped at this point
    currentFrameInfo->markSwapBuffers();

    *requireSwap = drew || mEglManager.damageRequiresSwap();

    if (*requireSwap && (CC_UNLIKELY(!mEglManager.swapBuffers(frame, screenDirty)))) {
        return false;
    }

    return *requireSwap;
}

And with that, the main flow of hardware rendering through OpenGL ES is complete. After these two case studies, you should have a good picture of how OpenGL ES produces images in its role as an image producer. Next we'll look at the second image producer, Skia.

Because of Juejin's length limit, this article has been split in two; it continues in 《Android图形渲染原理中(二)》. Alternatively, you can read the complete article here: 《掌握Android图像显示原理(中)》.