Key Points of the TAA Algorithm


Screen Coordinate and NDC Coordinate Conversion

Assume the screen-space origin is at the bottom-left corner of the screen.

NDC coordinates to screen coordinates:

//[-1, 1] -> [0, w]
screenX =  (w/2) * (ndcX  + 1) = w * 0.5 * (ndcX  + 1)

//[-1,1] -> [0, h]
screenY =  (h/2) * (ndcY  + 1) = h * 0.5 * (ndcY  + 1)

//[-1, 1]-> [0, 1]
depth = (1/2) * (ndcZ  + 1)

Screen coordinates to NDC coordinates:

ndcX = screenX * 2/w - 1
ndcY = screenY * 2/h - 1
ndcZ = depth * 2 - 1
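
These conversions map directly to code. Below is a minimal sketch assuming GLM; the function names are illustrative and not from any particular engine:

//Conversions between NDC ([-1,1]^3) and screen coordinates ([0,w] x [0,h], depth in [0,1]),
//with the screen-space origin at the bottom-left corner. A sketch assuming GLM.
#include <glm/glm.hpp>

glm::vec3 ndcToScreen(const glm::vec3& ndc, float w, float h)
{
    return glm::vec3(w * 0.5f * (ndc.x + 1.0f),
                     h * 0.5f * (ndc.y + 1.0f),
                     0.5f * (ndc.z + 1.0f));
}

glm::vec3 screenToNdc(const glm::vec3& screen, float w, float h)
{
    return glm::vec3(screen.x * 2.0f / w - 1.0f,
                     screen.y * 2.0f / h - 1.0f,
                     screen.z * 2.0f - 1.0f);
}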

Derivation of the Jittered Projection Matrix

The jittered projection matrix problem can be stated as follows:

If the rendered output is to be shifted by (jitterX, jitterY) pixels, how should the projection matrix change?

To answer this, first derive how an object's vertex coordinates are transformed into screen coordinates. The process is as follows:

Let the projection matrix be projectionM (column-major). Assuming the view matrix and model matrix are both identity, the object coordinates (X, Y, Z, W) are transformed into clip-space coordinates as follows:

cscX = X*projectionM[0][0] + Y*projectionM[1][0] + Z*projectionM[2][0] + W*projectionM[3][0]
cscY = X*projectionM[0][1] + Y*projectionM[1][1] + Z*projectionM[2][1] + W*projectionM[3][1]
cscZ = X*projectionM[0][2] + Y*projectionM[1][2] + Z*projectionM[2][2] + W*projectionM[3][2]
cscW = X*projectionM[0][3] + Y*projectionM[1][3] + Z*projectionM[2][3] + W*projectionM[3][3]

The projection matrix is generated by perspective(fovy, aspect, zNear, zFar); the resulting matrix is:

tanHalfFovy = tan(fovy / 2);

$$
\begin{bmatrix}
\frac{1}{aspect * tanHalfFovy} & 0 & 0 & 0 \\
0 & \frac{1}{tanHalfFovy} & 0 & 0 \\
0 & 0 & \frac{zFar + zNear}{zFar - zNear} & \frac{2 * zFar * zNear}{zFar - zNear} \\
0 & 0 & 1 & 0
\end{bmatrix}
$$

Substituting in, we get:

cscX = X*projectionM[0][0]
cscY = Y*projectionM[1][1]
cscZ = Z*projectionM[2][2] + W*projectionM[3][2]
cscW = Z

The clip-space coordinates are then converted to NDC coordinates by the perspective divide (dividing by cscW = Z):

ndcX = cscX/Z
ndcY = cscY/Z
ndcZ = cscZ/Z
ndcW = 1

The NDC coordinates are converted to screen coordinates:

screenX = w * 0.5 * (ndcX  + 1)
screenY = h * 0.5 * (ndcY  + 1) 
depth = (1/2) * (ndcZ  + 1)

Following the derivation above, if the projection matrix is modified as follows:

projectionM[2][0] += jitterX * 2/w;
projectionM[2][1] += jitterY * 2/h;

then the new clip-space coordinates become:


new_cscX = X*projectionM[0][0] + Z*jitterX * 2/w
new_cscY = Y*projectionM[1][1] + Z*jitterY * 2/h
new_cscZ = Z*projectionM[2][2] + W*projectionM[3][2]
new_cscW = Z

The new NDC coordinates are:

new_ndcX = ndcX + jitterX * 2/w
new_ndcY = ndcY + jitterY * 2/h
new_ndcZ = ndcZ
new_ndcW = 1

And the new screen coordinates are:

new_screenX = w * 0.5 * (new_ndcX  + 1)
            = w * 0.5 * (ndcX + jitterX * 2/w + 1)
            = w * 0.5 * (ndcX  + 1) + jitterX 
            = screenX + jitterX 
            
new_screenY = h * 0.5 * (new_ndcY  + 1) 
            = h * 0.5 * (ndcY + jitterY * 2/h + 1)
            = h * 0.5 * (ndcY + 1) + jitterY 
            = screenY + jitterY         

new_depth = (1/2) * (new_ndcZ  + 1)
          = depth 

Therefore, to shift the rendered output by (jitterX, jitterY) pixels, the projection matrix should be changed as follows:

//jitterX = HaltonSequence[Index].x - 0.5f
//jitterY = HaltonSequence[Index].y - 0.5f
projection[2][0] += jitterX * 2.0f/w;
projection[2][1] += jitterY * 2.0f/h;

This new projection matrix is also called the jittered projection matrix.
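
Putting the pieces together, below is a minimal host-side sketch of how the jitter might be generated and applied each frame, assuming GLM and a Halton(2,3) sample pattern; the halton and buildJitteredProjection helpers are illustrative, not from this article. The sign of the offset depends on the projection convention (e.g. whether clip-space w is +z or -z), so it must stay consistent with the motion-vector pass.

//a sketch of per-frame jitter generation and application, assuming GLM
#include <glm/glm.hpp>

//radical-inverse Halton sequence value for a given index (>= 1) and base
float halton(int index, int base)
{
    float f = 1.0f, r = 0.0f;
    while (index > 0)
    {
        f /= base;
        r += f * (index % base);
        index /= base;
    }
    return r;
}

//apply a sub-pixel jitter to an existing (unjittered) projection matrix;
//w and h are the render-target size in pixels, frameIndex cycles through the sample pattern
glm::mat4 buildJitteredProjection(const glm::mat4& projection, int frameIndex, float w, float h)
{
    //Halton(2,3) values lie in [0,1); shift them to [-0.5, 0.5) pixels
    float jitterX = halton((frameIndex % 8) + 1, 2) - 0.5f;
    float jitterY = halton((frameIndex % 8) + 1, 3) - 0.5f;

    glm::mat4 jittered = projection;       //GLM matrices are column-major: m[col][row]
    jittered[2][0] += jitterX * 2.0f / w;  //shifts the rendered output by jitterX pixels
    jittered[2][1] += jitterY * 2.0f / h;  //shifts the rendered output by jitterY pixels
    return jittered;
}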

Motion Vectors Calculation

A motion vector is the difference between the screen-space positions that the same vertex is rendered to in two consecutive frames.

Motion vector computation falls into two cases:

  • All objects are static and only the camera moves

  • Objects are not static; both the objects and the camera move

In the first case, all objects are static and only the camera moves, so every object shares the same MVP matrix. The motion vectors can therefore be computed by reconstructing the world position from the depth buffer and reprojecting it with the previous frame's matrix. This can be done directly in the TAA blend pass, with no extra pass dedicated to motion vectors.

The code is as follows:

//fragment shader
...
uniform vec2 windowSize;
uniform vec2 jitter;   //current frame's jitter offset expressed in UV units
uniform sampler2D depthRENDER;
uniform mat4 inverseViewProjectionCURRENT;
uniform mat4 preViewProjection;

void main() {
    ...
    //gl_FragCoord.xy is the screen coordinate, windowSize is the screen size
    vec2 uvCURRENT = gl_FragCoord.xy / windowSize;
    //remove the current frame's jitter offset
    vec2 uvCURRENTNoJitter = uvCURRENT - jitter;
    
    //depthRENDER is bound to the depth texture
    float depthCURRENT = texture(depthRENDER, uvCURRENTNoJitter).r;
    
    //get the NDC z coordinate
    float z = depthCURRENT * 2.0 - 1.0;
    
    //get the NDC coordinates
    vec4 ndc = vec4(uvCURRENTNoJitter * 2.f - 1.f, z, 1.f);
    
    //transform the NDC coordinates back to world space
    //inverseViewProjectionCURRENT is the inverse of the current frame's VP matrix
    //strictly, the NDC coordinates should first be converted back to clip space before
    //multiplying by the inverse VP, but the original clip-space w is unknown;
    //multiplying the NDC coordinates directly yields a w that is not 1, so dividing
    //by w afterwards recovers the actual world-space position
    vec4 worldSpacePosition = inverseViewProjectionCURRENT * ndc;
    worldSpacePosition /= worldSpacePosition.w;
    
    //reprojection
    //preViewProjection is the previous frame's VP matrix
    vec4 prePosition = preViewProjection * worldSpacePosition;
    vec2 preNdcPosition = prePosition.xy / prePosition.w;
    vec2 preScreenSpaceUV = 0.5 * (preNdcPosition + 1);
    
    vec2 velocity = uvCURRENTNoJitter - preScreenSpaceUV;
}
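
On the host side, the two matrices used above are just the inverse of the current frame's view-projection and the previous frame's view-projection. A minimal bookkeeping sketch, assuming GLM (the struct and variable names are illustrative, not from this article):

//per-frame matrix bookkeeping for the camera-only reprojection path (a sketch, assuming GLM)
#include <glm/glm.hpp>

struct ReprojectionMatrices
{
    glm::mat4 prevViewProjection = glm::mat4(1.0f);

    //call once per frame with the unjittered projection matrix
    void update(const glm::mat4& view, const glm::mat4& projectionNoJitter)
    {
        glm::mat4 viewProjection = projectionNoJitter * view;

        //upload to the TAA blend pass (uniform upload code omitted):
        //  inverseViewProjectionCURRENT = glm::inverse(viewProjection)
        //  preViewProjection            = prevViewProjection

        //keep this frame's VP for next frame's reprojection
        prevViewProjection = viewProjection;
    }
};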

In the second case the objects are moving, so every object has its own MVP matrix; to compute the motion vectors, an extra pass would be needed after drawing each object.

However, no FBO switch is needed: using the multi render target (MRT) technique, an extra output can be added to the object's drawing shader to store the motion vectors, folding this pass into the object draw itself (a host-side setup sketch follows the shader code below).

Computing the motion vectors requires three inputs: the current frame's MVP, the previous frame's MVP, and the object's local (model-space) vertex position. The formula is:

Motion Vector = screen-space position of the vertex under the current frame's MVP - screen-space position of the vertex under the previous frame's MVP

The shader implementation is as follows:

//vertex shader
...
uniform mat4 MVPNJPrevious;
uniform mat4 MVPNoJitter;
layout (location=0) in vec3 VertexPosition;
out vec2 screenSpaceVel;

void main()
{
   ...
  //VertexPosition is the model-space vertex position
  //MVPNJPrevious is the previous frame's MVP matrix without jitter
  //MVPNoJitter is the current frame's MVP matrix without jitter
  vec4 curCscPosition = MVPNoJitter * vec4(VertexPosition, 1.f);
  vec4 preCscPosition = MVPNJPrevious * vec4(VertexPosition, 1.f);
  vec2 curNdcPosition = curCscPosition.xy / curCscPosition.w;
  vec2 preNdcPosition = preCscPosition.xy / preCscPosition.w;
  vec2 curScreenSpaceUV = 0.5 * (curNdcPosition + 1);
  vec2 preScreenSpaceUV = 0.5 * (preNdcPosition + 1);
  
  screenSpaceVel = curScreenSpaceUV - preScreenSpaceUV;
}


//fragment shader
...
in vec2 screenSpaceVel;
layout (location = 0) out vec4 FragColor;
layout (location = 1) out vec3 velocity;

void main()
{
  ...
  velocity = vec3(screenSpaceVel, 0.f);
}
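
On the host side, the extra velocity output at layout(location = 1) requires the framebuffer to expose a second color attachment. A minimal OpenGL setup sketch follows; texture creation and error checking are omitted, and the variable names are illustrative:

//minimal MRT framebuffer setup for the object-drawing pass (a sketch);
//colorTex, velocityTex (e.g. GL_RGB16F, matching the vec3 velocity output) and depthTex
//are assumed to have been created elsewhere at the render-target size
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

//location 0 -> FragColor, location 1 -> velocity
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, velocityTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);

//enable both color outputs for this pass
const GLenum drawBuffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, drawBuffers);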

History Frame Blending


//vertex shader
layout (location = 0) in vec2 aPos;
layout (location = 1) in vec2 aTexCoords;

out vec2 screenPosition;


void main()
{
    screenPosition = aTexCoords;
    gl_Position = vec4(aPos.x, aPos.y, 0.0, 1.0); 
}


//fragment shader
#version 450 core

layout(binding=0) uniform sampler2D currentColor;
layout(binding=1) uniform sampler2D previousColor;
layout(binding=2) uniform sampler2D velocityTexture;
layout(binding=3) uniform sampler2D currentDepth;

uniform float ScreenWidth;
uniform float ScreenHeight;
uniform int frameCount;

in  vec2 screenPosition;
out vec4 outColor;

vec2 getClosestOffset()
{
        vec2 deltaRes = vec2(1.0 / ScreenWidth, 1.0 / ScreenHeight);
        float closestDepth = 1.0f;
        vec2 closestUV = screenPosition;

        for(int i=-1;i<=1;++i)
        {
                for(int j=-1;j<=1;++j)
                {
                        vec2 newUV = screenPosition + deltaRes * vec2(i, j);

                        float depth = texture(currentDepth, newUV).x;

                        if(depth < closestDepth)
                        {
                                closestDepth = depth;
                                closestUV = newUV;
                        }
                }
        }

        return closestUV;
}

void main()
{
        vec3 nowColor = texture(currentColor, screenPosition).rgb;
        if(frameCount == 0)
        {
                outColor = vec4(nowColor, 1.0);
                return;
        }

        // velocity of the closest (smallest-depth) pixel in the surrounding 3x3 neighborhood
        vec2 velocity = texture(velocityTexture, getClosestOffset()).rg;
        vec2 offsetUV = clamp(screenPosition - velocity, 0, 1);
        vec3 preColor = texture(previousColor, offsetUV).rgb;
        // blend with the history color
        float c = 0.05;
        outColor = vec4(c * nowColor + (1-c) * preColor, 1.0);
}

When sampling the velocity, note that the velocity around an object's silhouette may itself be aliased, so the edges of objects could lose their anti-aliasing. To avoid this, compare the depths of the 3x3 pixels around the current pixel and use the velocity of the pixel with the smallest depth (this is what getClosestOffset does above).
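
Note also that previousColor in the blend pass is simply the TAA output of the previous frame. A common way to provide it is to ping-pong between two history textures; below is a minimal sketch assuming OpenGL 4.5, where the variable and function names are illustrative, not from this article:

//ping-pong between two history textures so the blend pass can read last frame's
//TAA result while writing this frame's (a sketch)
GLuint historyTex[2];      //created elsewhere, same size/format as the screen
GLuint taaFbo;             //framebuffer the blend pass renders into
int frameCount = 0;

void runTaaBlendPass(GLuint currentColorTex, GLuint velocityTex, GLuint depthTex)
{
    int writeIdx = frameCount & 1;
    int readIdx  = 1 - writeIdx;

    glBindFramebuffer(GL_FRAMEBUFFER, taaFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, historyTex[writeIdx], 0);

    //bind inputs to the binding points declared in the blend shader
    glBindTextureUnit(0, currentColorTex);        //currentColor
    glBindTextureUnit(1, historyTex[readIdx]);    //previousColor
    glBindTextureUnit(2, velocityTex);            //velocityTexture
    glBindTextureUnit(3, depthTex);               //currentDepth

    //... set frameCount / ScreenWidth / ScreenHeight uniforms and draw the full-screen quad ...

    ++frameCount;
}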

Genshin Impact Motion Vectors Calculation

Genshin Impact computes motion vectors in the following steps:

(1) First compute a (partially incorrect) motion vector field via reprojection

(2) Then correct those motion vectors using information about the moving objects

As shown in the figures:

6784: the motion vectors computed purely by reprojection, which contain errors

6811: the character is moving, so the character region's motion vectors are corrected on top of 6784

References: