A First Look at Water Surface Rendering in Three.js


Introduction

Three.js is often used in GIS, architectural BIM, and similar domains, where water surface rendering is a common requirement. Water rendering is a big topic with a great many related techniques, which differ in when they appeared, how hard they are to implement, and how realistic their results look. There are also plenty of blog posts and articles on the subject, but judging from a few days of searching, most of them target game engines such as Unity; very few are based on Three.js. This article takes a first look at water surface rendering in Three.js, using fairly common and fairly simple techniques to build a basic water effect. The goal is to walk through a complete water rendering pipeline in a simple way, not to chase high-end implementations; even so, the result is good enough for undemanding scenes. Above all, this article is meant as a baseline reference: the point is to iterate on it and gradually layer more advanced methods on top. The official three.js examples already implement water rendering, and the implementation here borrows heavily from them, so this article can also be read as a source-code walkthrough of those examples.

The implementation covers the following aspects:

  • Reflection
  • Refraction
  • Water depth effect
  • Water surface waves
  • Shoreline foam

Reflection

When rendering water, the surface can be treated as a material. Since we are considering how a material reflects light, we could of course compute the result from the rendering equation. In practice, though, most real-time water rendering uses non-physically-based, empirical methods. The most prominent feature of water reflection is that it behaves like a mirror: the surroundings are reflected brightly and clearly. To achieve this mirror-like effect, I use the common planar reflection technique.

Planar reflection follows the physics of specular reflection: the angle of incidence equals the angle of reflection, so a ray that bounces off a point on the mirror and reaches the viewer looks exactly the same as a ray that passes straight through that point toward a viewpoint mirrored behind the mirror (ignoring, for now, the light energy absorbed below the water). Based on this, we treat the water surface as a plane of symmetry and place a virtual camera under the water, mirrored from the observing camera. Before rendering the scene, we first render it with this underwater camera into a renderTarget and keep the result in renderTarget.texture. That texture is then equivalent to the image formed by a perfect mirror reflection of the scene in the water surface. We also need to modify the underwater camera's projection matrix to clip its frustum so that it only sees objects above the water; otherwise underwater objects would appear in the reflection and break the illusion.

[Diagram: the observing camera and its mirrored virtual camera below the water surface]
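Before walking through the full code, it may help to see the core mirroring step in isolation. The following is a simplified sketch of my own, assuming a horizontal water plane at height waterY; the real code below handles arbitrary plane orientations and also mirrors the camera's look direction and up vector.

import * as THREE from 'three';

// Simplified sketch: reflect the observing camera's position across a
// horizontal water plane at height `waterY` (an assumed parameter).
function mirrorCameraPosition(camera: THREE.Camera, waterY: number, virtualCamera: THREE.Camera): void {
  const plane = new THREE.Plane(new THREE.Vector3(0, 1, 0), -waterY); // plane y = waterY
  const foot = new THREE.Vector3();
  plane.projectPoint(camera.position, foot);                    // closest point on the plane
  virtualCamera.position.copy(foot).lerp(camera.position, -1);  // reflect: 2 * foot - position
}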

Now let's look at part of the source code:

const renderTargetReflector = new THREE.WebGLRenderTarget(textureWidth, textureHeight, { samples: 4, type: THREE.HalfFloatType });
const reflectorRender = (() => {
  const reflectorPlane = new THREE.Plane();
  const normal = new THREE.Vector3();
  const reflectorWorldPosition = new THREE.Vector3();
  const cameraWorldPosition = new THREE.Vector3();
  const rotationMatrix = new THREE.Matrix4();
  const lookAtPosition = new THREE.Vector3(0, 0, - 1);
  const clipPlane = new THREE.Vector4();

  const view = new THREE.Vector3();
  const target = new THREE.Vector3();
  const q = new THREE.Vector4();
  const virtualCamera = new THREE.PerspectiveCamera();
  return (renderer: THREE.WebGLRenderer, scene: THREE.Scene, camera: THREE.Camera) => {
    reflectorWorldPosition.setFromMatrixPosition(this.matrixWorld);
    cameraWorldPosition.setFromMatrixPosition(camera.matrixWorld);

    rotationMatrix.extractRotation(this.matrixWorld);

    normal.set(0, 0, 1);
    normal.applyMatrix4(rotationMatrix);

    view.subVectors(reflectorWorldPosition, cameraWorldPosition);

    // Avoid rendering when reflector is facing away unless forcing an update
    const isFacingAway = view.dot(normal) > 0;

    if (isFacingAway === true) return;

    view.reflect(normal).negate();
    view.add(reflectorWorldPosition);

    rotationMatrix.extractRotation(camera.matrixWorld);

    lookAtPosition.set(0, 0, - 1);
    lookAtPosition.applyMatrix4(rotationMatrix);
    lookAtPosition.add(cameraWorldPosition);

    target.subVectors(reflectorWorldPosition, lookAtPosition);
    target.reflect(normal).negate();
    target.add(reflectorWorldPosition);

    virtualCamera.position.copy(view);
    virtualCamera.up.set(0, 1, 0);
    virtualCamera.up.applyMatrix4(rotationMatrix);
    virtualCamera.up.reflect(normal);
    virtualCamera.lookAt(target);

    let far: number;
    if ((camera as THREE.PerspectiveCamera).isPerspectiveCamera) {
      far = (camera as THREE.PerspectiveCamera).far;
    } else {
      far = (camera as THREE.OrthographicCamera).far;
    }

    virtualCamera.far = far; // Used in WebGLBackground

    virtualCamera.updateMatrixWorld();
    virtualCamera.projectionMatrix.copy(camera.projectionMatrix);

    // Now update projection matrix with new clip plane, implementing code from: http://www.terathon.com/code/oblique.html
    // Paper explaining this technique: http://www.terathon.com/lengyel/Lengyel-Oblique.pdf
    reflectorPlane.setFromNormalAndCoplanarPoint(normal, reflectorWorldPosition);
    reflectorPlane.applyMatrix4(virtualCamera.matrixWorldInverse);

    clipPlane.set(reflectorPlane.normal.x, reflectorPlane.normal.y, reflectorPlane.normal.z, reflectorPlane.constant);

    const projectionMatrix = virtualCamera.projectionMatrix;

    q.x = (Math.sign(clipPlane.x) + projectionMatrix.elements[8]) / projectionMatrix.elements[0];
    q.y = (Math.sign(clipPlane.y) + projectionMatrix.elements[9]) / projectionMatrix.elements[5];
    q.z = - 1.0;
    q.w = (1.0 + projectionMatrix.elements[10]) / projectionMatrix.elements[14];

    // Calculate the scaled plane vector
    clipPlane.multiplyScalar(2.0 / clipPlane.dot(q));

    // Replacing the third row of the projection matrix
    projectionMatrix.elements[2] = clipPlane.x;
    projectionMatrix.elements[6] = clipPlane.y;
    projectionMatrix.elements[10] = clipPlane.z + 1.0 - clipBias;
    projectionMatrix.elements[14] = clipPlane.w;

    renderer.setRenderTarget(renderTargetReflector);

    renderer.state.buffers.depth.setMask(true); // make sure the depth buffer is writable so it can be properly cleared, see #18897

    if (renderer.autoClear === false) renderer.clear();
    renderer.render(scene, virtualCamera);
  }
})()

This code comes mostly from the Reflector example in three.js; I copied it almost verbatim, and it is not hard to follow. The first part mirrors the scene camera across the water plane to build the virtual underwater camera; the rest sets up the oblique clip plane so that the virtual camera does not render anything below the water surface.

Refraction

Compared with reflection, refraction is simpler to handle: refraction is the underwater scene seen through the surface. Where reflection uses an underwater camera to render the scene above the water, refraction renders the scene below the water, and likewise records it into a renderTarget.texture. The camera that renders the underwater scene is almost identical to the scene camera itself; it just needs the same kind of clipping, so that it only sees objects below the surface. (Worth noting: the bending of light due to water's index of refraction is not modeled here. Since the water animates, and the reflection and refraction samples will later be perturbed by the surface flow anyway, I believe ignoring the refraction offset has little visual impact.)

const renderTargetRefractor = new THREE.WebGLRenderTarget(textureWidth, textureHeight, { samples: 4, type: THREE.HalfFloatType });
const refractorRender = (() => {
  const normal = new THREE.Vector3();
  const position = new THREE.Vector3();
  const quaternion = new THREE.Quaternion();
  const scale = new THREE.Vector3();
  const clipPlane = new THREE.Plane();
  const clipVector = new THREE.Vector4();
  const q = new THREE.Vector4();
  const refractorPlane = new THREE.Plane();
  const virtualCamera = new THREE.PerspectiveCamera();
  virtualCamera.matrixAutoUpdate = false;

  return (renderer: THREE.WebGLRenderer, scene: THREE.Scene, camera: THREE.Camera) => {
    this.matrixWorld.decompose(position, quaternion, scale);
    normal.set(0, 0, 1).applyQuaternion(quaternion).normalize();
    normal.negate(); // flip the normal: cull everything above the water plane

    refractorPlane.setFromNormalAndCoplanarPoint(normal, position);

    virtualCamera.matrixWorld.copy(camera.matrixWorld);
    virtualCamera.matrixWorldInverse.copy(virtualCamera.matrixWorld).invert();
    virtualCamera.projectionMatrix.copy(camera.projectionMatrix);

    let far: number;
    if ((camera as THREE.PerspectiveCamera).isPerspectiveCamera) {
      far = (camera as THREE.PerspectiveCamera).far;
    } else {
      far = (camera as THREE.OrthographicCamera).far;
    }
    virtualCamera.far = far; // used in WebGLBackground

    // The following code creates an oblique view frustum for clipping.
    // see: Lengyel, Eric. “Oblique View Frustum Depth Projection and Clipping”.
    // Journal of Game Development, Vol. 1, No. 2 (2005), Charles River Media, pp. 5–16

    clipPlane.copy(refractorPlane);
    clipPlane.applyMatrix4(virtualCamera.matrixWorldInverse);

    clipVector.set(clipPlane.normal.x, clipPlane.normal.y, clipPlane.normal.z, clipPlane.constant);

    // calculate the clip-space corner point opposite the clipping plane and
    // transform it into camera space by multiplying it by the inverse of the projection matrix

    const projectionMatrix = virtualCamera.projectionMatrix;

    q.x = (Math.sign(clipVector.x) + projectionMatrix.elements[8]) / projectionMatrix.elements[0];
    q.y = (Math.sign(clipVector.y) + projectionMatrix.elements[9]) / projectionMatrix.elements[5];
    q.z = - 1.0;
    q.w = (1.0 + projectionMatrix.elements[10]) / projectionMatrix.elements[14];

    // calculate the scaled plane vector

    clipVector.multiplyScalar(2.0 / clipVector.dot(q));

    // replacing the third row of the projection matrix

    projectionMatrix.elements[2] = clipVector.x;
    projectionMatrix.elements[6] = clipVector.y;
    projectionMatrix.elements[10] = clipVector.z + 1.0 - clipBias;
    projectionMatrix.elements[14] = clipVector.w;

    renderer.setRenderTarget(renderTargetRefractor);
    if (renderer.autoClear === false) renderer.clear();
    renderer.render(scene, virtualCamera);
  };
})();

This code comes mostly from the Refractor example in three.js, again copied nearly verbatim. As with reflection, it builds a virtual camera, except that the refraction camera is not underwater but almost identical to the scene camera. It too gets a clip plane, which here culls everything above the water so that only the underwater scene is rendered.
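For context, both passes are driven once per frame from the water mesh's onBeforeRender, with the water itself hidden so it cannot feed back into its own textures. A minimal wiring sketch (the complete version appears in the full code at the end of this article):

// `water` stands for the water mesh that owns reflectorRender and refractorRender.
water.onBeforeRender = (renderer, scene, camera) => {
  water.visible = false;           // keep the water out of its own reflection/refraction
  reflectorRender(renderer, scene, camera);
  refractorRender(renderer, scene, camera);
  renderer.setRenderTarget(null);  // restore the default framebuffer
  water.visible = true;
};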

Texture Mapping

To render a point in 3D space into a texture, the point goes through a sequence of space transforms, each of which is a matrix multiplication:

local (object) space → world space → view (camera) space → clip space → texture space.

So when shading a point on the water surface, locating that point in the reflection and refraction textures means repeating this chain of transforms. We prebuild a texture matrix (textureMatrix) as the product below; then, in the shader, multiplying the shaded point's position by this single matrix yields its texture coordinates.

<texture-space mapping> × <projection matrix> × <view matrix> × <world matrix>

See the following code:

// texture-space mapping: maps NDC coordinates from [-1, 1] to [0, 1]
textureMatrix.set(
  0.5, 0.0, 0.0, 0.5,
  0.0, 0.5, 0.0, 0.5,
  0.0, 0.0, 0.5, 0.5,
  0.0, 0.0, 0.0, 1.0
);

textureMatrix.multiply(camera.projectionMatrix);
textureMatrix.multiply(camera.matrixWorldInverse);
textureMatrix.multiply(this.matrixWorld);
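
To make the mapping concrete, here is an illustrative helper of my own (toTextureUv is not part of the original code) that pushes a single local-space vertex through textureMatrix on the CPU; the vertex shader later does exactly the same with vCoord = textureMatrix * vec4( position, 1.0 ), followed by the perspective divide in the fragment shader:

import * as THREE from 'three';

// Hypothetical helper: local-space vertex position -> [0, 1] texture uv.
function toTextureUv(localPosition: THREE.Vector3, textureMatrix: THREE.Matrix4): THREE.Vector2 {
  const p = new THREE.Vector4(localPosition.x, localPosition.y, localPosition.z, 1.0)
    .applyMatrix4(textureMatrix);
  return new THREE.Vector2(p.x / p.w, p.y / p.w); // perspective divide into [0, 1]
}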

Water Depth Effect

In nature, water absorbs and scatters light of different wavelengths differently; empirically, the deeper the water, the darker its color. So we can sample a LUT texture like the one below by depth, to control the water color at different depths. How do we obtain the water depth? Much as with the refraction texture, we can record the underwater depth into a renderTarget.texture, sample it with the same texture-matrix mapping, and convert the sample back to the underwater viewZ (the view-space distance from the camera to the point). Subtracting the surface point's viewZ from the underwater viewZ then gives the water depth at that pixel; for example, if the surface point lies at a view distance of 10 and the bottom behind it at 14, the water there is 4 units deep.

[Images: example depth-color LUT texture; diagram of the view-depth computation]

const renderTargetDepthBuffer = new THREE.WebGLRenderTarget(1920, 1920, { type: THREE.HalfFloatType });
renderTargetDepthBuffer.texture.name = 'Water.depth';
renderTargetDepthBuffer.texture.generateMipmaps = false;
const depthMaterial = new THREE.MeshDepthMaterial();
depthMaterial.side = THREE.DoubleSide;
depthMaterial.depthPacking = THREE.RGBADepthPacking;
depthMaterial.blending = THREE.NoBlending;
// ………………… omitted …………………
this.material.uniforms['depthMap'].value = renderTargetDepthBuffer.texture;

scene.overrideMaterial = depthMaterial;
// hide the objects that sit above the water surface
onSurfaceList.forEach((object) => {
    object.visible = false;
})
renderer.setRenderTarget(renderTargetDepthBuffer);
if (renderer.autoClear === false) {
  renderer.clear();
}
renderer.render(scene, camera);
scene.overrideMaterial = null;
onSurfaceList.forEach(object => {
    object.visible = true;
})

// GLSL: reconstruct the view-space water depth from the packed depth texture
float getViewDepth( vec2 uv ) {
    float depth = unpackRGBAToDepth( texture2D( depthMap, uv ) );
    float viewZ = isOrthographic ?
        orthographicDepthToViewZ( depth, cameraNearFar.x, cameraNearFar.y ) :
        perspectiveDepthToViewZ( depth, cameraNearFar.x, cameraNearFar.y );
    viewZ = -viewZ;
    float viewDepth = viewZ - ( -vPosition.z );
    return viewDepth;
}

float viewDepth = getViewDepth( coord.xy );
float deepAlpha = clamp( viewDepth, 0.0, maxViewDepth ) / maxViewDepth;

vec3 color = mix( shallowColor, deepColor, deepAlpha );
refractColor = vec4( color, 1.0 ) * refractColor;

Of these two snippets, the JavaScript records the depth of the entire scene. Note that objects above the water surface are hidden before the depth pass renders, so the recorded depth is guaranteed to be the depth below the surface.
The GLSL samples the depth map at shading time, computes the depth from the water surface down to the bottom, and uses that depth to compute the water color. I did not sample a LUT here; instead I simply defined two colors, shallowColor and deepColor, and interpolate between them by depth.
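
If you do want a LUT instead of the two-color mix, one option is to upload a gradient as a DataTexture and sample it with deepAlpha, e.g. texture2D( depthLut, vec2( deepAlpha, 0.5 ) ) in the fragment shader. A minimal sketch; the depthLut uniform is my addition and does not appear in the shader above:

import * as THREE from 'three';

// Hypothetical depth LUT: a 256x1 gradient from the shallow to the deep color.
const size = 256;
const data = new Uint8Array(size * 4);
const shallow = new THREE.Color(0xbfe8ff);
const deep = new THREE.Color(0x1e90ff);
const c = new THREE.Color();
for (let i = 0; i < size; i++) {
  c.copy(shallow).lerp(deep, i / (size - 1));
  data.set([Math.round(c.r * 255), Math.round(c.g * 255), Math.round(c.b * 255), 255], i * 4);
}
const depthLut = new THREE.DataTexture(data, size, 1, THREE.RGBAFormat);
depthLut.needsUpdate = true;
// Then register it as a uniform on the water material and sample it by depth:
// this.material.uniforms['depthLut'] = { value: depthLut };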

Water Surface Waves

There are a great many techniques for water surface waves; Mao Xingyun (毛星云), in his article 真实感水体渲染技术总结 (A Summary of Realistic Water Rendering Techniques), surveys 23 of them. Here we use the common flow map approach. The idea is to precompute a texture that stores UV displacement vectors; sampling it gives the flow direction at a given point. Periodically offsetting the water's color and normals along that direction over time produces a periodically rippling surface. A handy tool for authoring flow maps: FlowMapPainter.exe

const updateFlow = () => {
  const delta = clock.getDelta();
  const config = this.material.uniforms['config'];
  config.value.x += flowSpeed * delta;
  config.value.y = config.value.x + halfCycle;
  if (config.value.x >= cycle) {
    config.value.x = 0;
    config.value.y = halfCycle;
  } else if (config.value.y >= cycle) {
    config.value.y = config.value.y - cycle;
  }
}

// GLSL: blend two normal-map samples along the flow map, half a cycle apart
float flowMapOffset0 = config.x;
float flowMapOffset1 = config.y;
float halfCycle = config.z;
vec2 flow = texture2D( tFlowMap, vUv ).rg * 2.0 - 1.0;
flow.x *= - 1.0;
vec4 normalColor0 = texture2D( tNormalMap0, vUv + flow * flowMapOffset0 );
vec4 normalColor1 = texture2D( tNormalMap1, vUv + flow * flowMapOffset1 );
float flowLerp = abs( halfCycle - flowMapOffset0 ) / halfCycle;
vec4 normalColor = mix( normalColor0, normalColor1, flowLerp );
vec3 tNormal = normalize( vec3( normalColor.r * 2.0 - 1.0, normalColor.g * 2.0 - 1.0, normalColor.b ) );

// transform the tangent-space normal into world space
vec3 worldBitangent = cross( worldNormal, worldTangent );
mat3 tToW = mat3( worldTangent, worldBitangent, worldNormal );
vec3 normal = tToW * tNormal;

// Schlick's approximation for the Fresnel reflectance
vec3 toEye = normalize( vToEye );
float theta = max( dot( toEye, normal ), 0.0 );
float reflectance = reflectivity + ( 1.0 - reflectivity ) * pow( ( 1.0 - theta ), 5.0 );
vec3 coord = vCoord.xyz / vCoord.w;

// perturb the sampling uv with the animated normal
vec2 uv = coord.xy + coord.z * tNormal.xy * 0.05;

vec4 reflectColor = texture2D( tReflectionMap, vec2( 1.0 - uv.x, uv.y ) );
vec4 refractColor = texture2D( tRefractionMap, uv );
gl_FragColor = mix( refractColor, reflectColor, reflectance );

Following a fixed cycle, the uv is offset along the direction stored in the flow map and the two normal maps are sampled, with the two sample offsets half a cycle apart; a periodic function computes the blend weight between the two samples. This is done so that the uv offsets wrap around seamlessly with a smooth transition. The sampled normal is multiplied by the tangent-space basis matrix to bring it into world space. This is the perturbed normal; from it and the view direction, Schlick's approximation R(θ) = R₀ + (1 − R₀)(1 − cos θ)⁵ gives the Fresnel factor, i.e. the weight between reflection and refraction at shading time. And since the sampled normal shifts periodically, with its x and y components ranging over [-1, 1], those components can also perturb the uv used to sample the reflection and refraction textures. Together this produces the rippling water surface.

Shoreline Foam

Finally, a simple pass of shoreline foam. The foam uses the water depth from the previous section as a mask: the deeper the water, the less foam, and vice versa, so the foam concentrates along the shore. A random noise function then breaks it up.

float noiseAlpha = noise( tNormal.xy * 2. );
float foamAlpha = 1. - smoothstep( 0.0, 40., viewDepth );
vec4 foamColor = vec4( noiseAlpha * foamAlpha, noiseAlpha * foamAlpha, noiseAlpha * foamAlpha, 1. );

Full Code

import * as THREE from 'three';

export interface CustomWaterOptions {
  shallowColor?: THREE.Color | string | number;
  deepColor?: THREE.Color | string | number;
  textureWidth?: number;
  textureHeight?: number;
  clipBias?: number;
  reflectivity?: number;
  maxViewDepth?: number;
  normalMap0?: THREE.Texture;
  normalMap1?: THREE.Texture;
  flowMap?: THREE.Texture;
  foamMap?: THREE.Texture;
  flowSpeed?: number;
  onSurfaceList?: THREE.Group[];
}

export class CustomWater extends THREE.Mesh {
  isWater: boolean;
  type: string;
  material: THREE.ShaderMaterial;
  constructor(geometry: THREE.BufferGeometry, options: CustomWaterOptions) {
    geometry.computeTangents();
    super(geometry);
    this.isWater = true;
    this.type = 'Water';
    const shallowColor = (options.shallowColor !== undefined) ? new THREE.Color(options.shallowColor) : new THREE.Color(0xFFFFFF);
    const deepColor = (options.deepColor !== undefined) ? new THREE.Color(options.deepColor) : new THREE.Color(0x1E90FF);
    const textureWidth = options.textureWidth !== undefined ? options.textureWidth : 512;
    const textureHeight = options.textureHeight !== undefined ? options.textureHeight : 512;
    const clipBias = options.clipBias !== undefined ? options.clipBias : 0;
    const flowSpeed = options.flowSpeed !== undefined ? options.flowSpeed : 0.03;
    const reflectivity = options.reflectivity !== undefined ? options.reflectivity : 0.02;
    const maxViewDepth = options.maxViewDepth !== undefined ? options.maxViewDepth : 300;
    const onSurfaceList = options.onSurfaceList !== undefined ? options.onSurfaceList : [];

    const textureLoader = new THREE.TextureLoader();

    const normalMap0 = options.normalMap0 || textureLoader.load(new URL("./assets/textures/Water_1_M_Normal.jpg", import.meta.url).href);
    const normalMap1 = options.normalMap1 || textureLoader.load(new URL("./assets/textures/Water_2_M_Normal.jpg", import.meta.url).href);
    const flowMap = options.flowMap || textureLoader.load(new URL("./assets/textures/flowmap.png", import.meta.url).href);
    const cycle = 0.15;
    const halfCycle = cycle * 0.5;

    const renderTargetDepthBuffer = new THREE.WebGLRenderTarget(1920, 1920, { type: THREE.HalfFloatType });
    renderTargetDepthBuffer.texture.name = 'Water.depth';
    renderTargetDepthBuffer.texture.generateMipmaps = false;
    const depthMaterial = new THREE.MeshDepthMaterial();
    depthMaterial.side = THREE.DoubleSide;
    depthMaterial.depthPacking = THREE.RGBADepthPacking;
    depthMaterial.blending = THREE.NoBlending;


    const textureMatrix = new THREE.Matrix4();
    const clock = new THREE.Clock();


    const renderTargetReflector = new THREE.WebGLRenderTarget(textureWidth, textureHeight, { samples: 4, type: THREE.HalfFloatType });
    const reflectorRender = (() => {
      const reflectorPlane = new THREE.Plane();
      const normal = new THREE.Vector3();
      const reflectorWorldPosition = new THREE.Vector3();
      const cameraWorldPosition = new THREE.Vector3();
      const rotationMatrix = new THREE.Matrix4();
      const lookAtPosition = new THREE.Vector3(0, 0, - 1);
      const clipPlane = new THREE.Vector4();

      const view = new THREE.Vector3();
      const target = new THREE.Vector3();
      const q = new THREE.Vector4();
      const virtualCamera = new THREE.PerspectiveCamera();
      return (renderer: THREE.WebGLRenderer, scene: THREE.Scene, camera: THREE.Camera) => {
        reflectorWorldPosition.setFromMatrixPosition(this.matrixWorld);
        cameraWorldPosition.setFromMatrixPosition(camera.matrixWorld);

        rotationMatrix.extractRotation(this.matrixWorld);

        normal.set(0, 0, 1);
        normal.applyMatrix4(rotationMatrix);

        view.subVectors(reflectorWorldPosition, cameraWorldPosition);

        // Avoid rendering when reflector is facing away unless forcing an update
        const isFacingAway = view.dot(normal) > 0;

        if (isFacingAway === true) return;

        view.reflect(normal).negate();
        view.add(reflectorWorldPosition);

        rotationMatrix.extractRotation(camera.matrixWorld);

        lookAtPosition.set(0, 0, - 1);
        lookAtPosition.applyMatrix4(rotationMatrix);
        lookAtPosition.add(cameraWorldPosition);

        target.subVectors(reflectorWorldPosition, lookAtPosition);
        target.reflect(normal).negate();
        target.add(reflectorWorldPosition);

        virtualCamera.position.copy(view);
        virtualCamera.up.set(0, 1, 0);
        virtualCamera.up.applyMatrix4(rotationMatrix);
        virtualCamera.up.reflect(normal);
        virtualCamera.lookAt(target);

        let far: number;
        if ((camera as THREE.PerspectiveCamera).isPerspectiveCamera) {
          far = (camera as THREE.PerspectiveCamera).far;
        } else {
          far = (camera as THREE.OrthographicCamera).far;
        }

        virtualCamera.far = far; // Used in WebGLBackground

        virtualCamera.updateMatrixWorld();
        virtualCamera.projectionMatrix.copy(camera.projectionMatrix);

        // Now update projection matrix with new clip plane, implementing code from: http://www.terathon.com/code/oblique.html
        // Paper explaining this technique: http://www.terathon.com/lengyel/Lengyel-Oblique.pdf
        reflectorPlane.setFromNormalAndCoplanarPoint(normal, reflectorWorldPosition);
        reflectorPlane.applyMatrix4(virtualCamera.matrixWorldInverse);

        clipPlane.set(reflectorPlane.normal.x, reflectorPlane.normal.y, reflectorPlane.normal.z, reflectorPlane.constant);

        const projectionMatrix = virtualCamera.projectionMatrix;

        q.x = (Math.sign(clipPlane.x) + projectionMatrix.elements[8]) / projectionMatrix.elements[0];
        q.y = (Math.sign(clipPlane.y) + projectionMatrix.elements[9]) / projectionMatrix.elements[5];
        q.z = - 1.0;
        q.w = (1.0 + projectionMatrix.elements[10]) / projectionMatrix.elements[14];

        // Calculate the scaled plane vector
        clipPlane.multiplyScalar(2.0 / clipPlane.dot(q));

        // Replacing the third row of the projection matrix
        projectionMatrix.elements[2] = clipPlane.x;
        projectionMatrix.elements[6] = clipPlane.y;
        projectionMatrix.elements[10] = clipPlane.z + 1.0 - clipBias;
        projectionMatrix.elements[14] = clipPlane.w;

        renderer.setRenderTarget(renderTargetReflector);

        renderer.state.buffers.depth.setMask(true); // make sure the depth buffer is writable so it can be properly cleared, see #18897

        if (renderer.autoClear === false) renderer.clear();
        renderer.render(scene, virtualCamera);
      }
    })()

    const renderTargetRefractor = new THREE.WebGLRenderTarget(textureWidth, textureHeight, { samples: 4, type: THREE.HalfFloatType });
    const refractorRender = (() => {
      const normal = new THREE.Vector3();
      const position = new THREE.Vector3();
      const quaternion = new THREE.Quaternion();
      const scale = new THREE.Vector3();
      const clipPlane = new THREE.Plane();
      const clipVector = new THREE.Vector4();
      const q = new THREE.Vector4();
      const refractorPlane = new THREE.Plane();
      const virtualCamera = new THREE.PerspectiveCamera();
      virtualCamera.matrixAutoUpdate = false;

      return (renderer: THREE.WebGLRenderer, scene: THREE.Scene, camera: THREE.Camera) => {
        this.matrixWorld.decompose(position, quaternion, scale);
        normal.set(0, 0, 1).applyQuaternion(quaternion).normalize();
        normal.negate(); // flip the normal: cull everything above the water plane

        refractorPlane.setFromNormalAndCoplanarPoint(normal, position);

        virtualCamera.matrixWorld.copy(camera.matrixWorld);
        virtualCamera.matrixWorldInverse.copy(virtualCamera.matrixWorld).invert();
        virtualCamera.projectionMatrix.copy(camera.projectionMatrix);

        let far: number;
        if ((camera as THREE.PerspectiveCamera).isPerspectiveCamera) {
          far = (camera as THREE.PerspectiveCamera).far;
        } else {
          far = (camera as THREE.OrthographicCamera).far;
        }
        virtualCamera.far = far; // used in WebGLBackground

        // The following code creates an oblique view frustum for clipping.
        // see: Lengyel, Eric. “Oblique View Frustum Depth Projection and Clipping”.
        // Journal of Game Development, Vol. 1, No. 2 (2005), Charles River Media, pp. 5–16

        clipPlane.copy(refractorPlane);
        clipPlane.applyMatrix4(virtualCamera.matrixWorldInverse);

        clipVector.set(clipPlane.normal.x, clipPlane.normal.y, clipPlane.normal.z, clipPlane.constant);

        // calculate the clip-space corner point opposite the clipping plane and
        // transform it into camera space by multiplying it by the inverse of the projection matrix

        const projectionMatrix = virtualCamera.projectionMatrix;

        q.x = (Math.sign(clipVector.x) + projectionMatrix.elements[8]) / projectionMatrix.elements[0];
        q.y = (Math.sign(clipVector.y) + projectionMatrix.elements[9]) / projectionMatrix.elements[5];
        q.z = - 1.0;
        q.w = (1.0 + projectionMatrix.elements[10]) / projectionMatrix.elements[14];

        // calculate the scaled plane vector

        clipVector.multiplyScalar(2.0 / clipVector.dot(q));

        // replacing the third row of the projection matrix

        projectionMatrix.elements[2] = clipVector.x;
        projectionMatrix.elements[6] = clipVector.y;
        projectionMatrix.elements[10] = clipVector.z + 1.0 - clipBias;
        projectionMatrix.elements[14] = clipVector.w;

        renderer.setRenderTarget(renderTargetRefractor);
        if (renderer.autoClear === false) renderer.clear();
        renderer.render(scene, virtualCamera);
      };
    })();

    this.material = new THREE.ShaderMaterial({
      name: 'customWaterShader',
      uniforms: {
        'shallowColor': {
          value: null
        },
        'deepColor': {
          value: null
        },

        'reflectivity': {
          value: 0
        },

        'tReflectionMap': {
          value: null
        },

        'tRefractionMap': {
          value: null
        },
        'tNormalMap0': {
          value: null
        },

        'tNormalMap1': {
          value: null
        },
        'tFlowMap': {
          value: null
        },


        'depthMap': {
          value: null
        },

        'textureMatrix': {
          value: null
        },
        'cameraNearFar': {
          value: null
        },
        'maxViewDepth': {
          value: 100
        },
        'config': {
          value: new THREE.Vector3()
        }
      },
      vertexShader: `

		#include <common>
		#include <logdepthbuf_pars_vertex>

		uniform mat4 textureMatrix;
    attribute vec4 tangent;
		varying vec4 vCoord;
		varying vec2 vUv;
		varying vec3 vToEye;
    varying vec4 vPosition;
    varying vec3 worldNormal;
    varying vec3 worldTangent;
		void main() {

			vUv = uv;
			vCoord = textureMatrix * vec4( position, 1.0 );
      mat3 modelMatrix3 = mat3( modelMatrix[0].xyz, modelMatrix[1].xyz, modelMatrix[2].xyz );
      worldTangent = modelMatrix3 * vec3(tangent.x, tangent.y, tangent.z);
      worldNormal = modelMatrix3 * normal;

			vec4 worldPosition = modelMatrix * vec4( position, 1.0 );
			vToEye = cameraPosition - worldPosition.xyz;

      vec4 mvPosition = viewMatrix * worldPosition; // used in fog_vertex
      vPosition = mvPosition;
      gl_Position = projectionMatrix * mvPosition;

			#include <logdepthbuf_vertex>
		}`,
      fragmentShader: `
      #include <common>
			#include <logdepthbuf_fragment>
      #include <packing>
      uniform sampler2D tReflectionMap;
		  uniform sampler2D tRefractionMap;
      uniform sampler2D depthMap;
      uniform sampler2D tNormalMap0;
		  uniform sampler2D tNormalMap1;
      uniform sampler2D tFlowMap;
      uniform vec3 shallowColor;
      uniform vec3 deepColor;
      uniform float reflectivity;
      uniform vec2 cameraNearFar;
      uniform float maxViewDepth;
      uniform vec3 config;
      varying vec4 vCoord;
      varying vec2 vUv;
      varying vec3 vToEye;
      varying vec3 worldNormal;
      varying vec3 worldTangent;
      varying vec4 vPosition;

      float getViewDepth( vec2 uv ) {
        float depth = unpackRGBAToDepth(texture2D( depthMap, uv ));
        float viewZ = isOrthographic ?
          orthographicDepthToViewZ( depth, cameraNearFar.x, cameraNearFar.y ) :
          perspectiveDepthToViewZ( depth, cameraNearFar.x, cameraNearFar.y );
        viewZ = -viewZ;
        float viewDepth = viewZ - ( -vPosition.z );
        return viewDepth;
      }

      float random (in vec2 st) {
        return fract(sin(dot(st.xy,
                            vec2(12.9898,78.233)))
                    * 43758.5453123);
      }

      float noise (vec2 st) {
          vec2 i = floor(st);
          vec2 f = fract(st);

          float a = random(i);
          float b = random(i + vec2(1.0, 0.0));
          float c = random(i + vec2(0.0, 1.0));
          float d = random(i + vec2(1.0, 1.0));

          vec2 u = f*f*(3.0-2.0*f);

          return mix(a, b, u.x) +
                  (c - a)* u.y * (1.0 - u.x) +
                  (d - b) * u.x * u.y;
      }

      void main() {
        #include <logdepthbuf_fragment>

        float flowMapOffset0 = config.x;
        float flowMapOffset1 = config.y;
        float halfCycle = config.z;
        vec2 flow = texture2D( tFlowMap, vUv ).rg * 2.0 - 1.0;
        flow.x *= - 1.0;
        vec4 normalColor0 = texture2D( tNormalMap0, vUv + flow * flowMapOffset0 );
        vec4 normalColor1 = texture2D( tNormalMap1, vUv + flow * flowMapOffset1 );
        float flowLerp = abs( halfCycle - flowMapOffset0 ) / halfCycle;
        vec4 normalColor = mix( normalColor0, normalColor1, flowLerp );
        vec3 tNormal = normalize( vec3( normalColor.r * 2.0 - 1.0,  normalColor.g * 2.0 - 1.0, normalColor.b ) );


        vec3 worldBitangent = cross(worldNormal, worldTangent); 
        mat3 tToW = mat3(worldTangent, worldBitangent, worldNormal);
        vec3 normal = tToW * tNormal;

        vec3 toEye = normalize( vToEye );
        float theta = max( dot( toEye, normal ), 0.0 );
        float reflectance = reflectivity + ( 1.0 - reflectivity ) * pow( ( 1.0 - theta ), 5.0 );
        vec3 coord = vCoord.xyz / vCoord.w;

        vec2 uv = coord.xy + coord.z * tNormal.xy * 0.05;

        float viewDepth = getViewDepth(coord.xy);
        float deepAlpha = clamp(viewDepth, 0.0, maxViewDepth) / maxViewDepth;

        vec4 reflectColor = texture2D( tReflectionMap, vec2( 1.0 - uv.x, uv.y ) );
			  vec4 refractColor = texture2D( tRefractionMap, uv );

        vec3 color = mix(shallowColor, deepColor, deepAlpha);
        refractColor = vec4( color, 1.0 ) * refractColor;

        float noiseAlpha = noise( tNormal.xy * 2. );
        float foamAlpha = 1. - smoothstep( 0.0, 40., viewDepth );
        vec4 foamColor = vec4( noiseAlpha * foamAlpha, noiseAlpha * foamAlpha, noiseAlpha * foamAlpha, 1. );

        gl_FragColor = mix( refractColor, reflectColor, reflectance ) + foamColor;
        #include <tonemapping_fragment>
			  #include <colorspace_fragment>
      }
      `,
      transparent: true,
    });

    this.material.uniforms['tReflectionMap'].value = renderTargetReflector.texture;
    this.material.uniforms['tRefractionMap'].value = renderTargetRefractor.texture;
    this.material.uniforms['depthMap'].value = renderTargetDepthBuffer.texture;
    this.material.uniforms['tNormalMap0'].value = normalMap0;
    this.material.uniforms['tNormalMap1'].value = normalMap1;
    this.material.uniforms['tFlowMap'].value = flowMap;
    this.material.uniforms['shallowColor'].value = shallowColor;
    this.material.uniforms['deepColor'].value = deepColor;
    this.material.uniforms['reflectivity'].value = reflectivity;
    this.material.uniforms['maxViewDepth'].value = maxViewDepth;
    this.material.uniforms['textureMatrix'].value = textureMatrix;

    this.material.uniforms['config'].value.x = 0; // flowMapOffset0
    this.material.uniforms['config'].value.y = halfCycle; // flowMapOffset1
    this.material.uniforms['config'].value.z = halfCycle; // halfCycle

    const updateTextureMatrix = (camera: THREE.Camera) => {
      textureMatrix.set(
        0.5, 0.0, 0.0, 0.5,
        0.0, 0.5, 0.0, 0.5,
        0.0, 0.0, 0.5, 0.5,
        0.0, 0.0, 0.0, 1.0
      );

      textureMatrix.multiply(camera.projectionMatrix);
      textureMatrix.multiply(camera.matrixWorldInverse);
      textureMatrix.multiply(this.matrixWorld);

    }

    const updateFlow = () => {
      const delta = clock.getDelta();
      const config = this.material.uniforms['config'];
      config.value.x += flowSpeed * delta;
      config.value.y = config.value.x + halfCycle;
      if (config.value.x >= cycle) {
        config.value.x = 0;
        config.value.y = halfCycle;
      } else if (config.value.y >= cycle) {
        config.value.y = config.value.y - cycle;
      }
    }

    this.onBeforeRender = (renderer, scene, camera) => {
      updateTextureMatrix(camera);
      updateFlow();

      this.visible = false;

      reflectorRender(renderer, scene, camera)
      refractorRender(renderer, scene, camera)

      onSurfaceList.forEach((object) => {
        object.visible = false;
      })
      scene.overrideMaterial = depthMaterial;
      renderer.setRenderTarget(renderTargetDepthBuffer);
      if (renderer.autoClear === false) {
        renderer.clear();
      }
      renderer.render(scene, camera);
      scene.overrideMaterial = null;
      renderer.setRenderTarget(null);
      onSurfaceList.forEach(object => {
        object.visible = true;
      })

      let near: number;
      let far: number;
      if ((camera as THREE.PerspectiveCamera).isPerspectiveCamera) {
        near = (camera as THREE.PerspectiveCamera).near;
        far = (camera as THREE.PerspectiveCamera).far;
      } else {
        near = (camera as THREE.OrthographicCamera).near;
        far = (camera as THREE.OrthographicCamera).far;
      }
      this.material.uniforms['cameraNearFar'].value = new THREE.Vector2(near, far);
      this.visible = true;

    };
  }
}
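
Finally, a minimal usage sketch. The geometry size, colors, import path, and the boats group are placeholders rather than part of the class above:

import * as THREE from 'three';
import { CustomWater } from './CustomWater'; // wherever the class above is saved

const scene = new THREE.Scene();
const boats = new THREE.Group(); // hypothetical above-water objects
scene.add(boats);

const water = new CustomWater(new THREE.PlaneGeometry(1000, 1000), {
  shallowColor: 0xbfe8ff,
  deepColor: 0x1e90ff,
  maxViewDepth: 50,
  onSurfaceList: [boats], // hidden while the depth pass renders
});
water.rotation.x = -Math.PI / 2; // lay the plane flat: local +Z becomes world up
scene.add(water);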

Final Result

[Screenshot: the final rendered water surface]