Hello everyone! In this post we study the Chrome -> webgpu-samples -> fractalCube example.
Previous post:
WebGPU Learning (8): The "texturedCube" Example
Next post:
WebGPU Learning (10): Introducing "Particle Effects on the GPU"
The final rendered result:
This example shows how to use the previous frame's rendering result as the texture for the next frame.
Compared with the "texturedCube" example, the texture here does not come from an image; it comes from the previous frame's rendering result.
Now let's open fractalCube.ts and walk through the relevant code:
This part is similar to "passing the vertices' uv data" in the "texturedCube" example, so we won't analyze it again here.
Because the swap chain holds the previous frame's rendering result, it serves as the source of the next frame's texture, so its usage needs the additional GPUTextureUsage.COPY_SRC flag:
```typescript
const swapChain = context.configureSwapChain({
  device,
  format: "bgra8unorm",
  usage: GPUTextureUsage.OUTPUT_ATTACHMENT | GPUTextureUsage.COPY_SRC,
});
```
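As an aside, GPUTextureUsage members are bit flags, so `|` simply combines capabilities. A minimal plain-TypeScript sketch (the numeric values below are illustrative stand-ins, not the spec's real constants):

```typescript
// GPUTextureUsage members are bit flags; OR-ing them combines capabilities.
// The numeric values here are illustrative stand-ins for this sketch.
const OUTPUT_ATTACHMENT = 0x10;
const COPY_SRC = 0x01;

const usage = OUTPUT_ATTACHMENT | COPY_SRC;

// Check a capability with a bitwise AND:
console.log((usage & COPY_SRC) !== 0); // prints "true": usable as a copy source
```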
The empty texture, the sampler, and the uniform bind group are set up as follows:
```typescript
const cubeTexture = device.createTexture({
  size: { width: canvas.width, height: canvas.height, depth: 1 },
  format: "bgra8unorm",
  usage: GPUTextureUsage.COPY_DST | GPUTextureUsage.SAMPLED,
});

const sampler = device.createSampler({
  magFilter: "linear",
  minFilter: "linear",
});

const uniformBindGroup = device.createBindGroup({
  layout: bindGroupLayout,
  bindings: [
    ...
    {
      binding: 1,
      resource: sampler,
    },
    {
      binding: 2,
      resource: cubeTexture.createView(),
    }],
});
```
In each frame, we:
- draw the textured cube;
- copy the rendering result into the texture.

The relevant code is as follows:
```typescript
return function frame() {
  const swapChainTexture = swapChain.getCurrentTexture();
  renderPassDescriptor.colorAttachments[0].attachment = swapChainTexture.createView();

  const commandEncoder = device.createCommandEncoder({});
  const passEncoder = commandEncoder.beginRenderPass(renderPassDescriptor);
  ...
  passEncoder.setBindGroup(0, uniformBindGroup);
  ...
  passEncoder.draw(36, 1, 0, 0);
  passEncoder.endPass();

  commandEncoder.copyTextureToTexture({
    texture: swapChainTexture,
  }, {
    texture: cubeTexture,
  }, {
    width: canvas.width,
    height: canvas.height,
    depth: 1,
  });

  device.defaultQueue.submit([commandEncoder.finish()]);
  ...
}
```
Compared with the "texturedCube" example, the vertex shader adds a color attribute:
```typescript
const vertexShaderGLSL = `#version 450
  ...
  layout(location = 1) in vec4 color;
  ...
  layout(location = 0) out vec4 fragColor;
  ...
  void main() {
    ...
    fragColor = color;
    ...
  }`;
```
The fragment shader code is as follows:
```typescript
const fragmentShaderGLSL = `#version 450
  layout(set = 0, binding = 1) uniform sampler mySampler;
  layout(set = 0, binding = 2) uniform texture2D myTexture;

  layout(location = 0) in vec4 fragColor;
  layout(location = 1) in vec2 fragUV;
  layout(location = 0) out vec4 outColor;

  void main() {
    vec4 texColor = texture(sampler2D(myTexture, mySampler), fragUV * 0.8 + 0.1);

    // 1.0 if we're sampling the background
    float f = float(length(texColor.rgb - vec3(0.5, 0.5, 0.5)) < 0.01);

    outColor = mix(texColor, fragColor, f);
  }`;
The sampling call transforms fragUV (fragUV * 0.8 + 0.1); we will analyze this transform when we walk through the render timeline.
Computing f and then calling mix is equivalent to an if test:
```
if (texture color == background color) {
  outColor = fragColor;
} else {
  outColor = texture color;
}
```
The reason for using arithmetic here instead of an actual if statement is to avoid branching: branch-free code keeps the GPU's parallel execution efficient, since divergent branches within a group of threads would otherwise serialize.
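To see that the computed f really selects between the two colors, here is the same logic in plain TypeScript (a sketch mirroring the shader's math; selectColor, length3, and mix are helper names invented for this illustration):

```typescript
type Vec3 = [number, number, number];

const BACKGROUND: Vec3 = [0.5, 0.5, 0.5]; // the sample's clear color

function length3(v: Vec3): number {
  return Math.hypot(v[0], v[1], v[2]);
}

// mix(a, b, f) = a * (1 - f) + b * f, as in GLSL
function mix(a: number, b: number, f: number): number {
  return a * (1 - f) + b * f;
}

// Branchless select, mirroring the shader's f/mix trick
function selectColor(texColor: Vec3, fragColor: Vec3): Vec3 {
  const d: Vec3 = [
    texColor[0] - BACKGROUND[0],
    texColor[1] - BACKGROUND[1],
    texColor[2] - BACKGROUND[2],
  ];
  // f is 1.0 exactly when the sampled texel matches the background color
  const f = length3(d) < 0.01 ? 1 : 0;
  return [
    mix(texColor[0], fragColor[0], f),
    mix(texColor[1], fragColor[1], f),
    mix(texColor[2], fragColor[2], f),
  ];
}
```

So selectColor([0.5, 0.5, 0.5], red) returns red, while any non-background texel passes through unchanged.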
Now let's analyze the render timeline:
In the first frame, the texture is an empty texture whose color equals the background color, so the fragment shader's outColor is always fragColor; every fragment of the cube is therefore colored with fragColor.
The first frame's rendering result is shown below:
After the first frame finishes drawing, the rendering result is copied into the texture.
Let's analyze the fragment shader code that executes:
```typescript
const fragmentShaderGLSL = `#version 450
  layout(set = 0, binding = 1) uniform sampler mySampler;
  layout(set = 0, binding = 2) uniform texture2D myTexture;

  layout(location = 0) in vec4 fragColor;
  layout(location = 1) in vec2 fragUV;
  layout(location = 0) out vec4 outColor;

  void main() {
    vec4 texColor = texture(sampler2D(myTexture, mySampler), fragUV * 0.8 + 0.1);

    // 1.0 if we're sampling the background
    float f = float(length(texColor.rgb - vec3(0.5, 0.5, 0.5)) < 0.01);

    outColor = mix(texColor, fragColor, f);
  }`;
```
The sampled texture region is the red area shown in the figure below:
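That region follows from the coordinate transform in the shader: fragUV * 0.8 + 0.1 maps the unit square [0, 1]² to [0.1, 0.9]², so each face samples a centered copy of the previous frame scaled to 80%. A quick plain-TypeScript check (remapUV is a name made up for this sketch):

```typescript
// fragUV * 0.8 + 0.1 maps [0, 1] to [0.1, 0.9] on each axis,
// i.e. the shader samples the central 80% of the previous frame.
function remapUV(u: number, v: number): [number, number] {
  return [u * 0.8 + 0.1, v * 0.8 + 0.1];
}

console.log(remapUV(0, 0));     // one corner of the face maps to [0.1, 0.1]
console.log(remapUV(0.5, 0.5)); // the center stays at [0.5, 0.5]
console.log(remapUV(1, 1));     // the opposite corner maps to ~[0.9, 0.9]
```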
The second frame's rendering result is shown below:
By analogy, the third frame's rendering result is shown below:
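Since every frame embeds the previous frame scaled to 80%, the k-th nested copy of the scene spans 0.8^k of a cube face, which is why the images keep shrinking toward the center. A small sketch of that geometric progression (innerScale is an invented helper):

```typescript
// The k-th nested copy spans 0.8^k of the cube face (a geometric progression).
function innerScale(k: number): number {
  return Math.pow(0.8, k);
}

for (let k = 0; k <= 3; k++) {
  console.log(`nesting depth ${k}: copy spans ${(innerScale(k) * 100).toFixed(1)}% of the face`);
}
```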