CommandBuffer has actually been available since Unity 5, but because most display-effect needs in day-to-day development can be covered by an existing plugin, and designers tend to follow the mainstream anyway, there was never a real need to build anything with it, so I had never paid attention to it.
Let's start by downloading the official samples. Three examples are provided; the most interesting is the second one, DeferredCustomLights, which also makes for a nice review of the Unity Built-In Shader API. This fake-lighting example covers drawing simple geometry, the fake lighting itself, and the target rendering path:
The fake lighting uses a very common rendering idea: the position and volume of a 3D proxy object in world space bound the area to be rendered, then a screen-space reverse calculation recovers the world position of the actual geometry behind each pixel, and lighting is computed again from that. Some dynamic decal plugins use exactly this method. But leave that aside for now; let's see how the sample builds its rendering flow.
PS: I once used Unity's built-in Projector to display a Blizzard-style skill-range indicator, and its performance was appalling.
This is the kind of performance you get with it:
First, let's pull up the camera lifecycle events into which a CommandBuffer can be injected:
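The injection points are simply the members of the CameraEvent enum in UnityEngine.Rendering. A minimal sketch to dump whatever your Unity version actually exposes (the ListCameraEvents class name is just for illustration):

using System;
using UnityEngine;
using UnityEngine.Rendering;

// Minimal sketch (class name is mine): print every CameraEvent injection point
// this Unity version exposes, e.g. AfterGBuffer, BeforeLighting, AfterLighting...
public class ListCameraEvents : MonoBehaviour
{
    void Start()
    {
        foreach (CameraEvent evt in Enum.GetValues(typeof(CameraEvent)))
            Debug.Log(evt);
    }
}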
I have worked with mobile, PC -> HTC Vive, and Microsoft HoloLens, so I know mobile hardware is all over the place: whether something is supported depends not only on the API version but also on the GPU. For example, from the iOS docs:
Apple GPU Hardware
Together, the A7, A8, A9, A10, and A11 GPUs create a new generation of graphics hardware that support both Metal and OpenGL ES 3.0. To get the most out of a 3D, graphics-dominated app running on the these GPUs, use Metal. Metal provides extremely low-overhead access to these GPUs, enabling incredibly high performance for your sophisticated graphics rendering and computational tasks. Metal eliminates many performance bottlenecks—such as costly state validation—that are found in traditional graphics APIs. Both Metal and OpenGL ES 3.0 incorporate many new features, such as multiple render targets and transform feedback, that have not been available on mobile processors before. This means that advanced rendering techniques that have previously been available only on desktop machines, such as deferred rendering, can now be used in iOS apps. Refer to the Metal Programming Guide for more information about what features are visible to Metal apps.
Android goes without saying. So across the different rendering paths, the only events they all share start from the skybox step onward; chances are you will end up implementing things for both rendering paths...
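As a sketch of what that means in practice: the G-buffer and lighting events only exist in deferred, so a forward camera needs a different hook. Assuming we branch on Camera.actualRenderingPath (the event choices below are illustrative, not taken from the sample):

using UnityEngine;
using UnityEngine.Rendering;

// Hedged sketch: attach the same CommandBuffer at a path-appropriate event.
public class AttachByRenderingPath : MonoBehaviour
{
    void Start()
    {
        var cam = Camera.main;
        var cmd = new CommandBuffer { name = "PathDependentEffect" };
        // ... fill cmd with Blit / DrawMesh calls here ...

        if (cam.actualRenderingPath == RenderingPath.DeferredShading)
            cam.AddCommandBuffer(CameraEvent.AfterLighting, cmd);       // deferred-only stage
        else
            cam.AddCommandBuffer(CameraEvent.AfterForwardOpaque, cmd);  // forward counterpart
    }
}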
Creating a CommandBuffer (code taken from the official sample, heavily trimmed) is very simple:
buf.m_AfterLighting = new CommandBuffer();
buf.m_AfterLighting.name = "Deferred custom lights";
cam.AddCommandBuffer(CameraEvent.AfterLighting, buf.m_AfterLighting);
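A buffer added this way stays registered on the camera until it is removed, so it should be cleaned up when the effect goes away. A minimal sketch, assuming the same cam/buf fields as above:

// Remove the buffer when the effect is disabled; otherwise the camera keeps
// executing it every frame even after the lights are gone.
void OnDisable()
{
    if (buf != null && buf.m_AfterLighting != null)
        cam.RemoveCommandBuffer(CameraEvent.AfterLighting, buf.m_AfterLighting);
}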
Since this is fake lighting, it just needs to be rendered additively after the built-in lighting has finished. The code below then refreshes the various parameters so that each light's position and data can be updated:
// construct command buffer to draw lights and compute illumination on the scene
foreach (var o in system.m_Lights)
{
    // light parameters we'll use in the shader
    param.x = o.m_TubeLength;
    param.y = o.m_Size;
    param.z = 1.0f / (o.m_Range * o.m_Range);
    param.w = (float)o.m_Kind;
    buf.m_AfterLighting.SetGlobalVector(propParams, param);

    // light color
    buf.m_AfterLighting.SetGlobalColor(propColor, o.GetLinearColor());

    // draw sphere that covers light area, with shader
    // pass that computes illumination on the scene
    var scale = Vector3.one * o.m_Range * 2.0f;
    trs = Matrix4x4.TRS(o.transform.position, o.transform.rotation, scale);
    buf.m_AfterLighting.DrawMesh(m_SphereMesh, trs, m_LightMaterial, 0, 0);
}
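Note that this rebuild runs every frame so the lights can move, which means the buffer has to be cleared before it is refilled; otherwise each frame appends another copy of the draw calls. A minimal sketch of the pattern (same fields as above):

// Clear the previously recorded commands, then record this frame's version.
buf.m_AfterLighting.Clear();
foreach (var o in system.m_Lights)
{
    // ... SetGlobalVector / SetGlobalColor / DrawMesh exactly as in the loop above ...
}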
And that is a complete rendering pass. That is really about all there is to say about CommandBuffer... that's it...
There honestly isn't much more to it: it provides plenty of Draw and Blit methods and lets you inject work at each stage of the camera lifecycle. You can use it to replace regular post-processing, or to store intermediate results and reuse them across stages. It is a simple but powerful feature, and what you get out of it depends on who is using it... For instance, Unity's built-in FrameDebugger: if you saved an image of every lifecycle stage, you could pretty much build one yourself...
Let me give it a try. First, create a few CommandBuffers and attach them at different lifecycle events:
using UnityEngine;
using UnityEngine.Rendering;

public class CommandBufferTest : MonoBehaviour
{
    public Shader imageShader;

    void Start()
    {
        var cam = Camera.main;
        var mat = new Material(imageShader);
        {
            var tex0 = Shader.PropertyToID("GetTex0");
            CommandBuffer cmd1 = new CommandBuffer();
            cmd1.name = "GetTex0";
            cmd1.GetTemporaryRT(tex0, Screen.width + 1, Screen.height + 1, 0, FilterMode.Bilinear, RenderTextureFormat.ARGB32);    // temporary RTs requested with the same size get the same underlying RT...
            cmd1.Blit(BuiltinRenderTextureType.GBuffer0, tex0);
            cmd1.SetGlobalTexture(Shader.PropertyToID("_Tex0"), tex0);
            cmd1.ReleaseTemporaryRT(tex0);
            cam.AddCommandBuffer(CameraEvent.AfterGBuffer, cmd1);
        }
        {
            var tex1 = Shader.PropertyToID("GetTex1");
            CommandBuffer cmd2 = new CommandBuffer();
            cmd2.name = "GetTex1";
            cmd2.GetTemporaryRT(tex1, Screen.width + 2, Screen.height + 2, 0, FilterMode.Bilinear, RenderTextureFormat.ARGB32);    // temporary RTs requested with the same size get the same underlying RT...
            cmd2.Blit(BuiltinRenderTextureType.CameraTarget, tex1);
            cmd2.SetGlobalTexture(Shader.PropertyToID("_Tex1"), tex1);
            cmd2.ReleaseTemporaryRT(tex1);
            cam.AddCommandBuffer(CameraEvent.BeforeLighting, cmd2);
        }
        {
            CommandBuffer cmd3 = new CommandBuffer();
            cmd3.name = "_PostEffect";
            cmd3.Blit(BuiltinRenderTextureType.CurrentActive, BuiltinRenderTextureType.CameraTarget, mat);
            cam.AddCommandBuffer(CameraEvent.AfterImageEffects, cmd3);
        }
    }
}
cmd1 grabs the GBuffer0 texture after the G-buffer pass finishes, and cmd2 grabs the current output just before lighting; both are written into the global shader variables _Tex0 and _Tex1. cmd3 then samples those two textures during the image-effects stage and outputs them to the screen, displaying the output of different lifecycle stages just like the FrameDebugger does. The shader is below: GBuffer0 is rendered on the left half, the pre-lighting output on the right half:
sampler2D _MainTex;
sampler2D _Tex0;
sampler2D _Tex1;

fixed4 frag (v2f i) : SV_Target
{
    if (i.uv.x < 0.5)
    {
        return float4(tex2D(_Tex0, float2(i.uv.x * 2.0, i.uv.y)).rgb, 1);
    }
    else
    {
        return float4(tex2D(_Tex1, float2((i.uv.x - 0.5) * 2.0, i.uv.y)).rgb, 1);
    }
}
The original scene and the resulting output:
That is information captured across lifecycle stages, something a shader alone cannot do: even though the G-buffer can be sampled, the pre-lighting image simply isn't accessible. Once you realize this, the usefulness becomes obvious. Opening the FrameDebugger shows:
The GBuffer0 I grabbed here is not what the FrameDebugger shows on the right: the debugger's GBuffer0 is the opposite of what I captured, while its GBuffer1 is what I actually got. It looks like BuiltinRenderTextureType.GBuffer0 and BuiltinRenderTextureType.GBuffer1 are swapped somewhere......
Either way, capturing the pre-lighting image works correctly.
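If you want to check the swap yourself, the simplest test is to grab both targets into two globals and view them in the same split-screen shader; a quick sketch extending the cmd1 block above (the texture names are mine, and if the pooled RTs alias as noted earlier, nudge the sizes apart):

// Capture GBuffer0 and GBuffer1 side by side to see which one really holds what.
var texA = Shader.PropertyToID("GetTexA");
var texB = Shader.PropertyToID("GetTexB");
cmd1.GetTemporaryRT(texA, Screen.width, Screen.height, 0, FilterMode.Bilinear, RenderTextureFormat.ARGB32);
cmd1.GetTemporaryRT(texB, Screen.width, Screen.height, 0, FilterMode.Bilinear, RenderTextureFormat.ARGB32);
cmd1.Blit(BuiltinRenderTextureType.GBuffer0, texA);
cmd1.Blit(BuiltinRenderTextureType.GBuffer1, texB);
cmd1.SetGlobalTexture("_Tex0", texA);
cmd1.SetGlobalTexture("_Tex1", texB);
cmd1.ReleaseTemporaryRT(texA);
cmd1.ReleaseTemporaryRT(texB);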
Some notes on a few of the functions:
1. CommandBuffer.Blit changes the current RenderTarget
CommandBuffer.Blit(source, dest, mat)    // changes the active RenderTarget
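So if more commands follow a Blit, set the target back explicitly; a minimal sketch (cmd, tempRT, quadMesh and material are placeholders):

// After the Blit the active render target is tempRT, so later draws would land there;
// restore the camera target before issuing them.
cmd.Blit(BuiltinRenderTextureType.CameraTarget, tempRT);
cmd.SetRenderTarget(BuiltinRenderTextureType.CameraTarget);
cmd.DrawMesh(quadMesh, Matrix4x4.identity, material);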
2. How to do a GL-style full-screen draw with a CommandBuffer:
// GL
Graphics.SetRenderTarget(destination);
GL.PushMatrix();
GL.LoadOrtho();
GL.Begin(GL.QUADS);
{
    // Quad
    ...
}
GL.End();

// CommandBuffer -- if this runs on OpenGL, the y axis needs to be flipped here
quad = new Mesh();
quad.vertices = new Vector3[]
{
    new Vector3(-1f, y1, 0f),    // Bottom-Left
    new Vector3(-1f, y2, 0f),    // Upper-Left
    new Vector3( 1f, y2, 0f),    // Upper-Right
    new Vector3( 1f, y1, 0f)     // Bottom-Right
};
quad.uv = new Vector2[]
{
    new Vector2(0f, 0f),
    new Vector2(0f, 1f),
    new Vector2(1f, 1f),
    new Vector2(1f, 0f)
};
quad.colors = new Color[] { ... };
quad.triangles = new int[] { 0, 1, 2, 2, 3, 0 };

CommandBuffer.SetRenderTarget(...)
CommandBuffer.DrawMesh(quad, Matrix4x4.identity, ...);
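About that y flip: one way to decide at runtime is to check SystemInfo.graphicsUVStartsAtTop, which is false on GL-style APIs; a hedged sketch (variable names are mine):

// Per the note above, the quad must be flipped vertically on OpenGL-style APIs,
// where the UV origin is at the bottom instead of the top.
bool flipY = !SystemInfo.graphicsUVStartsAtTop;
float y1 = flipY ?  1f : -1f;   // bottom edge of the quad
float y2 = flipY ? -1f :  1f;   // top edge of the quad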
=================================================================================
Appending some of the calculations involved; they may come in handy elsewhere later:
1. Simple lighting calculation (the object itself is the light source)
struct v2f
{
    float4 pos : SV_POSITION;
    float4 uv  : TEXCOORD0;
    float3 ray : TEXCOORD1;
};

// Common lighting data calculation (direction, attenuation, ...)
void DeferredCalculateLightParams(
    unity_v2f_deferred_instanced i,
    out float3 outWorldPos,
    out float2 outUV,
    out half3 outLightDir,
    out float outAtten,
    out float outFadeDist)
{
    i.ray = i.ray * (_ProjectionParams.z / i.ray.z);
    float2 uv = i.uv.xy / i.uv.w;

    // read depth and reconstruct world position
    float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);
    depth = Linear01Depth(depth);
    float4 vpos = float4(i.ray * depth, 1);
    float3 wpos = mul(unity_CameraToWorld, vpos).xyz;

    float fadeDist = UnityComputeShadowFadeDistance(wpos, vpos.z);

    float3 lightPos = float3(unity_ObjectToWorld[0][3], unity_ObjectToWorld[1][3], unity_ObjectToWorld[2][3]);
    float3 tolight = wpos - lightPos;
    half3 lightDir = -normalize(tolight);

    float att = dot(tolight, tolight) * (reciprocal of range squared);    // range = the light's maximum illumination distance
    float atten = tex2D(_LightTextureB0, att.rr).UNITY_ATTEN_CHANNEL;
    atten *= UnityDeferredComputeShadow(tolight, fadeDist, uv);

    outWorldPos = wpos;
    outUV = uv;
    outLightDir = lightDir;
    outAtten = atten;
    outFadeDist = fadeDist;
}
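For completeness, a hedged sketch of how a fragment pass might consume these outputs to produce the additive fake light (the G-buffer sampler names are the standard deferred ones; _LightColor mirrors the SetGlobalColor call earlier and is an assumed name, and the real sample's math may differ):

// Sketch only: diffuse-only lighting, meant for an additive (Blend One One) pass.
sampler2D _CameraGBufferTexture0;   // rgb = albedo
sampler2D _CameraGBufferTexture2;   // rgb = world-space normal packed into 0..1
half4 _LightColor;                  // assumed name, fed by SetGlobalColor above

half4 frag (unity_v2f_deferred_instanced i) : SV_Target
{
    float3 wpos;  float2 uv;  half3 lightDir;  float atten, fadeDist;
    DeferredCalculateLightParams(i, wpos, uv, lightDir, atten, fadeDist);

    half3 albedo = tex2D(_CameraGBufferTexture0, uv).rgb;
    half3 normal = tex2D(_CameraGBufferTexture2, uv).rgb * 2 - 1;
    half  ndotl  = max(0, dot(normalize(normal), lightDir));

    return half4(albedo * _LightColor.rgb * (ndotl * atten), 1);
}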