[Translation] Temporal supersampling and antialiasing

Original article: https://bartwronski.com/2014/03/15/temporal-supersampling-and-antialiasing/

Aliasing problem

Before I address temporal supersampling, just a quick reminder of what aliasing is.

Aliasing is a problem that is very well defined in signal theory. According to the general sampling theorem we need our signal spectrum to contain only frequencies lower than the Nyquist frequency. If we don't satisfy that (and when rasterizing triangles we never do, as a triangle edge has an infinite frequency spectrum, a step-like response), some frequencies will appear in the final signal (reconstructed from samples) that were not in the original signal. Visual aliasing can have different appearances: it can show up as regular patterns (so-called moiré), noise or flickering.
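To spell out the criterion the paragraph refers to (a standard statement of the sampling theorem, nothing specific to this post): the signal can be reconstructed correctly from its samples only if its highest frequency stays below the Nyquist frequency, i.e. half the sampling rate,

    \[ f_{\max} < f_{\text{Nyquist}} = \frac{f_s}{2} \]

Any spectral content above that limit folds back ("aliases") into lower frequencies after reconstruction.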

Classic supersampling

Classic supersampling is a technique that is extremely widely used by the CGI industry. For every target image fragment we perform sampling multiple times at much higher frequencies (for example by tracing multiple rays per single pixel, or by shading fragments multiple times at various positions that cover the same on-screen pixel) and then perform signal downsampling/filtering – for example by averaging. There are various approaches to even the simplest supersampling (I talked about this in one of my previous blog posts), but the main problem with it is the associated cost – N times supersampling usually means N times the basic shading cost (at least for some pipeline stages) and sometimes additionally N times the basic memory cost. Even simple, hardware-accelerated techniques like MSAA, which evaluate only some parts of the pipeline (pixel coverage) at a higher frequency and don't provide as good results, have quite a big cost on consoles.
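As a minimal sketch of the downsampling/filtering step mentioned above, here is a plain box-filter average over an N x N supersampled buffer; the buffer layout and function name are assumptions for illustration, not any particular engine's API:

    #include <vector>

    struct Color { float r, g, b; };

    // Averages N x N sub-samples of a supersampled buffer down to one output pixel.
    // 'hiRes' is (outWidth * N) x (outHeight * N) texels, stored row by row.
    std::vector<Color> resolveSupersampled(const std::vector<Color>& hiRes,
                                           int outWidth, int outHeight, int N)
    {
        std::vector<Color> out(outWidth * outHeight);
        const int hiWidth = outWidth * N;
        for (int y = 0; y < outHeight; ++y)
        {
            for (int x = 0; x < outWidth; ++x)
            {
                Color sum = { 0.0f, 0.0f, 0.0f };
                for (int sy = 0; sy < N; ++sy)
                    for (int sx = 0; sx < N; ++sx)
                    {
                        const Color& c = hiRes[(y * N + sy) * hiWidth + (x * N + sx)];
                        sum.r += c.r; sum.g += c.g; sum.b += c.b;
                    }
                const float inv = 1.0f / (N * N);
                out[y * outWidth + x] = { sum.r * inv, sum.g * inv, sum.b * inv };
            }
        }
        return out;
    }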

But even if supersampling is often an impractical technique, its temporal variation can be applied at almost zero cost.

Temporal supersampling theory

So what is temporal supersampling? Temporal supersampling techniques are based on a simple observation: from frame to frame, most of the on-screen content does not change. Even with complex animations we see that many fragments just change their position, but apart from that they usually correspond to at least some other fragments in previous and future frames.

Based on this observation, if we know the precise texel position in the previous frame (and we often do! Using the motion vectors that are already computed for per-object motion blur, for instance), we can distribute the multiple-fragment-evaluation component of supersampling between multiple frames.
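A minimal sketch of that reprojection step, assuming a per-pixel motion vector buffer that stores how far the surface point moved in UV units since the previous frame; the struct and function names are illustrative:

    struct Float2 { float x, y; };

    // Motion vector convention assumed here: motion = currentUV - previousUV,
    // i.e. how far the surface point moved on screen since the last frame.
    // The history buffer is then sampled (bilinearly) at the returned position.
    Float2 reprojectToPreviousFrame(Float2 currentUV, Float2 motionUV)
    {
        return { currentUV.x - motionUV.x, currentUV.y - motionUV.y };
    }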

What is even more exciting is that this technique can be applied to any pass – to your final image, to AO, to screen-space reflections and others – to either filter the signal or increase the number of samples taken. I will first describe how it can be used to supersample the final image and achieve much better AA, and then give an example of using it to double or triple the number of samples and the quality of effects like SSAO.

Temporal antialiasing

I have no idea which game was the first to use temporal supersampling AA, but Tiago Sousa from Crytek had a great presentation at Siggraph 2011 on that topic and its usage in Crysis 2 [1]. Crytek proposed applying a sub-pixel jitter to the final MVP transformation matrix that alternates every frame – and combining two frames in a post-effect-style pass. This way they were able to double the effective sampling resolution at almost no cost!

Too good to be true?

Yes, the result of such a simple implementation looks perfect on still screenshots (and you can implement it in just a couple of hours!***), but it breaks in motion. Pixels of the previous frame that correspond to the current frame were at different positions. This one can be easily fixed by using motion vectors, but sometimes the information you are looking for was occluded. To address that, you cannot rely on depth (as the whole point of this technique is having extra coverage and edge information from the samples missing in the current frame!), so Crytek proposed relying on a comparison of motion vector magnitudes to reject mismatching pixels.

***Yeah, I really mean a maximum of one working day if you have a 3D-developer-friendly engine. Multiply your MVP matrix by a simple translation matrix that jitters between (-0.5 / w, -0.5 / h) and (0.5 / w, 0.5 / h) every other frame, plus write a separate pass that combines frame(n) and frame(n-1) together and outputs the result.
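A minimal sketch of that alternating jitter, assuming row-major 4x4 matrices with a column-vector convention; Mat4, makeJitterMatrix and jitterForFrame are illustrative names rather than engine API, and the exact clip-space scale of a half-pixel offset depends on your NDC/viewport conventions:

    #include <array>
    #include <cstdint>

    using Mat4 = std::array<float, 16>; // row-major, m[row * 4 + col]

    // Translation applied after the projection matrix, offsetting clip-space xy.
    Mat4 makeJitterMatrix(float tx, float ty)
    {
        return { 1, 0, 0, tx,
                 0, 1, 0, ty,
                 0, 0, 1, 0,
                 0, 0, 0, 1 };
    }

    // Alternates between (-0.5 / w, -0.5 / h) and (0.5 / w, 0.5 / h) every
    // other frame, as in the footnote above.
    Mat4 jitterForFrame(std::uint64_t frameIndex, float w, float h)
    {
        const float s = (frameIndex & 1) ? 0.5f : -0.5f;
        return makeJitterMatrix(s / w, s / h);
    }

    // The jittered transform is then jitter * projection * view * model (or the
    // equivalent order in your engine), and a separate post pass averages
    // frame(n) with the reprojected frame(n-1).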

Usage in Assassin’s Creed 4 – motivation

For a long time we relied on FXAA (aided by depth-based edge detection) as a simple AA technique during our game development. This simple technique usually works "ok" with a static image and improves its quality, but it breaks in motion, as edge estimations and blurring factors change from frame to frame. While our motion blur (a simple and efficient implementation that used actual motion vectors for every skinned and moving object) helped to smooth the look of edges for objects moving quite fast (a small motion vector dilation helped even more), it didn't do anything for calm animations and subpixel detail. And our game was full of them – just look at all the ropes tied to sails, the nicely tessellated wooden planks and the dense foliage in the jungles!

Unfortunately motion blur did nothing to help the antialiasing of such slowly moving objects, and FXAA added some nasty noise during movement, especially on grass. We didn't really have time to try so-called "wire AA", and MSAA was out of our budget, so we decided to try using temporal antialiasing techniques.

I would like to thank here especially Benjamin Goldstein, our Technical Lead, with whom I had great pleasure working on trying and prototyping various temporal AA techniques very late in production.

Assassin’s Creed 4 Xbox One / Playstation 4 AA

As a first iteration, we started with the single-frame variation of morphological SMAA by Jimenez et al. [2]. Even in its most basic settings it was a definitely better-quality alternative to FXAA (at a bit higher cost, but thanks to the much bigger computing power of the next-gen consoles it stayed within almost the same budget as FXAA on current-gen consoles). There was less noise, there were fewer artifacts and the morphological edge reconstruction was much better, but obviously it wasn't able to do anything to reconstruct all this subpixel detail.

So the next step was to try to plug in the temporal AA component. A couple of hours of work and voila – we had much better AA. Just look at the following pictures.

Pretty amazing, huh?

Sure, but at first this was the result only for a static image – and this is where your AA problems start (not end!).

Getting motion vectors right

Ok, so we had some subtle and, we thought, "precise" motion blur, so getting motion vectors that allow proper reprojection of moving objects should be easy?

Well, it wasn't. We were doing it right for most of the objects and motion blur was ok – you can't really notice the lack of motion blur or slightly wrong motion blur on some specific objects. However, for temporal AA you need motion vectors that are proper and pixel-perfect for all of your objects!

Otherwise you will get huge ghosting. If you try to mask out those objects and not apply temporal AA to them at all, you will get visible jittering and shaking from the sub-pixel camera position changes.

Let me list all the problems with motion vectors that we faced and some comments on whether we solved them or not:

  • Cloth and soft-body physical objects. From our physics simulation for cloth and soft bodies, which was very fast and widely used in the game (characters, sails), we got full vertex information in world space. Object matrices were set to just identity. Therefore, such objects had zero motion vectors (and only motion from the camera was applied to them). We needed to extract this information from the engine and physics – fortunately it was relatively easy, as it was already used for bounding box calculations. We fixed the ghosting from moving soft-body and cloth objects, but didn't have motion vectors from the movement itself – we didn't want to completely change the pipeline to GPU indirections and subtracting positions from two vertex buffers. It was ok-ish, as they wouldn't move very abruptly and we didn't see artifacts from it.
  • Some "custom" object types that had custom matrices, and the fact that we interpreted the data incorrectly. The same situation as with cloth also existed for other dynamic objects. We got a custom motion vector debugging rendering mode working, and fixing all those bugs was just a matter of a couple of days in total.
  • Ocean. It was not writing to the G-buffer. Instead of seeing motion vectors for the ocean surface, we had proper information only for the ocean floor or the "sky" behind it (with a very deep ocean there was no bottom surface at all). The fix there was to overwrite some G-buffer information like depth and motion vectors. However, we still didn't store previous-frame simulation results and didn't try to use them, so in theory you could see some ghosting on big and fast waves during a storm. It wasn't a very big problem for us and no testers ever reported it.
  • Procedurally moving vegetation. We had some vertex-noise-based, artist-authored vegetation movement and, again, the difference between the vertex position values of two frames wasn't calculated to produce proper motion vectors. This is the single biggest visible artifact in the game from the temporal AA technique, and we simply didn't have the time to modify our material shader compiler / generator and couldn't apply any significant data changes in a patch (we improved AA in our first patch). The proper solution here would be to automatically replicate all the artist-created shader code that calculates the output local vertex position if it relies on any input data that changes between frames, like "time" or the closest character entity position (this one was used to simulate collision with vegetation), pass it through interpolators (perspective correction!), subtract it and have proper motion vectors (see the sketch after this list). Artifacts like over-blurred leaves are sometimes visible in the final game and I'm not very proud of it – although maybe it is the usual programmer obsession.
  • Objects being teleported on skinning. We had some checks for entities and meshes being teleported, but in some isolated and custom cases objects were teleported using skinning – it would be impractical to analyze the whole skeleton looking for temporal discontinuities. We asked gameplay and animation programmers to mark them on such a frame and quickly fixed all the remaining bugs.
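A minimal sketch of the per-vertex motion vector computation referred to in the vegetation item above, assuming we can evaluate the animated position with both the current and the previous frame's inputs (time, wind, bone matrices) and project each with the matching view-projection matrix; the types, names and the y-flip convention are illustrative, and in a real shader the previous position would be passed through interpolators and divided per pixel:

    struct Float2 { float x, y; };
    struct Float4 { float x, y, z, w; };

    // Projects a clip-space position to [0, 1] UV space (D3D-style y flip assumed).
    static Float2 clipToUV(Float4 clip)
    {
        return { (clip.x / clip.w) *  0.5f + 0.5f,
                 (clip.y / clip.w) * -0.5f + 0.5f };
    }

    // Motion vector = where the vertex is now minus where it was last frame,
    // both evaluated with their own frame's animation inputs and matrices.
    Float2 computeMotionVector(Float4 currentClipPos, Float4 previousClipPos)
    {
        const Float2 curr = clipToUV(currentClipPos);
        const Float2 prev = clipToUV(previousClipPos);
        return { curr.x - prev.x, curr.y - prev.y };
    }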

Problems with the motion-vector-based rejection algorithm

Ok, we spent 1-2 weeks fixing our motion vectors (and motion blur also got much better! 😊), but in the meanwhile we realized that the approach proposed by Crytek and used in SMAA for motion rejection is definitely far from perfect. I would divide the problems into two categories.

Edge cases

It was something we didn't really expect, but temporal AA can break if a menu pops up quickly, you pause the game, you exit to the console dashboard (but the game remains visible), the camera teleports or some post-effect immediately kicks in. You will see some weird transition frame. We had to address each case separately – by disabling the jitter and the frame combination on such a frame. Add another week or two to your original plan of enabling temporal AA to find, test and fix all such issues…

Wrong rejection technique

This is my actual biggest problem with the naive, SMAA-like way of rejecting blending by comparing the movement of objects.

First of all, we had a very hard time adjusting the "magic value" for the rejection threshold, and 8-bit motion vectors didn't help. Objects were either ghosting or shaking.

Secondly, there were huge problems on, for example, the ground and shadows – the shadow itself was ghosting – well, there is no motion vector for a shadow or any other animated texture, right? 😊 It was the same with explosions, particles and slowly falling leaves (which we simulated as particle systems).

For both of those issues, we came up with a simple workaround – we were not only comparing the similarity of the motion of objects, but on top of it added a threshold value – if an object moved faster than around ~2 pixels per frame in the current or previous frame, do not blend it at all! We found such a value much easier to tweak and to work with. It solved the issue of shadows and visible ghosting.
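A minimal sketch of that rejection logic, assuming motion vector lengths are already converted to pixels per frame and a similarity factor comes from comparing the two vectors; the names, the 50/50 base blend and the threshold constant are illustrative, not the shipped shader:

    struct Float3 { float r, g, b; };

    // Returns the blend weight of the history (previous frame) sample.
    float historyWeight(float currMotionPx, float prevMotionPx, float motionSimilarity)
    {
        const float kMaxMotionPx = 2.0f; // the "~2 pixels per frame" threshold

        // Hard reject: anything moving faster than the threshold in either
        // frame keeps only the current frame (no temporal blend at all).
        if (currMotionPx > kMaxMotionPx || prevMotionPx > kMaxMotionPx)
            return 0.0f;

        // Otherwise blend the two jittered frames, scaled down when the
        // motion of the two frames disagrees (motionSimilarity in [0, 1]).
        return 0.5f * motionSimilarity;
    }

    Float3 resolve(Float3 current, Float3 history, float w)
    {
        return { current.r * (1.0f - w) + history.r * w,
                 current.g * (1.0f - w) + history.g * w,
                 current.b * (1.0f - w) + history.b * w };
    }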

We also increased motion blur to reduce any potential visible shaking.

Unfortunately, it didn't do anything for transparencies or textures animated over time; they were blended and over-blurred – but as a cool side effect we got free antialiasing of rain drops and rain ripples, and our art director preferred such a soft, "dreamy" result.

Recently Tiago Sousa in his Siggraph 2013 talk proposed addressing this issue by changing the metric to a color-based one, and we will investigate it in the near future [3].

Temporal supersampling of different effects – SSAO

I wanted to mention another use of temporal supersampling that got into the final game on the next-gen consoles and that I really liked. I got inspired by Matt Swoboda's presentation [4] and its mention of distributing AO calculation sampling patterns between multiple frames. For our SSAO we had 3 different sampling patterns (spiral-based) that changed (rotated) every frame, and we combined them just before blurring the SSAO results. This way we effectively increased the number of samples 3 times, needed less blur and got much, much better AO quality and performance for the cost of storing just two additional history textures. Unfortunately I do not have screenshots to prove that and you have to take my word for it, but I will try to update my post later.
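A minimal sketch of a per-frame rotated spiral kernel in the spirit of the paragraph above; the sample count, spiral shape and 3-frame rotation cycle are assumptions for illustration, not the game's actual kernel:

    #include <cmath>
    #include <vector>

    struct Float2 { float x, y; };

    // Spiral sampling kernel rotated by a per-frame angle so that three
    // consecutive frames together cover three times as many directions.
    std::vector<Float2> makeSpiralKernel(int sampleCount, int frameIndex)
    {
        const float pi = 3.14159265358979f;
        const float frameRotation = (frameIndex % 3) * (2.0f * pi / 3.0f); // 0, 120, 240 degrees

        std::vector<Float2> samples(sampleCount);
        for (int i = 0; i < sampleCount; ++i)
        {
            const float t = (i + 1.0f) / sampleCount;           // radius grows along the spiral
            const float angle = frameRotation + t * 4.0f * pi;  // two full turns
            samples[i] = { std::cos(angle) * t, std::sin(angle) * t };
        }
        return samples;
    }

    // The SSAO results of the two previous frames are reprojected and combined
    // with the current one just before the blur, tripling the effective sample count.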

For the rejection technique I was relying on a simple depth comparison – we do not really care about SSAO on geometric foreground object edges and depth discontinuities, as by the AO definition there should be almost none there. The only visible problem was when an SSAO caster moved very fast along a static SSAO receiver – there was a visible trail lagging in time – but this was more of an artificial problem I investigated, not a serious in-game problem/situation. Unlike the temporal antialiasing, putting this in the game (after having proper motion vectors) and testing it took under a day and there were no real problems, so I really recommend using such techniques – for SSAO, screen-space reflections and many more.
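And a minimal sketch of the depth-based history rejection used for that accumulation, assuming linear depth is available for both the current pixel and the reprojected history sample; the relative tolerance is an illustrative value, not the shipped one:

    #include <cmath>

    // Accept the reprojected history sample only if its depth matches the
    // current pixel closely enough; otherwise fall back to the current
    // frame's AO alone for this pixel.
    bool acceptHistorySample(float currentLinearDepth, float historyLinearDepth)
    {
        const float kRelativeTolerance = 0.05f; // tune per game
        const float diff = std::fabs(currentLinearDepth - historyLinearDepth);
        return diff <= kRelativeTolerance * currentLinearDepth;
    }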

Summary

Temporal supersampling is a great technique that will improve the final look and feel of your game a lot, but don't expect that you can do it in just a couple of days. Don't wait till the end of the project "because it is only a post-effect, it should be simple to add" – it is not! Take weeks or even months to put it in, have testers report all the problematic cases and then properly and iteratively fix all the issues. Have proper and optimal motion vectors, think about how to write them for artist-authored materials and how to batch your objects in passes to avoid using an extra MRT when you don't need to write them (static objects and camera-only motion vectors). Look at the differences in quality between 16-bit and 8-bit motion vectors (or maybe an R11G11B10 format with some other G-buffer property in the B channel?), test all the cases and simply take your time to do it all properly and early in production, while for example changing the skeleton calculation a bit or caching vertex skinning information (having a "vertex history") is still an acceptable option.

References

[1] http://iryoku.com/aacourse/ 

[2] http://www.iryoku.com/smaa/

[3] http://advances.realtimerendering.com/s2013/index.html

[4] http://directtovideo.wordpress.com/2012/03/15/get-my-slides-from-gdc2012/
