This DirectX 11 sample demonstrates how to implement multisample antialiasing (MSAA) on top of deferred shading. The general idea is to first render the scene geometry to a multisample-enabled g-buffer, then determine which pixels are complex (meaning they contain more than one unique fragment) and should be shaded at sample frequency (i.e., supersampled), and finally perform shading accordingly. This sample implements several approaches to achieve these goals. We can simply use SV_Coverage, or we can check for discontinuities between the samples within a pixel, to determine whether a pixel contains multiple different fragments. We can also shade normal and complex pixels in two separate passes, which results in better thread coherency. Finally, we can determine the number of unique fragments in a complex pixel and perform shading adaptively based on that count. This sample implements 4x MSAA, but the idea can easily be extended to higher sample counts.
MSAA has always been a pain point for deferred shading, since the geometry used for lighting (in this sample, a full-screen quad) is separate from the scene geometry. We therefore lose the benefit of the hardware determining which pixels are edges, as it does in traditional forward MSAA rendering. In the deferred scenario, we must determine which pixels are complex ourselves and shade only those at per-sample frequency, with the least computation and in the least divergent way possible.
The first important task in doing MSAA with deferred shading is to detect complex pixels, which are pixels containing more than one unique fragment. This sample implements two approaches to do this. The first one uses SV_Coverage, which reports, as a bitmask, which of the pixel's samples are covered by the current fragment. The following figure shows an example of SV_Coverage.
If any fragment written to a pixel has a coverage mask other than 1111 (that is, it does not cover all four samples), we can mark the pixel as complex. We store a per-sample flag recording whether the covering fragment's mask was 1111 into the multisampled buffer, then explicitly resolve that buffer so that the lighting pass does not have to loop through each sample to tell whether the pixel is complex.
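As a CPU-side sketch of this logic (not the sample's actual HLSL; all names here are illustrative), the per-fragment test reduces to comparing the 4-bit coverage mask against full coverage:

```python
FULL_COVERAGE = 0b1111  # all four samples of a 4x MSAA pixel

def fragment_marks_complex(sv_coverage: int) -> bool:
    """A fragment that does not cover every sample implies a geometric
    edge crosses the pixel, so the pixel should be marked complex."""
    return (sv_coverage & FULL_COVERAGE) != FULL_COVERAGE

def pixel_is_complex(fragment_masks) -> bool:
    """A pixel is complex if any fragment rasterized into it had
    partial coverage."""
    return any(fragment_marks_complex(m) for m in fragment_masks)
```

On the GPU this test runs per fragment in the g-buffer pass, and the result is what gets stored and resolved so the lighting pass can read a single flag per pixel.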
This approach can very efficiently detect complex pixels, but it does have a drawback of marking some non-edge pixels as complex, since it relies on underlying geometry rather than screen space discontinuities.
Another approach in this sample is to search for discontinuities in normals, colors, and depths between the samples in a pixel; if the discontinuity is larger than a certain threshold, we mark the pixel as complex. This approach has the advantage of producing complex pixels only at actual screen-space edges, but it means we need to loop through the samples of each pixel during detection.
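A minimal sketch of the discontinuity test, again on the CPU for illustration (the threshold values and the depth/normal representation are assumptions, not the sample's exact tunables):

```python
DEPTH_EPS = 0.001   # illustrative depth threshold
NORMAL_EPS = 0.1    # illustrative 1 - cos(angle) threshold

def detect_discontinuity(samples) -> bool:
    """samples: list of (depth, normal) g-buffer samples for one pixel,
    where normal is a unit-length (x, y, z) tuple. The pixel is complex
    when any sample diverges from sample 0 beyond a threshold."""
    d0, n0 = samples[0]
    for d, n in samples[1:]:
        if abs(d - d0) > DEPTH_EPS:
            return True  # depth discontinuity
        dot = n0[0] * n[0] + n0[1] * n[1] + n0[2] * n[2]
        if 1.0 - dot > NORMAL_EPS:
            return True  # normal discontinuity
    return False
```

Note that unlike the SV_Coverage approach, every sample must be fetched and compared here, which is the extra per-pixel cost the text mentions.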
In the lighting pass, we use the complex pixel mask to differentiate normal pixels from complex pixels. If the pixel is complex, we perform supersampling and output the average color of the samples. The problem with this is thread coherency: for pixels packed into a warp, the normal pixels, which perform shading only once, must wait for the complex pixels, which are supersampling, to complete. Alternatively, we can separate normal and complex pixels into two separate passes using stencil masks. The downside of separate passes is the overhead of generating the stencil masks and an additional draw call each frame.
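The single-pass branch can be sketched as follows; this is an illustrative CPU-side version where `shade` stands in for the per-sample lighting computation (a hypothetical function, not an API in the sample):

```python
def light_pixel(gbuffer_samples, is_complex, shade):
    """Sketch of the lighting-pass branch for one pixel.
    gbuffer_samples: the pixel's MSAA g-buffer samples.
    shade: per-sample lighting function returning an RGB tuple."""
    if not is_complex:
        # Normal pixel: all samples are identical, shade sample 0 once.
        return shade(gbuffer_samples[0])
    # Complex pixel: shade every sample, then average (supersampling).
    colors = [shade(s) for s in gbuffer_samples]
    n = len(colors)
    return tuple(sum(c[i] for c in colors) / n for i in range(3))
```

The divergence problem is visible in this sketch: within one warp, a complex pixel does four times the shading work of a normal pixel, so the stencil-based two-pass variant trades a draw call for keeping each warp's work uniform.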
This sample also implements an option to use hardware sample-frequency shading for complex pixels. Instead of shading multiple samples in a single pixel-shader invocation, we invoke the pixel shader once per sample to achieve fully supersampled complex pixels.
One important observation is that the majority of complex pixels only contain 2 unique fragments, which means we can potentially perform less shading at complex pixels and still achieve the same visual quality.
Again, this application uses coverage masks to compute the number of unique fragments in each complex pixel. We count the number of unique coverage masks per pixel, and weight each unique fragment in the pixel by the number of samples covered. For example, in figure 4 there are 3 unique fragments. The red fragment is weighted 2/4, while blue and grey fragments are each weighted 1/4.
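The counting-and-weighting step can be sketched like this (CPU-side illustration; `shade` and the fragment representation are assumptions for clarity):

```python
def shade_adaptive(pixel_fragments, shade, samples_per_pixel=4):
    """pixel_fragments: list of (coverage_mask, gbuffer_data) pairs,
    one entry per unique fragment in the pixel.
    shade: hypothetical lighting function returning an RGB tuple.
    Each unique fragment is shaded once and weighted by the fraction
    of the pixel's samples it covers."""
    color = [0.0, 0.0, 0.0]
    for mask, data in pixel_fragments:
        weight = bin(mask).count("1") / samples_per_pixel
        c = shade(data)
        for i in range(3):
            color[i] += weight * c[i]
    return tuple(color)
```

With the figure's example of three unique fragments covering 2, 1, and 1 samples, this shades three times instead of four, and the weights 2/4, 1/4, 1/4 reproduce the same average a full supersample would give when samples sharing a fragment shade identically.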
This saves the cost of shading, but introduces additional cost of counting and weighting unique fragments in complex pixels.
The Sponza model was created by Marko Dabrovic.
NVIDIA® GameWorks™ Documentation Rev. 1.0.191119 ©2014-2019. NVIDIA Corporation. All Rights Reserved.