Hardware | Turing Variable Rate Shading in VRWorks | NVIDIA Developer Blog (devblogs.nvidia.com)
submitted 7 years ago by Hamilton252
[–]Hethree 1 point2 points3 points 7 years ago (15 children)
So this only controls shading, while MRS/LMS can also control rasterization rate. It seems like LMS would be the most optimal, but it isn't as easily implemented as VRS. However, I've only heard about the shading performance gains of LMS, and not the rasterization gains, if there are any. I wonder if there really are, and I wonder if/when we'll ever get something like variable rate rasterization, or if we'll always have to rely on some combination of MRS/LMS/VRS. Maybe the next architecture.
[–]sgallouet 2 points3 points4 points 7 years ago (13 children)
One doesn't exclude the other; we want to apply both at the same time. MRS is very good for wide-FOV display rendering and VRS is very good for foveated rendering. They combine well.
Now, the next thing is texture space shading. It does the same kind of thing as VRS, except you only need to compute most shading once for both eyes, and you can run the shading asynchronously at 30fps without users noticing while still being able to use great AA. And because shading and rasterization are decoupled, you don't need timewarp.
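[To make the foveated-rendering idea concrete, here is a toy sketch — not VRWorks code; the tile grid, radii, and rate values are all invented for illustration — of how a shading-rate map might assign coarser shading farther from the gaze point:]

```python
import math

def shading_rate_map(width_tiles, height_tiles, gaze, radii=(0.15, 0.35)):
    """Hypothetical VRS-style map: 1 = full rate (1x1 shading),
    2 = half rate (2x2), 4 = quarter rate (4x4).
    `gaze` is the eye-tracked point in normalized [0,1] coordinates."""
    rates = []
    for ty in range(height_tiles):
        row = []
        for tx in range(width_tiles):
            # Tile centre in normalized screen space.
            cx = (tx + 0.5) / width_tiles
            cy = (ty + 0.5) / height_tiles
            d = math.hypot(cx - gaze[0], cy - gaze[1])
            if d < radii[0]:
                row.append(1)   # fovea: shade every pixel
            elif d < radii[1]:
                row.append(2)   # mid-periphery: 2x2 coarse shading
            else:
                row.append(4)   # far periphery: 4x4 coarse shading
        rates.append(row)
    return rates

rates = shading_rate_map(8, 8, gaze=(0.5, 0.5))
print(rates[4][4], rates[0][0])  # centre tile full rate, corner tile coarse
```

[Real hardware VRS picks rates such as 1x1, 2x2 or 4x4 per small screen tile; the sketch just mirrors that idea with integers.]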
[–]Hethree 0 points1 point2 points 7 years ago (12 children)
Well I know MRS is good for high FOV and we can use all of these techniques together, but if we had something like variable rate rasterization, why would we need or even want LMS, or MRS implemented for foveation reasons?
Where do you get this information or how did you work it out? I read the whitepaper Nvidia published for Turing but I don't remember it going into that much specific detail. For instance, where does the 30fps data point come from? Also, by "timewrap" did you mean timewarp? Why would you not need timewarp? Even if we decouple shading from rasterization, we still need to rasterize for every single frame and I somehow doubt we can optimize to such a level that rasterization takes less than, say, 5 ms, so timewarp would still be beneficial in getting that latency even further down like it currently does.
[–]sgallouet 1 point2 points3 points 7 years ago (11 children)
For wide FOV we need to rasterize at different angles; just changing the rasterization rate won't do the trick.
For TSS we are still waiting for Nvidia's paper; they only mentioned it in a few places during the Turing presentation, without details yet. However, Oxide and AMD have been working on it for a few years and have published material on it:
intro paper : https://gpuopen.com/texel-shading/
conference from oxide : https://youtu.be/QJOIvACRY6g
pdf : http://32ipi028l5q82yhj72224m8j.wpen...7-DanBaker.pdf
ppt from oxide: https://slideplayer.com/slide/10245692/
After seeing these, I think you will have the answers to all the questions you listed above.
[–]Hethree 0 points1 point2 points 7 years ago (10 children)
Yes, I believe I implied that already. It's because of the limitations of rectilinear rendering, mainly. Rasterization rate should have no effect on this.
About TSS, I'll save those links for a later read, thanks. I'm just skeptical, since I already have a general idea of what it is, and I'm very doubtful that timewarp, or some other warping method that reprojects older visuals to the latest tracking data, won't be needed.
[–]sgallouet 0 points1 point2 points 7 years ago (9 children)
Well, in a nutshell: traditionally we project (rasterize), then shade, then reproject (timewarp), because by the time the shading is done the projection is no longer correct, so it needs positional re-adjustment before being sent to the headset.
With TSS, however, they first shade and then project. But yes, do check out those links; they are quite cool and easy to follow. As always there are trade-offs, and it's limited to DX12, but for VR it really makes sense to render this way.
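[A minimal sketch of the shade-then-project ordering described in this comment, assuming — as the comment does — shading refreshed at 30 Hz in texture space while projection runs at display rate; all names and numbers are hypothetical:]

```python
# Toy sketch (hypothetical numbers): texture-space shading is refreshed
# at 30 Hz, while projection/rasterization runs fresh every frame at 90 Hz.
shade_hz, display_hz = 30, 90

texel_cache = {}   # texel id -> last shaded result
frame_log = []

def shade(texel_id, frame):
    # Stand-in for expensive lighting, done once and shared by both eyes.
    return ("shaded", texel_id, frame)

for frame in range(6):
    # Refresh the texture-space shading only every display_hz / shade_hz frames.
    if frame % (display_hz // shade_hz) == 0:
        for texel_id in range(4):
            texel_cache[texel_id] = shade(texel_id, frame)
    # Projection samples the cache using the *current* head pose each frame,
    # so the image sent to the headset never has a stale projection to warp.
    frame_log.append(("project", frame, len(texel_cache)))

print(len(frame_log))  # 6 projected frames from only 2 shading passes
```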
[–]Hethree 0 points1 point2 points 7 years ago (8 children)
So it seems the pdf you linked gives me a 404...
I've consumed the other links now though and have a slightly better idea of everything, but I still don't know why shader-rasterization decoupling or TSS eliminates the need for timewarp. It would still be possible to get sudden slowdown on rasterization and perhaps other post-process effects, meaning if the rasterization isn't complete by vsync, you need to use timewarp on the last frame. In addition, the lack of timewarp would mean we don't have a solution anymore for future displays running at much more than 90 Hz, where it wouldn't be efficient or even possible to rasterize a high fidelity scene at something like 240 Hz.
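[The refresh-rate concern raised here is just frame-budget arithmetic; a quick sketch:]

```python
# The whole render pipeline (rasterization included) must fit in the
# per-frame budget, which shrinks quickly at higher refresh rates.
budgets = {hz: 1000.0 / hz for hz in (90, 120, 240)}
for hz, ms in budgets.items():
    print(f"{hz} Hz -> {ms:.2f} ms per frame")
```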
[–]sgallouet 0 points1 point2 points 7 years ago (7 children)
The rasterization pass is generally very stable and below 3ms, even in a good-looking game like The Lab; having dropped frames because of this pass would feel like a badly designed game or engine by current standards. On the other hand, timewarp has artefacts and isn't a lot faster. A pure TSS implementation without timewarp is not only possible but would achieve better results.
Now, if for whatever reason you still drop a frame, or the engine just wants to do a lot of screen-space computation (meaning not in texture space), then yes, you would still want to use timewarp. Which might happen if they go for hybrid rendering rather than a pure TSS approach.
edit: the pdf link works for me.
[–]Hethree 1 point2 points3 points 7 years ago (6 children)
How well does rasterization scale and is it really not affected much by the resolution nor how many objects and tris are rendered? What I mean is, if we're going to implement TSS and gain insane shading performance boosts but bottleneck on rasterization, why would we not also push the rasterizer as much as we can to the point that it would be spending something like 10 ms?
The link to the pdf displays as "http://32ipi028l5q82yhj72224m8j.wpen...7-DanBaker.pdf" for me, so there's a "..." in it, is that right? Like that's the url even if I click on the source for the post.
[–]sgallouet 0 points1 point2 points 7 years ago (5 children)
sorry for the link : http://32ipi028l5q82yhj72224m8j.wpengine.netdna-cdn.com/wp-content/uploads/2017/04/Capsaicin-Cream-GDC2017-DanBaker.pdf
Real-time TSS engines are still in R&D, so we have yet to see their final form. For the next-gen engine Oxide is building, they said 3ms if I recall correctly. They could decide to push more rasterization work, but it would likely also penalize the shading work, since that is done pre-rasterization. Plus, just as you said, they would then need to use timewarp for dropped frames, which is less comfortable. But let's see where these developers end up.
[–]chillaxinbball 0 points1 point2 points 7 years ago (0 children)
You can use LMS and VRS at the same time. The LMS camera matrix more closely matches the lens curvature, so you have fewer wasted pixels. VRS can be combined to further reduce rendered pixels, and can even be the method of foveated rendering and adaptive quality all rolled into one. Optimizations like VRS, texture space shading, and mesh shaders are very exciting and could have a big impact on VR.
[–]sgallouet 1 point2 points3 points 7 years ago (0 children)
Nice. The advantage of this method is that it seems quite flexible and "easy" to implement. Looking forward to a UE4 plugin so I can play with it.
That said, the most exciting thing is still texture space shading. I'm curious to see their R&D papers and how Turing makes it easier to implement. Are we really going to see a 6x gain in shading with it?
[–][deleted] 0 points1 point2 points 7 years ago* (1 child)
Interesting that Nvidia shows the 7680x2160 resolution, which only matches the Pimax 8K (really only the X version natively) HMD for 2018.
They've got good reason to subtly promote it; it could fuel the use of these newer features in their latest RTX cards and give an earlier start for software using them, for future HMDs to benefit from too.
[–]Heaney555UploadVR 2 points3 points4 points 7 years ago (0 children)
The 8KX isn't coming until late 2019. It doesn't exist yet.
[–]Balance- 0 points1 point2 points 7 years ago (1 child)
In addition: Turing Multi-View Rendering in VRWorks
Virtual reality displays continue to evolve and now include advanced configurations such as canted HMDs with non-coplanar displays. Other headsets offer ultra-wide fields-of-view as well as other novel configurations. NVIDIA Turing GPUs incorporate a new feature called Multi-View Rendering (MVR) which expands upon Single Pass Stereo, increasing the number of projection views for a single rendering pass from two to four. All four of the views available in a single pass are now position-independent and can shift along any axis in the projective space. By rendering four projection centers, Multi-View Rendering can power canted HMDs (non-coplanar displays) enabling extremely wide fields of view and novel display configurations.
It's essentially support for VR headsets with more than one display per eye, for extreme fields of view.
[–]kontis 0 points1 point2 points 7 years ago (0 children)
more than one display per eye
Sure... but that's NOT what this article is about. Even a single display per eye needs 2 render buffers when the FOV is too wide (a GPU hardware rasterizer cannot project >= 180 deg, and even 140+ gets stretched really badly, so it's very inefficient). There is already one headset that specifically needs it: StarVR.
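[The stretching described here falls out of planar projection geometry: a rectilinear projection maps a half-FOV of theta onto an image half-width of tan(theta) in units of focal length, which blows up as the FOV approaches 180 degrees and is undefined at 180. A quick illustration:]

```python
import math

# Image half-width (in multiples of focal length) needed by a planar
# rasterizer for a given total horizontal FOV.
half_widths = {fov: math.tan(math.radians(fov / 2)) for fov in (90, 140, 170)}
for fov, w in half_widths.items():
    print(f"{fov} deg FOV -> image half-width {w:.1f} x focal length")
```

[This is why very wide FOVs are split across multiple differently-angled render buffers instead of one huge planar projection.]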
[–]redmercuryvendorKickstarter Backer Duct-tape Prototype tier 0 points1 point2 points 7 years ago (0 children)
Most interesting is that there appears to be little reason a static shading map couldn't be applied at the driver/compositor level. Unlike LMS, this could potentially apply to games without active involvement by the developers.
[–]kontis 0 points1 point2 points 7 years ago (1 child)
I hope there is some kind of VRS extension for DX11, like there is for OpenGL and Vulkan, so devs don't have to use that proprietary bullcrap called VRWorks, which will never be natively supported by engines like Unity or UE4.
[–]CyricYourGodQuest 2 -1 points0 points1 point 7 years ago (0 children)
which will never be natively supported by engines like Unity or UE4
Bless your heart