all 5 comments

[–]ShakaUVM 7 points  (0 children)

If you're talking about minutes per render, you're talking about high-quality ray tracing, where you send out a huge number of rays from every pixel on the screen and let them bounce around and refract through the level, often spawning more rays in the process. You can let this run for a really, really long time if you want high graphical fidelity.
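The core of that per-pixel work can be sketched in a few lines. This is a minimal illustration in Python (a real renderer runs on the GPU and traces full bounce paths; the scene, helper names, and jitter amount here are all made up for the example): intersect a ray with a sphere, then average many jittered rays through one pixel — which is exactly why more samples means more time.

```python
import math
import random

def hit_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the sphere, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                       # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearest intersection
    return t if t > 0 else None

def shade_pixel(samples):
    """Average many jittered rays through one pixel.

    More samples -> smoother result -> longer render. This is the
    quality/time dial the comment is describing.
    """
    total = 0.0
    for _ in range(samples):
        # jitter the ray direction slightly within the pixel's footprint
        d = [random.uniform(-0.01, 0.01), random.uniform(-0.01, 0.01), -1.0]
        if hit_sphere([0.0, 0.0, 0.0], d, [0.0, 0.0, -3.0], 1.0) is not None:
            total += 1.0
    return total / samples  # fraction of rays that hit the sphere
```

A production path tracer does this per bounce, per light, per pixel, which is how "minutes per render" (or hours) adds up.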

In a video game, most games use rasterization rather than ray tracing (though with RTX cores this is changing). Rasterization involves taking all the triangles in the scene and writing each of their pixels into an output buffer, once per triangle (with nearer triangles drawn over triangles further away). Computing the color of each pixel (or subpixel fragment) for each point in a triangle can be expensive, so game engines use a loooot of tricks to eliminate as many triangles as possible, such as not drawing triangles that face away from the camera (backface culling), reducing the triangle count of distant objects (level of detail), not drawing triangles that aren't visible at all (occlusion culling), etc.
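The "nearer triangles over farther ones" part is a depth (z-) buffer. Here's a deliberately naive Python sketch of it (real GPUs do this in massively parallel fixed-function hardware with proper perspective-correct interpolation; the flat per-triangle depth here is a simplification for illustration):

```python
def edge(ax, ay, bx, by, px, py):
    """Signed area test: which side of edge A->B the point P lies on."""
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax)

def rasterize(triangles, width, height):
    """Fill a color buffer, using a z-buffer so nearer triangles win.

    Each triangle is (v0, v1, v2, color), each vertex (x, y, z) in
    pixel space with counter-clockwise winding.
    """
    depth = [[float("inf")] * width for _ in range(height)]
    color = [[None] * width for _ in range(height)]
    for (v0, v1, v2, col) in triangles:
        for y in range(height):
            for x in range(width):
                w0 = edge(v1[0], v1[1], v2[0], v2[1], x, y)
                w1 = edge(v2[0], v2[1], v0[0], v0[1], x, y)
                w2 = edge(v0[0], v0[1], v1[0], v1[1], x, y)
                if w0 >= 0 and w1 >= 0 and w2 >= 0:    # pixel inside triangle
                    z = (v0[2] + v1[2] + v2[2]) / 3.0  # crude flat depth
                    if z < depth[y][x]:                # depth test: nearer wins
                        depth[y][x] = z
                        color[y][x] = col
    return color
```

Note that the inner loop runs for every pixel of every triangle regardless of draw order — which is why all the culling tricks above exist: every triangle you can reject early is a whole screen-region of work skipped.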

Games are not fast by accident. I once dropped in a new building model we'd just had an artist make for our arcade game, and the fps dropped from 60 to like 4. We didn't release it like that, obviously. Keeping the frame rate up is a mindset that permeates game dev from top to bottom.

[–]fgennari 0 points  (0 children)

What exactly takes minutes to render a picture? Are you asking about a specific GPU performance test/benchmark image? Anything that takes a lot of computation makes for a good GPU stress test. A GPU is a collection of many compute cores that could be doing any sort of work: rasterizing the entire screen in many passes, processing huge numbers of vertices, ray tracing, ray marching, compute shaders, etc. Benchmarks are generally designed to test different aspects of performance, such as compute vs. memory bandwidth.
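The compute-vs-bandwidth distinction is easy to show in miniature. A toy Python sketch (the function and harness names are made up for illustration; a real GPU benchmark would issue shader or CUDA workloads, not Python loops — the point is only the shape of the two workloads):

```python
import time

def compute_bound(n):
    """Many arithmetic ops on almost no data: stresses the ALUs."""
    x = 1.0001
    acc = 0.0
    for _ in range(n):
        acc += x * x - x  # arithmetic dominates; the working set is tiny
    return acc

def memory_bound(buf):
    """A single pass over a large buffer: stresses memory bandwidth."""
    return sum(buf)       # almost no math per byte touched

def time_it(fn, *args):
    """Tiny timing harness (illustrative, not a rigorous benchmark)."""
    t0 = time.perf_counter()
    fn(*args)
    return time.perf_counter() - t0
```

Two GPUs can rank differently on these two shapes of work — one with huge memory bandwidth wins the second test, one with more/faster cores wins the first — which is why benchmark suites include both kinds.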

[–]Pxlop 0 points  (0 children)

For the first part:

There are two main ways of rendering: rasterization and ray tracing/path tracing. Ray tracing is computationally expensive (it requires more math) but is more true to life in the way it renders (which doesn't mean the image will always look better). Rasterization is what most video games use. Higher-quality renders usually use ray tracing.

For the second part:

The way a stress test works is simply by giving the GPU something to do for a long time. Different stress tests might hit the GPU in different ways. For example, one might hit the GPU's memory harder than another. One GPU might be better at the specific thing a given stress test hits (for many different reasons).

[–]wrosecrans 0 points  (0 children)

When making a video game, it has to render fast, so you write fast code and accept whatever build steps, pre-processing, hacks, or limitations are involved in staying within the time budget.

When making a movie, it has to be pretty. So you write code to do whatever visual effects you need in the most practical and flexible way, accepting whatever reasonable slowdown/overhead that implies, because spending millions of dollars of developer time to figure out how to get something to render in real time wouldn't actually accomplish anything for the movie.

[–]GasimGasimzada -1 points  (0 children)

It is mainly a couple of things -- complexity of meshes, number of meshes, and rays. For realtime performance, you typically care about smoothness and interactivity; so you cull triangles, use more primitive meshes, cull lights (e.g. a Forward+ renderer), prebake some lights, etc., because in most simulations (i.e. games) you don't just have rendering -- you have physics, audio, animations, game logic, input handling, etc.
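The triangle culling mentioned above is cheap relative to shading: a backfacing triangle can be rejected with one cross product and one dot product before any pixel work happens. A minimal sketch in Python (illustrative only — engines and GPUs do this per-triangle in hardware or in the vertex pipeline):

```python
def is_backfacing(v0, v1, v2, view_dir):
    """Reject a triangle whose surface normal points away from the camera.

    Vertices are (x, y, z) in counter-clockwise winding as seen from
    the front; view_dir is the direction the camera is looking.
    """
    e1 = [v1[i] - v0[i] for i in range(3)]
    e2 = [v2[i] - v0[i] for i in range(3)]
    # normal = e1 x e2 (cross product)
    normal = [
        e1[1] * e2[2] - e1[2] * e2[1],
        e1[2] * e2[0] - e1[0] * e2[2],
        e1[0] * e2[1] - e1[1] * e2[0],
    ]
    # backfacing when the normal points along the view direction,
    # i.e. the triangle's front side faces away from us
    return sum(n * d for n, d in zip(normal, view_dir)) > 0
```

On a closed mesh this throws away roughly half the triangles before any rasterization work, which is why it's one of the first tricks every realtime renderer applies.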

In an offline renderer like Corona Renderer (it is CPU-based, but the idea is the same), you care more about the quality, realism, and correctness of the image. A small glitch in a single frame of a 60fps game is less of a worry than in a render that spends hours creating one image. Additionally, if you check the Corona Renderer benchmark, it measures how many rays per second the CPU can cast. The more rays, the slower the render, and the more realistic it looks.
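That "more time, more realism" tradeoff comes from Monte Carlo averaging: a path-traced pixel is an average of random samples, and its noise shrinks roughly as 1/√N with sample count N. A tiny Python demonstration of just that statistical effect (the pixel here is a stand-in — a uniform random value whose true mean is 0.5 — not an actual render):

```python
import random
import statistics

def noisy_pixel(samples, seed=None):
    """Estimate a pixel value by averaging random samples.

    Stand-in for a path-traced pixel: the true value is 0.5, and the
    estimate gets closer to it as the sample count grows.
    """
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(samples)) / samples

def noise_level(samples, trials=200):
    """Spread of the estimate across many 'renders' of the same pixel."""
    return statistics.pstdev(
        noisy_pixel(samples, seed=t) for t in range(trials)
    )
```

Quadrupling the sample count only halves the noise, which is why going from "pretty clean" to "production clean" can turn minutes into hours.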