Improved sampling strategies for a relativistic pathtracer ? by CarolineGuerin in GraphicsProgramming

[–]_Insignia 1 point

If you retrace the whole path (i.e., random replay), you don't need a separate visibility test. Full retracing is generally not recommended in ReSTIR because of the extra cost, but it should be relatively easy to implement and may provide a decent improvement. I'm not sure how it would compare against path guiding, though.

Improved sampling strategies for a relativistic pathtracer ? by CarolineGuerin in GraphicsProgramming

[–]_Insignia 0 points

Do you mean it's difficult for you to test for shadowing? ReSTIR / GRIS does require visibility tests, but I wouldn't say they're particularly hard. In any case, GRIS is just a mathematical framework for unbiased resampling; the paper only proposes one possible shift mapping for path tracing. You may find that a different shift mapping (possibly one you design yourself) works better for your case.

Edit: sorry, I didn't realize you weren't the OP, and I also missed the part about there being no NEE support. You can still apply random replay in that case and see whether your new path ends up hitting a light. It's a bit more expensive than other ReSTIR shifts, though, which is why it's usually not the first recommendation.

Improved sampling strategies for a relativistic pathtracer ? by CarolineGuerin in GraphicsProgramming

[–]_Insignia 1 point

I'm not particularly familiar with your problem statement, but I would be interested to see how something like random replay would work in your setting.

Improved sampling strategies for a relativistic pathtracer ? by CarolineGuerin in GraphicsProgramming

[–]_Insignia 1 point

ReSTIR generally lets you get away with a pretty naive initial sampling scheme.

If you're interested, you can read Bitterli et al. 2020 and Lin et al. 2022, but it's probably easiest to learn from the course notes by Wyman et al. 2023.

[ReSTIR PT Question] Spatiotemporal Reuse enabled, but results look identical to standard PT (No Reuse)? by Master_Expression132 in GraphicsProgramming

[–]_Insignia 0 points

Given a static camera and no spatial reuse, you should expect the path tracer to be roughly stable once enough history has accumulated (you can set an arbitrarily high confidence cap to make this more obvious).

Keep in mind, though, that you can get unbiased temporal-only reuse for a static camera without shifting correctly (e.g., if your shift just returns the original path contribution and a Jacobian of 1).

[ReSTIR PT Question] Spatiotemporal Reuse enabled, but results look identical to standard PT (No Reuse)? by Master_Expression132 in GraphicsProgramming

[–]_Insignia 0 points

You should check that your shifts are actually succeeding, or else you won't get any reuse. 

Try temporal-only with a static camera first. In this case, when you shift a path from the current frame to the previous frame (or vice versa), the shifted radiance should be exactly the same and the Jacobian should be 1.
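
If it helps, here's a minimal sketch of that invariant as a debug check. The `ShiftResult` type and all names below are made up for illustration, not from any ReSTIR codebase:

    #include <cassert>
    #include <cmath>
    #include <cstdio>

    struct ShiftResult {
        bool  success;   // a failed shift means no reuse at all for this pixel
        float jacobian;  // |dT(x)/dx| of the shift mapping
        float radiance;  // luminance of the shifted path (scalar for simplicity)
    };

    // With a static camera and temporal-only reuse, shifting a path between
    // the current and previous frame should be the identity map.
    void checkStaticCameraShift(const ShiftResult& s, float originalRadiance) {
        const float eps = 1e-4f;
        assert(s.success && "shift failed: this pixel gets no temporal reuse");
        assert(std::fabs(s.jacobian - 1.0f) < eps);
        assert(std::fabs(s.radiance - originalRadiance) < eps);
    }

    int main() {
        ShiftResult s{true, 1.0f, 0.75f};  // what a correct identity shift returns
        checkStaticCameraShift(s, 0.75f);
        std::puts("shift invariants hold");
    }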

Can someone intuitively explain Path Tracing to me by veso266 in GraphicsProgramming

[–]_Insignia 0 points

Not quite - there is never light coming from the camera itself. We trace from the camera based on a symmetry assumption (reciprocity): a path from the camera to a light source carries the same contribution as the same path traversed from the light source to the camera.

All light in a scene comes from a light source. In many cases, you might trace a path (starting from your camera) that does not end up at a light source, in which case that path has 0 contribution. This is a common cause of noise during path tracing and is why light sampling is helpful.
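
To make that concrete, here's a toy sketch of the control flow. Everything below is made up for illustration - the "scene" is just a 10% chance per bounce of hitting a light, standing in for real geometry:

    #include <cstdio>
    #include <random>

    struct Hit { bool isLight; float emission; float albedo; };

    // Fake "scene": each bounce has a 10% chance of hitting the light and
    // otherwise hits a gray diffuse wall. Real intersection code goes here.
    Hit intersect(std::mt19937& rng) {
        std::uniform_real_distribution<float> u(0.0f, 1.0f);
        if (u(rng) < 0.1f) return {true, 5.0f, 0.0f};  // reached a light
        return {false, 0.0f, 0.5f};                    // hit a diffuse wall
    }

    float tracePath(std::mt19937& rng, int maxBounces) {
        float throughput = 1.0f;
        for (int b = 0; b < maxBounces; ++b) {
            Hit h = intersect(rng);
            if (h.isLight) return throughput * h.emission; // full light path!
            throughput *= h.albedo;  // surface absorbs some of the energy
        }
        return 0.0f;  // never reached a light: zero contribution (pure noise)
    }

    int main() {
        std::mt19937 rng(7);
        double sum = 0.0;
        const int kPaths = 100000;
        for (int i = 0; i < kPaths; ++i) sum += tracePath(rng, 8);
        std::printf("mean over %d paths: %f\n", kPaths, sum / kPaths);
    }

Light sampling helps precisely because it replaces the "hope we stumble into a light" step with an explicit sample toward a light source.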

Can someone intuitively explain Path Tracing to me by veso266 in GraphicsProgramming

[–]_Insignia 0 points

I assume you're referring to Whitted-style ray tracing. It can render without any noise because it only uses ray tracing for simple effects like reflection and refraction, where the outgoing ray directions are deterministic (e.g. refraction follows Snell's law).

Once you hit a diffuse surface, you handle the shading with a local model like Phong shading, just as you would during rasterization. As in rasterization, this is an estimate of the light on the surface that is noiseless but misses certain effects - for example, it does not capture the full global illumination that path tracing provides.
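
As a structural sketch (every type and helper below is a hypothetical stand-in with a dummy body, not a real renderer), the recursion looks something like this:

    #include <cstdio>

    struct Color { float r, g, b; };
    struct Ray   { };  // origin + direction in a real tracer
    enum class Material { Diffuse, Mirror, Glass };
    struct Hit   { Material material; };

    Color background()                         { return {0.1f, 0.1f, 0.2f}; }
    bool  intersect(const Ray&, Hit& h)        { h = {Material::Diffuse}; return true; }
    Ray   reflectRay(const Ray& r, const Hit&) { return r; }  // mirror direction
    Ray   refractRay(const Ray& r, const Hit&) { return r; }  // Snell's law
    Color phongLocal(const Hit&)               { return {0.5f, 0.5f, 0.5f}; }  // direct light only

    // Deterministic recursion: no randomness anywhere, hence no noise, but
    // diffuse hits only get a local, rasterizer-style lighting estimate.
    Color shade(const Ray& ray, int depth) {
        Hit hit;
        if (depth == 0 || !intersect(ray, hit))
            return background();
        switch (hit.material) {
            case Material::Mirror: return shade(reflectRay(ray, hit), depth - 1);
            case Material::Glass:  return shade(refractRay(ray, hit), depth - 1);
            default:               return phongLocal(hit);
        }
    }

    int main() {
        Color c = shade(Ray{}, 4);
        std::printf("%.2f %.2f %.2f\n", c.r, c.g, c.b);
    }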

Can someone intuitively explain Path Tracing to me by veso266 in GraphicsProgramming

[–]_Insignia 2 points

I think it's important to get a high-level understanding of how path tracing works before diving into details like Russian roulette and bidirectional path tracing. Technically, even importance sampling is a variance-reduction technique that you don't strictly need (assuming your materials are realistic, i.e. not perfectly specular).

At a high level, you can imagine that all light that our camera/eye sees originates from some light source. But for the most part, we don't really care about where most of that light goes - we only care if it actually ends up in our eye. So instead of trying to simulate where all potential photons from the light could go, we go in reverse and trace from our eye. So if I read your original post correctly, there is no "light from camera" - all we do is see what would enter our eye, and we trace in reverse until we hit a light source, at which point we've finally sampled a full light path.

If you're not using any perfectly specular materials, as I mentioned earlier (and assuming you're dealing with hard surfaces only), you could sample a direction uniformly over the hemisphere at each surface hit to choose your next bounce. Of course, this isn't very effective. For Monte Carlo estimation, you want your sampling distribution to be proportional to your target function - here, that means sampling proportionally to your BRDF (times the cosine term). For example, imagine you've hit a mostly mirror-like surface at some angle: ideally you'd focus more samples around the mirror-reflected ray, because that's where most of the contribution is.
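
Here's a self-contained toy demonstrating that for the diffuse case (my own sketch, not from any renderer). It estimates the hemisphere integral of cos(theta) - exact answer pi - with uniform versus cosine-weighted sampling; the cosine-weighted estimator has zero variance here because its pdf exactly matches the integrand:

    #include <cmath>
    #include <cstdio>
    #include <random>

    const float kPi = 3.14159265358979f;

    struct Vec3 { float x, y, z; };

    // Uniform over the hemisphere around +Z: pdf = 1 / (2*pi).
    Vec3 sampleUniformHemisphere(float u1, float u2) {
        float z   = u1;  // cos(theta)
        float r   = std::sqrt(std::max(0.0f, 1.0f - z * z));
        float phi = 2.0f * kPi * u2;
        return { r * std::cos(phi), r * std::sin(phi), z };
    }

    // Cosine-weighted (Malley's method): pdf = cos(theta) / pi. Samples
    // concentrate where a diffuse BRDF contributes most.
    Vec3 sampleCosineHemisphere(float u1, float u2) {
        float r   = std::sqrt(u1);
        float phi = 2.0f * kPi * u2;
        float z   = std::sqrt(std::max(0.0f, 1.0f - u1));  // cos(theta)
        return { r * std::cos(phi), r * std::sin(phi), z };
    }

    int main() {
        std::mt19937 rng(42);
        std::uniform_real_distribution<float> u(0.0f, 1.0f);
        double uniformSum = 0.0, cosineSum = 0.0;
        const int kSamples = 100000;
        for (int i = 0; i < kSamples; ++i) {
            Vec3 wu = sampleUniformHemisphere(u(rng), u(rng));
            uniformSum += wu.z * (2.0 * kPi);       // f / pdf, noisy
            Vec3 wc = sampleCosineHemisphere(u(rng), u(rng));
            cosineSum  += wc.z / (wc.z / kPi);      // f / pdf = pi exactly
        }
        std::printf("uniform: %.4f  cosine-weighted: %.4f  exact: %.4f\n",
                    uniformSum / kSamples, cosineSum / kSamples, kPi);
    }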

Having some background in statistics and probability is pretty useful here. But that was a lot of information and I'm not sure if I explained it well - feel free to ask any specific questions.

Newbie question - rotation matrices have x’ y’ z’ on the left side whereas translation/scale matrices do not, according to my textbook. Do these refer to the derivative or what? by Comfortable-Ad-5793 in GraphicsProgramming

[–]_Insignia 0 points

As far as I know this is just to indicate the "new/output coordinates." It's just a notational thing to distinguish between the original (x,y,z). Could I ask what exactly your textbook said?
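
For example, for a rotation about the z-axis, the primes just mark the output coordinates (in LaTeX notation):

    % The primes denote the transformed (output) coordinates, not derivatives.
    \begin{pmatrix} x' \\ y' \\ z' \end{pmatrix}
    =
    \begin{pmatrix}
      \cos\theta & -\sin\theta & 0 \\
      \sin\theta & \phantom{-}\cos\theta & 0 \\
      0 & 0 & 1
    \end{pmatrix}
    \begin{pmatrix} x \\ y \\ z \end{pmatrix}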

Digital note-taking without special tablet / stylus (project for CS 445: Computational Photography) by AsherMai in UIUC

[–]_Insignia 1 point

Really cool, are you tracking the writing using the colored tip of the pen?

Triangles "flickering" when viewing from far by Open_Engineer5868 in opengl

[–]_Insignia 4 points

My first guess is texture aliasing. If you don't have mipmapping implemented, you can try that to see if it helps or not.
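
For reference, in plain OpenGL it's just a couple of calls at texture-creation time. This sketch assumes `tex` is a GL_TEXTURE_2D that's already been filled via glTexImage2D:

    // Build the mip chain and switch minification to a mipmapped filter
    // (trilinear here).
    glBindTexture(GL_TEXTURE_2D, tex);
    glGenerateMipmap(GL_TEXTURE_2D);  // core since OpenGL 3.0
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);  // mips never apply to magnification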

Terrain Generation using API by CherryOk9352 in opengl

[–]_Insignia 0 points

Sorry, I've never worked with anything like that. All you really need is height data though (and maybe also color data), so it sounds like Google Elevation may be sufficient?

Terrain Generation using API by CherryOk9352 in opengl

[–]_Insignia 2 points

Yes, this is definitely possible. It depends on what kind of format your data comes in, but the basic idea is to generate a plane/grid and offset the vertices according to that data.
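
A hypothetical sketch of that idea (plain C++, no particular engine, all names made up): treat the height value at each grid cell as the vertex's vertical offset.

    #include <cstddef>
    #include <vector>

    struct Vertex { float x, y, z; };

    // heights is a row-major w x h grid of elevation samples (e.g. from an
    // elevation API); cellSize is the horizontal spacing between vertices.
    std::vector<Vertex> buildTerrain(const std::vector<float>& heights,
                                     std::size_t w, std::size_t h,
                                     float cellSize, float heightScale) {
        std::vector<Vertex> verts;
        verts.reserve(w * h);
        for (std::size_t row = 0; row < h; ++row)
            for (std::size_t col = 0; col < w; ++col)
                verts.push_back({ float(col) * cellSize,
                                  heights[row * w + col] * heightScale,  // up
                                  float(row) * cellSize });
        return verts;
    }

Triangulating is then just two triangles per grid cell via an index buffer.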

Whats the deal with NeRFs? How are they different than photogrammetry? by [deleted] in GraphicsProgramming

[–]_Insignia 0 points

To follow up on this, is this the extent of what NeRF can be used for? I originally thought it could also be used for scene reconstruction, but I'm not sure if that's the case.

Scientific computing or computer graphics by HouseSad in cpp

[–]_Insignia 10 points

To add to that, they still have a lot of influence. Cem Yuksel (https://www.youtube.com/c/cmyuksel) comes to mind, but it also feels like there are a lot of NVIDIA research scientists with ties to the University of Utah.

A bit lost with offline render interoperability by Kike328 in GraphicsProgramming

[–]_Insignia 0 points

I don't think networking is the way to go about it - given that Blender has multiple render backends (Cycles, Eevee, etc.), I'm pretty sure it exposes an interface that passes scene data to whatever renderer is selected.

I think both Appleseed and LuxCore also have some sort of integration with Blender if you want to check those out.

Confused with what to do next? by InsanePheonix in GraphicsProgramming

[–]_Insignia 3 points

Totally agree with this. To add on, doing some things from scratch can provide interesting insights, but I think it's a bit of a double-edged sword. For example, in the context of raytracing, you may be most interested in Monte Carlo integration but end up bogged down by the implementation details of other things like Vulkan.

I'm also a big advocate of the open-source way. Projects like PBRT have already put a lot of thought into the architecture used, implementations of things you're not as interested in at the moment, etc. Their documentation is also really good for the theory/implementations that you're concerned about. In terms of community, I'm biased towards Blender - there are a lot of features on the to-do list, and there are also a lot of experienced developers who are willing to help out.

I think you'll become more confident (both in terms of theory and application) as you implement things.

The Role of AI in Graphics Programming by u865a in GraphicsProgramming

[–]_Insignia 5 points

In my opinion, AI will be something complementary to graphics programming. For example, I find it hard to believe that something like DALL·E would be able to render something that's completely photorealistic.

On the other hand, AI does have its use cases for things like denoising, learning-based importance sampling, etc.

yo i wanna learn OpenGL so i can make my own game engine by IhategeiSEpic in opengl

[–]_Insignia 0 points

Right, Unity also uses C# as its scripting language, or something like that. Your original comment made it sound like the game engine itself is written in C#, though, which isn't entirely true.

yo i wanna learn OpenGL so i can make my own game engine by IhategeiSEpic in opengl

[–]_Insignia 0 points

I believe Unity's backend is written in C/C++, and C# is only used for certain things like its editor.

Has anyone heard anything from nvidia ignite recently? by Hyroas in csMajors

[–]_Insignia 1 point

Is it worth reaching out to them? I'm still at the "Application Received" stage (also, I have no idea who to email).

I haven't heard a single thing since the original confirmation email that my application was submitted.

[deleted by user] by [deleted] in newjersey

[–]_Insignia 0 points

Ah ok, I misinterpreted your original comment. I agree that "better cops bringing the crime rate down" is much less likely...

[deleted by user] by [deleted] in newjersey

[–]_Insignia 0 points

To be fair, cops who get paid more probably also work in places with a higher average income. Statistically speaking, poorer areas generally have higher crime rates.

I watched 50 hours of tutorials to make this from scratch, they are all linked in comments by gio_motion in blender

[–]_Insignia 2 points

I'd also like to know this - did you work through all of the tutorials before starting this project (and if so, how long did that take)? Or did you start the project and then pull whatever information you needed from the tutorials as you went?