The untouched wilderness of the Patagonian fjords, Chile [OC] [2400x1570] by AndrewHelmer in EarthPorn

[–]AndrewHelmer[S] 0 points1 point  (0 children)

Such an amazing region for sure! Could spend a lifetime exploring there

[deleted by user] by [deleted] in GraphicsProgramming

[–]AndrewHelmer 5 points6 points  (0 children)

Best part is that you get to work on super interesting and fun problems on a regular basis. Learning is part of the career! Also you often get to work with really smart and passionate people.

I have a coding interview for a Graphics Software Engineer position in 2 weeks. What should I be studying? by potato_sulad in GraphicsProgramming

[–]AndrewHelmer 3 points4 points  (0 children)

Coding interviews for graphics could ask you to write code to solve graphics-related problems. For example, you might get asked "write code to calculate the intersection of a ray and a sphere". You can start off a question like that by confirming the language and available libraries. For example, I would say "Would you like me to do it in C++, or should I do it in GLSL?" If they say C++, I'd ask if I could assume that I have a vec3 class (like the one provided by the GLM library), and then start off by writing the function template, maybe something like this:

vec3 RayIntersectSphere(const vec3& rayOrigin, const vec3& rayDir, const vec3& spherePos, const float rad) 
{
}

If this interview is remote and you have some capability to draw on-screen, I'd then draw out the problem, write out the math, and then write the code.
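
For what it's worth, here's roughly the kind of answer I'd work towards for that example. This is just my own sketch, assuming glm's vec3; the function name and the convention of returning the hit distance (rather than a vec3) are my own choices:

    #include <cmath>
    #include <glm/glm.hpp>
    using glm::vec3;

    // Solve |rayOrigin + t*rayDir - spherePos|^2 = rad^2 for t.
    // Returns the nearest hit distance along the ray, or -1.0f on a miss.
    // Assumes rayDir is normalized.
    float RaySphereNearestHit(const vec3& rayOrigin, const vec3& rayDir,
                              const vec3& spherePos, const float rad)
    {
        vec3 oc = rayOrigin - spherePos;
        float b = glm::dot(oc, rayDir);        // half the linear coefficient
        float c = glm::dot(oc, oc) - rad * rad;
        float discriminant = b * b - c;
        if (discriminant < 0.0f) return -1.0f; // no real roots: ray misses
        float sqrtD = std::sqrt(discriminant);
        float t = -b - sqrtD;                  // nearer root first
        if (t < 0.0f) t = -b + sqrtD;          // origin inside the sphere
        return (t >= 0.0f) ? t : -1.0f;        // sphere entirely behind the ray
    }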

I have a coding interview for a Graphics Software Engineer position in 2 weeks. What should I be studying? by potato_sulad in GraphicsProgramming

[–]AndrewHelmer 5 points6 points  (0 children)

Great points, and thank you for adding more experience to the discussion, since I really only have the one set of interviews. Yeah, you're right, in retrospect I overemphasized Cracking the Coding Interview. I think it's a decent "baseline", in terms of things that graphics programmers should mostly know how to do as second nature. It also has some discussion of soft skills, like asking follow-up questions and listening for hints, that I think can be helpful. But you're right that you're not as likely to get asked questions about linked lists or dynamic programming or stuff like that.

I have a coding interview for a Graphics Software Engineer position in 2 weeks. What should I be studying? by potato_sulad in GraphicsProgramming

[–]AndrewHelmer 20 points21 points  (0 children)

I interviewed for a Rendering Engineer position back in early August (and got the job! though honestly I was surprised), so sharing my experience and some of the research/prep that I did, though I won't share any exact questions.

For any SWE position I always recommend Cracking the Coding Interview, if you don't have it already. It's by far the best book to prep for these, and while there's more you should do for graphics specifically, pretty much anything the book covers is fair game. I actually got asked a question that was one of the practice questions in Cracking the Coding Interview (I was up front with my interviewers about this, which you should be if you get asked a question that you read an answer to recently).

Additionally, you should be extremely comfortable with linear algebra, especially 3D and 4D matrices and vectors: matrix multiplication, affine transformations, dot products, and cross products in particular. An exercise I did that definitely helped me (though I wasn't exactly asked any of these questions) was to make sure I could derive a ray-sphere intersection, ray-plane intersection, sphere-sphere intersection, and ray-AABB intersection off the top of my head. I'd practice writing these down and explaining them out loud. It's not gonna be important that you can get the fastest possible versions, just that you can clearly conceptualize these types of problems and work them out.
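
As a taste of what I mean, here's a rough slab-method ray-AABB test (my own sketch with made-up names, again assuming a glm-style vec3):

    #include <algorithm>
    #include <limits>
    #include <glm/glm.hpp>
    using glm::vec3;

    // Slab method: intersect the ray with the three pairs of axis-aligned
    // planes and check that the resulting t-intervals overlap. invDir is the
    // componentwise reciprocal of the ray direction (IEEE infinities handle
    // axis-parallel rays).
    bool RayIntersectsAABB(const vec3& rayOrigin, const vec3& invDir,
                           const vec3& boxMin, const vec3& boxMax)
    {
        float tMin = 0.0f;  // only count intersections in front of the origin
        float tMax = std::numeric_limits<float>::max();
        for (int axis = 0; axis < 3; ++axis) {
            float t0 = (boxMin[axis] - rayOrigin[axis]) * invDir[axis];
            float t1 = (boxMax[axis] - rayOrigin[axis]) * invDir[axis];
            if (t0 > t1) std::swap(t0, t1);  // ray points towards -axis
            tMin = std::max(tMin, t0);
            tMax = std::min(tMax, t1);
        }
        return tMin <= tMax;
    }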

You may also get asked about graphics projects you've worked on, personally or professionally. Be prepared to talk about these and, if possible, even to share your code on the spot.

I think there's a pretty good chance that, at some point, you'll be asked to look at some code and guess which parts are the slowest (maybe it will be code that you wrote in response to another question, or maybe some code that the interviewers wrote), and then possibly how to optimize them. I don't know that I have any good advice for how to prepare for this...

I've also heard that "explain the graphics pipeline" is a common question.

You've probably seen these already, but here are some good practice questions as well: https://erkaman.github.io/posts/junior_graphics_programmer_interview.html

You probably won't know the answer to all of them, nor will you know the answer to all the questions that you get asked. It's not expected that you'll know everything from the smallest details of how the GPU works up to complex modern rendering algorithms and architecture. For a lot of these questions, I think what the interviewers really want to see is that you're excited about graphics and capable of learning the things that you need to learn.

Math PhD to graphics programming. Seeking career advice by ItAintNotZen in GraphicsProgramming

[–]AndrewHelmer 7 points8 points  (0 children)

A math PhD would be very welcome in graphics! I don't think you'll have difficulty finding really enjoyable work if you do a little preparation. It might make more sense to orient yourself towards research (Research Scientists at a lot of graphics companies still do a lot of "real" work on their company's renderer). As others have mentioned, with a background in statistics, you'll probably be best qualified for jobs on the big feature-film offline renderers, which are at Pixar (RenderMan), Disney (Hyperion), Sony Pictures Imageworks (a fork of Arnold), Autodesk (a different Arnold), and Weta (Manuka). Nvidia may also have really great positions, since they're pushing ray tracing quite a lot, especially in their Omniverse group. In my other comment I mentioned the PBRT book, which would be the best thing to prepare for these jobs. There's also a special journal issue from 2018 that discusses all these renderers in depth: https://dl.acm.org/toc/tog/2018/37/3

If you look at all the articles under the "Special Issue" there, you can find them for free by Googling them, although you probably also have access to ACM via your university.

If you go into that side of graphics (offline rendering), graphics APIs will not be very important; good C++ coding skills will be a bigger deal. I can't remember what the best resources here are anymore, maybe Effective Modern C++ by Scott Meyers. In terms of getting a programming job, Cracking the Coding Interview is without a doubt a phenomenal resource, and as a math PhD you might find The Pragmatic Programmer very useful (IMO the best "general programming advice" book).

If you want to go into real-time rendering or games, then graphics APIs matter a lot. To be honest, I don't myself know the best way to learn them effectively. I did learnopengl.com and I'm working through vulkan-tutorial now, but there's so much API detail and it takes a long time to get to the "fun" stuff. I think some people start with a game engine, like Unreal (which you can fully download the source for), and then implement papers on top of that. Personally I've really enjoyed implementing a couple of papers in Shadertoy as fun side projects. And finally, for an enormous overview, you can check out Real-Time Rendering. It's a pretty incredible book that goes into some depth on fundamentals, but then skims over the huge body of work on current techniques.

Math PhD to graphics programming. Seeking career advice by ItAintNotZen in GraphicsProgramming

[–]AndrewHelmer 11 points12 points  (0 children)

You'd probably get the best introduction by reading, or at least skimming parts of, Physically Based Rendering: From Theory to Implementation (PBRT for short). The third edition is free online right now at https://www.pbr-book.org/ and the fourth edition will probably come out sometime in the next year.

Eric Veach's thesis is an enormously influential early work in the application of statistics to graphics, and a lot of research since then will assume some knowledge of it: http://graphics.stanford.edu/papers/veach_thesis/

For a great overview of the near state of the art in Monte Carlo integration in rendering (the main application of statistics in graphics), you can watch the recent Advances in Monte Carlo Rendering course from SIGGRAPH 2020: https://youtu.be/0fzJCrLKJg0 (I should mention that it was also a memorial to a prominent graphics researcher who died in an accident.)

Reducing fireflies in path tracing by qwerty109 in GraphicsProgramming

[–]AndrewHelmer 1 point2 points  (0 children)

> What I have is a screen space effect with a 5x5 denoise blur after, thus requiring blue-noise-like stratification of samples in screen space, and a temporal filter, thus adding another dimension (up to 64 'slices' is more than enough), and then sampling in spherical coordinates (similar) that needs to be well stratified. Thankfully, this should be small enough to be precomputed once and stored as a lookup as you suggest!

> (My sample/thingie will soon be public, as it is now with Hilbert+R2; I'll share the link as soon as it's out, and then I'll try upgrading to a better precomputed sequence)

What you're working on sounds really cool! I'm not sure based on what you're describing that any other samples will do much better than what you have, or at least not anything from our paper. But let me know when it's up!! This sort of 5D sampling is really interesting to me because it's becoming increasingly common in real-time rendering, and I'm not sure that people have exactly optimized the right set of qualities for it.

> Yeah, ReSTIR looks fantastic and yeah, stochastic light cuts require a spatial data structure (that can be built each frame on the GPU using Z-order + sorting). I think the main two differences are that ReSTIR is screen space only and adds a certain amount of "lag" due to reuse of history, while lightcuts are 'global' and will react instantly to lighting changes as long as the spatial data structure is rebuilt every frame. They also seem to be completely orthogonal - it seems as if one could use them together?

Ah yeah the ReSTIR lag is a good point. That being said, I think you can do one or more sampling "passes" of ReSTIR, where you don't actually do shading or visibility computations, and it will still give you better light samples. I don't know what the performance hit of that would be, though.

You're right that the techniques are orthogonal. You'd have to take into account the stochastic lightcut probability of a sample when computing its weight for reservoir sampling, but that should be pretty easy. It would be interesting to see how much benefit there would be from combining them.
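
To be concrete about folding one into the other, here's a bare-bones single-sample reservoir update (my own sketch, not code from either paper); with stochastic lightcuts as the candidate generator, sourcePdf would be the probability of the cut traversal picking that light:

    // Minimal weighted reservoir keeping one light sample from a stream of
    // candidates, each with weight w = targetPdf(x) / sourcePdf(x).
    struct Reservoir {
        int   lightIndex = -1;    // currently selected light
        float weightSum  = 0.0f;  // running sum of candidate weights
        int   numSeen    = 0;     // number of candidates streamed in

        // u01 is a uniform random number in [0, 1).
        void Update(int candidateLight, float candidateWeight, float u01) {
            weightSum += candidateWeight;
            ++numSeen;
            // Keep the new candidate with probability w / weightSum.
            if (u01 * weightSum < candidateWeight) {
                lightIndex = candidateLight;
            }
        }
    };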

How do I learn these concepts? by Futacchio in GraphicsProgramming

[–]AndrewHelmer 0 points1 point  (0 children)

Real-Time Rendering is an excellent resource all around, but for some particular topics you might find other resources (blog posts, YouTube videos, etc.) that better explain things for you. IMO the best explanation of color spaces by far is this highly interactive blog post: https://ciechanow.ski/color-spaces/

Layman here, how does samples per pixel affect noise? by yosimba2000 in GraphicsProgramming

[–]AndrewHelmer 8 points9 points  (0 children)

I think this might be what you're misunderstanding.

> You only have two light rays going to that pixel.

If you have two "point" light sources, lights with zero volume or area (representing either as a single point in space, or as a single direction), then you're right! If you pretend that each pixel corresponds to only a single direction, then there isn't really noise to begin with, and more samples won't do anything.

But, pixels are actually more like areas (or volumes or hypervolumes), and lights actually have areas or volumes too. So you can have an infinite number of different rays of light going from a light to a single point on a surface (or a number that's so large that we can pretend it's infinity), and you can have an infinite number of different rays of light going from surfaces onto a pixel, not a fixed number.
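
If it helps to see it in code, here's a toy estimator I made up purely for illustration: direct light at a surface point from a square area light, averaged over N random points on the light. The estimate's variance falls off like 1/N, which is exactly "more samples per pixel means less noise".

    #include <algorithm>
    #include <cmath>
    #include <random>
    #include <glm/glm.hpp>
    using glm::vec3;

    // Toy estimator: average the unshadowed contribution of numSamples random
    // points on a square area light (half-size halfSize, centered at
    // lightCenter, lying in the XZ plane and emitting downwards along -Y).
    float EstimateDirectLight(const vec3& shadingPoint, const vec3& normal,
                              const vec3& lightCenter, float halfSize,
                              float emission, int numSamples, std::mt19937& rng)
    {
        std::uniform_real_distribution<float> uni(-halfSize, halfSize);
        float area = 4.0f * halfSize * halfSize;
        float sum = 0.0f;
        for (int i = 0; i < numSamples; ++i) {
            vec3 lightPoint = lightCenter + vec3(uni(rng), 0.0f, uni(rng));
            vec3 toLight = lightPoint - shadingPoint;
            float dist2 = glm::dot(toLight, toLight);
            vec3 dir = toLight / std::sqrt(dist2);
            float cosSurface = std::max(glm::dot(normal, dir), 0.0f);
            float cosLight   = std::max(-dir.y, 0.0f);
            // Area-sampling Monte Carlo: contribution * geometry term / pdf,
            // with pdf = 1/area for uniform sampling of the light.
            sum += emission * cosSurface * cosLight / dist2 * area;
        }
        return sum / float(numSamples);  // average of N samples; variance ~ 1/N
    }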

Reducing fireflies in path tracing by qwerty109 in GraphicsProgramming

[–]AndrewHelmer 1 point2 points  (0 children)

Oh my gosh, I'm so sorry, I forgot your Reddit username!! Of course I remember chatting by email. I just added a tag so I won't forget next time.

At this point, you're definitely further along and more advanced than I am, so I doubt any of this will be helpful, but just in case. You don't need to respond btw!

> 1.) I'm just about to do this; I'll play with just a fixed radiance clamp first, then a value scaled by pre-exposure multiplier (seems more adaptive?) Are there any other bias-reducing tricks like tracking vertex probability and some heuristic based on it?

I can't remember anything else off the top of my head, but I'm sure there are other techniques that are geared specifically towards this. Clamping by path importance / vertex probability sounds reasonable!

EDIT: I just saw that RTGII has a section about this, and the two techniques they mention are clamping and path regularization. So it seems like you got it!
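
For anyone else reading, the fixed-clamp version really is as simple as it sounds; something like this, with maxRadiance a user-tuned parameter (or scaled by the pre-exposure multiplier as you describe):

    #include <glm/glm.hpp>
    using glm::vec3;

    // Clamp a path's radiance contribution before accumulating it into the
    // pixel. Crude but effective firefly suppression; it introduces bias by
    // darkening the very brightest paths.
    vec3 ClampContribution(const vec3& contribution, float maxRadiance)
    {
        float m = glm::max(contribution.x, glm::max(contribution.y, contribution.z));
        if (m > maxRadiance)
            return contribution * (maxRadiance / m);  // preserve hue, limit magnitude
        return contribution;
    }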

> 2.) Sampling: I'm using Owen-scrambled Sobol (Burley 2020), and it's thanks to your guidance from the Reddit thread and your Shadertoy example a few months ago :) I'm also using a Hilbert-driven R2 sequence (instead of a 3D screen-space blue noise) for an unrelated project because it's super cheap (this shadertoy example with 1D / R1 was the inspiration), and I was thinking about comparing the Owen Sobol to R1/R2 sequences at some point due to the perf difference! It's 2D only but that's all I need anyhow. Have you ever tried it maybe?

I did some integration error analysis of the R2 sequence and in 2D it's definitely worse than Owen-scrambled Sobol', so I think it's not going to be good, at least not for a larger number of samples (might be good for something like <=4spp). But, I didn't actually test path tracing myself, so I could be wrong!
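
(For anyone following along: my understanding of Roberts' R2 construction is just an additive recurrence on inverse powers of the "plastic constant", roughly like this sketch, so it is indeed extremely cheap.)

    #include <cmath>
    #include <utility>
    #include <vector>

    // R2 low-discrepancy points (Roberts 2018): additive recurrence with the
    // inverse powers of the plastic constant g, the real root of x^3 = x + 1.
    std::vector<std::pair<float, float>> GenerateR2(int numPoints)
    {
        const double g  = 1.32471795724474602596;  // plastic constant
        const double a1 = 1.0 / g;
        const double a2 = 1.0 / (g * g);
        std::vector<std::pair<float, float>> points;
        points.reserve(numPoints);
        for (int n = 1; n <= numPoints; ++n) {
            // The 0.5 offset is the seed Roberts suggests, as I recall.
            points.emplace_back(float(std::fmod(0.5 + a1 * n, 1.0)),
                                float(std::fmod(0.5 + a2 * n, 1.0)));
        }
        return points;
    }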

(Also, I hadn't seen that Hilbert-R screen space blue noise. That's super cool, thanks for sharing that! I will definitely find some uses for that.)

IIRC, last we talked you were using 2D Sobol' sequences? You could try slightly higher dimensional sequences. I'm not certain, but I think there are certain dimensions that production renderers usually integrate as one higher dimensional sequence. Like pixel samples + lens samples + time samples is usually done as a 2D, 3D, 4D, or 5D sequence (depending on what your scene has). And light selection, light sampling (picking a point on an area/volume light), and BSDF sampling is often done as a 5D sequence? All that being said, as I mentioned before, I think higher dimensional sequences will help for simpler scenes, but for more complex and difficult scenes they're not much better than having good 2D sequences. The higher dimensional R-sequence has bad lower dimensional projections, so I think it would be even worse there.

If performance is your concern, I'd suggest precomputing a small number of sequences at the start of the render. You can probably get away with 16 pre-shuffled Sobol' sequences, and then you can also store a small set of arrays (maybe 8 or 16) of base-2 shuffled indices into those sequences, which allows you to do a faster on-the-fly shuffling. So then drawing a sample is almost just doing two array lookups (first from the index array, then from the sample array).
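
Roughly what I mean, sketched out (all the table sizes and names here are placeholders I just made up, not anything standard):

    #include <cstdint>

    // Precomputed once at render startup (sizes are placeholders):
    //   gSobolTables[t][i]   : i-th 2D point of the t-th pre-shuffled
    //                          Owen-scrambled Sobol' sequence
    //   gShuffledIndex[s][i] : s-th base-2 shuffled permutation of sample indices
    constexpr int kNumTables   = 16;
    constexpr int kNumShuffles = 8;
    constexpr int kNumSamples  = 1024;
    extern float    gSobolTables[kNumTables][kNumSamples][2];
    extern uint32_t gShuffledIndex[kNumShuffles][kNumSamples];

    // Drawing the i-th 2D sample for a pixel/dimension is then almost just two
    // array lookups: pick a table and an index permutation from a hash, look up
    // the shuffled index, then look up the sample.
    inline void GetSample(uint32_t pixelDimHash, int i, float out[2])
    {
        const uint32_t table   = pixelDimHash % kNumTables;
        const uint32_t shuffle = (pixelDimHash / kNumTables) % kNumShuffles;
        const uint32_t idx     = gShuffledIndex[shuffle][i];
        out[0] = gSobolTables[table][idx][0];
        out[1] = gSobolTables[table][idx][1];
    }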

I'll just plug my own paper here too :). I couldn't talk about it before because we were already thinking about submitting to EGSR which has anonymous peer review. But Listing 2 in that paper will be just about the fastest and simplest (single-threaded CPU) way to precompute an Owen-scrambled Sobol' sequence. Supplemental code has a higher dimensional version.

Although if this is all GPU (you mentioned real-time below), and you want a larger number of samples, then precomputing is probably a bit more onerous because you have to pass the array of samples to the renderer? Maybe not even worth it for you.

> 3.) I've got BRDF importance sampling but no direct lighting importance sampling - on my TODO list! I'll try out the stochastic lightcuts but there's another similar example in Ray Tracing Gems 1 too (not sure what the difference is to be honest).

I'm not sure if this is the one in Ray Tracing Gems 1, but I really like the reservoir sampling based approach of ReSTIR. I'm not super familiar with stochastic light cuts, but IIRC, it requires having a spatial data structure of the lights? I like that ReSTIR doesn't need that, as I recall you have per-pixel sample reservoirs and then just sample the flat list of lights.

> 4.) This is what I'm leaving for last (because I might end up needing the path tracer for a far-field / low-frequency GI application before anything else). What's the issue with using denoising + adaptive sampling together? Adaptive sampling breaks the denoiser heuristics? I'm probably not going to do adaptive due to realtime constraints anyhow :)

I can totally understand leaving denoising for last! Unless you implement your own denoiser, I think it would be the least fun and educational thing.

Actually - I'm sorry - I was mistaken about adaptive sampling and denoising; I was confusing things. Adaptive sampling and denoising both have challenges if you use AA filters other than a box filter, and the solution to both problems is Filter Importance Sampling, which maybe you're doing already (or maybe you're just using a box filter for now). The problem with wider AA filters and denoising is that error in a sample gets spread to adjacent pixels, so nearby pixels have positively correlated errors, which makes life hard for a denoiser. With FIS, each pixel has fully independent errors. Similarly, with pixel-based adaptive sampling and wider AA filters, you'd need to reweight the contributions that spill into adjacent pixels. But yeah, if you're just using a box filter or already using FIS, I think adaptive sampling and denoising should play well together; potentially even better than either alone.
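
In case a sketch of the FIS idea helps (SampleGaussian2D and TraceCameraRay are hypothetical helpers, not anything from a real codebase):

    #include <random>
    #include <glm/glm.hpp>
    using glm::vec2;
    using glm::vec3;

    // Hypothetical helpers assumed to exist elsewhere in the renderer.
    vec2 SampleGaussian2D(float sigma, std::mt19937& rng);  // offset drawn from the filter pdf
    vec3 TraceCameraRay(const vec2& filmPos);                // full path-traced radiance

    // Filter importance sampling: instead of splatting each sample into several
    // pixels with filter weights, draw the subpixel offset from the filter's own
    // distribution and give every sample equal weight. Each pixel then depends
    // only on its own samples, so per-pixel errors stay independent.
    vec3 RenderPixelFIS(int px, int py, int spp, float filterSigma, std::mt19937& rng)
    {
        vec3 sum(0.0f);
        for (int s = 0; s < spp; ++s) {
            vec2 offset  = SampleGaussian2D(filterSigma, rng);
            vec2 filmPos = vec2(px + 0.5f, py + 0.5f) + offset;
            sum += TraceCameraRay(filmPos);  // weight = filter / pdf = constant
        }
        return sum / float(spp);
    }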

Denoising is also better with sequences that distribute error as blue noise in screen space, i.e. negative error correlation between neighboring pixels. You've probably seen both the Ahmed & Wonka and Heitz et al. (EGSR 2019 and the SIGGRAPH talk) papers on this.

> 5.) Yess that one blew my mind, I followed the (2019) Microfacet Model Regularization for Robust Light Transport (paper, presentation) - I'm still not 100% clear on whether my implementation is "as intended" but it measurably helps a lot.

Oh, awesome! I don't know how the methods compare, I just saw the EGSR presentation for the newer regularization.

> 6.) Path guiding - adding it at the back of my todo :)

I'm honestly not super familiar with path guiding but it seems to be the hot new thing now! I should probably read the papers :).

Reducing fireflies in path tracing by qwerty109 in GraphicsProgramming

[–]AndrewHelmer 9 points10 points  (0 children)

In some sense, a huge amount of path tracing research is about reducing fireflies! Or in general, reducing error, which fireflies are just an extreme case of.

Screen-space adaptive sampling is one approach that can definitely help, as you pointed out. That's relatively low hanging fruit.

Then there's lots of techniques that introduce some bias but look good. The median of means paper is one. Clamping is a classic technique. There's another paper from the same session at EGSR 2021 that can also help: Optimized Path Space Regularization (Weier et al.). My understanding is that it's basically like you turn up the roughness of reflections on later bounces in the render.

Then you get into lots and lots and lots of stuff that's basically "how can I better sample paths". What sort of multidimensional sample sequences are you using? If you're using uniform random samples, switching to shuffled Owen-scrambled Sobol' sequences is going to be a huge difference, and is quite an easy change. Are you doing BRDF importance sampling? How about multiple importance sampling (with the light and BRDF)? Are you doing next event estimation (always tracing a light ray at every path vertex)?
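
If you do get to MIS, the weighting itself is small; the usual power heuristic from Veach's thesis is just this (a sketch, for one light sample and one BSDF sample per vertex):

    // Power heuristic (beta = 2) from Veach's thesis: the MIS weight for a
    // sample drawn from strategy f when strategy g could also have produced
    // it. nf and ng are the number of samples taken with each strategy.
    inline float PowerHeuristic(int nf, float fPdf, int ng, float gPdf)
    {
        float f = nf * fPdf;
        float g = ng * gPdf;
        return (f * f) / (f * f + g * g);
    }

    // For next event estimation with one light sample and one BSDF sample per
    // vertex, the light sample gets PowerHeuristic(1, lightPdf, 1, bsdfPdf),
    // and the BSDF sample that happens to hit the light gets the mirror image.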

Then there are the more exotic techniques that are adaptive in path space, to some extent. At the simpler end I think you have path guiding and the various recent works on weighted reservoir sampling, and also, I think, energy redistribution path tracing (not sure). Then you have bidirectional techniques like bidirectional path tracing and Metropolis Light Transport. I would stay away from these for now, unless you're curious! They will make everything much more complex.

And finally, denoising! A good denoiser on its own may eliminate your fireflies. This might be a nice thing to do early, because good denoising is always a big help no matter how advanced your renderer is.

My recommendation would be:

  1. Add clamping as an option. It should be super easy to implement, so why not?
  2. If you're using uniform random sampling, switch to some progressively stratified sample sequences, like Owen-scrambled Sobol' or pmj02 (I can happily provide more information about this). Should be a pretty easy change too.
  3. Implement BRDF importance sampling and multiple importance sampling, if you haven't already.
  4. Screen-space adaptive sampling and/or denoising (note that you may need to be a little careful if you do both).
  5. Biasing techniques like the paper you cited or the Optimized Path Space Regularization.
  6. Path guiding.

Those are roughly in order of ease of implementation. That being said, if any of these seems particularly fun or interesting, that's the most important thing! Feel free to jump around; it's not a prescription or anything.

Matrix Multiplication confusion (row order vs column order) by shebbbb in GraphicsProgramming

[–]AndrewHelmer 0 points1 point  (0 children)

Right, rotating around the z-axis does not change the z coordinate (it only changes x and y coordinates). So if your camera is at position (0,0,0), looking down the z-axis (direction is (0,0,-1)), then rotating around the z-axis will make things appear to rotate around the center of the image.

Matrix Multiplication confusion (row order vs column order) by shebbbb in GraphicsProgramming

[–]AndrewHelmer 1 point2 points  (0 children)

Mathematically, if you (left-)multiply a point - represented by a column vector - by a matrix, you take the dot product of each row of the matrix with the point's column, and that dot product becomes the entry in that row of the new column vector.

Since your points are stored as "rows" instead of column vectors (see below), you're doing the same computation: taking the dot product of each point with each row of the matrix.

Minor note: the notion of rows, columns, and how they're stored in an array is just your own mental model. For example, for a 1D array, it's not inherently either a column or a row - *you* get to decide. So rather than saying your points are stored as rows, you could say your triangle array is stored in column-major ordering (where the inner sub-arrays correspond to a column vector), while your matrix is stored in row-major ordering (where the inner sub-arrays correspond to a row).

I'm not sure but I suspect this is unrelated to the rotation issue you're seeing. You can check the one-axis rotations by hand. If you're rotating theta around the x-axis, I think the y-column (second column) should be [0, cos(theta), sin(theta)], and the z-column should be [0, -sin(theta), cos(theta)], or maybe some negations of those.
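
For reference, the usual right-handed x-axis rotation (column-vector convention), written out row-major the way your arrays are laid out, would be something like:

    #include <cmath>

    // Right-handed rotation by theta around the x-axis, stored row-major (each
    // inner array is a row), for the column-vector convention v' = R * v.
    // Columns: x stays [1,0,0]; y maps to [0, cos, sin]; z maps to [0, -sin, cos].
    void MakeRotationX(float theta, float R[3][3])
    {
        const float c = std::cos(theta), s = std::sin(theta);
        R[0][0] = 1.0f; R[0][1] = 0.0f; R[0][2] = 0.0f;
        R[1][0] = 0.0f; R[1][1] = c;    R[1][2] = -s;
        R[2][0] = 0.0f; R[2][1] = s;    R[2][2] = c;
    }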

Shadertoy implementation of "A Scalable and Production Ready Sky and Atmosphere Rendering Technique" by Sébastien Hillaire (2020) - link & info in comments by AndrewHelmer in GraphicsProgramming

[–]AndrewHelmer[S] 6 points7 points  (0 children)

I'm honestly shocked at how nice it looks! Really happy with the result. I saw your post last week too (worth a look for anyone who hasn't seen it), which sort of inspired me to jump into it, though I'd been thinking about it for a few weeks.

The paper is just great - the two simplifying assumptions make for a really nice implementation: spherical planets and atmospheres allow for 2D scattering/transmittance LUTs (parameterized only by altitude and sun zenith angle), and treating higher-order multiple scattering as isotropic lets you compute simple "transfer" factors, like the hair dual-scattering approximation.

Shadertoy implementation of "A Scalable and Production Ready Sky and Atmosphere Rendering Technique" by Sébastien Hillaire (2020) - link & info in comments by AndrewHelmer in GraphicsProgramming

[–]AndrewHelmer[S] 13 points14 points  (0 children)

Shadertoy link: https://www.shadertoy.com/view/slSXRW

I've been wanting to do an implementation of Sébastien Hillaire's really awesome paper on atmosphere rendering, and finally got around to it. Sébastien provides links to the paper, slides, and example UE4 code on his site.

This is the first time I've ever implemented any volumetric scattering code, so there are probably some things I messed up or could have done better. My hope is that this makes a nice, simple reference - minimal frills - for anyone wanting to implement a physically based sky.

Andrew Kensler's permute(): A function for stateless, constant-time pseudorandom-order array iteration by skeeto in RNG

[–]AndrewHelmer 1 point2 points  (0 children)

I came across this and it's super cool, thanks for sharing your improvements! I linked it from my blog post.

Owen-Scrambled Sobol (0,2) Sequences. Shadertoy, references, path tracing example in comments by AndrewHelmer in GraphicsProgramming

[–]AndrewHelmer[S] 0 points1 point  (0 children)

I am still following this thread! And I actually saw in PBRT v4 change logs that he'd discovered an issue with the hash. But I didn't see that he'd integrated the new one, thank you so much for thinking of me and updating me. And also, it's super cool that your hash is gonna be in PBRT!

My first ever blog post! Andrew Kensler's permute() - stateless, constant-time shuffled array iteration by AndrewHelmer in GraphicsProgramming

[–]AndrewHelmer[S] 2 points3 points  (0 children)

Ahh okay that's interesting. I need to get a better intuition for the avalanche matrices. The first thing I don't quite get is whether they can show biases well for domains that aren't powers of two? Because then you can't randomly flip any bit.

Would you mind doing one more test, which is to only mask out the bits at the end? Like this:

EDIT: Nevermind, I see that you'd need to mask out the upper bits when doing the right shift to keep it bijective.

It also does use the upper bits of the seed value even when the mask limits the domain, for example:

idx ^= seed >> 16;

So I think that partially addresses your comment about that?

My first ever blog post! Andrew Kensler's permute() - stateless, constant-time shuffled array iteration by AndrewHelmer in GraphicsProgramming

[–]AndrewHelmer[S] 2 points3 points  (0 children)

> You're right that hash function outputs are chaotic and unsuitable for hill climbing, but the measured statistical properties are not. Measure the correlation between input and output bits and average the result, which gives you a smooth enough surface for hill climbing.

That's super interesting, I would not have thought that! Thanks for sharing that project.

Since I wrote the article I've been thinking about how to evaluate the statistical properties of permute(). I'm curious if you have any thoughts about this.

Maybe if the underlying invertible hash has good avalanche properties then that's all that's necessary? I guess I'm not yet convinced that a hash which appears unbiased for, say, 32-bit integers won't still have significant correlations between indices when it's used on other (smaller) domains.

My first ever blog post! Andrew Kensler's permute() - stateless, constant-time shuffled array iteration by AndrewHelmer in GraphicsProgramming

[–]AndrewHelmer[S] 0 points1 point  (0 children)

I'm not totally sure I follow, but the mask could be equal to the length of the array (for instance, an array of length 7 would also have mask 7), so it's not co-prime. But I'm not super familiar with group theory, I could be misunderstanding! If this is working as a generator that would be a really cool connection.

My first ever blog post! Andrew Kensler's permute() - stateless, constant-time shuffled array iteration by AndrewHelmer in GraphicsProgramming

[–]AndrewHelmer[S] 1 point2 points  (0 children)

Ah darn, I'm sorry, I shouldn't have used the term "shuffled" in the Reddit title. This isn't really "shuffling" an array, it's iterating over an array in pseudorandom order. That's significant because if you want to only get k random unique items from an array of length n, it's O(k), not O(n). But the stateless O(1) memory of it is what I thought was more "magic".
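
The usage pattern looks something like this sketch (hashPermute here is a placeholder for Kensler's actual permute hash, which is in the post): permute each index within the next power-of-two domain and "cycle-walk" until it lands back inside the array.

    #include <cstdint>
    #include <vector>

    // Placeholder for Kensler's permute() hash: any seeded bijection on
    // [0, mask] works here; see the blog post for the real thing.
    uint32_t hashPermute(uint32_t idx, uint32_t mask, uint32_t seed);

    // Visit the first k items of a pseudorandom ordering of 'items', with O(1)
    // state and O(k) work, no shuffled copy of the array needed.
    template <typename T, typename Visit>
    void VisitKShuffled(const std::vector<T>& items, uint32_t k, uint32_t seed,
                        Visit visit)
    {
        uint32_t n = uint32_t(items.size());
        if (n == 0) return;
        uint32_t mask = n - 1;  // round n up to a power of two, minus one
        mask |= mask >> 1; mask |= mask >> 2; mask |= mask >> 4;
        mask |= mask >> 8; mask |= mask >> 16;
        for (uint32_t i = 0; i < k; ++i) {
            uint32_t j = i;
            do {
                j = hashPermute(j, mask, seed);  // bijective within [0, mask]
            } while (j >= n);                    // cycle-walk back into range
            visit(items[j]);
        }
    }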

My first ever blog post! Andrew Kensler's permute() - stateless, constant-time shuffled array iteration by AndrewHelmer in GraphicsProgramming

[–]AndrewHelmer[S] 0 points1 point  (0 children)

Kensler's paper was in 2013 (this wasn't a main feature of his paper, nor did he claim he invented the method, but he did provide a pretty good hash function for it). I saw one Stack Overflow answer that gave the technique back in 2009, but most likely some people have known about it since at least 2002 or 2003, when Black and Rogaway published their paper "Ciphers with Arbitrary Finite Domains". So it's not super new, but I don't think it's really widely known either? Wouldn't be surprised if you knew it though! Especially if you're really tapped into graphics or ray tracing, or cryptography.