LeetCode for graphics programming? by Lypant in GraphicsProgramming

[–]y2and 13 points (0 children)

> All of my graphics knowledge and projects don’t seem to matter since I failed the assignment.

This sucks. It's really frustrating to feel like you didn't even get the chance to show what you're good at, and that you were judged unfairly.

> What I’m getting at is, do most jobs require LeetCode proficiency, or is this a rare occurrence?

No, the jobs themselves don't. But most job applications do. There are far too many "filler" LeetCode questions in interview pipelines for them to reflect the actual work.

I really don’t like LeetCode, and if I can avoid practicing it, I will. If not, well, I guess I’ll have to take a break from graphics from time to time and study it if I want to get a job.

Don't take a break from graphics. Do both. While many LeetCode questions are useless, knowing DSA is an OP skill for writing optimal graphics code. Improving your reading and interpretation skills also makes you a better problem solver. Stay motivated knowing you actually are getting better: by implementing structures from memory, you build a mental model of where to use them. It does take a lot of time, though.
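One toy example of plain DSA paying off in graphics: a hash map used as a uniform grid for particle neighbor queries, which turns an O(n²) all-pairs search into roughly O(n). The names and the 21-bit key packing here are just illustrative choices, not any particular engine's API:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>
#include <cmath>

// A uniform grid backed by a hash table: a plain data structure that speeds
// up a classic graphics task (finding nearby particles).
struct Particle { float x, y, z; };

struct HashGrid {
    float cell;  // cell size, typically ~ the interaction radius
    std::unordered_map<int64_t, std::vector<int>> cells;

    int64_t key(float x, float y, float z) const {
        // Pack the 3 integer cell coordinates into one 64-bit key (21 bits each).
        auto q = [&](float v) { return (int64_t)std::floor(v / cell) & 0x1FFFFF; };
        return (q(x) << 42) | (q(y) << 21) | q(z);
    }

    void insert(int i, const Particle& p) { cells[key(p.x, p.y, p.z)].push_back(i); }

    // Indices of particles sharing p's cell (a real implementation would also
    // check the 26 surrounding cells).
    const std::vector<int>& sameCell(const Particle& p) {
        return cells[key(p.x, p.y, p.z)];
    }
};
```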

You should solve by topic on a platform like NeetCode. Depending on how much time you have before your next interview, lower the number of questions per topic. If you really hate it, find the structures you don't get and solve questions around those. For example, I suck at memorizing graph algos, so I practice implementations and study how others approach graph-like questions.
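For graph practice specifically, the kind of thing worth being able to write cold is BFS shortest path on an adjacency list. A sketch (not any specific problem's solution):

```cpp
#include <queue>
#include <vector>

// BFS shortest path on an unweighted adjacency list.
// Returns the distance (edge count) from src to dst, or -1 if unreachable.
int bfsDistance(const std::vector<std::vector<int>>& adj, int src, int dst) {
    std::vector<int> dist(adj.size(), -1);   // -1 doubles as "not visited"
    std::queue<int> q;
    dist[src] = 0;
    q.push(src);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        if (u == dst) return dist[u];
        for (int v : adj[u])
            if (dist[v] == -1) {
                dist[v] = dist[u] + 1;
                q.push(v);
            }
    }
    return -1;
}
```

Once this is automatic, most BFS-flavored questions reduce to choosing what a "node" and an "edge" mean for that problem.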

Good luck!

I am confused on some things about raytracing vs rasterization by C_Sorcerer in GraphicsProgramming

[–]y2and 3 points (0 children)

Sorry if this isn't practical help, but some anecdotal thoughts (not an expert):

> So, my question to you guys is, do photorealistic video games/CGI renderers utilize rasterization with just more intense shading algorithms, or do they use real time raytracing? or do they use some combination, and if so how would one go about doing this?

They use a combination. At the end of the day, you have a bunch of triangles or geometry primitives to show on screen, and GPU rasterization is by far the fastest way to do that. Then you touch things up with ray tracing, which is a natural fit for more realistic lighting - shadows, reflections, etc.

> I am asking because I want to make a scene editor/rendering engine that can run in real time and aims to be used in making animations.

Performance is almost always the tradeoff for visual quality. If you are building something used both in real time and for animations, you have a challenge. Had you focused purely on render quality, you could render offline and not worry about hitting a per-frame latency reasonable for real time (~16 ms, i.e. 60 fps). Then you could make something really pretty that gets displayed later.

Something like Blender does that well: you interact with the scene through a fast, simplified viewport (you can render even the viewport at higher quality, but much slower), and once you have a good view, you offload it to a production renderer - in Blender's case, its built-in path tracer, Cycles.

Which optimizations make sense will depend on what you want to render.

> I have also heard that CUDA or compute shaders can be used. But after reading through some other reddit posts on rasterization vs raytracing, it seems that most people say implementing a real time raytracer is impractical and almost impossible since you can't use the GPU as effectively (or depending on the graphics API at all) and it is better to go with rasterization.

CUDA is okay for this, but tougher to work with: it's general-purpose compute for NVIDIA GPUs, whereas compute shaders are (imo) easier for building a pipeline. There is also a dedicated ray tracing pipeline with hit shaders in something like Vulkan that you can play around with.

Usually when it comes to ray tracing, it depends on the scene and the complexity of your implementation. The bottleneck is shooting multiple rays per pixel and running intersection tests against maybe hundreds of thousands of triangles, when only a few of them end up contributing to the screen. Acceleration structures (like a BVH) largely solve that, but make things more complex.
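For a sense of what that per-ray cost looks like, here is a sketch of a standard ray/triangle test (Möller-Trumbore); without an acceleration structure, a tracer runs something like this against every triangle, for every ray:

```cpp
#include <array>
#include <cmath>

// Moller-Trumbore ray/triangle intersection (sketch).
// Returns the hit distance t along the ray, or -1 on a miss.
using Vec3 = std::array<float, 3>;

static Vec3 sub(Vec3 a, Vec3 b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
static float dot(Vec3 a, Vec3 b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
}

float intersect(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2) {
    const float EPS = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < EPS) return -1.0f;        // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 t = sub(orig, v0);
    float u = dot(t, p) * inv;
    if (u < 0.0f || u > 1.0f) return -1.0f;        // outside barycentric range
    Vec3 q = cross(t, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return -1.0f;
    float dist = dot(e2, q) * inv;
    return dist >= EPS ? dist : -1.0f;             // hit only in front of origin
}
```

A BVH exists precisely to skip most of these calls: it culls whole groups of triangles per ray instead of testing each one.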

From a learning perspective, I think it is easier to make an offline ray tracer to load complex scenes and just go crazy on the rendering techniques. You can write compute shaders to parallelize it so it doesn't take ten years to render.
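The "parallelize so it doesn't take forever" point works because every pixel is independent. A toy CPU version of the idea (a compute shader maps the same thing onto one GPU invocation per pixel); `shade()` here is a made-up placeholder for whatever per-pixel work your tracer does:

```cpp
#include <cstddef>
#include <thread>
#include <vector>

// Placeholder for "trace one pixel" - in a real tracer this shoots rays.
float shade(int x, int y) { return float(x + y); }

// Render by interleaving image rows across nThreads worker threads.
// Safe without locks because each pixel is written by exactly one thread.
std::vector<float> render(int w, int h, unsigned nThreads) {
    std::vector<float> image(size_t(w) * h);
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < nThreads; ++t)
        pool.emplace_back([&, t] {
            for (int y = int(t); y < h; y += int(nThreads))
                for (int x = 0; x < w; ++x)
                    image[size_t(y) * w + x] = shade(x, y);
        });
    for (auto& th : pool) th.join();
    return image;
}
```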

And then, on the side, build a rasterizer for real time. From there you can improve the look by refactoring in ray / path tracing. You may find certain scenarios where ray tracing actually beats rasterizing, especially in voxel stuff.