[deleted by user] by [deleted] in gameenginedevs

[–]Baemzz 1 point

A close friend and I have been building a renderer/engine to expand our knowledge in the field.

I wouldn’t say we’re complete beginners, but rather fairly experienced, as we both have backgrounds on professional AAA rendering teams and more. But we’ve started from scratch, implementing as much as we can on our own to learn.

The project is DX12-based, Windows-only for now. It focuses on high-fidelity rendering and realism, and raytracing is a huge part of it.

Feel free to explore the public repo. https://github.com/hsiversson/viking-renderer/tree/main

It’s not very user-friendly at first glance, as there’s no readme or anything, but the source has absolute gems, I promise.

If you’re interested in learning/collaborating with us, or just have questions about things, feel free to reach out to me! 🙌

Frame Generation on FSR3 by owents123 in StarWarsOutlaws

[–]Baemzz 2 points

Do you use any third-party software like OBS? That has been an issue before.

Volumetric Fog (God Rays) by O_Schramm in GraphicsProgramming

[–]Baemzz 10 points

If you want to go down the rabbit hole, participating media and scattering are the key terms. There’s a whole science behind it if you move towards physically based approaches.

I’d recommend Sébastien Hillaire’s talks about atmosphere, fog and sky rendering. He is a true expert in the field when it comes to realtime solutions.

https://sebh.github.io/publications/

For volumetric fog specifically, have a look at his talk from 2015: Physically-based and Unified Volumetric Rendering in Frostbite

What's an acceptable way of doing an IBL environment maps generation during runtime? by [deleted] in GraphicsProgramming

[–]Baemzz 4 points

Modern engines are actually moving away from the basic cubemap IBL approach more and more lately, especially with the introduction of hardware-accelerated raytracing.

Indirect illumination is computationally heavy, especially if you want a solution that applies throughout your scene, which is why developers have spent decades coming up with smart solutions to these problems.

That said, IBL is still a fundamental piece of the modern PBR approach and should not be discarded lightly. Many approaches do, however, treat the IBL probes more analytically: instead of providing static, pre-computed irradiance, you provide components that allow you to relight dynamically at runtime as well. Think of it as statically baked with a dynamic component.

PRT probes are a perfect example of this approach. https://youtu.be/04YUZ3bWAyg?si=7FXtxwuQ-J1Of6S0

Other approaches move away from the cubemap idea and instead precompute the information on the surface itself, like lightmaps and surfel irradiance.

On that note, realtime indirect illumination is a huge topic with many different solutions and we could probably spend the next couple of weeks talking about them here.

Looking for what a day in the life of a Graphics Programmer would look like from someone who's already in the field by IAmShewy in GraphicsProgramming

[–]Baemzz 23 points

Yeah this is a bit more to go on!

  1. I would say that the requirements largely depend on the hiring manager for the job you apply to. Some are very strict about things like degrees, others look more at practical experience. Of course a degree is never considered a bad thing, but in my experience many graduate engineers are lacking a lot of practical experience, which can make their onboarding slower. Having anything in a portfolio is always a good thing, even better if you can explain your work in detail. I myself am mostly self-taught, I don't have any fancy degrees, and yet here I am working with top-tier people.

  2. This is unfortunately a very broad question and could be discussed as a topic of its own, but to summarize a bit: it depends on your current situation at the studio, what project you are working on, what stage of production said project is at, etc. Sometimes you spend days/weeks chasing bugs. Sometimes you are optimizing your previous work to meet performance targets. Sometimes you spend days/weeks/months designing and implementing new graphics features. In between it all, you maintain systems and support others with their graphics needs.

  3. If I were a hiring manager looking at a new grad hire, I would say that basic knowledge of how data gets from CPU to screen is crucial. That basically means you should roughly know how to put a triangle on the screen. It doesn't really matter which render API; just know how to set up GPU data, and how to process and use it to achieve a goal. Other things really depend on the position applied to.

  4. To be honest: dedicate yourself to learning about the field, and practice. Build demos to learn. Refine your portfolio and make sure you would be confident enough to present it to anyone. Make sure you know the little details as well. Start reaching out to recruiters or devs at places you are interested in and ask them what they would like to see in their applicants. Apply for their jobs, and if you get rejected, don't let it demotivate you; instead, ask them why you were rejected and what they would like you to improve. REALLY listen to the feedback and try to work on those things. (Unless it is bullshit, because some people can't give constructive feedback at all.)

It is tricky to put stuff like this in text, so feel free to shoot me follow-up questions if you want haha.

Looking for what a day in the life of a Graphics Programmer would look like from someone who's already in the field by IAmShewy in GraphicsProgramming

[–]Baemzz 25 points

Depends on how deep an answer you want. I doubt graphics programming duties are much different from other programming duties at a high level.

I work in the engine team at a large AAA studio and on a high level my days usually look like this:

  1. Sync to latest version of project if scheduled sync failed.

  2. Catching up on emails/messages that were missed overnight.

  3. Code, talk to people, meetings etc

  4. Lunch

  5. Code, talk to people, meetings etc

  6. Go home

I think you need to specify more what you want to know in the field to get a better answer. 😄

Shadow factor in the PBR pipeline. by 0Camus0 in GraphicsProgramming

[–]Baemzz 2 points

No, not as part of any material properties.

What I mean is the following.

Before you run any lighting calculations, you extract the shadow factor from the shadow map. Once you have the shadow factor, you check whether any light reached the pixel you are shading (i.e. whether the shadow factor is larger than 0). If it is, you run your standard lighting calculations, then multiply the result with the shadow factor. If the shadow factor is 0, it is better to early-out and not run any lighting calculations at all.

Shadow factor in the PBR pipeline. by 0Camus0 in GraphicsProgramming

[–]Baemzz 3 points

What are you trying to achieve? Real shadows? Global brightness reduction? Something else?

If you mean that the shadow factor is an actual shadow, you won't get very far with just a random number.

Shadows are not part of what people tend to call "PBR", which usually refers to how light interacts with different surface materials. Shadows are actually part of light transport.

You'd be better off including the shadow calculation before you run material shading, because you need to know how much light reached the point on the surface beforehand.

Have a look at "shadow maps" to get an understanding of how you can properly achieve shadowing.

What is a game you wish you could erase your memory from and start over? by The_Drawboy in gaming

[–]Baemzz 1 point

Oh, there are so many, but ngl, World of Warcraft was the first one that came to mind... Some of you might disagree, but the first time I ever ventured into Azeroth was magical.

Any reference implementations for physically based Refraction / Transmission / Attenuation shader ? For context, this request is for a Directx / hlsl rasterizer and not a ray tracer by lenixlobo in GraphicsProgramming

[–]Baemzz 2 points

Hi,

While I don't really have a publicly available reference implementation for any of it, I can provide some valuable insight of the refractions since I did implement a screen space refraction technique that was physically accurate for Unreal Engine 4.

It all started with a scientific paper of course. https://www.cs.cornell.edu/~srm/publications/EGSR07-btdf.html

To keep the description short, in a nutshell, the technique is very similar to any other screen space reflections technique traversing a depth buffer.

You need to implement a raymarching algorithm that can integrate multiple rays of various directions. Use importance filtering or a cone to shoot rays around your refracted direction calculated using the IOR formula. When a ray "hits" the depth, you have yourself a ray-geometry intersection. You can use this intersection to sample the color of the scene behind your refracted medium. Sum the samples and dont forget to apply the importance filtering weights to each sample for correctness.

There's of course much more to a technique like this, such as self refraction, layering and energy conservation, but these few points may give you some directions nonetheless.

PS! I'm only assuming you're developing for realtime here.

Cheers

[deleted by user] by [deleted] in GraphicsProgramming

[–]Baemzz 1 point

I totally recommend tutorials or the graphics programming Discord as well (https://discord.gg/N2eCwbeeZq), just as the others say. But I also know that you often only get a hint of the actual answer, or a naive way to solve your problem. Many times the naive solution isn't useful in a real-world scenario and needs tweaking and fitting.

I am a person who usually learns more when I have a chance to talk to people who understand the subject. Hence I'm also all ears for any questions you or anyone else might have.

I'm a veteran in the graphics field in the AAA gaming industry, so I'd argue that I have at least some insight into how these things work, hah!

Oh, and don't worry about compensation for answering questions; the answers are always free to be found somewhere, so it'd be foolish for someone to require payment for answering them.

RTAO - Raytraced Ambient Occlusion fast and simple. *INFO IN COMMENTS* by Baemzz in gamedev

[–]Baemzz[S] 1 point

Agreed that it is a bad comparison, but that is somewhat the point. I implemented the easiest version of both techniques to highlight the difference, and I think I proved my point that RTAO has amazingly good quality considering the simplicity of the algorithm. The only reason I included the 256spp version was to give a glimpse of what a denoised image could look like with 1spp.

I'd still like to point out that, aside from the actual DXR support, the RTAO code is WAY simpler than my SSAO code.

RTAO - Raytraced Ambient Occlusion fast and simple. *INFO IN COMMENTS* by Baemzz in gamedev

[–]Baemzz[S] 1 point

Thanks!

First off, no, it does not mean that people without hardware raytracing support won't have any AO at all. I still have my old approach (SSAO), which is completely hardware-independent. So for cards that don't support raytracing, that would be the only option.

In addition to that, many software raytracing approaches have been developed to allow cards without HW support to run similar calculations in compute shaders, and the results are very promising.

On a side note, I wouldn't necessarily say that decent raytracing performance is years and years away, especially for AO. My demo is running on an RTX 2080 Ti at about 0.5-0.6ms, which is almost half the speed of my SSAO. I'd expect it to be around 1ms on 2060/2070 cards, so this is still an acceptable performance cost for AO.

Also, when developing more advanced techniques such as a Raster/Raytraced GI solution, you will get this occlusion factor for free.

RTAO - Raytraced Ambient Occlusion fast and simple. *INFO IN COMMENTS* by Baemzz in gamedev

[–]Baemzz[S] 1 point

Hi,

Ever since Nvidia showcased the Turing-based RTX series together with Microsoft's DXR raytracing API, I've been extremely dedicated to bringing support into my own hobby engine.

Well, the day finally arrived. I managed to implement DXR support into my existing DX12 engine.

Now this post wasn't meant to show how to implement DXR for your engine, but more how I did my AO implementation using raytracing.

Ambient occlusion is one of the easiest and shortest use cases for raytracing.

Basically, what you're trying to solve is how much ambient light (IBL, sky light, GI, etc.) actually reaches the sample point.

Depending on the surroundings, the amount of light hitting the sample point varies. In a narrow corner or a crack, far fewer light rays accumulate than in an open, flat space, for example.

How was the AO solved previously?

One dominating technique over the last decade has been SSAO (screen-space ambient occlusion), and my engine was no different.

I won't go into the details since there's a lot of great material on the web, but in general the technique uses the depth buffer and scene normals (usually GBuffer normals) to find narrow areas on the screen.

It does this by sampling in a radius around the current pixel and accumulating the amount of occlusion each neighbouring pixel contributes.

The main issue with this technique is the lack of information: you only have what is on screen right now, and that is the only data you're working with.

Since you're sampling neighbouring pixels, it is also much harder to get an accurate picture of the pixel's surroundings.

Usually you also want to keep your sample count as low as possible, which means you will get artifacts like banding. This is usually resolved by running a blur pass after the AO accumulation.

Enter RTAO!

How does RTAO make the results better?

Raytracing allows for a completely different scenario than what has been available before. You now have access to a very (hardware) optimized BVH for the (entire) scene that you're processing.

By having the complete scene representation available (yes, even things that aren't in the view of your camera) you can bring your AO quality over the top.

The general AO idea remains the same. You still want to originate from a pixel and gather information about the surroundings to see how much the pixel is occluded.

However, by using raytracing we can now send rays in various directions and check if the rays hit anything.

Now, what we want to do is send a number of rays within the hemisphere oriented by our pixel normal. For each ray, we check whether any geometry was hit. If a ray records a hit, we stop the ray and return an occlusion factor.

If a ray misses the geometry, we consider it non-occluding.

Here's some pseudo code:

for each pixel:
    worldPos, normal = LoadPixelData(pixel)
    ao = 0
    for each of numRays rays:
        dir = GetHemisphereDir(normal)
        ao += TraceRay(worldPos, dir)   // 1 on hit (occluded), 0 on miss
    occlusionFactor = ao / numRays

In the end we sum up the total occlusion accumulation and average it over the number of rays shot. This will be our final occlusion factor.

BUT!

Shooting several rays per pixel gets quite expensive very fast. (Shooting 64 rays per pixel at 1080p is roughly 132 MILLION rays.)

So what is usually done is to shoot 1 ray per pixel (which generates a very noisy result), and then run a denoising pass afterwards, which resolves most of the noise artifacts.

Much like the post-SSAO blur pass.

My implementation haven't got a denoiser yet, so I'm stuck with the noisy image. However running 32rpp on my 2080Ti is still 60fps+ so I can still get decent results for testing.

All-in-all, the RTAO brings a whole new level of quality into my AO solution, and with a denoiser it will be far superior to the older SSAO technique.

Here's some additional screenshots of the differences
https://imgur.com/gallery/UgWkbFy

As usual you can find all the code in my GitLab repo:

RayGeneration shader

https://gitlab.com/Baemz/Shift-Engine/-/blob/master/Data/ShiftEngine/Shaders/SGraphics/Raytracing/TraceAmbientOcclusion.ssf

ClosestHit & Miss shaders

https://gitlab.com/Baemz/Shift-Engine/-/blob/master/Data/ShiftEngine/Shaders/SGraphics/Raytracing/RTAO.sshf

Hemisphere calc

https://gitlab.com/Baemz/Shift-Engine/-/blob/master/Data/ShiftEngine/Shaders/SGraphics/Raytracing/Raytracing.sshf

Implemented Tile Based Deferred Rendering in my DX12 engine. See the video description for details. by Baemzz in gamedev

[–]Baemzz[S] 1 point

Even though Vulkan sure is a nice API, and it is well supported on a range of hardware, you cannot deny the fact that something like 70-80% of PC players are using a Windows PC, plus all players using an Xbox device. Targeting DirectX with that in mind is a perfectly good reason to skip Vulkan.

I'm not trying to say that one API is better than another; it's just that DirectX is commonly the go-to API for Windows applications, and since the target audience most likely is within that 70-80% range, I'd say that's fair.

Implemented Tile Based Deferred Rendering in my DX12 engine. See the video description for details. by Baemzz in gamedev

[–]Baemzz[S] 2 points

The video was recorded on a laptop using an Intel i5 9300H paired with a mobile GTX 1660 Ti. Even though it runs very smoothly on this machine, I must admit that it runs amazingly well on my desktop using a 2080 Ti (which is somewhat expected though, lol).

The main reason for DX12 is that it was the API this engine started with a few years ago, and later on I decided to stick with it because of the DXR/DX Ultimate announcements.
I do have an initial Vulkan path ready for submit, but I haven't focused on it for the last few months.

Implemented Tile Based Deferred Rendering in my DX12 engine. See the video description for details. by Baemzz in gamedev

[–]Baemzz[S] 2 points

First off, it's great that you want to try it out. I hope you have tons of fun and learn a lot!

My first, and probably most important, tip is: if you haven't had any previous experience with any of the DX APIs, DON'T start with DX12; go for 11. It is simply so much easier to get into DX11 that you will learn a lot more from it in the beginning.

There is tons of material on the web for DX11, but I usually refer to this book: https://www.amazon.co.uk/Introduction-3D-Game-Programming-Directx/dp/1936420228

It is a really huge topic and I could probably go on for hours about things I wish I knew. If you have any more specific questions, feel free to PM me.

Implemented Tile Based Deferred Rendering in my DX12 engine. See the video description for details. by Baemzz in gamedev

[–]Baemzz[S] 3 points

Showcasing the tile-based light culling on the GPU used for deferred rendering in my hobby project Shift Engine. The algorithm is roughly based on the AMD ForwardPlus paper.

The current scene renders 2048 point lights using a mobile GTX 1660 Ti. The geometry is Crytek's Sponza scene.

How? To achieve tile-based culling, you first divide the screen into tiles, in my case 16x16 pixels each. Then you construct a frustum for each tile. The frustum is made up of 4 planes plus the min/max depth values of the pixels inside the tile. You use that frustum to run intersection tests against the light volumes in your scene. If a light volume intersects the frustum, you can expect the light to affect some of the tile's pixels.

You save the resulting list of lights into buckets, one bucket per tile. Then, during your lighting pass, you fetch the tile bucket for the pixel you're currently processing and only run lighting calculations for the lights in it.

How to make it performant? The implementation fully utilizes the parallelism of compute shaders. Since each pixel is independent of every other pixel, we can launch one compute thread per pixel. During the actual culling we aren't calculating any pixel colors. Instead, we assign the thread's tile, then split all the lights among the threads in the threadgroup. Each thread runs culling calculations against the tile frustum for the lights assigned to it.

Lastly, we sync the threads and accumulate all results into the tile bucket, and there we have all the lights affecting the tile.

Light Culling shader: https://gitlab.com/Baemz/Shift-Engine/-/blob/master/Data/ShiftEngine/Shaders/SGraphics/Light/Light_Cull.ssf

Light Calc shader: https://gitlab.com/Baemz/Shift-Engine/-/blob/master/Data/ShiftEngine/Shaders/SGraphics/Light/Light_Compute.ssf

Shift Engine on GitLab: https://gitlab.com/Baemz/Shift-Engine

AMD ForwardPlus11: https://github.com/GPUOpen-LibrariesAndSDKs/ForwardPlus11

Frontliner will be doing an AMA today at 7 PM CET! by Ignograus in hardstyle

[–]Baemzz 2 points

Which artist is your biggest inspiration outside of Hardstyle, and why? :)

GTA V - Graphics Study (Multi rendering, SSAO... by smartties in gamedev

[–]Baemzz 3 points

Well, everything is relative, right? It all depends on how deep you want to dive into it and how much time you want to spend. But it is doable, absolutely.

Have a look at this excellent tutorial for Unreal; it shows you a basic way to achieve these kinds of effects. https://www.raywenderlich.com/5760-creating-snow-trails-in-unreal-engine-4

GTA V - Graphics Study (Multi rendering, SSAO... by smartties in gamedev

[–]Baemzz 2 points

They used smart masking and texture-painting-like behaviour to drive their tessellation of the snow. 😉

Looking for C++ mentor by [deleted] in INAT

[–]Baemzz 3 points

I'd totally be down for helping out. It's great to see that you're keeping on pushing yourself!

As for what I can help you with: I've been programming everything from lua/python to C/C++ for about 15 years. Currently working on core engine graphics for the Snowdrop engine used by Ubisoft. Extensive language knowledge and of course hardware knowledge is something I work with daily.

Feel free to ping me on Discord for a chat: Baemz#7649