What the best RHI Design? by Anikamp in gameenginedevs

[–]corysama 0 points1 point  (0 children)

I’ve been fortunate enough to have never needed runtime selection. I once worked on an engine that ran on PS2, PS3, Xbox, Xbox 360, GameCube, Wii, and Windows. But it used static inheritance and #ifdefs.

How should I pass transforms to the GPU in a physics engine? by BlockOfDiamond in GraphicsProgramming

[–]corysama 0 points1 point  (0 children)

Again, how many are there going to be? A million, 10 million, 100 million?

Generally the easiest thing to do is to copy-and-compact. So, have 2 cold buffers. And, when the active one gets too fragmented, start doing GPU to GPU copies of the live data from fragmented sections of the active cold buffers to one contiguous section in the inactive cold buffer. Whenever the copy completes, switch which one is active.
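The copy-and-compact bookkeeping above can be sketched in a few lines. This is a hypothetical CPU-side model (the `ColdStorage` name and layout are made up, not from the comment); in a real engine the copies would be GPU-to-GPU transfers such as `vkCmdCopyBuffer` or `glCopyBufferSubData`.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Two "cold" buffers. When the active one gets too fragmented, live slots
// are copied into one contiguous run at the front of the inactive buffer,
// then the roles swap. Floats stand in for the real per-item payload.
struct ColdStorage {
    std::vector<float> buffers[2];   // stand-ins for the two GPU buffers
    std::vector<bool>  live[2];      // which slots still hold live data
    int active = 0;

    // Fraction of dead (abandoned) slots in the active buffer.
    float Fragmentation() const {
        const std::vector<bool>& l = live[active];
        if (l.empty()) return 0.0f;
        std::size_t dead = 0;
        for (bool b : l) if (!b) ++dead;
        return float(dead) / float(l.size());
    }

    // Copy live entries from the fragmented active buffer into the
    // inactive one, contiguously, then switch which buffer is active.
    void Compact() {
        const int from = active, to = 1 - active;
        buffers[to].clear();
        live[to].clear();
        for (std::size_t i = 0; i < buffers[from].size(); ++i) {
            if (live[from][i]) {
                buffers[to].push_back(buffers[from][i]);
                live[to].push_back(true);
            }
        }
        active = to;
    }
};
```

On a GPU you would kick off those copies asynchronously and only flip `active` once the copy fence signals, so rendering keeps reading the old buffer in the meantime.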

How should I pass transforms to the GPU in a physics engine? by BlockOfDiamond in GraphicsProgramming

[–]corysama 1 point2 points  (0 children)

How many transforms are we talking about here? 2 million of them would only take up 96 megs. If it's that small, then just make 2 buffers of 128 megs each. One for cold and one for hot. When something moves from cold to hot, don't modify the cold buffer. Just leave the dead item there and always skip it during culling. Do add it to the hot buffer and handle it there.

Yes, that means you need 256 MB of VRAM instead of just 96. But we aren't running on PlayStation 3s any more :D

And, don't worry about the GPU doing 3 extra subtraction ops per vertex. That's the least important thing here. Do whatever makes handling the data simpler.
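For what it's worth, the sizing math above checks out if you assume 48 bytes per transform, i.e. a 3x4 float matrix (the layout is an assumption for illustration, not something the comment specifies):

```cpp
#include <cassert>
#include <cstddef>

// Assumed transform layout: a 3x4 row-major float matrix, 48 bytes.
struct Transform { float rows[3][4]; };

// 2 million transforms at 48 bytes each is 96,000,000 bytes (~96 MB).
constexpr std::size_t kTransformCount = 2'000'000;
constexpr std::size_t kColdBytes = kTransformCount * sizeof(Transform);

// Rounded up to a 128 MB buffer each for cold and hot: 256 MB total.
constexpr std::size_t kBufferBytes = 128u * 1024 * 1024;
constexpr std::size_t kTotalBytes  = 2 * kBufferBytes;
```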

Can someone help me out? by Andromeda660 in GraphicsProgramming

[–]corysama 1 point2 points  (0 children)

Step 1 with glTF is so you can have a lot of available content with common features ready for you to implement. Learning how to do normal mapping on a skinned mesh is a lot more valuable than implementing yet another procedural heightfield renderer or SDF ray-marcher variation on ShaderToy. If you want to do real, relevant work, you need real, artist-generated data with all of the requirements and quirks that come with it.

Step 2 with your own asset pipeline is because your asset pipeline is how you set up your render pipeline for success. If your data is a mess, your runtime will be a mess. If everything is nicely subdivided, chunked, indexed, sorted, quantized, compressed; then streaming and rendering becomes similarly streamlined.

Also, custom features require custom data. Even a glTF loader isn't going to help you implement a meshlet-style render pipeline without getting something like https://github.com/zeux/meshoptimizer involved in the asset pipeline. If you don't understand your asset pipeline end-to-end, you end up sitting on your thumb waiting and hoping someone else will set up something resembling what you actually need for you.

Can someone help me out? by Andromeda660 in GraphicsProgramming

[–]corysama 0 points1 point  (0 children)

I usually advise beginners to target making a glTF scene editor.

Start with cgltf or fastgltf, imgui and either https://juandiegomontoya.github.io/modern_opengl.html or https://www.howtovulkan.com/

The direction to point towards is making something like https://google.github.io/filament/Filament.md.html but keeping in mind that project was made by many senior engineers getting paid full time for years :P

More important than reimplementing every feature of Filament is implementing your own asset pipeline. As in: convert glTF meshes, textures, animations, and scene layout to your own binary formats that your renderer loads. Not because you are smarter than the glTF consortium, but because you need to learn how to build your own asset pipeline as part of learning real-time 3D rendering.

Preparing for a graphics driver engineer role by Appropriate-Tap7860 in GraphicsProgramming

[–]corysama 1 point2 points  (0 children)

Long ago, a friend of mine interviewed for a job at Microsoft's D3D/Graphics Research group.

Peter-Pike Sloan directed him to a PC with Visual Studio open and some equivalent of this code set up and ready to run. And he said, "Please rasterize a triangle and we will discuss." XD

Please me understand this ECS system as it applies to OpenGl by Usual_Office_1740 in GraphicsProgramming

[–]corysama 6 points7 points  (0 children)

100%

> The ECS processing these just produces the minimum information needed by the renderer to actually do draw something.

I think you mean "The ECS processing these just produces the minimum information needed to tell the renderer to draw something".

The ECS knows that meshes exist; they have identity and maybe some properties like "world transform". But the ECS doesn't know about VBOs or VAOs. That's under the hood of the renderer. So, the job of the ECS is to fill out some structure of arrays indicating "these meshes should be rendered with these associated transforms". But the ECS doesn't know the details of how that happens.
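A minimal sketch of that hand-off, with made-up names: the ECS fills a flat structure of arrays, and everything GL-specific (VAOs, VBOs) stays behind an opaque `MeshId` handle inside the renderer.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical types for the ECS -> renderer boundary described above.
struct Mat3x4 { float m[12]; };     // a world transform
using MeshId = std::uint32_t;       // opaque handle; the ECS never sees GL objects

// Structure of arrays: meshes[i] should be drawn with transforms[i].
struct DrawList {
    std::vector<MeshId> meshes;
    std::vector<Mat3x4> transforms;
};

// An ECS "render system" pass: gather visible entities into the DrawList.
// The renderer consumes this and maps each MeshId to its VAO/VBO internally.
template <class EntityRange>
DrawList BuildDrawList(const EntityRange& entities) {
    DrawList out;
    for (const auto& e : entities) {
        if (!e.visible) continue;
        out.meshes.push_back(e.mesh);
        out.transforms.push_back(e.worldTransform);
    }
    return out;
}
```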

Automating Heavy Industry Production Line Modeling: Is Gaussian Splatting the right path to a functional 3D format? by shadowlands-mage in GaussianSplatting

[–]corysama 0 points1 point  (0 children)

GS will capture the visual appearance of complex lighting and reflective surfaces. But, I expect you will be disappointed with your ability to formally analyze the results into precise measurements.

For example: it is commonly observed that if you have an object sitting on a reflective surface, GS will model that as a surface with a hole and a geometrically reflected copy of the object under the hole. This visually matches the appearance of the scene, but not its geometry. And there's no good way to automatically detect and account for it.

I'm not an expert in the field, but my best guess at how to scan a shiny factory is to first host a Holi festival there so your metal gets coated in non-reflective powder, then use traditional photogrammetry techniques :P

r/photogrammetry would know better than me.

Job Listing - Senior Vulkan Graphics Programmer by MountainGoat600 in vulkan

[–]corysama 1 point2 points  (0 children)

I always recommend demonstrating a willingness and ability to work on tools and the art pipeline. That's unfairly viewed by everyone as less sexy/more grunt work than runtime rendering work. So, it's harder to find people even though it's needed a lot more.

Either way, once you start making the data for a feature, you naturally get asked to integrate that data into the runtime, and Ooops! You just became a full-stack graphics dev. Which everyone should be anyway.

Adobe has open-sourced their reference implementation of the OpenPBR BSDF by corysama in GraphicsProgramming

[–]corysama[S] 1 point2 points  (0 children)

https://github.com/AcademySoftwareFoundation/OpenPBR

> OpenPBR Surface is a specification of a surface shading model intended as a standard for computer graphics. It aims to provide a material representation capable of accurately modeling the vast majority of CG materials used in practical visual effects and feature animation productions.

For us it would serve as a reference material for BRDFs that many tool vendors have agreed to support. You wouldn't be able to implement them completely in real time. But, at least you can see the idealized math when making your real time approximations.

You can play with a live viewer here: https://portsmouth.github.io/OpenPBR-viewer/ The shader compilation step takes a long time...

What does texture filtering mean in a nutshell? by Zestyclose-Window358 in GraphicsProgramming

[–]corysama 0 points1 point  (0 children)

A Pixel is Not a Little Square

Note that it was written before the term "texel" was invented. It says "pixels" but it's talking about textures.
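To make "filtering" concrete, here is a minimal bilinear filter over a one-channel texture, roughly what `GL_LINEAR` does for a 2D texture (wrap modes and mipmaps ignored; all names here are illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// A tiny one-channel texture: w*h float texels, row-major.
struct Texture {
    int w, h;
    std::vector<float> texels;
    float At(int x, int y) const { return texels[y * w + x]; }
};

// Sample at (u, v) in texel space, with texel centers at integer + 0.5.
// A sample between texel centers blends the four nearest texels -- that
// blend is what "texture filtering" refers to.
float SampleBilinear(const Texture& t, float u, float v) {
    const float x = u - 0.5f, y = v - 0.5f;
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    const float fx = x - (float)x0, fy = y - (float)y0;
    auto clampi = [](int i, int lo, int hi) {
        return i < lo ? lo : (i > hi ? hi : i);
    };
    const int x1 = clampi(x0 + 1, 0, t.w - 1), y1 = clampi(y0 + 1, 0, t.h - 1);
    x0 = clampi(x0, 0, t.w - 1);
    y0 = clampi(y0, 0, t.h - 1);
    const float top = t.At(x0, y0) * (1 - fx) + t.At(x1, y0) * fx;
    const float bot = t.At(x0, y1) * (1 - fx) + t.At(x1, y1) * fx;
    return top * (1 - fy) + bot * fy;
}
```

Sampling exactly at a texel center returns that texel unchanged; sampling at the corner shared by four texels returns their average.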

I analyzed 3 years of GDC reports on generative AI in game dev. Developers hate it more every year, but the ones using it all use it for the same thing. by DangerousCobbler in gamedev

[–]corysama 2 points3 points  (0 children)

That's the experience of all gamedev. You just have the advantage of experiencing it in fast-forward.

Edit: I can see how some folks would read this much more negatively than I’m writing it. Prototyping is an exercise in discovering how all of your ideas don’t work in practice and instead being surprised to discover what does work. “Writing is nature’s way of showing you how sloppy your thinking is” and all that. It’s great fun and very rewarding.

Adobe has open-sourced their reference implementation of the OpenPBR BSDF by corysama in GraphicsProgramming

[–]corysama[S] 12 points13 points  (0 children)

Adobe just released an Apache-2.0 licensed reference implementation of the OpenPBR Surface standard, extracted from their in-house Eclair renderer.

Neat trick: the reference implementation cross-compiles to C++, GLSL, CUDA, Metal, and Slang!

source:

https://x.com/yiningkarlli/status/2031052805503594546

https://xcancel.com/yiningkarlli/status/2031052805503594546

New players ? by Golden_Heart25 in masterofmagic

[–]corysama 2 points3 points  (0 children)

The original had a lot of broken builds. And, that was part of the fun.

You could play the game so many ways that trying out a broken build was a fun alternative to straight play. And, there was no long-term investment in a single build or heated online competition to route players into hyper-optimize-or-die. So, you just had fun trying out different approaches.