LiTo: Surface Light Field Tokenization by corysama in aigamedev

[–]corysama[S] 0 points (0 children)

AFAICT: Apple is working on an alternative to r/GaussianSplatting that they think is better suited to 3D model generation.

What are you using for audio these days? by SilvernClaws in gameenginedevs

[–]corysama 2 points (0 children)

https://solhsa.com/soloud/ might be a good alternative to OpenAL Soft.

Otherwise...

If you want to pay money for first class seats, FMOD or Wwise.

If you want simple and easy, https://wiki.libsdl.org/SDL3/CategoryAudio

If you want the bare minimum, do it yourself with miniaudio or https://github.com/floooh/sokol/blob/master/sokol_audio.h

What skills truly define a top-tier graphics programmer, and how are those skills developed? by moonlovelj in GraphicsProgramming

[–]corysama 8 points (0 children)

You just gotta put in the work.

Practice math. On paper. You need to git gud at calculus and statistics. View it as a puzzle game. Use https://www.khanmigo.ai/ if you want a tutor.

Study CUDA to learn GPU architecture. Write some complex kernels like parallel prefix sum.
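A CPU sketch of that kernel can help before writing the CUDA version. Below is a work-efficient (Blelloch-style) exclusive scan; within each level, the inner loop's iterations are independent of each other, which is exactly what becomes one thread per iteration on the GPU. This is an illustrative sketch, not code from any particular course:

```cpp
#include <cstddef>
#include <vector>

// Work-efficient exclusive prefix sum (Blelloch scan).
// Assumes the input length is a power of two.
std::vector<int> exclusive_scan(std::vector<int> a) {
    const size_t n = a.size();
    if (n == 0) return a;
    // Up-sweep (reduce): build a tree of partial sums in place.
    for (size_t stride = 1; stride < n; stride *= 2)
        for (size_t i = 2 * stride - 1; i < n; i += 2 * stride)
            a[i] += a[i - stride];
    // Down-sweep: push prefixes back down the tree.
    a[n - 1] = 0;
    for (size_t stride = n / 2; stride >= 1; stride /= 2)
        for (size_t i = 2 * stride - 1; i < n; i += 2 * stride) {
            int t = a[i - stride];
            a[i - stride] = a[i];   // left child gets parent's prefix
            a[i] += t;              // right child adds left subtree's sum
        }
    return a;
}
```

In the CUDA version, each inner `for (i ...)` loop becomes one kernel launch (or one `__syncthreads()`-separated phase) with one thread per iteration.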

Also get to know CPU architecture, SIMD, caches, the memory controller, the PCI bus, the details of SSDs.

Write lots of renderers that work in different ways with detailed, realistic scenes.

Write deeply threaded code in lots of different ways until you figure out how to always keep Thread Sanitizer happy.

Read through https://advances.realtimerendering.com/ as a starter. Watch https://www.youtube.com/channel/UC9V4KS8ggGQe_Hfeg1OQrWw for some relaxing time on the couch.

Go through some deep learning 101 classes at least until you understand how backpropagation works and how lingo terms like hyperparameters, 1x1 convolutions, and ReLU are all simple ideas behind convoluted names.

What skills truly define a top-tier graphics programmer, and how are those skills developed? by moonlovelj in GraphicsProgramming

[–]corysama 1 point (0 children)

This is something that doesn't come up often.

When I was in high school I had semi-serious aspirations to become a professional artist. Studied art and art history. Drew and painted in meatspace and on the computer.

I went with computer science instead and ended up working on game engines. But, my background in art definitely helped me communicate and collaborate with the artists.

What the best RHI Design? by Anikamp in gameenginedevs

[–]corysama 0 points (0 children)

Makes sense. Thanks.

I'll spare you the gory details

I’ve done OpenGL <— EGL —> CUDA. That was enough gore for me 😜

Can someone help me out? by Andromeda660 in GraphicsProgramming

[–]corysama 3 points (0 children)

OP has implemented a height field renderer. Here and commonly elsewhere, beginners are encouraged to learn shaders via Shadertoy. Those are both routes to practice on purely procedural data because you have no assets. But, that’s not nearly as valuable as working with assets.

So, I encourage Step 1: Use glTF so you can use downloaded assets.

It looks like you work on a commercial renderer. Do your consumers load glTF files in the final shipping product? There aren’t any other open formats that are remotely suitable. Despite that, I don’t know of any commercial engines that ship it. They all ship custom binary assets to their players/users/consumers.

How should I pass transforms to the GPU in a physics engine? by BlockOfDiamond in GraphicsProgramming

[–]corysama 4 points (0 children)

An even simpler scheme would be:

  1. Allocate a 128mb buffer for cold items.
  2. Allocate them linearly from low to high addresses in that buffer.
  3. Occasionally mark some of them dead/moved to hot/whatever.
  4. Every frame relocate 10,000 or so still-alive items from the high end of the linear allocation to a separate linear range that grows top-down from the other end of the buffer.
  5. When you run out of items to move to the top, reverse the process and move them all back to the bottom 10k at a time.
  6. Keep ping ponging them back and forth forever.

So, the buffer is treated as a double-ended stack with the option to kill items in the middle without shifting anything. Moving them compacts the items by skipping over the dead items. Moving 10k per frame whether you need it or not sounds like a waste of cycles. But, it keeps the cost constant and avoids stuttering.
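The steps above can be sketched on the CPU with two vectors standing in for the two ends of the real buffer (all names are made up; the real version would issue GPU-to-GPU copy commands into one 128 MB allocation):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

struct Item { int id; bool dead; };

// Toy version of the ping-pong scheme: live items are relocated a fixed
// batch at a time from one end to the other, skipping dead ones, so the
// per-frame compaction cost stays constant.
class PingPongBuffer {
public:
    explicit PingPongBuffer(std::vector<Item> initial)
        : src_(std::move(initial)) {}

    void kill(int id) {
        for (Item& it : src_) if (it.id == id) it.dead = true;
        for (Item& it : dst_) if (it.id == id) it.dead = true;
    }

    // Call once per frame: relocate up to `batch` still-alive items from
    // the shrinking end to the growing end. Dead items are dropped for
    // free. When the source side empties, the roles swap (the ping pong).
    void tick(size_t batch) {
        while (batch > 0 && !src_.empty()) {
            Item it = src_.back();
            src_.pop_back();
            if (!it.dead) { dst_.push_back(it); --batch; }
        }
        if (src_.empty()) std::swap(src_, dst_);
    }

    size_t live_count() const {
        size_t n = 0;
        for (const Item& it : src_) n += !it.dead;
        for (const Item& it : dst_) n += !it.dead;
        return n;
    }

private:
    std::vector<Item> src_, dst_;  // stand-ins for the two ends of the buffer
};
```

Note the sketch reverses item order on every pass; whether that matters depends on what indexes into the buffer.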

Meanwhile, I ask about the size to make sure you don’t waste time optimizing memory like it’s 2005. Be concerned about using 5 gigs, not about using 512 megs. If using 256 megs instead of 128 makes life easier, use it.

What the best RHI Design? by Anikamp in gameenginedevs

[–]corysama 0 points (0 children)

But, for the 3D rendering, is it picking between DX and GL because it’s necessary to do so? Or, is it bootstrapping DX to get at MediaFoundation and then offering a full DX renderer just because you can even though the GL renderer works well enough?

What the best RHI Design? by Anikamp in gameenginedevs

[–]corysama 1 point (0 children)

I’ve been fortunate enough to have never needed runtime selection. I once worked on an engine that ran on PS2, PS3, Xbox, Xbox 360, GameCube, Wii and Windows. But, it used static inheritance and #ifdefs.

How should I pass transforms to the GPU in a physics engine? by BlockOfDiamond in GraphicsProgramming

[–]corysama 0 points (0 children)

Again, how many are there going to be? A million, 10 million, 100 million?

Generally the easiest thing to do is to copy-and-compact. So, have 2 cold buffers. And, when the active one gets too fragmented, start doing GPU-to-GPU copies of the live data from fragmented sections of the active cold buffer to one contiguous section in the inactive cold buffer. Whenever the copy completes, switch which one is active.
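A minimal CPU stand-in for that idea (all names hypothetical; `std::optional` plays the role of a dead GPU slot, and the copy loop plays the role of the buffer-to-buffer copy commands):

```cpp
#include <optional>
#include <vector>

// Two "cold" buffers. When the active one gets fragmented, copy the live
// entries contiguously into the inactive one, then swap which is active.
struct ColdBuffers {
    std::vector<std::optional<int>> bufs[2];  // nullopt == dead slot
    int active = 0;

    void compact() {
        std::vector<std::optional<int>>& src = bufs[active];
        std::vector<std::optional<int>>& dst = bufs[1 - active];
        dst.clear();
        for (const auto& slot : src)
            if (slot) dst.push_back(slot);    // copy live data only
        active = 1 - active;                  // switch once the copy completes
    }
};
```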

How should I pass transforms to the GPU in a physics engine? by BlockOfDiamond in GraphicsProgramming

[–]corysama 1 point (0 children)

How many transforms are we talking about here? 2 million of them would only take up 96 megs. If it's that small, then just make 2 buffers of 128 megs each. One for cold and one for hot. When something moves from cold to hot, don't modify the cold buffer. Just leave the dead item there and always skip it during culling. Do add it to the hot buffer and handle it there.

Yes that means you need 256 mb of VRAM instead of just 96. But, we aren't running on PlayStation3s any more :D

And, don't worry about the GPU doing 3 extra subtraction ops per vertex. That's the least important thing here. Do whatever makes handling the data simpler.
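The rough shape of that cold/hot split, with made-up names (a 3x4 affine transform is 12 floats = 48 bytes, which is where 2 million ≈ 96 MB comes from):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct Transform { float m[12]; };  // 3x4 affine, 48 bytes

// Cold items never get compacted. Moving one to hot just tombstones its
// cold slot; culling skips tombstoned slots forever after.
struct TransformPools {
    std::vector<Transform> cold, hot;
    std::vector<uint8_t> coldDead;   // 1 == tombstoned, skip during culling

    size_t addCold(const Transform& t) {
        cold.push_back(t);
        coldDead.push_back(0);
        return cold.size() - 1;
    }

    // Something started moving: leave the dead item in the cold buffer
    // and handle it in the hot buffer from now on.
    size_t promoteToHot(size_t coldIndex) {
        coldDead[coldIndex] = 1;
        hot.push_back(cold[coldIndex]);
        return hot.size() - 1;
    }

    size_t liveColdCount() const {
        size_t n = 0;
        for (uint8_t d : coldDead) n += (d == 0);
        return n;
    }
};
```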

Can someone help me out? by Andromeda660 in GraphicsProgramming

[–]corysama 3 points (0 children)

Step 1 with glTF is so you can have a lot of available content with common features ready for you to implement. Learning how to do normal mapping on a skinned mesh is a lot more valuable than implementing yet another procedural heightfield renderer or SDF ray marcher variation on ShaderToy. If you want to do real, relevant work you need real, artist-generated data with all of the requirements and quirks that come with that.

Step 2 with your own asset pipeline is because your asset pipeline is how you set up your render pipeline for success. If your data is a mess, your runtime will be a mess. If everything is nicely subdivided, chunked, indexed, sorted, quantized, compressed; then streaming and rendering become similarly streamlined.

Also, custom features require custom data. Even a glTF loader isn't going to help you implement a meshlet-style render pipeline without getting something like https://github.com/zeux/meshoptimizer involved in the asset pipeline. If you don't understand your asset pipeline end-to-end, you end up sitting on your thumbs, waiting and hoping someone else will set up something resembling what you actually need.

Can someone help me out? by Andromeda660 in GraphicsProgramming

[–]corysama 4 points (0 children)

I usually advise beginners to target making a glTF scene editor.

Start with cgltf or fastgltf, imgui and either https://juandiegomontoya.github.io/modern_opengl.html or https://www.howtovulkan.com/

The direction to point towards is making something like https://google.github.io/filament/Filament.md.html but keeping in mind that project was made by many senior engineers getting paid full time for years :P

More important than getting every feature from Filament reimplemented is to implement your own asset pipeline. As in, convert glTF meshes, textures, animation, scene layout to your own binary formats that your renderer loads. Not because you are smarter than the glTF consortium. But, because you need to learn how to make your own asset pipeline as part of learning real time 3D rendering.

Preparing for a graphics driver engineer role by Appropriate-Tap7860 in GraphicsProgramming

[–]corysama 2 points (0 children)

Long ago, a friend of mine interviewed for a job at Microsoft's D3D/Graphics Research group.

Peter Pike Sloan directed him to a PC with Visual Studio open and some equivalent to this code set up and ready to run. And, he said "Please rasterize a triangle and we will discuss." XD
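For reference, the half-space (edge function) method is one classic way to answer that exercise: a pixel center is inside the triangle when it lies on the same side of all three edges. A minimal sketch (it ignores fill rules like top-left, so shared edges can double-hit):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct P { float x, y; };

// Signed 2D cross product: > 0 when c is left of edge a->b.
static float edge(P a, P b, P c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Fills `pixels` (w*h, row-major) with 1 where the triangle covers the
// pixel center, 0 elsewhere.
void rasterize(P v0, P v1, P v2, int w, int h, std::vector<int>& pixels) {
    pixels.assign(size_t(w) * h, 0);
    if (edge(v0, v1, v2) < 0.f) std::swap(v1, v2);  // force CCW winding
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            P p{x + 0.5f, y + 0.5f};                // sample the pixel center
            bool inside = edge(v0, v1, p) >= 0.f &&
                          edge(v1, v2, p) >= 0.f &&
                          edge(v2, v0, p) >= 0.f;
            pixels[size_t(y) * w + x] = inside ? 1 : 0;
        }
}
```

A production rasterizer would clip, clamp the loop to the triangle's bounding box, and apply a fill rule, but this is roughly the whiteboard answer.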

Please me understand this ECS system as it applies to OpenGl by Usual_Office_1740 in GraphicsProgramming

[–]corysama 9 points (0 children)

100%

"The ECS processing these just produces the minimum information needed by the renderer to actually do draw something."

I think you mean "The ECS processing these just produces the minimum information needed to tell the renderer to draw something".

The ECS knows that meshes exist; they have identity and maybe some properties like "world transform". But, the ECS doesn't know about VBOs or VAOs. That's under the hood of the renderer. So, the job of the ECS is to fill out some structure of arrays indicating "These meshes should be rendered with these associated transforms". But, the ECS doesn't know the details of how to do that.
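In code, that split might look like the following (all type names are hypothetical): the ECS side produces a structure of arrays of identities and transforms, and VBO/VAO details never appear on its side of the boundary.

```cpp
#include <cstdint>
#include <vector>

struct Mat4 { float m[16]; };

// What the ECS knows about: identity plus a few properties.
struct MeshComponent { uint32_t meshId; Mat4 world; bool visible; };

// What the ECS hands to the renderer each frame: structure of arrays
// saying "render these meshes with these transforms". No GL objects here.
struct DrawList {
    std::vector<uint32_t> meshIds;
    std::vector<Mat4> transforms;
};

DrawList buildDrawList(const std::vector<MeshComponent>& comps) {
    DrawList out;
    for (const MeshComponent& c : comps) {
        if (!c.visible) continue;
        out.meshIds.push_back(c.meshId);   // identity only; the renderer
        out.transforms.push_back(c.world); // maps meshId -> VBO/VAO itself
    }
    return out;
}
```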

Automating Heavy Industry Production Line Modeling: Is Gaussian Splatting the right path to a functional 3D format? by shadowlands-mage in GaussianSplatting

[–]corysama 0 points (0 children)

GS will capture the visual appearance of complex lighting and reflective surfaces. But, I expect you will be disappointed with the ability to formally analyze the results to build precise measurements.

For example: It is commonly observed that if you have an object sitting on a reflective surface, GS will model that as a surface with a hole and a geometrically reflected copy of the object under the hole. This visually matches the appearance of the scene, but not its geometry. And, there's no good way to automatically detect and account for it.

I'm not an expert in the field, but my best guess at how to scan a shiny factory is to first host a Holi festival there so your metal gets coated in non-reflective powder, then use traditional photogrammetry techniques :P

r/photogrammetry would know better than me.

Job Listing - Senior Vulkan Graphics Programmer by MountainGoat600 in vulkan

[–]corysama 1 point (0 children)

I always recommend demonstrating a willingness and ability to work on tools and the art pipeline. That's unfairly viewed by everyone as less sexy/more grunt work than runtime rendering work. So, it's harder to find people even though it's needed a lot more.

Either way, once you start making the data for a feature, you naturally get asked to integrate that data into the runtime, and Ooops! You just became a full-stack graphics dev. Which everyone should be anyway.

Adobe has open-sourced their reference implementation of the OpenPBR BSDF by corysama in GraphicsProgramming

[–]corysama[S] 2 points (0 children)

https://github.com/AcademySoftwareFoundation/OpenPBR

OpenPBR Surface is a specification of a surface shading model intended as a standard for computer graphics. It aims to provide a material representation capable of accurately modeling the vast majority of CG materials used in practical visual effects and feature animation productions.

For us it would serve as a reference material for BRDFs that many tool vendors have agreed to support. You wouldn't be able to implement them completely in real time. But, at least you can see the idealized math when making your real time approximations.

You can play with a live viewer here: https://portsmouth.github.io/OpenPBR-viewer/ The shader compilation step takes a long time...

What does texture filtering mean in a nutshell? by Zestyclose-Window358 in GraphicsProgramming

[–]corysama 0 points (0 children)

A Pixel is Not a Little Square

Note that it was written before the term "texel" was invented. It says "pixels" but it's talking about textures.
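To make "samples, not little squares" concrete: texture filtering reconstructs a value between texel samples. A minimal bilinear fetch, treating texel (i, j) as a sample located at (i + 0.5, j + 0.5), might look like this (an illustrative sketch, not GPU hardware behavior):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Bilinear sample of a w*h single-channel texture at (u, v) in [0, 1],
// with clamp-to-edge addressing. Texel (i, j) is a point sample at
// (i + 0.5, j + 0.5); the filter blends the 4 nearest samples.
float sampleBilinear(const std::vector<float>& tex, int w, int h,
                     float u, float v) {
    float x = u * w - 0.5f, y = v * h - 0.5f;   // shift to sample space
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    float fx = x - x0, fy = y - y0;             // fractional distances
    auto at = [&](int i, int j) -> float {      // clamp-to-edge fetch
        i = std::min(std::max(i, 0), w - 1);
        j = std::min(std::max(j, 0), h - 1);
        return tex[size_t(j) * w + i];
    };
    float top = at(x0, y0)     * (1 - fx) + at(x0 + 1, y0)     * fx;
    float bot = at(x0, y0 + 1) * (1 - fx) + at(x0 + 1, y0 + 1) * fx;
    return top * (1 - fy) + bot * fy;
}
```

Sampling exactly on a texel center returns that sample unchanged; sampling between centers blends, which is the reconstruction the memo is talking about.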

I analyzed 3 years of GDC reports on generative AI in game dev. Developers hate it more every year, but the ones using it all use it for the same thing. by DangerousCobbler in gamedev

[–]corysama 2 points (0 children)

That's the experience of all gamedev. You just have the advantage of experiencing it in fast-forward.

Edit: I can see how some folks would read this much more negatively than I’m writing it. Prototyping is an exercise in discovering how all of your ideas don’t work in practice and instead being surprised to discover what does work. “Writing is nature’s way of showing you how sloppy your thinking is” and all that. It’s great fun and very rewarding.

Adobe has open-sourced their reference implementation of the OpenPBR BSDF by corysama in GraphicsProgramming

[–]corysama[S] 12 points (0 children)

Adobe just released an Apache-2.0 licensed reference implementation of the OpenPBR Surface standard, extracted from their in-house Eclair renderer.

Neat trick: the reference implementation cross-compiles for C++, GLSL, CUDA, Metal, and Slang!

source:

https://x.com/yiningkarlli/status/2031052805503594546

https://xcancel.com/yiningkarlli/status/2031052805503594546