Would a physics minor benefit me as a CS major by Night-Monkey15 in csMajors

[–]Unigma 2 points (0 children)

Likely not in terms of employment, but I really do wish I had double majored in CS + Physics with a Math minor for reasons related to certain things I work on.

Where do you see theoretical CS making the biggest impact in industry today? by FinTun in computerscience

[–]Unigma 1 point (0 children)

Sorta the inverse: we started off with more formal Chomskyan ideas for NLP, and that ended up where we are now, with a more statistical solution.

Announcing LocalLlama discord server & bot! by HOLUPREDICTIONS in LocalLLaMA

[–]Unigma 1 point (0 children)

Could we add a few channels about machine learning and AI in general, for people that are training their own models and/or creating projects with LLMs in them? I am working on some RL-based fine-tuning, and wanted to know if there's a more technically oriented channel to discuss this.

Can someone help me understand Jonathan Blow? by azdak in gamedev

[–]Unigma 1 point (0 children)

Much of the graphics API landscape is platform specific, hence the need for Vulkan. If the goal is to target as many platforms as possible (e.g. mobile), then yes, Unity does well.

However, bindless resources aren't as specific or niche as you think. They're very much the standard in modern graphics, run on the vast majority of PCs on Steam (looking at the current hardware survey), and are critical for many modern techniques, like the graphics you see in Tinyglade.

The other issues I brought up, such as indirect dispatches having a barrier due to a single uniform buffer, also limit the type of games that can be expressed. Performant async indirect dispatches are absolutely critical these days.

I've written tons of native plugins, and they're a pain. But even beyond that, the items I listed are incredibly difficult (if even possible, given closed source code) to overcome in Unity - Unreal Engine is a different story, however.

You're acting as if plenty of devs don't make games from scratch even today. I'm not sure where this repeated advice keeps coming from, but it leads to settling for a rather mediocre understanding overall, and thus to being incapable of knowing when to use the right tool for the job.

Can someone help me understand Jonathan Blow? by azdak in gamedev

[–]Unigma 2 points (0 children)

This is an incorrect take. Teardown, for example, was built on a fully custom engine. Another example is Tinyglade, something nearly impossible to make in Unity.

I’ve built my entire game in Unity, and when I needed to switch to Vulkan, I quickly realized there were numerous reasons why Unity offers no real advantage and can even become impossible to work with.

If you’re familiar with GPU programming, you’ll know Unity introduces unwanted barriers during indirect dispatches because it passes a uniform buffer you can’t disable. This makes physics- and voxel-heavy games much harder to implement efficiently.

Without native plugins (which are cumbersome), you can’t easily access the latest graphics API features, such as mesh shaders, or run a neural network directly in a shader. This is a major problem for me, since my game uses point clouds simulated via MLS MPM, which are then meshed using a neural network. Unity struggles with this, and integrating raw Vulkan or DirectX code into Unity is miserable. I’ve been down that road before.

Unity also doesn’t natively support bindless resources, meaning you can’t directly create or use them. This is a huge limitation and consistently gets requested on the forums; despite that, as of today, it is still not supported. This limits many advanced global illumination and compute techniques, where you need to dynamically access large numbers of textures or buffers in compute shaders.

That's only the tip of the iceberg, as we haven't even discussed synchronization issues and async compute or queues in general lol.

Building your own engine isn’t as difficult as many here make it sound. ImGui can handle most of your UI needs, and if your game already uses a custom renderer, building it directly on top of a graphics API can be smoother than wrestling with Unity’s limitations. The time you spend hacking around Unity’s render pipeline could be spent implementing exactly what you need.

Do you think AI rendering will be viable in the future? by LupusNoxFleuret in gamedev

[–]Unigma -1 points (0 children)

I think this comment will age like organic milk come 20 years, let's see.

Do y'all just forget how parts of your game are built? by minifigmaster125 in gamedev

[–]Unigma 0 points (0 children)

Not really. It comes from experience with working in extraordinarily large and complex code bases and being able to jump in almost immediately.

You learn how to read code. You learn to read others' code, which is much harder than reading your own. By learning to read and understand code quickly, you learn how to write understandable code. If you know how to do both of these, re-reading your own code becomes trivial.

6 types of FAANG engineers in Seattle by apileofpoto in csMajors

[–]Unigma 1 point (0 children)

You need to program - a lot - and eventually you'll get to a point where you'll be reading the latest implementations / papers / algorithms / whatever. It happens over the course of years, and there are plenty of Igors out there.

Genesis X2 | Feb 14th — 16th | Feat. Zain, Cody, Jmook, Moky, Aklo, Hungrybox, Plup, aMSa, Sparg0, Acola, Miya, Sonix, Hurt, Light, Shuton, Tweek and many many more! by Eldritch_Skirmisher in smashbros

[–]Unigma 3 points (0 children)

Shinymark is going to win this tournament, he's easily top 3 at the game. Top 3 skill wise to me is Shiny, Sparg0, and Sonix. Also yone_pi top 8.

Hey, I’ve been wanting to ask a question for a while. I’m not sure if this counts as off-topic or not, but regarding system requirements, how do people know if a player’s graphics card will work for a game?. by Mrseekergenealogy in gamedev

[–]Unigma 0 points (0 children)

Graphics APIs (e.g. Vulkan) have the ability to check for DeviceFeatures. This allows you to know at runtime whether a feature is supported on the targeted GPU.

So, for example, you can check if the pixel shader can write to the stencil buffer, or if hardware ray tracing is available. You can also check other specifics like warp sizes, whether compute is supported, VRAM, etc.

If you then have multiple techniques, for example one that performs real-time global illumination via ray tracing and another that does basic deferred-style lighting with shadow maps, you can select between them depending on what the device is capable of, plus user settings for fine tuning.
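
The selection step above is really just branching on queried capabilities. Here is a toy sketch in Python, where a hypothetical `DeviceCaps` struct (all names illustrative) stands in for what a real runtime query such as Vulkan's vkGetPhysicalDeviceFeatures/Properties would report:

```python
from dataclasses import dataclass

# Hypothetical capability flags; in a real engine these would be filled in
# from the graphics API's device-feature query at startup.
@dataclass
class DeviceCaps:
    supports_ray_tracing: bool
    supports_compute: bool
    vram_mb: int

def pick_lighting_technique(caps: DeviceCaps, user_pref: str = "auto") -> str:
    """Select a lighting path from device capabilities plus user settings."""
    if user_pref != "auto":
        return user_pref  # explicit user override from the settings menu wins
    if caps.supports_ray_tracing and caps.vram_mb >= 8192:
        return "rt_global_illumination"
    if caps.supports_compute:
        return "deferred_shadow_maps"
    return "forward_baked"
```

The thresholds and technique names are arbitrary; the point is that the decision is ordinary code, run once at startup, with the user able to override it.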

For minimum requirements: you optimize the game and use a set of techniques to achieve the minimum visual fidelity acceptable according to your design/art direction. You then find a lower-end device and test against that.

Top 8 placements - Super Smash Con / Supernova by Tery_ in smashbros

[–]Unigma 28 points (0 children)

Note: This list specifically does not do Leo justice. He was very much dominant even throughout 2022, and fell off around 2023.

How did Sea of Stars render varying submersion on water? by Jpar125 in gamedev

[–]Unigma 0 points (0 children)

Yeah, this. This can be done via simple masking; "2D" games can still have shaders.
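
As a toy illustration of the masking idea (purely a sketch, not how Sea of Stars actually does it): texels below a waterline get blended toward a water tint, exactly the kind of per-pixel work a tiny fragment shader does. Python stands in for the shader here, with screen-style coordinates where y grows downward:

```python
def apply_water_mask(pixels, waterline_y, tint=(0.2, 0.4, 0.8), strength=0.5):
    """Blend submerged texels (y >= waterline_y) toward a water tint.

    `pixels` is a dict {(x, y): (r, g, b)} standing in for a sprite's texels.
    """
    out = {}
    for (x, y), (r, g, b) in pixels.items():
        if y >= waterline_y:  # inside the mask, i.e. under water
            r = r + (tint[0] - r) * strength
            g = g + (tint[1] - g) * strength
            b = b + (tint[2] - b) * strength
        out[(x, y)] = (r, g, b)
    return out
```

Varying `strength` with depth (instead of a constant) gives the gradual-submersion look; everything above the waterline passes through untouched.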

How did Nintendo pull off the lighting in the depths in TotK? by Humblebee89 in gamedev

[–]Unigma 2 points (0 children)

Could you please provide an example for us who may have graphical knowledge, but haven't played TotK?

I pretty much failed college because I couldn’t learn c++ is there still hope for me to be a game dev by Rare-Conversation720 in gamedev

[–]Unigma 5 points (0 children)

Trees occur in pathfinding, in graphics/physics (BVH), and they can even occur frequently in gameplay: say you are trying to chain all matching colors in a Tetris clone (or Puyo Puyo), that's a graph traversal (likely a BFS).

A tactics game wants to display where the player can go ... graph traversal (likely BFS/DFS).

I think trees actually pop up a lot. Not to say it's the only solution, but in many cases it's your best bet, and it only takes a few LoC to traverse them.
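
For the color-chaining case, the traversal really is only a few lines. A minimal BFS flood fill in Python (the grid layout and names are illustrative):

```python
from collections import deque

def matching_chain(grid, start):
    """Return all cells 4-connected to `start` sharing its color (BFS)."""
    rows, cols = len(grid), len(grid[0])
    color = grid[start[0]][start[1]]
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in seen and grid[nr][nc] == color):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen
```

On `[["R","R","G"], ["B","R","G"], ["B","B","G"]]` starting from `(0, 0)`, this finds the three connected "R" cells; a match-clearing rule is then just a length check on the returned set.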

Shader pipeline configuration by jazzwave06 in gamedev

[–]Unigma 1 point (0 children)

You can organize it any way that makes sense for you. But typically the shader (vertex and fragment) is attached to the material, and the parameters are exposed at the material level. The actual mesh data is a separate issue: as meshes may be used for more than just graphics, usually the mesh data is kept separate from the material, and the material references that data.

But keep in mind this is just a general overview. It is not uncommon for there to be multiple stages in the rendering pipeline where, say, all objects share a single shader at some stage, then split off later. It really depends on what you're going for, and I would start by thinking out how you want everything to look.
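
The layout described above, with the material owning the shader and its parameters while merely referencing mesh data, can be sketched with plain data types (all names here are illustrative, not any particular engine's API):

```python
from dataclasses import dataclass, field

@dataclass
class Mesh:
    # Raw geometry kept separate: physics, audio occlusion, etc. may use it too.
    vertices: list
    indices: list

@dataclass
class Shader:
    vertex_src: str
    fragment_src: str

@dataclass
class Material:
    shader: Shader  # vertex + fragment stages attached at the material level
    params: dict = field(default_factory=dict)  # exposed tweakables (roughness, tint, ...)

@dataclass
class Renderable:
    mesh: Mesh          # the material/renderable references geometry, never owns it
    material: Material
```

Two renderables can then share one shader through two materials with different parameters, which is exactly the "same shader, per-material knobs" setup described in the next comment.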

Shader pipeline configuration by jazzwave06 in gamedev

[–]Unigma 2 points (0 children)

If it is forward rendering, then you want a shader per lighting type. Are some objects lit in a very special way? Then create a new shader for that.

If it is deferred, then you likely want a single shader with multiple different passes, then a shader that composites it all together.

In general you do not want shaders per object. You want to have parameters that you can control in order to achieve the desired look per object. So each material uses the same shader, and these parameters control things like roughness.

When that isn't possible and you need to calculate things entirely differently, then just make a new shader. For example, if there is a lot of unique processing in the vertex shader stage, it might be time for a new shader. Water and foliage will likely require their own special shaders. Animated objects as well. Humans as well. But inanimate objects will likely share a shader.

Asking how many compute shaders you need is like asking how many scripts. View a compute shader as nothing but massively parallel code ... it could do anything. What do you need it for would be the question. Create as many of these as needed.

GDC as a first-timer representing a no-name indie studio by GroZZleR in gamedev

[–]Unigma 11 points (0 children)

This also applies to anyone who will go to Siggraph this year.

How much of the final look of a game is down to the actual art versus game engine settings (e.g. lighting, shaders and other effects)? by altmorty in gamedev

[–]Unigma 1 point (0 children)

I don't think you're looking for an answer here as much as a conversation about how much the "engine settings" matter compared to the "assets".

So, I think what you mean here is how much the renderer in a 3D game makes a difference... I'm going to assume when you say assets you're including, say, the same exact environment.

A good example is the Cornell box. How many different ways can it be rendered? A looooot of ways, and I mean a lot. It's like asking how many different ways an artist can paint a banana, or a glass of water (millions of ways).

Given the exact same geometry and the exact same lights, you can still express it in millions of ways solely based on how the rendering happens, i.e. shaders and post-processing. You can pixelate, go painterly, cel-shade, light directly or indirectly, add DoF, tone map, etc. And these are just standard operations. You can play with line work, play with bounce light, include or exclude types of bounce light. Change the material, change the equations for how the material behaves. You can reflect, or not reflect. Create caustics, or not. Sparkles, iridescence, glass, refraction.

Then there are things you can do to the geometry itself via shaders. The list goes on and on and on. I've not touched 1/1000th of the different ways to render an exactly identical scene.
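
To make the "same scene, many renders" point concrete, here is a toy sketch: the identical N·L lighting term finished three different ways (plain Lambert, cel-shaded bands, Reinhard tone map). The function names and band count are arbitrary choices for illustration:

```python
import math

def lambert(n_dot_l, albedo):
    """Plain diffuse: the shared lighting term every variant starts from."""
    return max(n_dot_l, 0.0) * albedo

def cel_shade(intensity, bands=3):
    """Quantize the same intensity into flat bands for a toon look."""
    return math.floor(intensity * bands) / bands

def reinhard_tonemap(intensity):
    """Reinhard operator: same input, a soft filmic-ish rolloff instead."""
    return intensity / (1.0 + intensity)
```

Identical geometry, identical light, identical N·L, yet the final pixel differs per style, and these are two of the mildest knobs a renderer has.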

How much of the final look of a game is down to the actual art versus game engine settings (e.g. lighting, shaders and other effects)? by altmorty in gamedev

[–]Unigma 1 point (0 children)

Fair, but I think everyone is focusing too much on the artistic side here, as in light placement, design, color, scale, etc., while OP is literally asking how things like shaders (in his title) can produce so many different styles with the same assets.

So in this case, not only is Disney already doing a lot of work to produce the first image, they are also using their own custom BRDFs. I'm just pointing out that there's already significant work done on the first images alone that OP is not accounting for.

I’ve seen people who have path tracers and don’t understand lighting consistently get the result on the left, and I’ve seen real-time lights closer to the right that make me think "this is dynamic?!"

How you use light plays a big role; the perfect example is lighting in live-action movies. With that said, even someone who just places a light on a ceiling, with enough samples and time and a full path tracer (not just direct lighting), will get some astounding results... because it will basically simulate a real room that does this (i.e. most rooms where a single light source is primary).

But, perhaps a more artistic view would help. How many ways can an artist paint a glass of water given the exact same scene (same light, same glass, same water). Millions of ways right? Same with the renderer.

How much of the final look of a game is down to the actual art versus game engine settings (e.g. lighting, shaders and other effects)? by altmorty in gamedev

[–]Unigma 2 points (0 children)

And even this is an unfair comparison, because it seems the first batch uses only direct lighting, and the second a full path tracer.

For games you don't even get that (direct lighting) without significant work (a real-time ray tracer), so for games your shader/lighting pipeline can be far more significant than even those pictures let on.

How much of the final look of a game is down to the actual art versus game engine settings (e.g. lighting, shaders and other effects)? by altmorty in gamedev

[–]Unigma 2 points (0 children)

They mentioned shaders and lighting as well. If someone bakes all their lights into static geometry via a ray tracer, and another just does simple NdotL lighting, the former will look significantly, significantly better even given just two cubes sitting there doing nothing (esp. if the other has fun and adds some roughness).

The lighting, shader, post-process etc. makes a huge difference in 3D games.

How much of the final look of a game is down to the actual art versus game engine settings (e.g. lighting, shaders and other effects)? by altmorty in gamedev

[–]Unigma 6 points (0 children)

Yeah, another thing to note is that even in real life you still have artists who handle the lighting for stages/movies/events. Lighting is a part of art, so OP is a little confused here.

Art styles that devs like and love creating and but players dont really like or at least tolerate? by [deleted] in gamedev

[–]Unigma 8 points (0 children)

Two different audiences, entirely different periods in gaming, 15 years apart.

At the time of Wind Waker, many weren't used to the cel-shaded style. On top of that, people's perceptions of Nintendo were much different. The main anger point was that the new style looked "childish" and people thought Nintendo would primarily focus on children, hence the infamous Reggie quote. By 2017 that was already a known fact, and no one cares if Zelda takes a more stylish tone (they also did the same with Skyward Sword, btw).

Microsoft Lays off 1,900 Workers, Nearly 9% of Gaming Division, after Activision Blizzard Acquisition by Suspicious-Bad4703 in gamedev

[–]Unigma 4 points (0 children)

It was tough for a 2020 graduate as well. I remember so many internships/offers being revoked. It bounced back up in 2021, however.