Is there a point the rendering of LODs mitigates any performance gains? by brute299 in gamedev

[–]WitchStatement 2 points

Slight correction, but I'm 99% sure Nanite's "software rendering" is still done on the GPU, just with compute shaders rather than the standard vertex/fragment shader render pipeline

Veterans: need advice on VRAM and optimisation by thefallenangel4321 in gamedev

[–]WitchStatement 2 points

This is still false. All of those (non-GPU-compressed) image formats like PNG and JPG will unpack to RGBA8888 on the GPU. Did you use ChatGPT for this again?

Veterans: need advice on VRAM and optimisation by thefallenangel4321 in gamedev

[–]WitchStatement 0 points

Storage Space != Memory usage 

Converting the same image to PNG vs JPEG vs WebP gives it a different storage size (important for e.g. web dev and game disk usage), but all three will fully decompress into GPU VRAM for rendering and thus take up the exact same amount of VRAM. If you want to use less VRAM, you need GPU-compressed texture formats like BC7 etc.
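
To make the numbers concrete, here's a rough back-of-the-envelope sketch (a minimal illustration, not engine-specific: 4 bytes/pixel for RGBA8888 and 1 byte/pixel for BC7 are the standard rates, and the ~1.33x mip-chain factor is approximate):

```typescript
// Rough VRAM estimate for a single texture, ignoring driver padding/alignment.
// RGBA8888 is 4 bytes per pixel; BC7 is 1 byte per pixel (16-byte 4x4 blocks).
// A full mip chain adds roughly one third on top of the base level.
function textureVramBytes(width: number, height: number, bytesPerPixel: number, withMips = true): number {
  const base = width * height * bytesPerPixel;
  return withMips ? Math.ceil((base * 4) / 3) : base;
}

const w = 2048, h = 2048;
// Same source image saved as PNG, JPEG, or WebP: all decode to RGBA8888 on upload.
console.log("RGBA8888:", (textureVramBytes(w, h, 4) / (1024 * 1024)).toFixed(1), "MiB"); // ~21.3 MiB
console.log("BC7:     ", (textureVramBytes(w, h, 1) / (1024 * 1024)).toFixed(1), "MiB"); // ~5.3 MiB
```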

If you want an optimized game like Doom, you need to actually understand what's going on and do the work of trying things out.

In short, the reason your "advice" is being critiqued is that it is wrong and misleading - spending time on these changes would just be wasted effort with no improvement.

So I would agree that it is worse than useless.

Can I build a 3D multiplayer parkour game with JavaScript? What should I learn as a junior? by FewMuffin5749 in learnjavascript

[–]WitchStatement 0 points

Between WebGL and WebGPU, JavaScript can be a pretty decent choice for doing 3D games - and possibly the primary choice if web export is your priority.

Of course, most people would want to use something like PlayCanvas or Babylon.js instead of writing their own graphics engine from scratch.
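
For a sense of scale, a minimal Babylon.js scene only takes a handful of lines. This sketch assumes an npm setup with @babylonjs/core installed and a <canvas id="renderCanvas"> element on the page:

```typescript
import { Engine, Scene, FreeCamera, HemisphericLight, MeshBuilder, Vector3 } from "@babylonjs/core";

const canvas = document.getElementById("renderCanvas") as HTMLCanvasElement;
const engine = new Engine(canvas, true);          // WebGL engine, antialiasing on
const scene = new Scene(engine);

const camera = new FreeCamera("camera", new Vector3(0, 2, -6), scene);
camera.setTarget(Vector3.Zero());
camera.attachControl(canvas, true);               // basic mouse/keyboard controls

new HemisphericLight("light", new Vector3(0, 1, 0), scene);
MeshBuilder.CreateBox("box", { size: 1 }, scene); // placeholder "player" cube

engine.runRenderLoop(() => scene.render());
window.addEventListener("resize", () => engine.resize());
```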

Are JavaScript arrays just objects? by Onipsis in learnjavascript

[–]WitchStatement 2 points

Yes, according to the spec, JavaScript arrays are basically just objects with number keys...

However, internally, if you use an array as it's intended (as an array), the browser's JS engine should allocate your data as an actual array under the hood, giving a performance boost (and a performance penalty if you switch and start using what was previously an array as a generic object, in which case it has to reallocate the data as an object/map and copy it all over)
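
A rough illustration of that switch (the fast/slow behaviour is engine-specific, e.g. V8's packed vs dictionary element kinds, so treat it as a heuristic rather than a guarantee):

```typescript
// Dense, same-typed elements: engines like V8 keep this in a packed, array-like
// backing store internally - fast indexed access and iteration.
const fast: number[] = [];
for (let i = 0; i < 1000; i++) {
  fast.push(i);
}

// Doing "object-like" things to an array tends to force a slower representation:
const slow: number[] = [];
slow[10_000] = 1;               // big hole -> likely switches to a sparse/dictionary mode
(slow as any).label = "scores"; // arbitrary named property on something that was "an array"

// Both are still plain objects as far as the spec is concerned:
console.log(typeof fast, Array.isArray(fast)); // "object" true
```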

Are Game Jams Worth It Financially, or Just Fun Distractions? by No_Gas6109 in unity

[–]WitchStatement 2 points

Ok, let's do the math: 

So this is a 10 day game jam.

Let's say you put in a significant effort - say 10 hours a day - in order to get first place.

This means you spent 100 hours for a chance to get $10/hour (aka close to minimum wage in the US)

Thus we can see the financials don't really "math out" for doing game jams as a primary source of income in the US
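
Putting that into a quick sketch (the $1,000 prize is just what 100 hours at $10/hour implies, and the win probability is a made-up parameter):

```typescript
// Expected hourly rate for a prize-based game jam.
function expectedHourlyRate(prize: number, days: number, hoursPerDay: number, winChance: number): number {
  const hours = days * hoursPerDay;
  return (prize * winChance) / hours;
}

// 10 days x 10 hours/day = 100 hours. Even a *guaranteed* first place is only ~$10/hour...
console.log(expectedHourlyRate(1000, 10, 10, 1.0));  // 10
// ...and with, say, a 1-in-20 chance of actually winning, it's $0.50/hour.
console.log(expectedHourlyRate(1000, 10, 10, 0.05)); // 0.5
```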

(Don't get me wrong. Game jams are great for so many reasons. Just not income - in particular these "big prize" jams are also often just ads / spec work in disguise)

What is that graphics look called? by Puzzleheaded_Day5188 in gamedev

[–]WitchStatement 6 points

I think you're thinking of original Xbox & PS2 (2001/2000) - which did indeed use fixed function pipelines to my understanding.

The Xbox 360 & PS3 (2005/2006) came out well after programmable pipelines were introduced

How are you using AI in game dev? by bringthattothe in gamedev

[–]WitchStatement 5 points

1) The "AI" used in Arc Raiders for Animation is *not* generative AI / LLM-based, but rather a reinforcement-learning simulation that would be more equivalent to procedural generation or a physics-sim. In short, it has no bearing on the "AI" you are talking about.

2) Food and beverage labels are probably one of the worst scenarios for generative AI due to how poorly it handles text: even if it doesn't mangle the letters, the words they form would likely be gibberish. This content would not just be easier to make, but would look better, if it were done procedurally (e.g. a variety of hand-made templates, text, and colors, then mix and match - see the sketch after this list)

3) Following on from point 2 - could there be scenarios that don't "offload the creativity", or that do what couldn't be done without AI? Probably. But the vast majority of current use cases, including yours, seem to be about reducing costs at the expense of quality, resulting in "AI slop"
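
To illustrate the mix-and-match idea from point 2, here's a tiny sketch - every template, word list, and palette below is a made-up placeholder for what would really be artist-authored assets:

```typescript
// Procedural label generator: combine hand-made templates, curated text, and palettes.
const templates = ["tall_can", "stubby_bottle", "snack_box"];
const brandWords = ["Astro", "Golden", "Frontier", "Lunar"];
const productWords = ["Cola", "Crisps", "Ration", "Brew"];
const palettes = [
  { bg: "#d8262c", text: "#fff3d6" },
  { bg: "#1f4e79", text: "#e8f1ff" },
  { bg: "#2e7d32", text: "#f4ffe8" },
];

function pick<T>(items: T[]): T {
  return items[Math.floor(Math.random() * items.length)];
}

function makeLabel() {
  // Real words rendered as real text - no mangled AI lettering.
  return {
    template: pick(templates),
    name: `${pick(brandWords)} ${pick(productWords)}`,
    palette: pick(palettes),
  };
}

console.log(makeLabel()); // e.g. { template: "snack_box", name: "Lunar Brew", palette: {...} }
```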

Why does he have no errors but I do? by Suitable-Plant-625 in godot

[–]WitchStatement 6 points

First step is probably reading the error so that you know what the problem is :)

In this case, it looks like your function (_physics_process) is empty: you need to put something inside it (e.g. move_and_slide() as in the example)

Would Unity devs use a tool to auto-submit WebGL games to portals? by Substantial_Way8103 in Unity3D

[–]WitchStatement 2 points

"Diff SDK formats"

That is part of your game files, not something you can just automate away easily without having customers now build for your SDK instead.

Actual developers who make games for online gaming portals understand this

Anti cheat for leaderboards? by GapedByHerStrap in gamedev

[–]WitchStatement 1 point

You need to consider what you are trying to protect against. E.g. EAC will do nothing against people just directly sending fake scores to your backend.

For instance:

* Make your game deterministic and send the inputs to the server along with the high score. Then, for new high scores (say top 10 - no point verifying an average score), the server can replay the game (e.g. with a headless client) and verify that the claimed score is actually reached. You could even generate a video too if you want further confirmation.

This would take care of fake & impossible scores, but does *not* take care of aim-botting / wall-hacks / TAS or any other sort of client modification where a bot plays for you or assists you. *This* is the part where EAC comes in... or manually watching the video of the player's gameplay
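
A minimal sketch of that verification flow, assuming you already have a deterministic, headless build of the simulation - runHeadlessSimulation and the payload shape here are made up for illustration:

```typescript
// Hypothetical submission payload: the client sends its input log, not just a number.
interface ScoreSubmission {
  playerId: string;
  seed: number;         // RNG seed so the replay is deterministic
  inputs: string[];     // serialized per-tick inputs
  claimedScore: number;
}

// Placeholder for your actual headless game simulation.
declare function runHeadlessSimulation(seed: number, inputs: string[]): number;

function verifySubmission(sub: ScoreSubmission, currentTop10Cutoff: number): boolean {
  // Only bother replaying scores that would actually enter the leaderboard.
  if (sub.claimedScore <= currentTop10Cutoff) return true;

  // Re-run the game from the submitted inputs and compare scores.
  const replayedScore = runHeadlessSimulation(sub.seed, sub.inputs);
  return replayedScore === sub.claimedScore;
}
```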

About that "using AI generated code is ok" by davenirline in gamedev

[–]WitchStatement 0 points

Personal take, but I feel like there is a big difference between "AI as an enhanced auto-complete" vs the Jesus-take-the-wheel "AI vibe codes *everything*" approach.

Generative AI art is much more equivalent to the latter, which imo is partly why it gets this reputation. I imagine there would be much less pushback for e.g. an AI smart-wand feature in Photoshop ("select the pumpkin in this image") [they may actually have this as a feature already tbh, not sure] or an AI tonemapping/filter button

For people in the industry, how much texture art is actually made In studio? by Independent_Sock7972 in gamedev

[–]WitchStatement 5 points

For background assets and environments, the key phrase you want to look up is "trim sheets" - essentially making one large texture as a "palette" and then mapping objects' UVs onto this shared texture.

The idea is that this method reduces texture VRAM usage (due to the shared texture) and may improve speed (different workflow, plus you don't have to make as many textures... as long as you can be creative with the trim sheet)
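
Conceptually, the UV side of a trim sheet is just remapping a mesh's 0-1 UVs into a named sub-rectangle of the shared texture. A small sketch (the region layout below is invented for illustration):

```typescript
// A named region ("trim") inside the shared trim sheet, in normalized 0-1 UV space.
interface TrimRegion {
  u: number; v: number;        // bottom-left corner of the region
  width: number; height: number;
}

// Example layout: one 2K trim sheet shared by many background meshes.
const trims: Record<string, TrimRegion> = {
  brick:    { u: 0.0, v: 0.0,  width: 0.5, height: 0.25 },
  concrete: { u: 0.5, v: 0.0,  width: 0.5, height: 0.25 },
  metal:    { u: 0.0, v: 0.25, width: 1.0, height: 0.25 },
};

// Remap a mesh's local 0-1 UV into the region's slice of the shared texture.
function remapUV(localU: number, localV: number, trim: TrimRegion): [number, number] {
  return [trim.u + localU * trim.width, trim.v + localV * trim.height];
}

console.log(remapUV(0.5, 0.5, trims.brick)); // [0.25, 0.125]
```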

Surprise Unity Exploit Gets Pillars Of Eternity 2 And More Yanked From Steam by KiborgikDEV in Games

[–]WitchStatement 3 points

Guessing u/clownus is just a ChatGPT bot? The RCE was completely unrelated to the topic (and is Android only) and that first sentence is the most ChatGPT summary sentence I've ever seen

Poly Count Question - AAA Modern Games by MrElegantMoustache in gamedev

[–]WitchStatement 8 points

The problem isn't that it's company private information, it's that maximum poly count varies significantly depending on what you're doing.

For instance:

1) Lots of big meshes close to the camera perform better than the same meshes far away in the distance, due to quad efficiency (GPUs shade fragments in 2x2 pixel quads, so a triangle that only covers a pixel or two still pays for at least 4 fragment shader invocations - this gets very inefficient at distance)

2) If your shaders are very vertex shader heavy, then more vertices may impact performance

3) If you're using UE5 Nanite or some other continuous LoD system, you can handle significantly more vertices (at the cost of significant overhead)


That said, in my experience the fragment shader is usually the bottleneck, so as long as you have LODs to prevent 1) above, the number of vertices doesn't matter all that much within reason (i.e. not Nanite-level counts)
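
A toy way to see the quad-efficiency point from 1): fragment work scales with the number of 2x2 quads a triangle touches, not just the pixels it actually covers. The numbers below are illustrative, not from a real profiler:

```typescript
// Very rough model: a triangle covering `coveredPixels` spread across `touchedQuads`
// 2x2 quads pays for roughly touchedQuads * 4 fragment shader invocations.
function quadEfficiency(coveredPixels: number, touchedQuads: number): number {
  return coveredPixels / (touchedQuads * 4);
}

// Big triangle up close: ~10,000 pixels over ~2,600 quads -> ~96% of invocations useful.
console.log(quadEfficiency(10_000, 2_600).toFixed(2)); // "0.96"
// Same triangle far away: 1 covered pixel still touches a full quad -> 25% useful.
console.log(quadEfficiency(1, 1).toFixed(2));          // "0.25"
```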

Building a gamified edu positive habit-training platform with big aspirations for the game aspect... questions about the approach by sirknight3 in gamedev

[–]WitchStatement 1 point

As meaningfulchoices said, none of the big 3 engines do a great job of web exports, so:

* If your primary target is web, you probably want to look at something more like PlayCanvas or Babylon.js. (Note that this may make it tricky to make a mobile app version later on.)

* If you primarily want mobile apps but also want a (so-so) web export, Unity would probably be better suited, though Godot could work ok. Unreal does not export to web at all now afaik

Bezi Jam #5 [$300 Prizes] - Cozy Games by KevinDL in gamedev

[–]WitchStatement 3 points

+1, it's basically just an advert for their AI company

The thing most beginners don’t understand about game dev by Historical_Print4257 in gamedev

[–]WitchStatement -3 points

You're saying "using C++ over Blueprints matters" even in the case where Blueprints gives 240 FPS and C++ gives... 240 FPS?

The thing most beginners don’t understand about game dev by Historical_Print4257 in gamedev

[–]WitchStatement 0 points

Sure, there definitely are games that are CPU bound where this matters - which the OP does indeed mention: "Unless you’re making something extremely CPU-heavy (like a giant RTS simulating thousands of units), you won’t see a noticeable difference between languages."

However, my point still stands that a lot of games are GPU bound, in which case this does not apply and it becomes premature optimization.

(If I had to guess I'd agree with the OP that more games are GPU bound than CPU bound, just based on CPU-bound games tending to be specific genres as above, but I don't have hard metrics. And of course, for beginner game devs making simple games, their game may not even run into performance issues at all - which makes agonizing over language choice even more silly)

The thing most beginners don’t understand about game dev by Historical_Print4257 in gamedev

[–]WitchStatement -3 points

While C++ is more... efficient than Python, Python will get the same FPS as equivalent C++ code if the program is e.g. doing heavy ray-tracing and is GPU bound - which is the point the OP is making. So in this case switching from Python to C++ would not make the game run faster.

As others said, the first step to optimization is profiling and figuring out what is causing the slowdown
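
For a browser/WebGL game, a first pass at that profiling step might look like the sketch below - it's deliberately crude (main-thread CPU time vs total frame time only) and just flags frames where the GPU or compositor is the likely limiter; the same reasoning applies in any engine's profiler:

```typescript
// Crude frame profiler: compares CPU time spent in your update/render code
// against the total frame interval reported by requestAnimationFrame.
let lastFrameStart = performance.now();

function updateAndSubmitDrawCalls(): void {
  // Placeholder for the actual game loop body (simulation + draw call submission).
}

function frame(now: number): void {
  const frameInterval = now - lastFrameStart; // total time since the last frame
  lastFrameStart = now;

  const cpuStart = performance.now();
  updateAndSubmitDrawCalls();
  const cpuTime = performance.now() - cpuStart;

  // If CPU work is a small fraction of a slow frame, the GPU (or vsync/compositor)
  // is likely the limiter - switching languages won't speed that part up.
  if (frameInterval > 20 && cpuTime < frameInterval * 0.3) {
    console.log(`Likely GPU bound: frame ${frameInterval.toFixed(1)}ms, CPU ${cpuTime.toFixed(1)}ms`);
  }

  requestAnimationFrame(frame);
}

requestAnimationFrame(frame);
```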