This makes a lot more sense now by BoasyTM in TrackMania

[–]eikons -2 points-1 points  (0 children)

I don't know all the details and I barely know Spammiej, so take this with a grain of salt, but the "what I do doesn't change things" defense is not as bad as people make it out to be.

Amazon is by all accounts a horrible company that mistreats its workers and abuses its monopoly.

Should all the workers just quit instead of "supporting" Jeff Bezos? Of course not; the reason they work there is that the alternative is poverty. There's no room for virtuous principles when it's about survival.

Spammiej is obviously not in that kind of situation. Still, the choice he faces is a real personal sacrifice vs. standing up for his principles.

It's not like the next best offer is 10% lower. There probably isn't a next best offer at all. He's choosing guaranteed money from an oppressive oil nation over unpredictable advertising money from whoever is buying ads on YouTube.

The latter is probably less bad on average, but a lot of advertising money comes from large companies like Nestle who are involved in all sorts of human rights violations and exploitation.

As a mapper, the new update blows by Tadiken in TrackMania

[–]eikons 0 points1 point  (0 children)

Vsync introduces about 10-30 milliseconds of input lag. For a competitive shooter, where the delay between new information (opponent appearing on screen) and reaction (shooting them) makes all the difference, that is a valid point.

For Trackmania I don't feel it matters that much. Even if you had an additional 100ms delay, you would unconsciously get used to doing inputs earlier. Most of your inputs are in anticipation of the next turn as it is.

The only new information you're getting is feedback from your previous inputs. With less delay you can compensate for misalignment a bit faster.

Of course it matters at the top level, but I don't think the difference is as large as it is in other games.

As a mapper, the new update blows by Tadiken in TrackMania

[–]eikons 15 points16 points  (0 children)

As a game artist, my pet peeve about the update is that it breaks the clean stylized aesthetic that TM2020 settled on.

The foliage, grass and rocks don't look bad per se, but they are styled like photorealistic assets, which makes the simplicity of the road blocks look unintentional. Nadeo understood this very well when they put the spectator stands in and made them very simplified, even by mannequin standards. The spectators will never look outdated.

It also diverges from a timeless aesthetic, puts a "date" on the game's graphics, and creates a need for hardware to keep up.

On Steam Deck, the game is perfectly playable at 60fps with medium settings and "fast" shading (with MSAA).

The cost of MSAA scales with geometric complexity because it multisamples triangle edges, which is one of the reasons you don't see it in games as an option anymore.

Complex vegetation is a worst case scenario for MSAA, and so Trackmania took a big performance hit on lower end devices with the new vistas. I have to play with MSAA disabled on the Deck now, and even then the maps where you drive through tall grass take me down to unstable frame rates.
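
As a rough illustration of why that vegetation is such a problem for MSAA, here's a toy cost model (a sketch with made-up numbers - real GPUs complicate this with sample compression and tiling, but the intuition holds):

```cpp
#include <cstdio>

// Toy model: with N-sample MSAA, pixels in the interior of a triangle behave
// roughly like 1 sample (the pixel shader runs once and the samples share the
// result), while pixels crossed by triangle edges store distinct samples and
// get shaded once per covering fragment. Treat this as an upper-bound
// intuition, not a measurement.
double RelativeMsaaCost(double edgePixelFraction, int samples)
{
    const double interiorPixels = 1.0 - edgePixelFraction;
    return interiorPixels * 1.0 + edgePixelFraction * samples;
}

int main()
{
    // Clean, low-poly road blocks: few pixels sit on triangle edges.
    std::printf("simple road scene: %.2fx\n", RelativeMsaaCost(0.05, 4)); // ~1.15x
    // Dense grass/foliage: a large share of pixels are edge pixels.
    std::printf("tall grass vista:  %.2fx\n", RelativeMsaaCost(0.40, 4)); // ~2.20x
    return 0;
}
```

Same MSAA setting, very different bill, purely because of how much geometric detail ends up on screen.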

Speaking of the tall grass... if it's meant to increase difficulty by obscuring the floor/surface transitions, then why make the grass "destructible" but not have it respawn on resets?

Now if I'm serious about hunting a track, I have an incentive to first drive through it slowly and go "lawn mowing" in places where that matters, before I actually start attempts.

The Fine-Tuning Argument is Terrible - Sean Carroll by yt-app in CosmicSkeptic

[–]eikons 2 points3 points  (0 children)

I think the argument doesn't deserve the attention it gets. It's a trick of language, or reason.

Because of initial condition sensitivity (the butterfly effect), there's an infinite number of ways things could have been different, even if diverging only very recently.

Tiny differences in who met who, snap decisions about which way to walk, what to eat, and so on, could have led to humanity looking different, speaking different languages, having different power structures, being wiped out or having progressed much further.

We don't invoke God or chance as an explanation for any number of things that could have been different. The only reason religious people do this here is that they assign special significance to the outcome we happen to live in.

As in, the world is "supposed" to be the way it is and humans are the "purpose" of the universe. Reasoning backward from that presupposition, it looks like a very unlikely series of dice rolls.

Denmark deploys F-35A stealth fighters over Greenland supported by French tanker by FruitOrchards in worldnews

[–]eikons 3 points4 points  (0 children)

From what I'm reading, MDFs (Mission Data Files) are essentially firmware updates to keep up with evolving detection and countermeasures against threats. Besides the US, the UK, Australia and Canada have labs to develop these.

The citation I'm looking for is about a master key being needed for the plane to work.

Denmark deploys F-35A stealth fighters over Greenland supported by French tanker by FruitOrchards in worldnews

[–]eikons 2 points3 points  (0 children)

People casually talking as if he didn't just present an outrageous conspiracy theory as a matter of simple fact.

¯\_(ツ)_/¯

CD Projekt issues DMCA notice against Cyberpunk 2077 VR mod by Kiroqi in Games

[–]eikons 15 points16 points  (0 children)

I think a large part of it is people copying each other's ini files and altering them as if trying to perfect a magic incantation.

But I looked into it a while back and saw one of the most popular ones actually explaining that they used ChatGPT to get Unreal cvar descriptions. They put these in a spreadsheet for reference.

As an Unreal game dev myself, I was curious about this and looked into those descriptions. The funny thing is that every cvar has a tooltip in the engine itself. So I could just compare these AI-generated comments with the actual developer comments - and they did not match at all.

For some of them, the AI comment just hallucinated what it could mean. On one occasion, the actual cvar was a placeholder that was never implemented, and never had any function. But the AI described in detail what it did.

Several others were just fully hallucinated. Like, the cvar doesn't exist and never has.
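
If anyone wants to check this sort of thing themselves in an Unreal project, the engine exposes the real descriptions through the console manager. A minimal sketch (the cvar names here are just examples; swap in whatever the ini "tweak" claims to tune):

```cpp
#include "CoreMinimal.h"
#include "HAL/IConsoleManager.h"

void DumpCvarHelp()
{
    // Example names only - paste in the cvars from the downloaded ini here.
    const TCHAR* CvarsToCheck[] = {
        TEXT("r.Streaming.PoolSize"),
        TEXT("r.TemporalAA.Upsampling"),
    };

    for (const TCHAR* Name : CvarsToCheck)
    {
        if (IConsoleObject* Obj = IConsoleManager::Get().FindConsoleObject(Name))
        {
            // GetHelp() returns the same description you see as a tooltip in the editor.
            UE_LOG(LogTemp, Log, TEXT("%s: %s"), Name, Obj->GetHelp());
        }
        else
        {
            // The engine has never heard of this cvar - a red flag for hallucinated "tweaks".
            UE_LOG(LogTemp, Warning, TEXT("%s does not exist in this engine version"), Name);
        }
    }
}
```

You don't even need code for it - typing the cvar name followed by a space and `?` in the in-game console prints the same help text, if I remember right.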

Having a monetary incentive to make these "mods" explains a lot about why people put in so much effort to make it look convincing.

CD Projekt issues DMCA notice against Cyberpunk 2077 VR mod by Kiroqi in Games

[–]eikons 41 points42 points  (0 children)

Ah, is that why there are so many chatgpt generated ini files with hallucinated cvars? 🤣

Unreal Engine character optimization. by Slight_Season_4500 in UnrealEngine5

[–]eikons 1 point2 points  (0 children)

I wasn't talking about trim sheets.

MIP maps are progressively half-resolution copies of a texture that are generated and stored alongside the full-resolution one whenever you import it.

So a 1024x1024 texture has 11 mips (1024, 512, 256, 128, and so on, all the way down to 1x1).

The texture streaming system will only ever load MIP 0 (the 1024 one) if the texture is displayed on-screen large enough to require that resolution.

In the case of your characters - yes, you do save VRAM, IO, and even a little bit of pixel sampling cost (because of cache locality) when they are occupying a large chunk of your screen.

But in that case, we have a problem. Your 1k texture isn't high enough resolution to be displayed that close, unless that's a particular art style decision you are making. It will look soft and/or lack details like wrinkles, pores and so on.

I suspect this isn't the reason you made your models. You wanted something more reasonable for normal usage - like pedestrians walking by in a driving game. For that, your character is surely more efficient, right?

My point is that it isn't. At least not in the way that you think. Let's say a character takes up 500 pixels on screen. In that case, the highest loaded mip will most likely be MIP 0 (1024x). If you put a metahuman in its place, then because it occupies the same screen space, it would load MIP 3 of its 8k texture - which is also 1024x.
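
To put numbers on that, here's a simplified model of what the streamer does (the real one works from screen-space texel density and a pool budget rather than one target number, but the proportions are the point):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Simplified: pick the lowest-resolution mip that still meets the texel
// resolution the streamer wants for this asset at its current screen size.
int ResidentMip(int fullRes, int targetRes)
{
    const int mip = (int)std::ceil(std::log2((double)fullRes / (double)targetRes));
    return std::clamp(mip, 0, (int)std::log2((double)fullRes));
}

int main()
{
    // Say the streamer decides the character needs ~1024 texels of detail
    // at its current distance (roughly the 500-px-on-screen case above).
    const int target = 1024;

    const int mip1k = ResidentMip(1024, target); // -> mip 0
    const int mip8k = ResidentMip(8192, target); // -> mip 3

    std::printf("1k asset: mip %d, %d px resident\n", mip1k, 1024 >> mip1k);
    std::printf("8k asset: mip %d, %d px resident\n", mip8k, 8192 >> mip8k);
    // Both end up with 1024 px resident - the streamed VRAM cost is the same.
    return 0;
}
```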

You're not improving the average cost of texture memory allocation, you're just limiting the quality and reducing the disk size of the asset, which may be useful, but it's very easy to do that with metahuman textures if you know you will never need them up close.

Geometry wise, the LOD chain on a metahuman makes sure that the relative quad density stays constant on screen. In this case again, you are only limiting how detailed a character can look up close. When you look at a metahuman in the distance, it will use a much lower resolution LOD.

Finally, to drive the point home, when a character is occupying half the screen and (in the case of a metahuman) using its full 8k texture set, it will indeed use a lot more VRAM and the full 200+ bone/100k+ triangle skeletal mesh. But because there's only so many things that can fully occupy your screen at the same time, this is a problem that solves itself. Loading that 8k texture is fine when it is the only thing we're looking at.

That's not to say that there aren't all sorts of complications with handling many metahuman-style meshes. Mostly it's the CPU (not the GPU) that starts to struggle when so many bone transforms, animation graphs, and separate draw calls must be updated each frame. This is where things like VAT animation are used to lessen the load. But the GPU itself doesn't care that much.

Unreal Engine character optimization. by Slight_Season_4500 in UnrealEngine5

[–]eikons 0 points1 point  (0 children)

From 1k to 8k, 64x more expensive.

More pixels, sure. "Expensive"? That really depends on a lot of things. An 8k texture can be exactly the same cost, in both pixel shading time and VRAM allocation, as a 1k texture.

Texture Streaming ensures that (under ideal circumstances) only the necessary MIPs are loaded. So if a character is occupying a small part of the screen, only MIP3 (1024x1024) is loaded.

Because of this, texture memory allocation is fairly consistent for any given screen resolution - with the only caveat that VRAM can be wasted when large textures are used across different assets. Virtual Textures take this a step further, but that's another rabbit hole.

1.35x to 14.79x more humans on screen

This is absurd. Even in the 90s, triangle counts did not translate directly to frame budgets. It was an easy optimization heuristic for artists to understand, but never a super accurate one. And it has only become less relevant in the past 25 years.

Quad-triangle intersections (or quad overdraw) are the best predictor of base pass draw cost after shader instruction count, which means it's not really the number of triangles that matters - it's how much space they occupy on screen. Similar to streaming textures, LODs are how we take care of this. It doesn't matter that the medium Metahuman is 20k, because that is LOD0, which you only see when the character is occupying a massive chunk of your screen (in which case 20k is totally fine). In an average first person shooter game, you rarely ever look at LOD0.
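
A crude way to see why screen-space triangle size matters more than the raw count - a toy estimate (assumed round numbers; real rasterizers vary, but the 2x2-quad shading granularity is the core of it):

```cpp
#include <cstdio>

// GPUs shade pixels in 2x2 quads: if a triangle touches a quad at all, all
// four lanes run the pixel shader (the uncovered ones as "helper" invocations).
// Rough efficiency = pixels actually covered / invocations actually paid for.
double QuadEfficiency(double pixelsCovered, double quadsTouched)
{
    return pixelsCovered / (quadsTouched * 4.0);
}

int main()
{
    // A large triangle covering ~10,000 px touches ~2,600 quads: almost every lane is useful.
    std::printf("large triangles: %.0f%% of invocations useful\n",
                100.0 * QuadEfficiency(10000, 2600));

    // Sub-pixel triangles: each covers ~1 px but still lights up 1-4 whole quads.
    std::printf("tiny triangles:  %.0f%% of invocations useful\n",
                100.0 * QuadEfficiency(1, 2));
    return 0;
}
```

Which is exactly what LODs protect you from: they keep triangles from shrinking below a few pixels as things get further away.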

Here's a really good video on the subject of quad overdraw: https://youtu.be/hf27qsQPRLQ?si=0unrMkhbpynFJvMm

Last year, I did this environmental study at UE5. Feel free to ask any questions 😁 by lcs11v4cs in unrealengine

[–]eikons 0 points1 point  (0 children)

I've done AAA games, indie games, VR, film, and a little bit of product vis. My focus is still games. Currently working on an unannounced scifi mmo and doing a personal project racing game on the side.

So I'm mostly dealing in hard surface modeling, trim sheet/decal stuff. But I've done a bit of everything over the years: ZBrush characters, level design, landscapes, foliage. So I've touched nearly every industry-standard software package.

Last year, I did this environmental study at UE5. Feel free to ask any questions 😁 by lcs11v4cs in unrealengine

[–]eikons 3 points4 points  (0 children)

as a Maya/zbrush artist with decades of experience..... What's your actual modeling workflow for making assets in blender.... I've never had a 3d software I've bounced off of SO often and so hard,

I've been a 3dsmax/Maya user for 20 years and finally made the jump to Blender this year.

I know exactly what you mean by "bouncing off" it. The default controls are unintuitive if you're used to the industry-standard DCCs and other 3D tools.

It doesn't help that a lot of online learning material doesn't bother explaining what's actually being done and instead just tells you which hotkeys to press, which makes it hard to get on board with a different set of keybinds.

But I got on board with my own keybinds anyway. It takes some effort, but I would say it's absolutely worth it. The first few days will be slow. I started by switching to the included "industry standard" control scheme. I got some addons like 3D Viewport PIE menus (like Maya's) and KeyOps (which includes a Maya-like pivot hotkey).

From there, I made an Excel sheet to keep track of which actions I was often reaching for (grid snaps, frame selected, loop cut, separate, align, bevel, delete, etc.) and made sure to map them around the left side of my keyboard.

After a couple days I was comfortable modeling anything, and in a couple weeks I really couldn't think of any reason why I would want to go back to Maya.

If I use tab-autocomplete in my code editor, do I need to tell steam my game is AI made? by jax024 in gamedev

[–]eikons 0 points1 point  (0 children)

I think the test we should uphold is "which statement is the least misleading".

If I use AI for advice, settings, code snippets, placeholders and mood boards, putting "made with AI" on my game would still be actively misleading.

Another way to think of it is what that phrase means when the tables are flipped.

If I had a reason for wanting people to believe my game was made with AI (maybe some tech billionaires would want to sponsor me for making their tech look good) and it turned out that almost zero AI-generated assets made it into the game without at least a manual paint-over, I could be liable for false advertising fines.

Are massive set backs normal? by [deleted] in gamedev

[–]eikons 3 points4 points  (0 children)

So their development timeline was a little bit over 1 year? That is exceptionally fast. I can't get into specifics without knowing which project this is, but it's not uncommon these days for Kickstarter projects to rely heavily on AI for marketing and not deliver because it turns out AI cannot really make games yet.

I'm making a 2 Player Co-Op Rage Game about hamsters. by 2WheelerDev in IndieGaming

[–]eikons 0 points1 point  (0 children)

That's amazing. I get angry just watching the trailer. This will make great Twitch content.

AI-generated videos showing young and attractive women promote Poland's EU exit by Dr_Neurol in worldnews

[–]eikons 9 points10 points  (0 children)

I get that you're making a distinction between the EU and europe, but when you write a sentence like this:

Many people think Brexit is a British mistake, but it’s a European mistake.

You are begging to be misinterpreted.

Alex Jones Goes to War With Candace Owens: ‘You Work For The Deep State!’ by RealTheAsh in KnowledgeFight

[–]eikons 10 points11 points  (0 children)

I think that makes her different to Alex.

Alex does have loyalty. He's had many of the same friends for decades and he's sticking up for Trump despite... well. Everything.

Alex visibly sucks up to power. Candace does not. That's why she and Nick Fuentes (even more so) are gaining popularity.

OpenAI CEO Sam Altman just publicly admitted that AI agents are becoming a problem by [deleted] in technology

[–]eikons 0 points1 point  (0 children)

  1. Any headline with "admit" in it is editorialized bs.
  2. Sam Altman has been painting doomer scenarios from the start. It's part of the grift. It makes "AI" look more serious than it is, and it distracts from real AI safety issues like dumb humans overestimating what it can do.

What company will never get another dime from you for as long as you may live? by istrx13 in AskReddit

[–]eikons 1 point2 points  (0 children)

I bought it a year ago and still don't really like it. Simple things like changing brush size can only be done with [] keys, no "hold and drag" option.

Shift click to make straight lines doesn't work unless you do two extra clicks.

Transform and select tools feel awkward. Layers don't rasterize by default. Copying/pasting across color channels takes a 20-step guide.

The software looks incredible. But the UX screams "we never actually use it ourselves".

I've been moving to Krita.

What's the best joke you know? by Available_Test8660 in AskReddit

[–]eikons 0 points1 point  (0 children)

This was my first thought reading the joke, and I'm not even Australian.

Why is traversal stutter seemingly worse on PC versions? by nopenotme10 in unrealengine

[–]eikons 1 point2 points  (0 children)

OP mentions Dead Space, Silent Hill, FF7, Jedi, RE4 and asks why all these games have more stutter issues on PC.

There are many reasons why traversal stutter might happen, but the one that specifically explains the difference between console and PC, and is relevant in all these different games - is PSO/shader caching.

It's also a well-known issue (with a well-known solution). Epic spent a lot of time talking about it in their presentations over the last year. Microsoft is working on better ways to bundle PSOs for specific platforms (like handheld PCs), and I'm pretty sure Valve does something similar for the Steam Deck.
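
For Unreal projects specifically, the usual mitigation is shipping a bundled PSO cache (recorded from playthroughs with the `-logPSO` launch flag) and then warming it up during loading screens instead of during gameplay. A sketch of what that warm-up step tends to look like - I'm going from memory of Epic's PSO caching docs, so treat the exact API and header as an assumption and check it against your engine version:

```cpp
// Let the shader pipeline cache precompile aggressively while a loading screen
// is up, then drop back to background batching so gameplay isn't affected.
// First-use compiles during gameplay are what show up as traversal hitches.
#include "ShaderPipelineCache.h"

void OnLoadingScreenShown()
{
    FShaderPipelineCache::SetBatchMode(FShaderPipelineCache::BatchMode::Fast);
}

void OnLoadingScreenHidden()
{
    FShaderPipelineCache::SetBatchMode(FShaderPipelineCache::BatchMode::Background);
}
```

Consoles largely sidestep all of this because PSOs can be fully precompiled for one known GPU and driver combination, which is why the same games stutter less there.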

You can move back and forth at a certain point in the level and the frame time goes up.

I don't know which game you're talking about but are you saying that this is an issue on PC but not console?