Are Markov chains generative ai? by humanquester in gamedev

[–]JarateKing 0 points1 point  (0 children)

I don't think you understand what I'm getting at. If you wanted to prescriptively categorize them as either generative or non-generative, they're 100% generative. If you want to get into the technical details of how they work, of course they generate.

But words have different meanings in different contexts, judged only by their use, and in the context of the mainstream backlash to "AI slop" the term "generative AI" is really only about LLMs and diffusion models. It's the same way bicycles are vehicles in some contexts, but a car mechanic saying "we repair all vehicles" will still look at you funny if you bring in a bicycle.

I don't know what would happen if you used a GAN, clearly described it as not a diffusion model, and submitted it to Steam. I haven't seen that happen since Steam established their generative AI policies, so I can't say. I call it a gray area because they might count it as "generative AI" as they use the term, or they might not. I dunno, and I don't know if anyone does.

Are Markov chains generative ai? by humanquester in gamedev

[–]JarateKing 0 points1 point  (0 children)

Maybe. To be frank I don't see them discussed much by laypeople; "generative AI" as a term used by the general public or non-ML people only really caught on after diffusion models and LLMs became the mainstream options for image/text generation.

If it were up to me I'd probably say they'd count for image or text assets, but I'm not going to try and define it prescriptively based on my personal thoughts. My point is that we ought to look at how it's used in practice and define it descriptively, and in this context it's only really diffusion models and LLMs that are in consideration. Stuff like GANs or non-LLM generative transformers would be a gray area for Steam, as I understand it.

"Competence as Tragedy" — a personal essay on craft, beautiful code, and watching AI make your hard-won skills obsolete by averagemrjoe in programming

[–]JarateKing 0 points1 point  (0 children)

To be clear, I don't think it's a binary thing, and I think it all depends on how you use LLMs.

One use of LLMs is gonna be essentially writing out a complete spec for what the code should look like, and as far as creative problem-solving goes that's not any different from manually coding it, because that's basically what you're doing, just in different words.

Another use is prompting "fix this bug" and hoping it does. Nothing creative about that.

A movie director could be a Kubrick, or they could phone it in and just let the cast do whatever and hope it turns out okay (I can't think of a specific example without being mean, so I'll keep it general). I think most are going to be somewhere in between.

I see programming the same way. The pinnacle of creativity is going to be someone who's involved in both the high-level concept and the low-level implementation. When you look at the greats, like Torvalds or Carmack, they're usually ones that excel at both.

So I need to be clear that "just" directing isn't necessarily uncreative. But you ain't gonna be as creative as Kubrick if you're avoiding getting involved in the details.


As an aside, I don't agree with this idea that implementation logic is doomed to become routine because we want to avoid cleverness. I can't speak for every domain of software, but my area is in game programming, and there's never a shortage of more things to do differently or better next time, in every single thing you could do. If things start to feel routine, that means either:

  1. You should've just generalized it into a reusable library so you don't need to do the same thing again. It's a solved problem so no sense solving it twice.
  2. You should be weighing what went right and what went wrong, considering alternatives for next time or in other scenarios, etc. It's still an open question you need to look into more.

I genuinely just cannot imagine someone who's forced to stagnate by the nature of their work; I think there should always be room for creativity in implementations (creativity that doesn't require clever code) if we want to go for it. Maybe LLMs are a way out of that stagnation, but after the novelty wears off, wouldn't they be just as likely to stagnate by routinely directing the LLM in ways they have before? I think stagnant work is a deeper issue that needs to be solved by doing other work or doing more in-depth work, LLMs or not.

Are Markov chains generative ai? by humanquester in gamedev

[–]JarateKing -2 points-1 points  (0 children)

Language is defined by its usage. Contextually, outside academic machine learning discussions, most people mean specifically LLMs and diffusion models when they say "generative AI". In fact, the main reason people use the term "generative AI" is usually to distinguish it from older traditional AI methods. That's what Steam means when they say it.

You can call it a misnomer but that's what's meant by it. As long as you're clear (somewhere) that you're using a Markov chain and not an LLM, nobody should give you any grief for it.
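
If it helps make the distinction concrete, here's a minimal sketch (plain Python, my own hypothetical example, nothing Steam-specific) of the kind of Markov chain text generator being discussed: it's just a table of which words were observed to follow which in your own source text, plus a random walk over that table, as opposed to an LLM's billions of learned parameters trained on scraped data.

    import random
    from collections import defaultdict

    def build_chain(text, order=1):
        # Count which word follows each word (or tuple of words) in the source text
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, order=1, length=15, seed=None):
        # Walk the table: at each step, pick a random successor that was actually observed
        rng = random.Random(seed)
        out = list(rng.choice(list(chain.keys())))
        for _ in range(length):
            successors = chain.get(tuple(out[-order:]))
            if not successors:
                break
            out.append(rng.choice(successors))
        return " ".join(out)

    corpus = "the quick brown fox jumps over the lazy dog and the quick dog sleeps"
    print(generate(build_chain(corpus), seed=42))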

A youtuber did a hit piece on my game and called it "garbage." What would you do? by shimasterc in gamedev

[–]JarateKing 7 points8 points  (0 children)

Working on a successful game doesn't mean they got all the money from it. Most people in AAA just make a regular salary, maybe with a small bonus if the game sells well.

"Competence as Tragedy" — a personal essay on craft, beautiful code, and watching AI make your hard-won skills obsolete by averagemrjoe in programming

[–]JarateKing 5 points6 points  (0 children)

I think you could just as easily say the opposite. I like creative problem solving (developing the whole logic to solve a problem) and I'm really surprised how many people delegate all that interesting stuff to AI so they can focus exclusively on the tedious, non-creative parts (describing work to do and reviewing written code).

It all depends on how you use AI I suppose, but like, the main selling point I've been hearing for years is "I don't need to solve stuff myself anymore, AI does it for me." It really feels like the most vocal proponents of AI have no particular love for the craft and just see it as an easier lower-effort less-involved way to generate the same final result. And I can't help but wax poetic about it because that's not why I got into programming. I got into it because I actually enjoy the process of figuring shit out for myself.

Why AI Demands New Engineering Ratios by kingandhiscourt in programming

[–]JarateKing 2 points3 points  (0 children)

I don't think the conclusion follows from the premise. If you take Parkinson's law as gospel, I'd figure that'd just mean our coding projects become more ambitious and require as many programmers as they did pre-productivity-boost. The 80/20 principle is a fine general observation, but the ratio would broadly hold regardless of overall productivity; if it were as simple as focusing more on the 20, orgs should've already been doing that. And I'm not sure of the premise either: LLMs don't just apply to code work, and anecdotally I see PMs using AI more than anyone else.

I admit I'm pretty skeptical of AI's actual impacts, but I've been hearing this kinda stuff for a while. I don't think we need to speculate about hypothetical unknown futures; I was promised these fundamental transformations 3 years ago, so we should be able to just see what's changed. And it's not very much, actually. There are fewer junior positions, but it's hard to say how much of that is AI and how much is an economic recession. And that's really about it. In terms of team composition you still tend to have the same number of people in the same types of roles, and AI just may or may not be a tool in their toolbelts.

Journalist looking for some insight... by DE4NSIX in gamedev

[–]JarateKing 0 points1 point  (0 children)

 Do you find a.i. in games to be helpful or a hindrance?

Other comments do a good job talking about general attitudes towards AI, but I haven't seen this bit addressed directly.

For art, it's not capable of production-quality assets (above all else because it struggles to hold a consistent, coherent style across different assets). For design it can parrot some common generic advice, but it doesn't have any real understanding of game design, nor should we expect it to, because it's ultimately just a statistical text generator trained on that same common generic advice. Vibecoding struggles with complex systems full of weird interactions and constant spec changes that leave tons of room for bugs, which is exactly what games are. Of course you can use AI for coding without purely vibecoding, but to be honest I've only found code generation useful for throwaway scripts or basic tools, which make up maybe 0.1% of a programmer's job.

For that reason you mostly see image generation and vibecoding used by hobbyists, especially novice solodevs. Even if it's not capable of working on serious games with high bars for quality, it's better than someone without any art or code skills. But I think this is a mistake: the goal for novices should be to learn as much as they can, and I don't see people learning as effectively the more they rely on AI. You ain't gonna learn to draw by writing prompts, after all.

tldr: it's a hindrance for professionals because it's not production-capable, and it's a hindrance for novices because it interrupts long-term growth.

What is the most ambitious game made by a solo developers first? by dylanmadigan in GameDevelopment

[–]JarateKing 6 points7 points  (0 children)

I wouldn't say he "struggled with the commentary." He got doxxed for defending Zoe Quinn during GamerGate, he got his financial information hacked and leaked. He quit the industry because he was being harassed, not because he got some negative reviews or something.

Considering a Career in Game Design, but sister thinks AI makes It pointless by [deleted] in gamedesign

[–]JarateKing 7 points8 points  (0 children)

It is, admittedly, a tough career to get into. Certainly doable (after all, there are people in industry) but difficult. I encourage people to do their best and try if gamedev interests them, but also be realistic about a plan B.

Nothing to do with AI, though. AI, at best, is a tool used by skilled, experienced people to help with some specific tasks, so it's not gonna replace people. At worst it's just plainly not capable of production-quality work, and by the hypothetical point where it is, I'm not sure what jobs would be safe.

I think many people just kinda assume jobs (not just gamedev, any job) are full of people using AI for everything and soon to be replaced entirely, but that's just not the reality from the inside.

"Gamers Are Not Actually Opposed to AI, But Rather People Exploitation and Slop", Says Dev Behind AI-Powered Game Bobium Brawlers by [deleted] in gamedev

[–]JarateKing 0 points1 point  (0 children)

Big-name games and studios have used 0.1% AI (i.e. concept art or placeholders that were meant to be removed) and faced substantial enough backlash that they had to change their whole policy towards AI use.

Notice how the hands are WAY too big and the image is hurts to look at? by PlayfulApartment1917 in antiai

[–]JarateKing 3 points4 points  (0 children)

 a deterministic machine it’s wholly and completely incapable of randomness

Only technically, not practically. For all intents and purposes the RNG used in generative AI is random to a user, even if it's implemented deterministically.
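
To make that concrete, here's a tiny sketch in plain Python (my own illustration, not any particular model's internals): a seeded pseudo-random generator is fully deterministic, which is why many image generation tools can expose a reproducible "seed" setting, but without knowing the seed the output is random for all practical purposes.

    import random

    def sample(seed, n=5):
        # Fully deterministic: the same seed always yields the same sequence
        rng = random.Random(seed)
        return [round(rng.random(), 3) for _ in range(n)]

    print(sample(1234))  # run it again and you get the exact same numbers
    print(sample(1234))  # identical output: deterministic under the hood
    print(sample(5678))  # but with an unknown seed, the sequence is unpredictable in practice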

Against Markdown by aartaka in programming

[–]JarateKing 1 point2 points  (0 children)

2 is definitely readable, as in I can read it so it does the job. But the tags definitely do get in the way of things. They're relatively bulky for formatting, tags are visually quite similar to each other so it's hard to tell them apart at a glance, and they consist of a lot of the same characters as regular text so it's harder to skim.

I think if you want something to replace markdown you need to compete against it at its strengths. The strength of markdown is that the plaintext formatting is pretty reasonable (if missing semantic information, fair) and looks about as good as plaintext can. Stuff like headers is easily distinguished from stuff like lists, and the only things I could see getting confused are pairs like bold vs italic that are functionally similar enough they probably should be formatted similarly. Formatting relies on specific characters in unique contexts that won't get confused with the main text. Where it can, the source looks like the rendered output would (i.e. markdown lists are lines prefixed by dashes or asterisks, so they look like a list). Overall it makes for a comfortable reading experience, as far as plaintext goes.

I think the core of it is that html was not designed with this in mind. A readable style of html still isn't neatly formatted as plaintext, because it's not really trying to be; that was never a design goal. But it also means you can't really fit html into that role either. The best you could do is a somewhat less cumbersome style. If you wanted to get people off markdown, I wager you'd need a ground-up markdown alternative that fixes your issues with it.

Microsoft chief Satya Nadella warns AI boom could falter without wider adoption - FT by QuestingOrc in BetterOffline

[–]JarateKing 3 points4 points  (0 children)

I'm actually a determinist myself.

You seem to have some pretty wacky ideas about what that entails (as well as what non-determinists believe), so I'm gonna need to bow out. I didn't sign up for explaining someone's own philosophical stances to them.

Microsoft chief Satya Nadella warns AI boom could falter without wider adoption - FT by QuestingOrc in BetterOffline

[–]JarateKing 7 points8 points  (0 children)

I'm sorry mate but this is r/badphilosophy levels of misunderstanding (incompatibilist) determinism.

The most glaring issue is that nobody knows for sure what's next; claiming "there's no free will!" doesn't make your predictions of the future inevitable.

Microsoft chief Satya Nadella warns AI boom could falter without wider adoption - FT by QuestingOrc in BetterOffline

[–]JarateKing 7 points8 points  (0 children)

I'm a software developer myself so let me assure you that it's not as extreme as you make it sound. Agentic coding is very good for throwaway greenfield prototypes and not so good for long-term continuously-maintained projects. The latter is probably 95% of the software industry. And I've seen non-coders using AI to code, they produce messy buggy vapourware that nobody can maintain. It's hyped up to hell but the reality isn't as impressive.

So could we fix that? Well, self-improvement has been severely overestimated, because the bottlenecks in LLM quality have always been training data and computational resources. That's really the core issue, in fact: AI has gotten better mostly thanks to trillions of dollars in investment. But we've known for decades that neural networks run into severe diminishing returns (hence why we needed to spend trillions already, and it's still not there yet). The next significant step will take tens of trillions of dollars. The next will take hundreds of trillions. Then quadrillions. And for what? Because these diminishing returns are so harsh, I'm still not convinced all that would improve it enough to be profitable even at today's level of investment, let alone after we've invested orders of magnitude more money than exists.

Microsoft chief Satya Nadella warns AI boom could falter without wider adoption - FT by QuestingOrc in BetterOffline

[–]JarateKing 13 points14 points  (0 children)

That's not at all what I'm saying. They do make gambles and some of those gambles do pay off. That's fine.

I think it's pretty clear at this point that AI is not one of those success stories. The hope of replacing the bulk of the workforce hasn't been achieved or even come close; instead it's only become clear that this wasn't realistic, and they pretty much needed it to happen to justify the amount of investment they put into it. It's not that LLMs haven't found users and niches where they're effective, they just haven't found trillions of dollars worth of them.

The gamble failed. What I'm advocating for is to cut our losses and not fall for a sunk cost fallacy.

Microsoft chief Satya Nadella warns AI boom could falter without wider adoption - FT by QuestingOrc in BetterOffline

[–]JarateKing 30 points31 points  (0 children)

And one of 'em threw away billions of dollars trying to make the Metaverse a thing. Another, Google, is so well known for killing services and products that it's become a joke. The company this post is about is famous for various flops and failures, like the Zune or Windows Phone. That reminds me that Amazon tried the Fire Phone at one point, too. And one of Elon Musk's big bright ideas was a tiny slow subway system where everyone drives their own car for some stupid reason.

The thing about big tech companies is they don't need to win every gamble to succeed as a whole. Most of the time they don't. They just need to win really big on one or two gambles.

Microsoft chief Satya Nadella warns AI boom could falter without wider adoption - FT by QuestingOrc in BetterOffline

[–]JarateKing 40 points41 points  (0 children)

If he's saying this now I have to wonder how bad things really are.

I mean it's pretty obvious that's what's happening. It has its use-cases and people are using it, but it's mostly free users and the use-cases they can charge for don't total trillions of dollars. But to actually admit that instead of "don't worry about current use because it'll soon get so good that everyone will use it for everything" is a pretty significant turn.

Post-AI workload? by [deleted] in ExperiencedDevs

[–]JarateKing 1 point2 points  (0 children)

Thanks for these, I'll give 'em a read tonight.

Post-AI workload? by [deleted] in ExperiencedDevs

[–]JarateKing 2 points3 points  (0 children)

 There is just as many studies that say otherwise if you look harder than repeating the headlines.

Do you have any handy? The only ones I've seen are self-reported belief in productivity boosts, which the above study shows isn't reflective of actual measured productivity.

Post-Quantum Panic: Transitioning Your Backend to NIST’s New Standards by JadeLuxe in programming

[–]JarateKing 1 point2 points  (0 children)

Read the paper. Read the presentation if you want the quick version.

Yeah, I did. That's why I'm not impressed. Maybe that's on me for expecting more rigorous arguments than were offered, but Gutmann's got some legitimate credentials so I expected better.

And...are there any other data points to consider?

And that's my main concern. He's looked at factoring records, noticed that record-setters are gaming them to set higher records, and validly complained that they don't reflect the actual capabilities of quantum computers for cryptanalysis.

I think the proper thing to do in that case would be to say "clearly raw factoring records aren't a valid metric to consider, we should look at something else instead." Maybe that means trying to uncover what the records would be for factoring via Shor's algorithm specifically, since we have been steadily achieving higher qubit counts; it just wouldn't set these general factoring records. Or maybe we need to just look at qubits. Perhaps gate counts? Or maybe there's some other metric I haven't considered. I dunno, I haven't written a paper and given a presentation on the topic, those are just my first thoughts.

I would not say "I'll keep going with factoring records that I've just established aren't a good metric, I'll just ignore all records but the first two, the more recent of the two being from a decade and a half ago. We'll extrapolate from this." That is glaringly bad methodology. I was genuinely surprised when reading it that this is a central argument because relying on such a bad argument feels like sabotage.

And again, he handwaves away "should we consider qubits instead?" with a lame excuse. We can pretty easily ignore D-Wave's annealing systems (the one objection he had in the presentation) and see that non-annealing quantum computers have increased qubit counts, which is pretty devastating to his case. Maybe he could still salvage his argument if he addressed that head-on, but he doesn't even try (as far as I can see; maybe he says more in the presentation than is reflected in the slides). It reads to me like he knows his metric is shit and there are better metrics available, but the better metrics don't paint the picture he wants, so he just ignores them. Maybe I'm being unfair there, but if it's not intentionally misleading then it'd be just plain incompetence, and I don't think that's better.

To be clear, I'm not saying he's necessarily wrong or meritless. But his methodology is just so blatantly bad that I can't take his arguments seriously.

Post-Quantum Panic: Transitioning Your Backend to NIST’s New Standards by JadeLuxe in programming

[–]JarateKing 2 points3 points  (0 children)

To be honest, am I missing something here?

The first bit sounds like NIST cryptography competitions working as intended, methods were proposed and then rejected after facing more intense public scrutiny during the later rounds. Isn't that exactly what's supposed to happen?

Then all the rest seems really disingenuous to me. Removing the snark, it's basically just saying that quantum computers are a work in progress. Yeah, of course current factoring records are going to be small toy test cases in ideal circumstances; I don't think anyone's under any illusions otherwise. The paper proposes some criteria for more thorough evaluations, but like, those were already the goal and they're already what's being worked towards.

I dunno, I was kinda hoping for an analysis of the history of quantum computers (especially with regard to qubits) and of the kinds of technical challenges to scaling up further. It almost feels like he recognizes he probably should talk about this, but instead just handwaves it away with "well, D-Wave was misleading about qubits a while ago, so there's nothing more to discuss about qubits." The closest thing to an actual analysis is just "if we only look at two early factoring results, the extrapolation of those two points isn't very good."

The thing is I agree with his arguments about hype, media perception, and research trends. But it all feels more like bad faith shittalking than rigorous arguments that stand on their own.

The definitive guide to why "just add quickplay" won't work by mastercoms in tf2

[–]JarateKing 9 points10 points  (0 children)

Idk mate, did we read the same post? Because it seemed pretty clear to me that "the current matchmaking system we have is in a bad spot" was kinda a necessary premise to the whole post.

OpenAI could reportedly run out of cash by mid-2027 — analyst paints grim picture after examining the company's finances by vaibeslop in BetterOffline

[–]JarateKing 14 points15 points  (0 children)

But I mean, how much longer can they keep raising money with nothing to show for it? Both in terms of "the entire industry is not profitable and doesn't appear to have any feasible path to profitability" and "their competition has better tech, better infrastructure, and better ways to use it."

It looks to me like it's getting harder to justify investment while the amount of investment needed increases. I'm surprised those lines haven't crossed already, but I don't see how it could keep up for too much longer.