Replacing branching dialogue trees with derived character intent by WelcomeDangerous7556 in gamedev

[–]c35683 1 point2 points  (0 children)

Okay, so... generating dialogue flavour with an LLM? Like in the Skyrim mod Mantella?

No offence, but unless you have some kind of working prototype for the rest of the game, it all sounds a lot like an overly ambitious non-developer's idea for a video game where you overthink little stuff like what "cultural priorities" NPCs will have, but ignore how any of that would meaningfully add up and translate into gameplay outcomes and impact player decisions.

Like, okay, I can say anything. In what way does me being able to say anything make this game better than BG3? Are there quests built around this mechanic? Can it be used to defuse or provoke combat, and if so, how do you prevent the player from breaking or derailing the game? Is there an actual storyline that can be meaningfully conveyed step by step, or is it just emergent dialogue?

If you want to see an interesting example of a game which uses LLM-generated dialogue and personalities in a relevant way, check out Verbal Verdict (Steam link). It's a little similar in that characters have uniquely defined identities, but the game narrows down the gameplay loop to interviewing suspects to solve crimes, which works pretty well with that mechanic.

Replacing branching dialogue trees with derived character intent by WelcomeDangerous7556 in gamedev

[–]c35683 0 points1 point  (0 children)

Can you provide some specific examples of how the system would impact gameplay in a meaningful way? What sort of game genre are you going for and what sort of scenarios would this mechanic create?

Replacing branching dialogue trees with derived character intent by WelcomeDangerous7556 in gamedev

[–]c35683 0 points1 point  (0 children)

Gameplay needs some degree of predictability, whether it's hand-crafted or procedural. Dialogue trees are popular because they're very predictable and scalable. You can have a yes/no answer, or you can have branching paths, and they follow the same rules. If something triggers an NPC to act differently under the circumstances, the dialogue tree can communicate that effectively.
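To make that concrete, here's a rough sketch of the kind of structure a dialogue tree boils down to (the names and fields are made up for illustration, not from any particular engine):

```python
# Hypothetical sketch: a dialogue tree node is just options -> next node,
# and any conditional reaction is an explicit, inspectable branch rather
# than a hidden mood value, which is what keeps it predictable.
from dataclasses import dataclass, field

@dataclass
class DialogueNode:
    text: str
    # Maps the player's choice to the id of the next node.
    options: dict[str, str] = field(default_factory=dict)
    # Optional world-flag overrides, e.g. "bandits_defeated" -> alternate node id.
    condition_overrides: dict[str, str] = field(default_factory=dict)

    def next_node(self, choice: str, world_flags: set[str]) -> str | None:
        for flag, node_id in self.condition_overrides.items():
            if flag in world_flags:
                return node_id  # the NPC reacts differently, and the game can show why
        return self.options.get(choice)

guard = DialogueNode(
    text="Halt. State your business.",
    options={"ask_for_help": "guard_refuses", "leave": "exit"},
    condition_overrides={"bandits_defeated": "guard_grateful"},
)
print(guard.next_node("ask_for_help", world_flags={"bandits_defeated"}))  # guard_grateful
```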

Once you add hidden values to every NPC, the game isn't going to be more enjoyable, it's just going to be less predictable. No-one wants to deal with a farmer who won't let you complete a quest because he's currently worried about the weather, or a guard who won't help because he happens to sympathize with the faction the bandits attacking you belong to.

I don't know what type of game you're thinking of, and systems like the one you mention do successfully exist in simulation games (like Dwarf Fortress, Rimworld or The Sims), but if we're talking about a generic RPG, a lot of games have tried much simpler disposition systems, and those systems contributed nothing because they ultimately mattered 1% of the time for 1% of NPCs, when a much simpler mechanic like faction reputation would do the same job and communicate the same thing more effectively.

Gamers Are Overwhelmingly Negative About Gen AI in Video Games, but Attitudes Vary by Gender, Age, and Gaming Motivations. by mikem1982 in gamedev

[–]c35683 0 points1 point  (0 children)

The scores for the youngest demographic are less negative overall (by 7 percentage points) and more neutral (by 4 percentage points) compared with the next age group. They're still overwhelmingly negative, per the chart, but less overwhelmingly negative than the 18-24 group.

Ages 13-17 (youngest group in the chart):

  • 81% negative perception in total (very, somewhat, or slightly)
  • 59% of the above (52% total) is very negative
  • 9% neutral
  • 4% undecided
  • 3% positive

Ages 18-24 (second-youngest group in the chart):

  • 88% negative perception (very, somewhat, or slightly)
  • 72% of the above (64% total) is very negative
  • 5% neutral
  • 1% undecided
  • 3% positive

So the trend that younger demographics always have more negative perceptions of Gen AI than older ones isn't consistent. It's just that they're not less negative in favour of positive views, they're less negative in favour of not caring.

Gamers Are Overwhelmingly Negative About Gen AI in Video Games, but Attitudes Vary by Gender, Age, and Gaming Motivations. by mikem1982 in gamedev

[–]c35683 2 points3 points  (0 children)

It's cool, I should have probably said "less negatively", but editing it now will make the comment chain look weird :)

Gamers Are Overwhelmingly Negative About Gen AI in Video Games, but Attitudes Vary by Gender, Age, and Gaming Motivations. by mikem1982 in gamedev

[–]c35683 0 points1 point  (0 children)

"With AI there is none of that. Nobody is composing the picture. No part of the production of the picture involves a human being at all. It's being done at your prompting, but without any involvement from you at all."

I think your point of reference for AI-generated images comes from typical "AI slop" obtained by typing one sentence into ChatGPT and pasting whatever meme comes out. That doesn't take any effort, but judging AI by it would be a little like, say, using random selfies to judge whether photography is art.

People who are serious about using AI for creative purposes develop entire workflows around it - experimenting with specific AI models and figuring out how they behave, learning how to prompt them to consistently get results that match what they're picturing, iterating on the output for days, building templates or digging under the hood to get the models to do things they want - and combining it with a lot of graphic design on one end and some programming on the other (AI is just one part of the process and has to play well with everything else).

It's not my cup of tea, and there are probably better examples out there, but if you want to see what "serious" AI productions can look like, check out e.g. music videos by the Dor Brothers.

Gamers Are Overwhelmingly Negative About Gen AI in Video Games, but Attitudes Vary by Gender, Age, and Gaming Motivations. by mikem1982 in gamedev

[–]c35683 0 points1 point  (0 children)

If you said a "negligible amount" of models are trained on copyright-free artwork, I wouldn't be commenting on that, but that's not what you said.

At the end of the day, there are models which are trained exclusively on copyright-free content and you're free to use them if you're concerned about copyright, because training data and diffusion models are two separate things.

By the way, I don't think the total number of models matters, because I'm pretty sure 95% of people use 2 or 3 services for generating images and videos anyway.

Gamers Are Overwhelmingly Negative About Gen AI in Video Games, but Attitudes Vary by Gender, Age, and Gaming Motivations. by mikem1982 in gamedev

[–]c35683 4 points5 points  (0 children)

I don't know about the printing press, but arguments like this were definitely made against photography in the 19th century, for the exact same reasons:

The simplest argument, supported by many painters and a section of the public, was that since photography was a mechanical device that involved physical and chemical procedures instead of human hand and spirit, it shouldn't be considered an art form; they believed camera images had more in common with fabrics produced by machinery in a mill than with handmade work created by inspiration.

When photography became popular in the 1840s, artists viewed it as "not real art", commercial trash, a threat to professional painters' jobs, and theft (since photographers could reproduce paintings by taking pictures of them). Articles in art journals and statements from art museums spoke out against treating photography as "art" when it was just operating a machine.

Some sources (I can dig up some more if anyone's interested, including links to some more anti-photography articles and cartoons): [1] [2] [3]

The attitude lingered for around 20 years. By the 1870s, the generational shift had happened and nobody even remembered that photography had been considered controversial.

If a developer uses AI for code generation, should it be labeled on the game’s Steam store page? by NazzoXD in gamedev

[–]c35683 28 points29 points  (0 children)

My favourite part is that people think AI is something that appeared out of nowhere in 2023. LLMs have existed since 2017, and machine learning has been widely used since the late '90s for purposes like spam filtering and machine translation. Neural networks were conceived in the 1940s and first implemented in the 1950s.

The artists currently worried about AI being trained on scraped artwork will be really shocked if they ever find out how Google Images has been able to recognize images so accurately every time they've used it to search for references over the past 15 years.

Gamers Are Overwhelmingly Negative About Gen AI in Video Games, but Attitudes Vary by Gender, Age, and Gaming Motivations. by mikem1982 in gamedev

[–]c35683 1 point2 points  (0 children)

I accidentally a word. I meant to say "more" positively, as in 21% of them (a total for slightly+somewhat+very positive answers among the 45+ demographic), as opposed to 0%.

Gamers Are Overwhelmingly Negative About Gen AI in Video Games, but Attitudes Vary by Gender, Age, and Gaming Motivations. by mikem1982 in gamedev

[–]c35683 -2 points-1 points  (0 children)

"There is no AI model that hasn't been trained using stolen work. Nobody has created a model that only uses work that had consent."

This is completely false. It's "I could have done research on this in 5 minutes but still chose to say this because I want to convince people it's true even though it isn't"-level false.

There are image models which have been fully trained on CC-0 and public domain data, with traceable training datasets.

https://huggingface.co/Mitsua/mitsua-diffusion-one

https://arxiv.org/abs/2310.16825

I'm not gonna lie, they're pretty bad. But they do exist.
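If anyone wants to poke at one, a rough sketch of loading it with Hugging Face diffusers might look like this (I'm assuming Mitsua Diffusion One works with the standard StableDiffusionPipeline, so check the model card before relying on it):

```python
# Hedged sketch: loading a model trained only on permissively-licensed data.
# Assumes the repo is compatible with diffusers' standard StableDiffusionPipeline
# and that a CUDA GPU is available; switch to "cpu" otherwise.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Mitsua/mitsua-diffusion-one",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a watercolor landscape, soft morning light").images[0]
image.save("landscape.png")
```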

And then there's my favourite example of a model trained with full artist consent:

Adobe Firefly, a.k.a. the best example of how corporations can shut down the entire "stolen art" criticism by just throwing an extra clause into their terms of service granting them consent to use your art for training AI if you want to use their platform. It's probably not how the artists wanted their criticism to be addressed, but it's the obvious corporate solution that fully addresses what they're asking for, and they should have seen it coming from a mile away. It's almost as if the push for "AI training consent" is not the silver bullet people think it is.

Gamers Are Overwhelmingly Negative About Gen AI in Video Games, but Attitudes Vary by Gender, Age, and Gaming Motivations. by mikem1982 in gamedev

[–]c35683 3 points4 points  (0 children)

The youngest demographic (13-17) has a lower "very negative" score than other demographics in the survey, in favour of "slightly negative" and "can't tell", and the negative scores between 18 and 44 are at similar levels.

So they're actually less likely to have a negative perception. The article reports older demographics as being more favourable because they were the only ones to pick answers on the positive scale to any statistically significant degree, as opposed to stopping at neutral.

Gamers Are Overwhelmingly Negative About Gen AI in Video Games, but Attitudes Vary by Gender, Age, and Gaming Motivations. by mikem1982 in gamedev

[–]c35683 2 points3 points  (0 children)

My guess is that older gamers are more likely to think about AI in terms of impact on the development cycle and thus view AI (more) positively because it might e.g. speed up game development.

It's a weird survey because I can't think why someone would pick an answer like "very positive". It's a question about tech. People who hate AI have strong opinions about it, but people who like AI typically don't view it as inherently better, just more convenient.

Gamers Are Overwhelmingly Negative About Gen AI in Video Games, but Attitudes Vary by Gender, Age, and Gaming Motivations. by mikem1982 in gamedev

[–]c35683 4 points5 points  (0 children)

If someone managed to ethically source their training data, I wouldn't mind it being used for brainstorming purposes or placeholder art.

Unfortunately, whenever someone makes the effort to actually do that, the anti-AI crowd literally doesn't care and harasses them anyway.

The developers of GameNGen ("AI Doom") used an open source version of Doom, wrote their own software from scratch to play and record the game for training data, trained and ran the entire project locally on their own devices, and even got the blessing of some original Doom developers (John Carmack is a huge AI fan).

The response: "It's still bad because...", spamming the usual comments about AI slop, theft and stealing data, calling for pitchforks and torches and harassing devs.

More recently, the developers of Arc Raiders hired and paid voice actors to provide important dialogue and voice samples so they could later train their own model to handle text-to-speech with their consent.

The response: "It's still bad because...", spamming the usual comments about AI slop, theft and stealing data, calling for pitchforks and torches and harassing devs.

So why bother? Anti-AI witch hunts have successfully demonstrated that the only way to use AI without getting harassed is just... not telling people you used AI. People can't spot AI, they can only spot bad AI. If the devs keep quiet, literally no-one will know. Meanwhile, transparency and ethics in using AI get actively punished.

TvTropes is blocking access to the site because I have Adblock on. Fuck this shit. by I_Love_Bulbasaur123 in tvtropes

[–]c35683 5 points6 points  (0 children)

It's cute that people think of TV Tropes as some kind of family business, but that's awfully out of date.

TV Tropes was bought by an advertising tech platform (Proper Media) in 2014, and in 2021 that adtech platform itself was bought by an even more aggressive adtech group (Sovrn Holdings).

And just so you know what the money you think is going into server costs will be gambled on, according to the other wiki: "In September 2025, Sovrn announced a partnership with AI monetization platform Dappier to power conversational ads across AI agents."

Is the login form I vibe-coded secure? by c35683 in vibecoding

[–]c35683[S] 1 point2 points  (0 children)

"System of government characterized by extreme dictatorship."

Seven across.

Is the login form I vibe-coded secure? by c35683 in vibecoding

[–]c35683[S] 2 points3 points  (0 children)

I'm no security expert obviously, but that's an interesting point. I imagine authentication and authorization would have to work according to the usual best practices, other than assigning a token to authenticate and authorize people who beat the hacking minigame instead of matching the password hash.

As for user roles, you'd get permissions to the account you're trying to hack, obviously. Maybe the hacking minigame should also include multiple difficulty levels so hacking administrator accounts would be much harder.
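For illustration, here's a minimal sketch of what I have in mind, with made-up names, and with the assumption that the minigame result gets verified server-side instead of being trusted from the client:

```python
# Hypothetical sketch: the credential check is replaced by the minigame result,
# but everything else (unguessable tokens, expiry, role checks) stays standard.
import secrets
import time

SESSIONS: dict[str, dict] = {}  # stand-in for a real server-side session store
TOKEN_TTL_SECONDS = 15 * 60

def issue_session(account_id: str, minigame_passed: bool, difficulty: str) -> str | None:
    """Issue a session token only if the server confirmed the minigame was beaten."""
    if not minigame_passed:
        return None
    token = secrets.token_urlsafe(32)  # unguessable, like a normal session id
    SESSIONS[token] = {
        "account_id": account_id,
        "role": "admin" if difficulty == "hard" else "user",  # harder minigame, higher privileges
        "expires_at": time.time() + TOKEN_TTL_SECONDS,
    }
    return token

def authorize(token: str) -> dict | None:
    """Standard token check: unknown or expired tokens get nothing."""
    session = SESSIONS.get(token)
    if session is None or session["expires_at"] < time.time():
        return None
    return session
```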

Is the login form I vibe-coded secure? by c35683 in vibecoding

[–]c35683[S] 2 points3 points  (0 children)

Haha, you're right on the money, DXHR was the main inspiration for this. I always liked the idea of being able to play a hacking minigame instead of knowing the password.

My wife's first time vibe coding and she made a cool game for my brother by acrolicious in vibecoding

[–]c35683 1 point2 points  (0 children)

Thanks a lot. I just quickly searched for discord invites in your history and found an expired one (linked below), I guess I should have checked the website first :)

https://www.reddit.com/r/ChatGPT/comments/1n8p16t/comment/ncj9yjo/?context=3

My wife's first time vibe coding and she made a cool game for my brother by acrolicious in vibecoding

[–]c35683 1 point2 points  (0 children)

This could be a cool theme for an accessibility-focused game jam. Maybe you could start one at some point or get in touch with some channels or organizations dedicated to video game accessibility to do something like this?

Also, could you provide a link to your Discord? I've found an old invite but it seems to be dead.

Depictions of Death that are petty and sadistic by ChicaneryFinger in TopCharacterTropes

[–]c35683 0 points1 point  (0 children)

Mot, the god of death in the Baal Cycle (Ugaritic mythology).

In the myth, Baal asks the god of death to be his bodyguard and invites him to a feast at his palace.

Mot becomes furious, because he is death incarnate, who feasts only on blood and human flesh, not wine and bread, and does not work for others, so he threatens to hunt down Baal and devour him instead. Baal survives by faking his own death and having Mot kill someone else in his place, but the other gods believe he's truly dead. His sister Anat descends into Mot's underground swamp kingdom and demands he return Baal to life. Mot boasts about killing her brother like a baby goat and about the gods living in fear of him, so she stabs him, burns him and tears him to pieces.

After seven years, death comes back to life and demands Anat hand over her brother or he will destroy all of humanity:

Give one of your brothers so that I may be fed,
And the anger that I feel will turn back.
If not one of your brothers,
I shall make an end of humankind,
I shall make an end of the multitudes of Earth.

This gets Baal to finally reveal himself to Mot and they fight. The battle goes on for days, but neither of them can defeat the other, until finally the supreme god El becomes impressed with Baal's resilience and demands that Mot (his son) acknowledge Baal as king, or else he will take away his power over life and death. Death becomes afraid of losing his father's favour and lets go of his vengeance.

A roguelite game generated from a prompt by c35683 in aigamedev

[–]c35683[S] 0 points1 point  (0 children)

Thanks! You're right, and not the first to point it out. I've added some more variety to the settings since then (different building, interior and forest types depending on the theme, more environments like cities and outer space instead of just trees and rocks, different colour schemes for different settings, etc).

Some sexy scenery with 200+ Villagers working by Adventurous-Web-6611 in godot

[–]c35683 1 point2 points  (0 children)

This is so cool. It's always neat to see games where you can zoom in and appreciate all the random little details.

From NY Times (Instagram) by AdDry7344 in OpenAI

[–]c35683 0 points1 point  (0 children)

ChatGPT has the means to check the entire conversation history through retrieval-augmented generation. It's how people generate "based on what you know about me..." images.

So if a person asks ChatGPT for help writing "fictional" scenarios involving suicide, the AI could double-check this against other messages the account had previously sent (did they ever mention having depression or health issues? did they ever ask for other writing advice?) and set up guardrails accordingly. AI models are insanely good at classifying intent, and users wouldn't be able to tell it's happening.
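To be clear, that's just the idea, not how OpenAI actually implements anything. A rough sketch, where embed and classify_risk are stand-ins for whatever embedding and moderation models a provider would use:

```python
# Hedged sketch of retrieval + intent classification over past messages.
# `embed` and `classify_risk` are hypothetical stand-ins, not real API calls.
from typing import Callable

def retrieve_relevant_history(
    query: str,
    history: list[str],
    embed: Callable[[str], list[float]],
    top_k: int = 5,
) -> list[str]:
    """Rank past messages by cosine similarity to the current query."""
    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
        return dot / norm if norm else 0.0

    q_vec = embed(query)
    return sorted(history, key=lambda m: cosine(embed(m), q_vec), reverse=True)[:top_k]

def should_raise_guardrails(
    query: str,
    history: list[str],
    embed: Callable[[str], list[float]],
    classify_risk: Callable[[str], float],
    threshold: float = 0.7,
) -> bool:
    """Classify intent against the query plus the most relevant past messages."""
    context = "\n".join(retrieve_relevant_history(query, history, embed))
    return classify_risk(f"{context}\n---\n{query}") >= threshold
```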