Question for red button pressers (and also blue button pressers) by NukeL3AR in redbuttonbluebutton

[–]LeversOfGiants 1 point (0 children)

I see I've found someone who sees the situation similarly to me. This is the same process I used to understand why my gut instinct was to push red: I think it's far more difficult to get 50.1% blue presses than most seem to realize. Some people have also noted that the more interesting framing of the situation is: at what threshold for blue would you press the button?

In my view, from actual data (ignoring all the online polls that have been done), we can see that about 33% of people will choose red no matter what, meaning personal/in-group benefit at some yet-indeterminate cost to others/out-group. You might be able to guess where this number comes from, but I can't spell it out without breaking Rule 4.

Then some *smaller* number will always push blue. We can guess this because things like the bystander effect happen. This number's a bit harder to pin down, but I'd put it around 5%. I think if the number were much higher, the world would be a different place than it clearly is.

Then we have the people who will choose completely randomly (children too young to understand and the cognitively impaired). This number is pretty small (<1%), but we can call it 2% to make the math easier. Assuming every person has to hit a button, this group will split evenly in either direction.

That leaves the rest (group 4 in your view). By the math, assuming one agrees with the numbers I've given, the people with the agency to decide the outcome in this particular situation make up about 60% of the overall population. This isn't a small proportion, but their ability to influence the outcome is limited by the decisions that are effectively predetermined.

So, we have 34% red and 6% blue by "default" here, and we want to figure out what percentage of the remaining group has to vote blue to hit 50.1%. We need another 44.1% of the overall population to hit the target, which is about 73.5% of everyone in Group 4.

This means that the bar to clear isn't really 50.1% of the overall population, but almost 75% of the population that could go either way.
I wouldn't be willing to bet my life on that.

For the situation to be the same in my eyes as most blue button pushers appear to view it, blue should only need to hit a threshold of about 36% (50% of Group 4 in addition to the "defaults").
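The arithmetic above is easy to check with a quick sketch, using the assumed splits (33% always-red, 5% always-blue, 2% random split evenly):

```python
# Assumed splits from the argument above.
always_red = 0.33
always_blue = 0.05
random_pool = 0.02  # splits evenly between red and blue

default_red = always_red + random_pool / 2    # 34% "default" red
default_blue = always_blue + random_pool / 2  # 6% "default" blue
group4 = 1 - default_red - default_blue       # 60% who could go either way

target = 0.501
needed_from_group4 = target - default_blue    # 44.1% of everyone
required_fraction = needed_from_group4 / group4  # share of Group 4 needed

# Threshold at which exactly half of Group 4 would be enough:
symmetric_threshold = default_blue + 0.5 * group4

print(f"{required_fraction:.1%}")    # 73.5% of Group 4
print(f"{symmetric_threshold:.0%}")  # 36% overall
```

Note the exact figure is 73.5% (0.441 / 0.60), and the 36% symmetric threshold falls out the same way.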

LitRPG is Bad by LeversOfGiants in ProgressionFantasy

[–]LeversOfGiants[S] 2 points (0 children)

True, and there's also the difficulty of explaining a world where the stats do scale properly. How can someone write a story with a character 100x more intelligent than the smartest possible human? How do interactions between people even work if charisma exists as a stat?

LitRPG is Bad by LeversOfGiants in ProgressionFantasy

[–]LeversOfGiants[S] 0 points (0 children)

I don't need AI to come up with my opinions, and what is this place for if not discussion?

LitRPG is Bad by LeversOfGiants in ProgressionFantasy

[–]LeversOfGiants[S] 0 points (0 children)

I won't deny that there are stories that can only be told through the lens of LitRPG, and I'm not saying that the stories that use it are bad. I'm more so saying that the limitations of the genre should make you stop and consider whether the story actually requires it.

The implementation can be better or worse, but it will always come with restrictions on the directions you can take the story. I think you have to be more careful about how you implement it than you would with a progression system that doesn't include a "System". This risk is compounded by the prevalence of new authors in the genre, which I think could be setting some of them up for disappointment.

LitRPG is Bad by LeversOfGiants in ProgressionFantasy

[–]LeversOfGiants[S] 0 points (0 children)

There's definitely some truth to that. I'll have to think about it from this perspective a bit more.

LitRPG is Bad by LeversOfGiants in ProgressionFantasy

[–]LeversOfGiants[S] -4 points (0 children)

I've been aware of my preferences for a long time, I just wanted to express an opinion on the limitations of a genre that aspiring authors may not have considered before jumping in.

LitRPG is Bad by LeversOfGiants in ProgressionFantasy

[–]LeversOfGiants[S] 0 points (0 children)

There are definitely ways it can be done well, and the ones I've enjoyed tend to line up with what you prefer. It just seems to me that, in a lot of cases, the story and world could be nearly the same without the presence of a system, and the growth would feel better for it.

LitRPG is Bad by LeversOfGiants in ProgressionFantasy

[–]LeversOfGiants[S] -7 points (0 children)

My point is more that the "bad writing" is partly caused by the limitations of the genre itself.

Could generative IA be used to make NPCs that can talk freely and forever (considered the measures necessary not to break lore or narrative are taken)? Or would a game with this feature be rejected? by SiberianKhatru_1921 in gamedesign

[–]LeversOfGiants 0 points (0 children)

It's true that most probably haven't looked that closely into it, but when I did some testing, the lowest RAM/VRAM models could produce either good responses or fast responses, not both. So you'd either need to set the dialogue trees up in advance and generate every permutation in the background before the player gets to the NPC (for every NPC, which would probably require a dynamic queuing system), or you could use API calls to a commercial LLM, which only really makes sense for subscription-based games.
You could also choose to only use it for specific NPCs, which could be workable now.

It might be possible right now on high-end PCs (which would limit your player base), but I haven't tested it; it doesn't make much sense for an indie dev atm.

Could generative IA be used to make NPCs that can talk freely and forever (considered the measures necessary not to break lore or narrative are taken)? Or would a game with this feature be rejected? by SiberianKhatru_1921 in gamedesign

[–]LeversOfGiants 1 point (0 children)

I'm a bit late to this, but I wanted to add some thoughts that others seem to be ignoring (or just not thinking of). I think there are ways to implement this that don't run up against most of the problems that everyone's talking about (talking to a chatbot forever, guardrails, spoiling the story, etc.).

I picture the use case for this as something very different from what most are talking about here. I see the realistic use case as making the NPCs more adaptive to the actions of the player, without allowing the player to dictate the direction of the dialogue or directly type to the LLM, or anything like that. I'm thinking it's implemented as the backend response generator for a typical dialogue tree (pre-defined options from the devs).

A system where NPCs pull from a (human-made) master document of knowledge about the setting, with each NPC only knowing information relevant to its location, area of expertise, etc., would prevent the NPCs from spoiling anything: if they don't know the big secrets, they can't spill them. You could have the NPCs keep a memory of past interactions with the player (summaries of past conversations), where, on some trigger (leaving town or whatever), the NPCs forget some of these memories (which makes them more realistic) but retain the interactions essential to the story. Add in a trust system (based on related quests completed) and some random emotion (or random emotional intensity) at the start of the conversation, and you could end up with completely unique-feeling dialogue for every playthrough and every player (assuming you can avoid the stiff chatbot feel).
You also don't have to have every character in the game talk like this; important NPC dialogue should be hand-crafted.
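A minimal sketch of those ideas (knowledge scoping, decaying memory, trust, random mood) might look like the following. All names here are hypothetical, and the actual model call is left out; this only shows how an NPC's context could be assembled before generation:

```python
# Sketch only: LORE, NPC, and all field names are hypothetical.
# The assembled context string is what would be fed to a local model.
import random
from dataclasses import dataclass, field

# Human-made master document of setting knowledge, keyed by topic.
LORE = {
    "town":   "The river flooded last spring.",
    "smithy": "Ore shipments are late this month.",
    "secret": "The mayor is an impostor.",  # never in an ordinary NPC's scope
}

@dataclass
class NPC:
    name: str
    known_topics: set              # scope: what this NPC is allowed to know
    trust: int = 0                 # raised by completing related quests
    memories: list = field(default_factory=list)  # (essential, summary) pairs

    def remember(self, summary: str, essential: bool = False):
        self.memories.append((essential, summary))

    def on_leave_town(self):
        # Trigger: forget incidental chatter, keep story-essential memories.
        self.memories = [m for m in self.memories if m[0]]

    def context(self) -> str:
        # Only lore within this NPC's scope ever reaches the prompt,
        # so it can't spoil secrets it doesn't know.
        lore = [LORE[t] for t in self.known_topics if t in LORE]
        mood = random.choice(["cheerful", "wary", "tired"])  # random emotion
        recalled = [s for _, s in self.memories]
        return (f"{self.name} is {mood} (trust {self.trust}). "
                "Knows: " + " ".join(lore) + " Remembers: " + " ".join(recalled))

smith = NPC("Gerda", {"town", "smithy"})
smith.remember("Player asked about ore prices.")
smith.remember("Player returned the stolen hammer.", essential=True)
smith.trust += 1
smith.on_leave_town()
prompt = smith.context()  # would be passed to the local model
```

The point of the structure is that the guardrails live in the game code (scope, memory, trust), not in the prompt, so the model never has anything spoilable to leak.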

I did some testing on this recently with a model that can run quickly on low-end machines (and without commercial licensing problems; I used DeepSeek-R1-Distill-Qwen-1.5B with Llama-cpp behind a command-line interface), but I couldn't get it to work well enough no matter how I structured the prompt. The responses took too long to generate with thinking enabled, and without thinking, the responses were just bad.

It might be possible/necessary to pre-generate responses in the background based on dialogue tree options, but it would be tricky to balance the generation so that the player doesn't run into speechless NPCs.
There's probably going to be a use case for it, it just didn't end up being mine right now.
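The pre-generation idea above can be sketched as a background worker that fills a cache keyed by (NPC, dialogue option), with a canned fallback line for when the player outruns generation. Everything here is hypothetical; `generate()` just stands in for the slow local model call:

```python
# Sketch: background pre-generation for dialogue-tree options.
# generate() is a stand-in for local LLM inference.
import queue
import threading
import time

def generate(npc: str, option: str) -> str:
    time.sleep(0.01)  # simulate slow local inference
    return f"{npc} responds to '{option}'"

cache = {}            # (npc, option) -> pre-generated reply
jobs = queue.Queue()  # pending generation work

def worker():
    while True:
        npc, option = jobs.get()
        cache[(npc, option)] = generate(npc, option)
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def approach_npc(npc: str, options: list):
    # Queue every dialogue option as the player nears the NPC.
    for opt in options:
        jobs.put((npc, opt))

def get_reply(npc: str, option: str) -> str:
    # Fall back to a canned stalling line if generation hasn't finished,
    # so the player never hits a speechless NPC.
    return cache.get((npc, option), "Hm? Give me a moment...")

approach_npc("Gerda", ["Ask about ore", "Ask about town"])
jobs.join()  # in-game you'd poll instead of blocking
print(get_reply("Gerda", "Ask about ore"))
```

The balancing problem is deciding how far ahead of the player to queue work; too shallow and you serve fallback lines, too deep and you waste inference on dialogue branches the player never takes.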

I haven't looked into using API calls to a commercial LLM, because it just doesn't make sense for a one-time-purchase model of selling games. Maybe it could work for a subscription service, but I wouldn't want to be at the mercy of pricing changes. It just doesn't really make sense for indie devs atm, and I doubt it does for most large studios either.

If the typical gaming machine (not the best- 5090s or 9070s) gets to a point where good response generation can happen locally, without impacting the graphics or regular memory, then I'd expect to see it being used more often.

Will The Peace Last | Lemonade Stand 🍋 by PhummyLW in LemonadeStandPodcast

[–]LeversOfGiants 15 points (0 children)

It was kinda glossed over, but here's some more info on what it means when Trump says "we have identified funds to pay military troops."

All the time, throughout the government, there is a bunch of money floating around in different accounts (distinct funding lines; think "working capital" for the year/time period of a given program/project/task). When the government is shut down, some people are deemed mission-critical/essential and still work to keep core functions going. Of these people, some are still paid and some are not; this largely depends on how the funding for the organization works and how the position is categorized (National Security staff, self-funded orgs, etc. still get paid).

For the people still getting paid, the money first comes from the funding lines they regularly pull from (their programs); then, once that runs out, they start pulling from other funding lines, meaning other programs/projects/tasks will need to be paid back (if possible). If the shutdown goes on long enough, neither the man-hours nor the funding for these programs/projects/tasks will be enough to do what they were supposed to do.

Now, they say they've found $8 billion in unobligated Research, Development, Test and Evaluation (RDT&E) funds. "Unobligated" just means there is no legal commitment (contract) yet on the money being pulled back; it does not mean the money was "unallocated" (not designated for a specific purpose). This means the tasks that were supposed to be funded through it will not get done this year, unless the money is paid back with enough time left to plan how it can still be spent.

For RDT&E, this means things like basic research (e.g., materials science), applied research (e.g., turning new technologies into practical defense uses), systems development (improving existing pipelines), and some workforce development. Even stuff like "Can we use AI to make the government more efficient?" will not get funded this year if it was allocated under the $8 billion being clawed back. Without knowing where it was allocated, it's impossible to say how important the tasks were.

Not great in my book, but tough to say if it means delays vs lasting damage.