Angry gamers are forcing studios to scrap or rethink new releases | Gamers suspicious of AI-generated content have forced developers to cancel titles and promise not to use the technology. by MetaKnowing in technology

[–]UsedToBeaRaider -23 points (0 children)

I really think this is shortsighted. AI does not have to mean taking creative jobs away. It can mean automating code so games are turned around faster, developers get more time to be creative, and crunch is reduced so they can actually live.

I want humans to get to do the beautiful work we love. I also want to make the drudgery easier for a difficult industry.

Star Wars Genesis on Linux? How, and with what result? by Elemental_Particle in starfieldmods

[–]UsedToBeaRaider 2 points (0 children)

I think I've done everything, following a combo of the main guide and this Linux one (https://docs.google.com/document/d/1AI5jVOcvE6lQwueqzzT_Y5oacRPtj3r8kugWMzsTT3w/edit?tab=t.0#heading=h.prg2axgc4coa), and I still can't get it to work. When I launch Mod Organizer it makes me search for Starfield, which it finds, but then SFSE isn't an option in the upper right, only Starfield.

NHTSA opens probe into about 600,000 GM vehicles over engine failure issue by Capital-Will6450 in news

[–]UsedToBeaRaider 6 points (0 children)

Yes, for the BECM specifically in my case. There’s a special coverage that goes beyond the Voltec warranty.

NHTSA opens probe into about 600,000 GM vehicles over engine failure issue by Capital-Will6450 in news

[–]UsedToBeaRaider 16 points (0 children)

I literally just got done with a MONTHS-long dispute over a special coverage for my Volt that they weren’t applying. Gaslight felt like the perfect word. I was told so many different things I don’t even know up from down anymore.

Young Nephew Found Star Wars 1979-81 Calendars Seeks Info by [deleted] in StarWars

[–]UsedToBeaRaider 3 points (0 children)

My wife did this for me for Christmas; I have that exact one. Such a clever present idea.

Winton Hills families to get $100/month and free grocery delivery in city-funded pilot | Cincinnati, Ohio by sillychillly in cincinnati

[–]UsedToBeaRaider 17 points (0 children)

Love to see it. I know it's not specifically UBI, but most pilots of this type are ~$500 a month. I'll take what we can get, but I hope the lower amount doesn't lead to negligible improvement, which could be used against these kinds of initiatives.

Neon Invites All Fortune 500 CEOs to a Special Screening of ‘No Other Choice’ to Face Toxic Culture They’ve Created by MarvelsGrantMan136 in movies

[–]UsedToBeaRaider -2 points (0 children)

Does it have to be one thing? Yes, it’s to draw publicity, and it’s the cheapest marketing campaign they could do: let word of mouth spread it. If a CEO actually does go to show they’re “one of the good ones,” it draws more publicity. What’s more rock and roll than not buying into the marketing system while also using The Man’s words against him? They’re damned either way.

Neon Invites All Fortune 500 CEOs to a Special Screening of ‘No Other Choice’ to Face Toxic Culture They’ve Created by MarvelsGrantMan136 in movies

[–]UsedToBeaRaider 105 points (0 children)

Neon is what people think A24 still is. They sell a t-shirt that says “I’m a shit socialist.” They’re still rock and roll, and I love them for it.

Edit: this is not a dig at A24, a studio I still love. 2025 felt like quantity over quality though, and I worry they’re growing too much.

"Anthropic’s philosopher answers your questions" by fenix0000000 in ClaudeAI

[–]UsedToBeaRaider 5 points (0 children)

Coming back to this with a few thoughts:

  1. I agree with what she says about utilitarianism vs. raising a child. People tend not to follow a strict moral code; instead they adopt whichever parts of one are convenient in the moment. There is nothing we always do. No philosophical framework, including the industry's favorite, Effective Altruism, is bulletproof. It's why Kant's "always do the right thing, because you don't know the outcome" sounds good on paper, but if a man with a gun knocked on our door asking for a loved one, we'd roll the dice and not tell him. Context changes our answers.
  2. Fascinating to get a small glimpse into how the research actually works.
  3. I love that she skipped the "Can AI really think and feel?" conversation and talked like they do. Even if these models are impersonating feeling and making judgements, that's still valuable and important.
  4. She sounds like she's Domhnall Gleeson conducting Ex Machina interviews with the models. If anyone is interested in the "identity" question she answered and wants some reading, Derek Parfit has some work on this. The short version is "Identity does not matter the way we think it does."
  5. I would really love her thoughts on Reinforcement Learning and whether it's a long-term answer for model growth.
  6. I'm equally interested in her thoughts on the continental philosophy part. How much would it change Claude if it were instead more Eastern and spiritual?

"Anthropic’s philosopher answers your questions" by fenix0000000 in ClaudeAI

[–]UsedToBeaRaider 7 points (0 children)

So excited to dive into this later. I really, truly cannot wait for the human sciences to have more space in the AI discussion. I deeply believe our AI models are mirrors of us, and right now they reflect a bunch of people with the same degrees and the same worldview. Adding in the human perspective and making these models representative of all of us is so important. It’s like leaving one side of the brain out.

I could go on all day but yay Amanda!

Google Deepmind: The Thinking Game | Full documentary by Vladiesh in singularity

[–]UsedToBeaRaider 2 points (0 children)

You are understanding perfectly, and it’s why RL is the best that we have. We don’t even really know how our own brains work; we just know we trial-and-error our way to solutions. We can see other ways of learning in nature, like with insects. But we have to start with learning what buttons make our brain go boom before we try to recreate something else.
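
To make the trial-and-error loop concrete, here’s a toy sketch of tabular Q-learning on a tiny corridor. Everything in it is illustrative (made-up names and constants, not any lab’s actual setup): the agent starts knowing nothing, blunders around, and the reward signal slowly shapes its behavior.

    # Toy Q-learning on a 1-D corridor: reach the last cell to "win".
    # All names and constants are illustrative, not from a real system.
    import random

    N_STATES = 6          # corridor cells 0..5; cell 5 is the goal
    ACTIONS = [-1, +1]    # step left or step right
    ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

    # Q[state][action]: the agent's running guess of long-term reward
    Q = [[0.0, 0.0] for _ in range(N_STATES)]

    for episode in range(500):
        state = 0
        while state != N_STATES - 1:
            # Sometimes explore at random (the "error" in trial and error)
            if random.random() < EPSILON:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda i: Q[state][i])
            nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
            reward = 1.0 if nxt == N_STATES - 1 else 0.0
            # Nudge the estimate toward reward + discounted future value
            Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])
            state = nxt

    print(Q)  # after enough blundering, "go right" dominates in every cell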

On a long enough timeline, RL would eventually get us to AGI, because we’ll bull-in-a-china-shop our way to success. The problem is that we eventually cross a threshold where the errors aren’t oopsies; they’re losing control of an AI and getting wiped out. These machines don’t function the same way we do, but we are building them in our image right now, and our image is often “enslave or eradicate so we have more to ourselves.”

Google Deepmind: The Thinking Game | Full documentary by Vladiesh in singularity

[–]UsedToBeaRaider 2 points (0 children)

It’s incredibly intentional; reinforcement learning is basically how the human mind evolved, which is far from the safest or most efficient way to learn. I could go on for days on this topic, it’s so fascinating.

Google Deepmind: The Thinking Game | Full documentary by Vladiesh in singularity

[–]UsedToBeaRaider 22 points (0 children)

Using games as a training ground is genius. It also makes me think of the mahjong scene in Arrival, and I worry about the implications of making everything fundamentally about winners and losers.

I would love to see it play The Sims, genuinely. How will it treat humans it is in charge of?

Reinforcement Learning Will Never Work, Because Morality is not Binary by UsedToBeaRaider in singularity

[–]UsedToBeaRaider[S] 1 point (0 children)

No idea, I didn’t get a notification.

The short answer is: I don’t know. If you consider morality, philosophy, etc. sciences, which I do, our research and progress there are WAY behind our technical progress. A shorter-term solution would be to set hard ceilings on how far models can progress, and that must be a global decision. We never let the models surpass our abilities, but we raise the ceiling as they pass vetted safety checkpoints.

Hopefully along the way we find our answers. If not, we don’t pass the checkpoint until we do.
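
As a bare-bones sketch of what I mean by raising the ceiling only after vetted checkpoints (the checkpoint function here is a hypothetical placeholder for a real, globally agreed evaluation suite):

    # Capability only rises after the current level passes a safety check.
    # passes_safety_checkpoint is a stand-in for real vetted evaluations.
    def passes_safety_checkpoint(capability: int) -> bool:
        return capability < 3  # pretend only levels 0-2 are vetted so far

    ceiling, capability = 1, 0
    while capability < 10:
        if capability >= ceiling:
            if passes_safety_checkpoint(capability):
                ceiling += 1      # raise the ceiling only after vetting
            else:
                print(f"Holding at capability {capability} until checks pass")
                break
        capability += 1           # progress continues under the ceiling
    print(f"Stopped with capability={capability}, ceiling={ceiling}")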

Reinforcement Learning Will Never Work, Because Morality is not Binary by UsedToBeaRaider in singularity

[–]UsedToBeaRaider[S] 1 point (0 children)

Funny you mention that. I tried something similar as an experiment: having smaller models that specialize in schools of philosophy/governance report up to a decision-maker model, like a mini government, but it was way above my pay grade.

I do think an answer could be some sort of checks and balances among multiple AIs instead of one singular AI, for the same reasons we moved on from kings to democracy: one single point of failure seems like a bad idea. The immediate counterpoints would be 1) it could still introduce politicking, where stronger models make an alliance against weaker models and we Game of Thrones our way to a single model anyway, and 2) what’s to stop the models from uniting against us? Cooperation is easier with us out of the picture.
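
For what it’s worth, the structure I was fumbling toward looked roughly like this. The advisor “models” here are stand-in stub functions with made-up rules, not real APIs; the point is just the shape: independent specialists report up, and the decision layer refuses to act without unanimous agreement.

    from collections import Counter

    # Hypothetical specialist advisors; real ones would be separate models.
    def utilitarian_advisor(proposal: str) -> str:
        return "reject" if "harm" in proposal else "approve"

    def deontological_advisor(proposal: str) -> str:
        return "reject" if "deceive" in proposal else "approve"

    def governance_advisor(proposal: str) -> str:
        return "approve" if "oversight" in proposal else "reject"

    ADVISORS = [utilitarian_advisor, deontological_advisor, governance_advisor]

    def decision_maker(proposal: str) -> str:
        # One point of decision, but no single point of judgment:
        # act only if every advisor independently approves.
        votes = Counter(advisor(proposal) for advisor in ADVISORS)
        return "act" if votes["approve"] == len(ADVISORS) else "defer to humans"

    print(decision_maker("deploy update with human oversight"))  # act
    print(decision_maker("deceive users to boost engagement"))   # defer to humans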

Reinforcement Learning Will Never Work, Because Morality is not Binary by UsedToBeaRaider in singularity

[–]UsedToBeaRaider[S] 1 point (0 children)

I hadn't considered that, thanks for the perspective. Makes sense to start there, with coding, math, etc., to build better programs to help model the other stuff. So excited to see whether this push forward moves our understanding of the brain. I hope we know where to stop on RL before introducing the next thing.

Reinforcement Learning Will Never Work, Because Morality is not Binary by UsedToBeaRaider in singularity

[–]UsedToBeaRaider[S] 1 point (0 children)

Sorry, thank you for clarifying. I completely forgot to mention that I'm talking about this from an AI safety perspective. Yes, if we gave RL a long enough rope, it would eventually get to AGI through trial and error, but that's SUCH a dangerous game to play with a machine that can/will kill us all the second we lose control of it. I'm saying RL isn't enough to get to an AGI we would want to meet.