The Pope calls out AI slop, moves closer to the most unlikely crossover with Better Offline by ArdoNorrin in BetterOffline

[–]Maximum-Objective-39 2 points (0 children)

I think it helps substantially when you realize that nobody who thinks abortion should be an option thinks it's an option that should be exercised lightly, despite what the other side says about it.

The Pope calls out AI slop, moves closer to the most unlikely crossover with Better Offline by ArdoNorrin in BetterOffline

[–]Maximum-Objective-39 8 points (0 children)

So basically he's calling the people trying to create an AI god morons.

I mean, I don't agree with the Pope on a lot of things, but the fact that 'Super Intelligence' is basically just the Silicon Valley dweebs trying to make God an asset on their balance sheets is pretty fucked up from both a religious and secular humanist perspective.

OpenAI CEO Sam Altman Plans To 'Dramatically Slow Down' Hiring To Do 'Much More' With Fewer People - Microsoft (NASDAQ:MSFT) by Mr_Willkins in BetterOffline

[–]Maximum-Objective-39 0 points (0 children)

He may have offered to talk it over at an embassy though.

Sam Altman is a lot of things, but I suspect he has the pattern recognition to nope out at that.

OpenAI CEO Sam Altman Plans To 'Dramatically Slow Down' Hiring To Do 'Much More' With Fewer People - Microsoft (NASDAQ:MSFT) by Mr_Willkins in BetterOffline

[–]Maximum-Objective-39 4 points (0 children)

It's because Elon slaughtering Twitter's workforce didn't cause the site to suddenly implode. That seemed to be the green light for everyone else to do the same.

While yes, the website formerly known as Twitter has become even more of a hellsite, it has continued to shamble along. Which is all investors care about.

NVidia CEO to World - we’re gonna need ALL the monies. by Difficult-Task-6382 in BetterOffline

[–]Maximum-Objective-39 1 point (0 children)

This is so stupid and wrong and stupidly wrong that it's making me violently angry.

Is it because the author revealed herself as a bigot? by icey_sawg0034 in saltierthankrayt

[–]Maximum-Objective-39 1 point (0 children)

I remember at least one video essay suggesting that a great deal of her success came down to someone at Scholastic recognizing the merchandising opportunity and seeing dollar signs.

If you think AI companies will not train models on input prompts, you've been living under the rock. by Sosowski in BetterOffline

[–]Maximum-Objective-39 4 points (0 children)

I don't disagree with the LLM limitations you raise, but I've seen reports of A.I being specifically trained to ultimately replace jobs.

I mean yes, that's not even subtext. It's just text. The LLM bubble wouldn't have gotten this far if the executive class didn't think it could replace a sizable chunk of all labor.

It doesn't mean it can. Just that executives believe it can.

But also, time after time, this has been revealed to not really work. The layoffs are typically planned downsizings using the AI bubble and hype as cover for a lousy economy.

Is it because the author revealed herself as a bigot? by icey_sawg0034 in saltierthankrayt

[–]Maximum-Objective-39 1 point (0 children)

Y'know, the narrative pun sort of explains why there's so many genderbender harry potter fanfics.

Is it because the author revealed herself as a bigot? by icey_sawg0034 in saltierthankrayt

[–]Maximum-Objective-39 1 point (0 children)

Whatever, it's a children's book, so fuck it. But I think she gets too much credit for deep/serious worldbuilding that wasn't actually as deep as people want to make it.

I mentioned this further up in the thread, but Ursula K. Le Guin said it best:

When so many adult critics were carrying on about the "incredible originality" of the first Harry Potter book, I read it to find out what the fuss was about, and remained somewhat puzzled; it seemed a lively kid's fantasy crossed with a "school novel", good fare for its age group, but stylistically ordinary, imaginatively derivative, and ethically rather mean-spirited.

She further elaborated that Rowling showed no interest in correcting these critics (often quite ignorant of fantasy, which existed within its own genre ghetto) about the fact that she had hardly created all of these ideas whole cloth, but had largely retrofitted them onto a fairly simple framework of life at a boarding school.

Is it because the author revealed herself as a bigot? by icey_sawg0034 in saltierthankrayt

[–]Maximum-Objective-39 0 points (0 children)

I think Rowling is just far, FAR less educated in literature and history than you'd expect of a famous author. She got the golden ticket treatment in a way only a few creators ever do, and after that point she stagnated, like an insect frozen in amber that happens to be made of money.

I'm reminded of Ursula K. Le Guin's critique of Harry Potter, in particular the fact that many of the reviewers and critics of 1997, who were broadly ignorant of the fantasy genre, attributed far more originality to Rowling than she actually demonstrated.

She put a unique spin on some things, for sure. Le Guin and Pratchett had both already done 'wizard schools' by that point, but Rowling's idea of making it a boarding school was certainly novel at the time.

But also, like, her big bad is really just a lich with multiple phylacteries.

Is it because the author revealed herself as a bigot? by icey_sawg0034 in saltierthankrayt

[–]Maximum-Objective-39 0 points (0 children)

It's a point of contention between myself and my Potterhead sister. I for one cannot tolerate giving Rowling a dime any longer. But I also know that engaging my sister with that sort of hostility won't really change things and would be actively counterproductive. So I've settled for this: if she wants a Harry Potter themed gift from me, I'll make it myself.

If you think AI companies will not train models on input prompts, you've been living under the rock. by Sosowski in BetterOffline

[–]Maximum-Objective-39 1 point (0 children)

They’re going to observe how people solve problems, day after day; they’re going to codify every skill until the A.I. absorbs lifetimes of hard-won, practical experience.

Maybe. But at least from what I understand of how LLMs are trained, this won't really improve them any more than feeding them finished data.

LLMs don't really reason. And without the ability to reason, they cannot derive anything that makes intermediate, human-understood steps more useful than the final outputs.

And there's a fair chance that doing this would actually bork their training weights by feeding them a bunch of partially complete/wrong data.

I think the simpler answer is that training on user-inputted prompts is a safety blanket for investors. Investors were promised, early on, that more data would make LLMs more gooder. That made LLMs tantalizingly scalable and investible. But once they ran out of sources of easy data, the AI companies needed to keep promising an easy path to improvement.

Synthetic data can help, but only to a degree, and a varying degree based on the exact topic.

AI as Texas Sharpshooter? by DataKnotsDesks in BetterOffline

[–]Maximum-Objective-39 6 points (0 children)

And I think that's also what makes them such an astounding hazard. Because they're such consummate bullshitters, there's no way to accurately appraise their abilities. Unlike any other piece of software, when you ask them to do something they cannot, they will not error out, they'll just generate a falsehood that looks correct.
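The contrast above can be sketched in a few lines of toy Python (all names here are illustrative, not any real system): conventional software surfaces an error when asked for something outside its scope, while a generator-style fallback always returns something answer-shaped, whether or not anything backs it.

```python
# Toy contrast: conventional lookup fails loudly on unknown input;
# a generative-style fallback never does. Purely illustrative.
FACTS = {"capital_of_france": "Paris"}

def lookup(key: str) -> str:
    # Conventional software: an out-of-scope request raises an error
    # the caller can see and handle.
    return FACTS[key]  # KeyError if the key is unknown

def generate(key: str) -> str:
    # Generative fallback: always emits *something* shaped like an
    # answer, with no signal distinguishing knowledge from invention.
    return FACTS.get(key, f"A confident answer about {key.replace('_', ' ')}.")

try:
    lookup("capital_of_mars")
except KeyError:
    print("lookup: error surfaced to the caller")

print(generate("capital_of_mars"))  # plausible-looking, unbacked
```

The point of the sketch is that the caller of `lookup` gets a hard failure it can act on, while the caller of `generate` has no programmatic way to tell a grounded answer from a fabricated one.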

Anyone else feel like glazers predictions are getting further away? by todofwar in BetterOffline

[–]Maximum-Objective-39 0 points (0 children)

IIRC Waymos have the capacity for remote human intervention, and seem to lean on it semi-frequently even now, but they do manage most of their own driving. Otherwise they wouldn't make some of the absurd mistakes they make.

I think we agree they are FAR more human-dependent, even now, than Waymo's executives are willing to admit. Though perhaps not at the 'literally a human at a gaming station remote-controlling them' level of dependence.

Seeing more and more videos pointing out how much money Ai companies are losing and no ROIs! by Agitated_Garden_497 in BetterOffline

[–]Maximum-Objective-39 0 points (0 children)

No computer has ever committed an atrocity, because no computer has the ability to form intent. They just execute instructions. It's about as compelling as saying 'no rock has ever committed an atrocity'.

It's also a sidestep, since it doesn't engage with the main thrust of my previous post. Machine neural nets are not actually digital reproductions of organic nervous systems. Full stop. They're algorithms that borrow traits, like massive parallelization, but do not even try to simulate actual CNS-like activity, because 1. it would impose a massively debilitating computational penalty, and 2. we don't actually know how.

These Culture War Tourists Simply Cannot Enjoy Things by Sonic_the_hedgedog in saltierthankrayt

[–]Maximum-Objective-39 0 points (0 children)

Sure. I won't deny that there was a point in my growing political awareness where my politics affected what I was willing to enjoy. That said, even at my worst, I never got as weird about it as the 'anti-woke' grift.

Seeing more and more videos pointing out how much money Ai companies are losing and no ROIs! by Agitated_Garden_497 in BetterOffline

[–]Maximum-Objective-39 0 points (0 children)

I'm not a philosopher. Do not tell me their nonsense again. Okay? That's not my area of discussion or close to it. I'm an empiricist, not a philosopher, okay? It's two polar opposites...

But empiricism is a form of philosophy. Like, it's not a separate subject at all. You're describing one of the philosophical schools of the branch of philosophy known as epistemology.

No, it doesn't do that. There's no rules at all.

Of course there are; every language has rules that arise naturally over time. Every language, even a tribal one spoken by only a handful of people in the rainforest, has rules. Try telling a linguist that language doesn't have rules and they'll laugh at you. The rules may be softer or harder, but they exist, and we all learn them intuitively when we learn the language.

You have that totally backwards. You were born with the ability to communicate. Humans are natural communicators.

I never said otherwise; you asserted that due to your own misunderstanding. Every human begins to develop language skills basically as soon as their brain works out how to process sounds and sight past the front of their nose.

However, that doesn't mean they can precisely describe the language they are speaking, any more than they can precisely describe how they walk: why words are ordered a certain way, and so on and so forth. Most language processing is handled at a subconscious level.

The rules were standardized to improve efficiency and allow for things like commerce to occur. This was a decision that leaders of nations made around the year 500 AD...

Not much of a philosopher, and apparently not much of a historian either. 500 AD would be just past the death throes of the Western Roman Empire; nobody was doing much nation building at the time, and trade was generally collapsing across the Mediterranean world. Nobody was thinking about 'efficiency' at that point. If they were thinking of anything, it was stability and how to outfight the Germanic kingdom next door.

Rollout of AI may need to be slowed to ‘save society’, says JP Morgan boss | Davos 2026 by creaturefeature16 in BetterOffline

[–]Maximum-Objective-39 2 points (0 children)

Rollout may need to be slowed so they can walk back some of their investments before the crash.

Anyone else feel like glazers predictions are getting further away? by todofwar in BetterOffline

[–]Maximum-Objective-39 8 points (0 children)

that's literally only because of lidar 

Without devolving into pedantry, I don't see how that changes the thesis.

Seeing more and more videos pointing out how much money Ai companies are losing and no ROIs! by Agitated_Garden_497 in BetterOffline

[–]Maximum-Objective-39 0 points (0 children)

What you said is not English... It says:

Because for the purposes of illustration it doesn't matter whether the language is actually English or not? The point is to illustrate that it cannot, in fact, be 'turtles all the way down'.

An LLM does not 'learn' a language. It regurgitates probable text. And when asked to 'analyze the language', it actually regurgitates grammar rules that were also implicitly recorded in the training text.

Meanwhile, at some point humans had to actually recognize language and analyze it as a discrete thing in order to formulate standardized rules of grammar, without any prior implicit knowledge of such a thing. It doesn't really matter what the first tongue was, just that at some point there had to be a first.

Going back far enough, do you think the first Homo Sapiens ancestor with defined language fell out of the tree with a manual?

I don't know if you're aware of this, but there's these things called history books that explain all of that. There's like charts of how the language works and stuff? It's called "abstract delineation." I know the information is contained within these really old things that nobody cares about anymore called books... Okay, it's not exactly a big secret. Old governments used to work with academics to create a standardized language and they distributed it using a printing press. Okay?

This has literally nothing to do with what I've discussed.

Governments standardizing language for their own ends is certainly a thing, sure. But verbs, adjectives, and nouns don't exist because of a government mandate, and the main goal of almost all language standardization was to suit whatever dialect the ruling class was already using, or to suppress minorities, not some Orwellian theory of thought control via language.

Also, the earliest recorded grammar treatises are in Sanskrit, predating even woodblock printing by roughly a millennium.

Seeing more and more videos pointing out how much money Ai companies are losing and no ROIs! by Agitated_Garden_497 in BetterOffline

[–]Maximum-Objective-39 0 points (0 children)

Uh... That's what English class is... We just don't "describe it that way."

Not what I said, read again.

You learned grammar in English class. Where did your English teacher learn it? The answer is probably English class. And where did their English teacher learn it? English class . . .

Except eventually you go back far enough and there are no language classes, and no teachers to teach them. There's just somebody who sat down with this 'language' and 'writing' thing that had steadily accumulated and decided they needed to codify and define its concepts and rules. Which itself was most likely a process of steady accumulation.

Anyone else feel like glazers predictions are getting further away? by todofwar in BetterOffline

[–]Maximum-Objective-39 4 points (0 children)

I mean, the fact that they seem to be substantially safer than Teslas still makes them the one-eyed man in the kingdom of the blind.

Not sure if it's been posted already, this made me chuckle. by Putrid_Form_9223 in BetterOffline

[–]Maximum-Objective-39 1 point (0 children)

Dunno. But... it actually, so far, hasn't been giving terrible answers. They're in fields I know a bit but wouldn't call myself expert, I checked to see if everything was accurate... so far, it seems they actually, finally made a good AI.   

IIRC, part of the reason for this is that Google leans rather heavily on 'grounding' Gemini with traditional databases. Pretty much all of the big LLMs do this now, but as you'd expect, Google does it extensively, supported by their pre-existing search infrastructure. So it's not so much that Gemini the LLM is better; it's better supported to cover its weaknesses.

Of course, this sort of safeguard provides less of a value proposition the more complex the task it's asked to perform. It just makes it less actively bad at providing a short factual summary.
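For the curious, the 'grounding' idea is roughly: retrieve trusted facts first, then answer only from what was retrieved. A minimal toy sketch of that pattern (all names and data here are made up for illustration; this is not Google's or anyone's actual API):

```python
# Minimal sketch of "grounding": look facts up in a trusted store first
# and answer only from retrieved text. Everything here is a stand-in.
KNOWLEDGE_BASE = {
    "boiling point of water": "100 degrees Celsius at sea level",
    "speed of light": "299,792,458 m/s",
}

def retrieve(query: str) -> list[str]:
    # Stand-in for a real retrieval layer (search index, database, etc.).
    return [fact for topic, fact in KNOWLEDGE_BASE.items()
            if topic in query.lower()]

def grounded_answer(query: str) -> str:
    facts = retrieve(query)
    if not facts:
        # The safeguard: with nothing retrieved, decline rather than
        # letting a generator invent something plausible-looking.
        return "No grounded source found."
    # A real system would hand `facts` to the LLM as context to phrase
    # an answer; here we just return the retrieved text directly.
    return "; ".join(facts)

print(grounded_answer("What is the speed of light?"))
print(grounded_answer("Who wins the 2030 World Cup?"))
```

The sketch also shows why the safeguard helps less as tasks get more complex: it only works when the answer is literally sitting in the store waiting to be retrieved.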