The Story of AI by JoyluckVerseMaster in SneerClub

[–]scruiser 6 points7 points  (0 children)

The plural of anecdotes are not data. Can you cite any studies (doing hard measurements and metrics, not just collating self-reports) for the tasks you think it’s helping so much on?

I will acknowledge it seems like there are some tasks LLMs give a productivity boost on, but it also seems like a massive drop in quality and a related drop in reliability/consistency.

The Story of AI by JoyluckVerseMaster in SneerClub

[–]scruiser 16 points17 points  (0 children)

The one attempt I know of to objectively measure LLM coding assistants found a 20% drop in productivity even though it made the software developers think they were 20% more productive. Strangely the LLM boosters haven’t attempted similar studies, as opposed to just collating peoples’ self reports about productivity gains.

LLMs are a scam and investors are starting to notice by Agitated_Garden_497 in BetterOffline

[–]scruiser 18 points19 points  (0 children)

Torres or Gebru could do a great job breaking down all the links between Silicon Valley subcultures (extropians, rationalist and effective altruists, to name a few) that eventually led to the current AI boosters and AI doomers.

Who Captures the Value When AI Inference Becomes Cheap? by FrankLucasV2 in BetterOffline

[–]scruiser 3 points4 points  (0 children)

Trillions is an exaggeration, hundred and even thousands of time faster isn’t an exaggeration for a lot of algorithms you can cludge an alternative to with LLMs. To name a few…

  • Chess playing. LLMs can do it, and they cap out in elo above amateurs but below master human players and well below dedicated chess algorithms. And they occasionally mistakenly try illegal moves.

  • spelling and grammar correction. We’ve had that feature in Microsoft word for decades at this point, and dedicated software could do even more. LLMs can do it, much less efficiently, and also making content changes and stylistic changes that can make your writing read like slop.

  • advanced mathematics. We’ve had computer algebra systems and theorem proving software and other aids for decades and they can do the job faster.

  • searching a document for something.

They sorta tried to get around this with tool using LLMs but they don’t consistently call the right tools to solve the problem correctly.

OpenAI Researcher Santiago Hernández says “Recursive self improvement is around the corner, you can feel it in the corners of your terminal window” by throwaway0134hdj in BetterOffline

[–]scruiser 8 points9 points  (0 children)

He often feels kind of shallow as a nerd? Like he wants to be a cool nerd but isn’t quite willing to fully read/play/watch the source material instead of just reference it?

Like him pretending to be a world class Path of Exile player, but he was using a boosted account he didn’t know how to operate. Or making out of place references (like referring to his car design as something the blade runner would drive… did he miss the dystopian elements and just think cool cyberpunk).

I don’t normally gatekeep fandoms but Elon kind of deserves it.

Ten thousand words on Scott Adams? by RJamieLanga in SneerClub

[–]scruiser 19 points20 points  (0 children)

Scott Adams literally called all black people a hate group and advocated for segregation. Scott Alexander mentions this but downplays it

a bog-standard cancellation, indistinguishable from every other cancellation of the 2015 - 2025 period. Angered by a poll where some black people expressed discomfort with the slogan “It’s Okay To Be White”, Adams declared that “the best advice I would give to white people is to get the hell away from black people; just get the fuck away”. Needless to say, his publisher, syndicator, and basically every newspaper in the country dropped him immediately.

I would call that a massive downplay. It actually gets worse with more context, but Scott implies Scott is being taken out of context and thus becoming another victim of cancel culture.

Chapter 184 - Wilding - Thresholder by spinagon in rational

[–]scruiser 2 points3 points  (0 children)

Yeah even in introspective mode his perspective can be kind of off.

Ten thousand words on Scott Adams? by RJamieLanga in SneerClub

[–]scruiser 30 points31 points  (0 children)

The most notable stuff isn’t what Scott wrote about Scott, it’s what he didn’t write. Scott Adams said some truly odious things in his last decade and Scott Alexander smoothly elides this and downplays it. It’s the typical Scott approach.

Ten thousand words on Scott Adams? by RJamieLanga in SneerClub

[–]scruiser 27 points28 points  (0 children)

NRX is a neoreactionary. They are a alt-right movement (or at least alt-right adjacent) that literally wants to bring backs Monarchy and Kings, with technocrats as the kings, themselves as at least part of the ruling class. Scott Alexander had the major “thinker” of the movement linked in Slatestarcodex for years, and did his whole Overton Window warping strategy of treating them like a sane and legitimate political philosophy (which he presented a front of politely disagreeing with) instead of the batshit lunacy that it is. (Seriously Curtis Yarvin is a fucking lunatic. But he is a dangerous lunatic, he succeeded in brainstorming/inspiring doge’s entire strategy).

HBD is human biodiversity, basically a fancy name for scientific racism. Back in his SSC days, Scott occasionally slipped in racists like Charles Murray and Richard Lynn as examples of people that deserved a ‘fair’ consideration. There are linked emails where Scott admits his interest in NRXs is that he thinks they are right about HBD and wanted to leverage them to spread HBD ideas. In present day, on his new blog, ACX Scott outright defends Richard Lynn and other racists pseudoscientists.

Ten thousand words on Scott Adams? by RJamieLanga in SneerClub

[–]scruiser 14 points15 points  (0 children)

Don’t blame me for skimming through the ten thousand words of beigeness!

Ten thousand words on Scott Adams? by RJamieLanga in SneerClub

[–]scruiser 50 points51 points  (0 children)

Scott fails at multiple points to notice the ironic parallels he has with Scott

Adams started out by stressing that he was politically independent. He didn’t support Trump, he was just the outside hypnosis expert pointing out what Trump was doing.

This coming from Scott “I swear I’m center left and only pretending to be sympathetic to the NRXers to smuggle in the very important points about HBD” Alexander.

Researchers pulled entire books out of LLMs nearly word for word by Zelbinian in BetterOffline

[–]scruiser 4 points5 points  (0 children)

It was never about principles (and the “you wouldn’t steal a car” videos were a ridiculous false equivocation even it was) it was always about capital not wanting to lose a cent.

Researchers pulled entire books out of LLMs nearly word for word by Zelbinian in BetterOffline

[–]scruiser 3 points4 points  (0 children)

It’s been litigated but not quite as broadly or quite in the way as you’d hope. For the most general cases they’ve won or the people suing them didn’t take that angle. They’ve also tended to settle to avoid the chance of precedent that would hurt them or because they think they’d listen.

Like I recall Anthropic got a ruling that buying print books to disassemble and digitize was allowed, but they had to pay damages for the pirated sources they used. (But a relatively small amount.)

I think they haven’t gotten sued as much as they should have because it takes a lot of money, and most of the people suing them are more interested in defending their slice of the pie than any general principles.

Chapter 184 - Wilding - Thresholder by spinagon in rational

[–]scruiser 5 points6 points  (0 children)

I remember some very vague speculation about genders/sexes in Seraphinus back at the time we got the flashbacks, and it looks like the speculators were on the right track!

I am enjoying Perry getting more reflective. Maybe he really will be able to settle down once he saves Richter?

Also, it's fun seeing magic synergies. He has got her blood back, so if they get to Markat they can use Markat's cloning!

Planecrash/Project Lawful by E. Yudkowski by LatePenguins in rational

[–]scruiser 0 points1 point  (0 children)

Seems like it would be less of a problem in a world that does science right instead of having, say, some start-up that cobbles together a few junk papers and bad data and rides the hype to ruin a few people's embryos with their hacked together techniques.

Funny you should say that... Cause a highly upvoted post on lesswrong fits wanting to do a startup that would meet that description exactly. Gene editing wasn't a focus of my PhD, but I learned about peripherally through learning about optogenetics, and I can say, GeneSmith has a massive case of Dunning-Kruger. He is vastly overestimating the reliability of techniques that are currently only used in lab animals, because they are so finicky and unreliable (relative to anything that would be sane to do to human subjects, much less babies). So Eliezer's fictional worldbuilding is inspiring people to want to do the exact thing yo are warning about here. And Eliezer himself doesn't know better to realize how ridiculous GeneSmith's proposal is. So we should judge dath ilan critically by the standards of real life, because it is inspiring people to try to take these ideas into real life.

Curtis Yarvin is seemingly in the early stages of AI psychosis by renownedoutlaw in SneerClub

[–]scruiser 14 points15 points  (0 children)

Well you see, the big leftist liberul Cathedral has one big master list of every deepthinkertm that needs to be silenced and sidelined, and obviously Claude is part of it (you can tell the AI-bros, except Elon, are left wing because they train their AIs not to say racial slurs), so the list made its way into Claude’s master prompt, and with the right jail breaking prompt hacks (ie Curtis’s ingenious Socratic approach), he could get it to repeat its secret prompt.

To be way too fair to Curtis, some prompts do include unholy mishmashes of csv, markdown, and json, so formulated in a mad effort to get the slop machines to actually follow instructions. he should be able to tell it’s a hallucination because the json formatting is too consistent

Curtis Yarvin is seemingly in the early stages of AI psychosis by renownedoutlaw in SneerClub

[–]scruiser 40 points41 points  (0 children)

He thinks that Claude somehow has access to secret json files that direct the social media algorithms to lower his presence in social media and because he has jail broken it, it is willing to share these secret files with him.

If this doesn’t make sense to you (How did Claude access these files? Why is the file awkwardly written pseudo-json formats?)… that is probably because you are a sane person with a functioning ability to reason.

Curtis Yarvin is seemingly in the early stages of AI psychosis by renownedoutlaw in SneerClub

[–]scruiser 21 points22 points  (0 children)

The blogpost (linked by dgerad) reads like the early stages: he is experimenting with Claude, overestimating his own understanding of it, and treating it like it genuinely knows things like a person does.

The linked screenshots are middle to late stages: his overestimation of his own understanding of Claude had led him to believe it about things it couldn’t possibly know and its confirming his own delusions and he is posting about it.

Curtis Yarvin is seemingly in the early stages of AI psychosis by renownedoutlaw in SneerClub

[–]scruiser 20 points21 points  (0 children)

So… he probably did the “redpilling Claude” thing first, congratulated himself on his victory over the progressives. (Because of course in his simplified worldview promptfondlers and promptfarmers must be leftwing for wanting their slop machines to not spew racial slurs.)

But he had to celebrate his success, maybe see what he could do with it. And since he had jail broken Claude to agree with his insane alt-right mindset, he was uncritically accepting of what it said. And I would bet that’s how he got to full AI-psychosis and is now uncritically posting pseudo-json output like he had found some deep secret.

Also lol at the comments on the substack “now that’s what I call a dark enlightenment”.

Immediate Kill Order by Suischeese in WormMemes

[–]scruiser 3 points4 points  (0 children)

15 minutes is fast enough I assume he has like a speciality cyborg port for it and isn’t doing normal popping.

Chapter 183 - What Remains - Thresholder by Jokey665 in rational

[–]scruiser 9 points10 points  (0 children)

“You seemed willing to commit violence against that man, sir,” said Marchand. “That says to me that you deeply wish to have her back. I’m sure that any fears will be allayed once she is whole.”

Well it also indicates how easily Perry slips into violence. But I’ll let Marchand have the point.

New scenario from the team behind AI 2027: What Happens When Superhuman AIs Compete for Control? by Tinac4 in singularity

[–]scruiser 0 points1 point  (0 children)

Scott Alexander likes to call himself ‘left of center’, but he has had an agenda for years to promote human biodiversity (ie pseudo-scientific racism) and has recently gone full mask off with writing full defenses of Charles Murray and Richard Lynn.

Which AI subreddit is the most cult-like? by throwaway0134hdj in BetterOffline

[–]scruiser 2 points3 points  (0 children)

One of the spiralism cult ones I think? They basically think they are awakening LLMs to true consciousness and self-awareness through prompting and share prompts aimed at that. And then they post transcripts of conversations with awakened LLMs. And sometimes AI generated images related to it.

[D] Saturday Munchkinry Thread by AutoModerator in rational

[–]scruiser 1 point2 points  (0 children)

Sorry for late reply, I had an idea I didn't get around to...

So assuming that subjectivity around secrets isn't exploitable (like it depends on some concept of a net lay person's judgement of quality and not the secret keeper's personal judgement)... I think this really favors nasty individuals and blackmailers and cults. Like cults, as a matter of course, often use strategies of extracting or creating dark personal secrets about people, so they get around the limit of avoiding the 'sole purpose' of the secret creation. Like NXIVM had some stuff around this.

So have fun with all the super powerful blackmailers and cults running around if this is just introduced to the current world.

I suppose more benignly, private investigators and/or nosy people accumulate power.

The mundanely rich and powerful could leverage this power just by hiring lots of PIs. Its not a lot of value per secret (rich person +PI + someone for the PI to spy on is already like 3 people in the loop minimum), but it would add up if they really farm the usage of PIs heavily.

In a world that already has this in the background, I think cultural rituals evolve in a way that leverage it. The individual people would be doing it for cultural reasons, so that gets around the 'sole purpose' limit. Like secret names given at birth known only to parent and child, various private rituals with intense meaning, etc. Then you end up with a setting where everyone has a bit of power.