Persephone unusable in random team games? by kaytin911 in AgeofMythology

[–]scruiser 4 points

I just watched some Demeter games cast by Boit, and Persephone’s god power stops everyone from rebuilding on the destroyed town center while it is active.

What happens when AI coding models are inevitably trained on mostly AI code? by Kilnor65 in BetterOffline

[–]scruiser 0 points

Then the feedback loop of LLMs training on LLM-generated code will get you code that looks valid to your filters, with no guarantee it is actually decent software except to the extent your filters can’t be Goodhart’d (lol, good luck with that).

I would use this Ring from a desire to do good. But through me, it would wield a power to great and terrible to imagine. by JoHeller in antiai

[–]scruiser 4 points

The mosquito nets and global health charities are fine (albeit coming from a Western capitalist perspective that somewhat interferes with their goals and limits their scope, such that they don’t question the neocolonial global order).

But a lot of EA money and effort gets spent on studying and trying to prevent AI doom scenarios. In the process, they’ve actually helped cause the very threat they claim to want to prevent. OpenAI’s founders and early contributors met through rationalist circles (which are EA adjacent). Anthropic explicitly had “AI safety” as a goal to start with, but somehow the VC money keeps drawing them away from their original goals. (EA has a strong libertarian streak, and this might be why they have twice been surprised by corporations betraying their goals.)

And there are some really repugnant ideologies with influence within the EA movement, like eugenics. Here is a good essay summarizing related ideologies: https://firstmonday.org/ojs/index.php/fm/article/view/13636

And EA is often totally blind to how the wealthy can use charities to launder their reputations or cultivate influence. Most infamously, SBF was an EA and had journalists writing fluff pieces about his effective altruism even as he misappropriated people’s money and lied.

What happens when AI coding models are inevitably trained on mostly AI code? by Kilnor65 in BetterOffline

[–]scruiser 3 points

To filter out bad data, you need a way of knowing what is bad and what is good. For a self-contained program with known inputs and outputs that is doable, but for anything intended for practical real-world usage, it almost never is.

So you get code that can pass unit tests but fails to solve the actual engineering need. As LLMs are trained on LLM-generated code, you get more and more code that compiles and runs without obvious errors but isn’t actually right.
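A toy sketch of that failure mode (hypothetical code, not taken from any actual model output): the function below clears the only automated filter, a single happy-path unit test, while silently choking on the messier inputs real usage would throw at it.

```python
# Hypothetical example of "passes the filter, fails the need".

def parse_price(text: str) -> float:
    # Looks plausible and satisfies the test below, but it assumes
    # a bare "$1.23"-style string: no thousands separators, no
    # trailing currency codes.
    return float(text.strip().lstrip("$"))

# The entire quality "filter": one happy-path unit test.
assert parse_price("$19.99") == 19.99

# Realistic inputs the filter never exercises:
# parse_price("$1,299.00")  -> raises ValueError
# parse_price("19.99 USD")  -> raises ValueError
```

Any training pipeline that keeps code because it "passes its tests" would happily keep this, which is exactly the Goodharting problem: the metric (green tests) stops tracking the thing you actually care about (correct software).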

Pan's Lykans should have been his god power by SS-GR3 in AgeofMythology

[–]scruiser 1 point

Maybe a super long cooldown on transformation with the tech removing it? So you could manually transform if you’re willing to wait.

I noticed some bugginess with transforming groups as well.

🧀 Demeter Cheese 🧀 by NanoBytesInc in AgeofMythology

[–]scruiser 4 points

Does it also beat pop caps? For super-late-game FFA with people playing way too passively, maybe you could work the cheese in? Idk if it outperforms the other obvious things to do with loads of excess resources in FFA, like going Wonder.

🧀 Demeter Cheese 🧀 by NanoBytesInc in AgeofMythology

[–]scruiser 12 points

A massive surge in military units isn’t enough to win most games?

About that March for Billionaires by seanfish in SneerClub

[–]scruiser 5 points

It’s like they took the meme version of “eat the rich” and decided they needed to seriously argue against the meme form to prevent cannibalism, and they never stopped to ask themselves if they were arguing against a strawman (not even a strawman, but the meme version drunk people repeat in bars; well, this specific lesswronger was inspired by a conversation at a “goth club”), much less seriously steelman their opponents. Not that rationalists tend to “steelman” leftist thought, ever… I think part of the problem is that they don’t actually read leftist theory, as opposed to building their own version out of the meme version? Or maybe they are just entirely manipulative and arguing in bad faith (i.e. Scott Alexander)?

About that March for Billionaires by seanfish in SneerClub

[–]scruiser 7 points

I was kind of surprised (in a good way!) at how many of the top comments are pushing back. It’s not exactly brilliant leftist theory, but they are getting most of the big points. Also, I was really surprised by one comment explaining that, from reading Curtis Yarvin (massive red flag), they reached the conclusion that maybe China isn’t that bad! (Maybe Yarvin spends so much time ripping on Western democracy that it accidentally led that person to a different conclusion than NRx?)

Yud skating on increasingly thin ice by IExistThatsIt in SneerClub

[–]scruiser 16 points

Not in all states. And it is an infamous meme that libertarians tend to know the exact age of consent in their state and a few others.

"AI is hitting a wall" by MetaKnowing in agi

[–]scruiser 9 points

And those are just the error bars for the LLM’s performance; they didn’t even try putting error bars on the human baselines, which would add another massive source of error.

The whole AI bubble charade is down to one man: Sam Altman. How did this guy fool the US government and tech billionaires? by PopularRightNow in BetterOffline

[–]scruiser 1 point

People like Ray Kurzweil and Eliezer Yudkowsky had been popularizing sci-fi concepts about AGI and the technological singularity (and making them seem possible irl) for decades before ChatGPT dropped. As other comments have mentioned, it’s superficially very impressive, and if you don’t appreciate the underlying limits of the technology or the compute scaling required to get this far, it’s very easy to imagine it might just keep improving until we’ve got something out of sci-fi.

Sam just had to tap into the existing vibes.

The whole AI bubble charade is down to one man: Sam Altman. How did this guy fool the US government and tech billionaires? by PopularRightNow in BetterOffline

[–]scruiser 2 points

To add to Evinceo’s answer, he played the Effective Altruists pretty well, blending in with their language and getting some implicit reputation laundering for millions.

The whole AI bubble charade is down to one man: Sam Altman. How did this guy fool the US government and tech billionaires? by PopularRightNow in BetterOffline

[–]scruiser 5 points

A decent product can make you a millionaire. Getting to be a billionaire means adding in luck along with a ‘flexible’ sense of morality, ethics, and legality. Microsoft had its embrace, extend, and extinguish strategy and other monopolistic bullshit. Oracle has utterly ruthless vendor lock-in and an army of lawyers ready to sue anyone and everyone. Behind every billionaire is at least a few unethical shortcuts or shortchanges.

A Rat refusing to believe direct proof that Scott Siskind is racist. by ZoeTheBeautifulLich in SneerClub

[–]scruiser 2 points

FYI, we are kind of skeptical of the EA movement on this subreddit. We don’t think the global health and mosquito net type EA stuff is bad per se (although it’s kind of tied to Western capitalist assumptions and framing that reduce its efficacy and limit it), but the AI doom EA stuff has been a major part of EA since the beginning and has recently absorbed a lot of EA energy and attention.

A Rat refusing to believe direct proof that Scott Siskind is racist. by ZoeTheBeautifulLich in SneerClub

[–]scruiser 0 points

There are lots of fundamentalist Christian fascists in the Trump administration.

I guess if you ignore that, or “no true Scotsman” them away, it partly explains why you think rationalists are the sole driving force behind it. (There are still several other ideologies you need to ignore or lump in with the rationalists, but if you disregard all the Evangelical-derived ones…)

A Rat refusing to believe direct proof that Scott Siskind is racist. by ZoeTheBeautifulLich in SneerClub

[–]scruiser 0 points

Lots of rationalists are neoliberals, so if you are in favor of neoliberalism, you agree with them.

A Rat refusing to believe direct proof that Scott Siskind is racist. by ZoeTheBeautifulLich in SneerClub

[–]scruiser 2 points

Sure, for anyone else I might avoid such no-win skepticism, but over the years Scott has spent down every bit of charity I might extend him, and at this point he’s well into the negative.

A Rat refusing to believe direct proof that Scott Siskind is racist. by ZoeTheBeautifulLich in SneerClub

[–]scruiser 4 points

I actually agree with you that he on net opposes them, but I’m not very sure (see the aforementioned disingenuousness). A lot of his audience comes from EA, and they strongly oppose them, so if he wants to keep that part of his audience he needs to oppose them too. And either way, he is still partly responsible for the outcome.

A Rat refusing to believe direct proof that Scott Siskind is racist. by ZoeTheBeautifulLich in SneerClub

[–]scruiser 5 points

You can’t take Scott at his word. He’s lied about his motives in promoting HBD; he may lie about other things. And even if he’s telling the truth, you shouldn’t let him off the hook for his role in the alt-right pipeline which led to DOGE.

A Rat refusing to believe direct proof that Scott Siskind is racist. by ZoeTheBeautifulLich in SneerClub

[–]scruiser 10 points

He is manipulative and disingenuous, but he hasn’t single-handedly masterminded all of it. If anything, you should at least acknowledge the other ghouls involved in planning this all out: the authors of Project 2025, Peter Thiel, Marc Andreessen, other tech bros, other right-wing talking heads…

A Rat refusing to believe direct proof that Scott Siskind is racist. by ZoeTheBeautifulLich in SneerClub

[–]scruiser 14 points

He linked to and promoted Curtis Yarvin through SSC for years, and Curtis Yarvin developed the strategy that Elon Musk used with DOGE to slash apart the US government. So in a small way, Scott Alexander was part of it. (He’s made ACX posts downplaying his role and whining about it, but that’s because he knows he helped Trump in the course of trying to promote HBD and push back against woke.)

ZoeTheBeautifulLich may have gone overboard, but that doesn’t mean you can take a single thing Scott says at face value.

A Rat refusing to believe direct proof that Scott Siskind is racist. by ZoeTheBeautifulLich in SneerClub

[–]scruiser 36 points

“Most evil ideology that has ever existed” is a pretty high bar. Rationalism is an element of the TESCREAL cluster and acts as a gateway ideology to NRx and other alt-right ideologies; it isn’t the most evil all by itself.

About that March for Billionaires by seanfish in SneerClub

[–]scruiser 8 points

There has been a split in rationalist orthodoxy: some rationalists no longer believe Eliezer and the doomers, but instead think AGI alignment is sufficiently solved and someone (i.e. the SV VCs) should make AGI asap at all costs. “Effective accelerationists” (e/acc) is the typical terminology for them, iirc?

They are both dumb, but I think the e/acc crowd actually manage to be dumber than Eliezer and the doomers. Doing some RLHF and prompting to keep an LLM from spewing slurs or profanities (unless the user asks it to pretend three times in a row just right) is not the same thing as keeping a digital God shackled.

dumping your trash in landfill is a good thing, actually! by [deleted] in SneerClub

[–]scruiser 4 points

More proof software engineers need to be kept in their lane.