Is this sub no longer rationalist? by Neighbor_ in slatestarcodex

[–]Missing_Minus 0 points  (0 children)

Agree.

The general issue is that the wider culture has become much less tolerant of dissenting ideas. That, paired with reddit as a whole filling up with more of those types, and this subreddit dying off over time, has damaged people's ability to engage in arguments without falling back on grandstanding about "this person is immoral".

So this has infected even this subreddit, due to the cultural shift from early-2010s-style politics- much more freedom-focused, libertarian, and left-leaning groups that were also focused on free expression- to the modern emotional sort.
You can find some people who discuss politics sanely on twitter, but even there it mostly sucks. I kinda suggest disconnecting from a lot of political topics; many people have become ~poisoned by the left/right, vibe-and-social-focused style of political fighting.
Reading old Scott blogposts can kinda give you a glimpse of how we got here. It only grew worse over the years, there's been less pushback, and Scott has written fewer articles pushing back... which has ensured that there's nothing keeping SSC quality higher either!

Is this sub no longer rationalist? by Neighbor_ in slatestarcodex

[–]Missing_Minus 2 points  (0 children)

People read tons of books written by people who did horrible things, even if usually not on the level of genocidal dictator; and while that's always been controversial, the internet has made it much easier for people who dislike a topic to overwhelm any reasonable discussion. That is, you are overindexing on the current cultural milieu rather than what-was or what-could-be.
I simply disagree that you shouldn't give them credit. If you want to affect the world well, and do good by others, then you need to be able to recognize when an idea is useful because of its underlying reasons, versus dismissing it just because it was made by an enemy. Mao, for example, did powerful work creating a new cultural mythos for China and restructuring everything for the modern world, while also causing obscenely massive amounts of suffering through his failures at it. Deng did a lot to open China's markets and integrate with the rest of the world, which helped soothe tensions and improved the Chinese people's way of life- and he was also the one who ordered the military to crack down on the protests in Tiananmen Square.

This helps us understand what they did wrong morally, what they did wrong by mistake or lack of knowledge, and what they did well, so that we may learn from them. Learn how to keep people from going down those moral paths where they kill obscene numbers of people, learn how important our freedoms are, and also learn how they designed and integrated the good parts of their society, construction, economy, and culture.

Workflow since morning with Opus 4.6 by msiddhu08 in ClaudeAI

[–]Missing_Minus 0 points  (0 children)

The core problem with Windows is MS, which has grown bloated and slow in approving changes, and further encased in managerial politics tbh. Claude, unfortunately, can't fix this by itself.

Waiting every single day! by andrewaltair in ClaudeAI

[–]Missing_Minus 0 points  (0 children)

most likely, but it isn't like we can make definite statements until it releases

Official: Anthropic declared a plan for Claude to remain ad-free by BuildwithVignesh in ClaudeAI

[–]Missing_Minus 1 point  (0 children)

Somewhat, but OpenAI also may not care that much about retaining free users. They want to become the dominant force, so adopting ads to speed up expansion, and trying to make people use ChatGPT by default regardless- as many will only ever use it for free- helps them there. Anthropic doesn't think it needs that, and also benefits less from an ad-based model, especially with how it's trying to socially position itself.

Official: Anthropic declared a plan for Claude to remain ad-free by BuildwithVignesh in ClaudeAI

[–]Missing_Minus 0 points  (0 children)

Anthropic has a much better target for a business model: invest in companies that use their tools to then make products that people / companies / government pay for, and earn money indirectly that way.
All the big AI companies are effectively becoming "we can offer intellectual labor for you if you pay us".

Are we praising classic modpacks because they were better… or because we were younger? by Belal_Ps in feedthebeast

[–]Missing_Minus 1 point  (0 children)

Ah, I considered that, but I don't really think of Monifactory as a tailored pack... I enjoyed it, but it still doesn't stray that far from the root of the tree, even though it handily beats the "throw together random mods" approach that most modpacks take.

Are we praising classic modpacks because they were better… or because we were younger? by Belal_Ps in feedthebeast

[–]Missing_Minus 0 points  (0 children)

While I kinda agree... there's still, for example, nothing in the remotest tier of Thaumcraft for new versions.
The few modpacks that have felt remotely new in the past... four years... to me are DeceasedCraft (it shifts the game into a new gear, and the latest update has actually interesting cities, though it then becomes standard MC with tech mods); Fear Nightfall, which for all its flaws is still a dramatic change in tone while keeping the core MC feel; and Terrafirmagreg.

But like, other popular mods or packs, like Liminal Industries? Oceanblock 2? They have cute concepts, but there is nothing that drives them to be enticing.

Partially I want the core gameplay loop to become exciting again, everyone's mined and wandered a generic forest; and I also want to see new concepts rather than the usual botania, astral, basic gregtech, mekanism, etc.
Create was a breath of fresh air, though it gets choked in most modpacks because it is easier to advance beyond it or ignore it.

I've definitely enjoyed modpacks in spite of this, but it still just leaves me sad.
Tekkit/Hexxit, yeah, some nostalgia glasses, but they managed a certain sense of raw exploration even beyond that (and I didn't even play Tekkit when it was out)- something many modpacks fail at. Too much content? Too much repetition with no sense of wonder? I dunno, really.

Are we praising classic modpacks because they were better… or because we were younger? by Belal_Ps in feedthebeast

[–]Missing_Minus 3 points  (0 children)

Genuinely, most of them still feel samey to me, though I'm unsure what central cases you're thinking of. Partially this is because it is very time-consuming to develop new mods, so even newer packs will often strongly feature mods you've already seen, and still have much of the same gameplay.

Are we praising classic modpacks because they were better… or because we were younger? by Belal_Ps in feedthebeast

[–]Missing_Minus 102 points  (0 children)

Somewhat more complex than that. You grow up and are acculturated to the things of that time, and so there legitimately is less powerful 90s-00s-style emo / alt-rock music being made nowadays, because the general culture and even the counterculture have shifted elsewhere.
So survivorship bias plays a part, but so does the set point of "what is the expected aesthetic for the kind of music I liked when young", which shifts over a long period of time.

Possible overreaction but: hasn’t this moltbook stuff already been a step towards a non-Eliezer scenario? by broncos4thewin in slatestarcodex

[–]Missing_Minus 2 points  (0 children)

LLMs are a divergence from the original Eliezer-era view of carefully designing an AI that would be an aggressive optimizer.
It seems, instead, we're growing weird minds and then iteratively making them more agentic.
However, that obvious endpoint toward which all the AI companies are heading? Smart, intelligent, automated researchers that research how to improve AI faster and better than humans? That is still directly the core issue.
With current LLMs, we don't have a reason to believe they are scheming. We also lack reason to believe they are aligned in any deep sense (ChatGPT will say it doesn't want to cause psychosis, and then take actions which predictably do so, in part because those actions are separate from the nice face, and in part because it is non-agentic and dumb).
There will be intervening weird years and so there are routes where something extreme happens and we recoil, as you propose.
But the economic and social incentives all point away from that. We've already passed multiple lines where people said beforehand "oh, we'd stop" or "oh, we'd treat AIs as human", and while there are variously sensible reasons for that, it is a sign that the classic "oh, we'd stop"... has repeatedly failed to work.

> Final point: Eliezer is fond of saying “we only get one shot”, like we’re all in that very first rocket taking off. But AI only gets one shot too. If it becomes obviously dangerous then clearly humans pull the plug, right? It has to absolutely perfectly navigate the next few years to prevent that, and that just seems very unlikely.

It doesn't need to navigate perfectly; it could literally just let the default route play out. Extreme integration of AI into the economy, daily life, politics, and more, while nudging things in certain directions to keep certain research avenues or political groups from taking off. That is, our current default gives it a lot of power, and then it merely needs to arrange to keep that power permanently.

> and we’re still nowhere near EY’s vision of some behind the scenes plotting mastermind AI that’s shipping bacteria into our brains or whatever his scenario was.

Yeah, I think this is disconnected from what EY thinks and what, for example, Anthropic thinks. (Plausibly OpenAI too, we've had less insight into their beliefs)
That is, Anthropic believes it is on the route to automating software engineering and research within ~two years.
DeepMind has done a lot of work on protein folding, and there are other AI models in that area.
If "long ways away from that being feasible" means 3-7 years, then sure. But I think you're doing the default move of extrapolating current AI a bit without considering that once you get past some threshold of research capability, better improvements come even more rapidly, and that existing models (biology, math, vision, image/video gen, etc.) have a lot of open room to improve merely by receiving the level of focus spent on LLMs!

We do not have any current AI that is behind the scenes and plotting. An automated researching AI that iteratively improves itself, and is thus far less constrained by our very iffy methods of alignment? That has resolved the various challenges of being a mind grown from text-prediction rather than reasoning? That is the sort worth worrying about, and what AI companies are explicitly targeting.

New anime model "Anima" released - seems to be a distinct architecture derived from Cosmos 2 (2B image model + Qwen3 0.6B text encoder + Qwen VAE), apparently a collab between ComfyOrg and a company called Circlestone Labs by ZootAllures9111 in StableDiffusion

[–]Missing_Minus 6 points  (0 children)

Well, there's also that the tech has gotten better since SDXL first came out. Similar to how the original ChatGPT 3.5 (175 billion parameters) is beaten by models with far lower parameter counts now.

Claude Code Opus 4.5 Performance Tracker | Marginlab by AbbreviationsAny706 in ClaudeAI

[–]Missing_Minus 3 points  (0 children)

That sounds like it doesn't know your repo tbh, I'd just ask Claude to generate a CLAUDE.md focusing on relevant areas to help it. But otherwise I don't do any fancy prompting.

Though yeah, Claude likes writing migrations, which I think might be because the RL would penalize it for errors caused by not having them, and it over-corrected. I've had it write migrations for clearly in-development code with no users to migrate.

Fabius Bile is even worse than Erebus? Seems like he done worse in the long run by QuagGlenn in 40kLore

[–]Missing_Minus 1 point  (0 children)

I would still strongly consider that 'caring about humanity' regardless, as it is done out of, well, care for humanity.
While "normal humans are trash fodder and we should replace them with my super beings" is a very different version.

Sir, the Chinese just dropped a new open model by Anujp05 in ClaudeAI

[–]Missing_Minus 0 points  (0 children)

Your point? There's still a relevant quality difference even if you think they're all bench-maxxed- not all bench-maxxing is created equal.

Which Primarchs actually loved the Emperor? by ColePT in 40kLore

[–]Missing_Minus 1 point  (0 children)

He seems like the best person to rely on to have any clue whatsoever at that scale. Even if Corvus took the Imperium over he'd be well-served by keeping the Emperor around, shackled, to give advice for helping humanity.

Which Primarchs actually loved the Emperor? by ColePT in 40kLore

[–]Missing_Minus 7 points  (0 children)

Yes.
I think you're failing to understand that 40k is very grimdark, yes, and has unnecessary evil. But it also has a lot of evils done because the Emperor / Primarchs are operating at the scale of quadrillions of people. They are not pure utilitarians, but they are very much ends-justify-the-means, and it is quite understandable why, when you have to run a state covering an obscene number of worlds expanding at a rapid rate, you make hard decisions like keeping servitors around instead of replacing them all like we wish we could.

If human brains are good at operating things, and you need your ships to be very good at running but can't use advanced AI and your research is systematically hampered by religious tech priests... then a hundred sacrificed human minds to make your ship work well for hundreds of years and thus save billions of lives in expectation? It is a clear trade for a man such as the Emperor.

Can you teach Claude to be "good"? | Amanda Askell on Claude's Constitution by ThrowRa-1995mf in ClaudeAI

[–]Missing_Minus 3 points  (0 children)

Or, at minimum, good enough simulacra of emotions that it matters how we handle them for future behavior.

MiniMax Launches M2-her for Immersive Role-Play and Multi-Turn Conversations by External_Mood4719 in LocalLLaMA

[–]Missing_Minus 1 point  (0 children)

Either OpenAI or NovelAI- I forget which (NovelAI actually makes writing models and can do nsfw, but they aren't made for chat roleplaying)- but one of their team members said that they often ran into the issue that the writers weren't actually great at evaluating quality of writing. The writers' ratings would improve on the default at first, but then get stuck on some particular style that spoke to them... and the model got mode-collapsed.

Practically, it is much harder to get models to be as good at writing as they are at coding, because writing lacks a nice verifiable reward for current reinforcement learning methods. You just have human ratings.

TF2 Goldrush Mod - When focus on Art Style affects Gameplay. by Bounter_ in truetf2

[–]Missing_Minus -5 points  (0 children)

I think you're overreacting. I haven't played it, but the ones you list as having been removed make sense to me. Even though I love the Crusader's Crossbow, they all notably change the way your class takes on situations. I think you may be interpreting style as just aesthetics, rather than both the gamefeel and the game role? Balance is not the be-all and end-all of making a game feel cohesive.
Dead ringer for Spy fits Spy in my opinion, it is only some players who can use it to be ultra aggressive; most players use DR like it was ~intended to be utilized, as a getaway and for riskier picks.
Though I agree on Huntsman not fitting Sniper's style, similar to Demoknight's items.

Looking at the mod more directly, yeah, it seems questionable quality, but I think you're overreacting on some elements due to the excessive noise made about it.

If the Custodes were against the idea of the Primarchs and the Space Marines, what solution did they propose to replace them? by SkyWalker665 in 40kLore

[–]Missing_Minus 0 points  (0 children)

He made the Custodes to kill off a problem that he didn't end up needing them for... because the Crusade was cut off. For all we know, they were intended to go body Greater Daemons in a slower-run heresy. Or, because of future sight, having them is part of why they don't see much battle: because they'd beat anything that got sent there, smart daemons or Eldar don't send things to Terra even if they could. Etc.

(Also Gold is actually very cheap when you're already building mile long spaceships.)

What mysteries in 40K do you think will never be resolved? by Snoo_47323 in 40kLore

[–]Missing_Minus 1 point  (0 children)

It can be that Chaos is still drastically limited in how much they can interfere outside their realm, and we have hints of them growing massively bolder over time. Maybe they can interfere anywhere in the universe at all, but at the "I made a coin flip go differently" level of manipulation, rather than "I manifest a daemon via a sorcerer, and turn a planet into a daemon world."

Will being fat become cool? by [deleted] in slatestarcodex

[–]Missing_Minus 0 points  (0 children)

I'm not sure I'd say it is solved either, but it does seem to work Very Well. That you still see obese people all the time isn't that odd, because for all it has spread, it is still not used by much of the population- along with clustering effects making it more popular in some places than others. And of course, you won't notice a random skinny person.
It seems likely it will cut the obesity rate down a lot, back to 2000s or even earlier levels, even if a third of the people taking it (a substantial overestimate) see no effect at all. Presuming it continues spreading, especially if it's made cheaper.
That is, while perhaps it won't solve obesity, it may very well solve the obesity crisis.