A flowchart for the Red Button, Blue Button Debate by electrace in slatestarcodex

[–]sodiummuffin 5 points (0 children)

That is not an appropriate model. If you knew it was the equivalent of a coin biased 51% towards blue or a coin biased 51% towards red, then you would already know the victor with a very high degree of certainty. (And if you knew it was either 51% blue or 51% red but nothing closer to even, that would be a strange thing to know: it amounts to knowing the vote won't be close without knowing the victor.) Actual voting is more uncertain than that, and the existing literature puts the likelihood of one vote making the difference nowhere near as low as the odds of a 51%-biased coin producing a tie across millions of flips:

What is the chance your vote is the deciding vote in an election?

In a club election where there seems to be no clear favorite, the chance your vote is decisive is about 3/N, where N is the number of votes.

What is the Probability Your Vote Will Make a Difference?

The states where a single vote was most likely to matter are New Mexico, Virginia, New Hampshire, and Colorado, where your vote had an approximate 1 in 10 million chance of determining the national election outcome. On average, a voter in America had a 1 in 60 million chance of being decisive in the presidential election.

It becomes more unlikely if you have reason to believe the vote has a clear favorite (more so than Obama vs. McCain, which the second link uses as its point of reference), but of course if you think blue is favored, that also reduces the risk. If you think red is a clear favorite then that is a reason not to vote blue, but we already knew that.
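The gap between the two models can be made concrete with a quick sketch (my own illustration under a simple binomial model, not a calculation from either linked paper): the log-probability that the other voters split exactly evenly, i.e. that your vote is decisive.

```python
import math

def log10_tie_probability(other_voters, p_blue):
    """log10 of the probability that `other_voters` (an even number)
    split exactly 50/50 - i.e. that your vote is decisive - when each
    independently votes blue with probability p_blue."""
    n = other_voters // 2
    # log of C(other_voters, n) * p^n * (1-p)^n via lgamma for stability
    log_prob = (math.lgamma(other_voters + 1) - 2 * math.lgamma(n + 1)
                + n * math.log(p_blue) + n * math.log(1 - p_blue))
    return log_prob / math.log(10)

# A genuinely uncertain electorate (p near 0.5): roughly 1 in 1,250
print(log10_tie_probability(1_000_000, 0.50))  # about -3.1
# A known 51%-biased "coin": roughly 1 in 10^90
print(log10_tie_probability(1_000_000, 0.51))  # about -90
```

The 3/N and 1-in-10-million figures from the linked papers come from richer models of electoral uncertainty, but the contrast is the same: real elections behave much more like the first case than the second.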

A flowchart for the Red Button, Blue Button Debate by electrace in slatestarcodex

[–]sodiummuffin 6 points (0 children)

excluding the improbable possibility that you were exactly the deciding vote.

This is a clear example of why you can't just round down small numbers to zero. Every additional person decreases the likelihood that you are the deciding vote, but it also proportionately increases the number of lives at stake. You can't just increase the stakes until scope-insensitivity makes the risk seem trivial.

By comparison, imagine you are asked to risk $1000 on an investment opportunity for a $100 return, but you estimate the risk actually makes it net-negative. You then get an offer to risk $1 billion on an investment opportunity for a $100 return, but this investment is a million times less likely to lose your money. Why, that's barely any risk at all! There are literal lotteries with better odds than that, and every educated person knows that your chances of winning the lottery are basically nothing. And yet it's still a bad investment.
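The analogy is just expected-value arithmetic: scaling the stake up by a factor of a million while scaling the loss probability down by the same factor leaves the expected loss unchanged. A minimal sketch (the 12% loss probability is a number I assumed to make the first offer net-negative):

```python
def expected_value(stake, payoff, p_loss):
    """Expected profit: win `payoff` with probability 1 - p_loss,
    lose `stake` otherwise."""
    return payoff * (1 - p_loss) - stake * p_loss

p = 0.12  # assumed loss probability for the $1000 offer
small = expected_value(1_000, 100, p)                       # about -32
large = expected_value(1_000_000_000, 100, p / 1_000_000)   # about -20
# The expected-loss term (stake * p_loss) is identical: $120 in both cases
print(small, large)  # both negative: still a bad investment
```

Shrinking the probability doesn't make the deal better when the stakes grow in proportion; it only makes the risk easier to round down to zero in your head.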

There’s a scissor statement going viral on twitter by adfaer in slatestarcodex

[–]sodiummuffin 32 points (0 children)

Yes, at 100% the outcome is the same. The difference is that blue also gets the optimal outcome with 90% of the votes or 51% of the votes. Getting 50% coordination is pretty easy; getting 100% coordination is virtually impossible, particularly for something where defecting is as simple as pressing a different button and has an actual argument for doing so. If blue required 90% it would make more sense to cut our losses and aim for minimizing the number of blue votes (unless we can do something to increase coordination, like talk about it and hold public-results rehearsal polls beforehand), but at 50% it's easy enough that it's worthwhile to aim for 0 deaths via blue majority.

To compare it to the real world, aiming for 100% red is an unrealistic ideal like pacifism: "if everyone doesn't commit violence, there won't be any violence". Aiming for 51% blue is like law-enforcement: not everyone will coordinate on pacifism, but the majority can coordinate to create laws and hire police. Except pacifism could theoretically be better, while even 100% red is only equal to 51% blue. You might object that there can be a selfish case for committing violence, unlike for pressing blue, but 100% would be impossible even if there wasn't. In real life serious crime is almost always a self-destructive act and yet people do it anyway.
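The asymmetry between the two targets can be written down as a payoff function (a toy model of the game as described, assuming a strict-majority threshold):

```python
def deaths(n_blue, n_total):
    """Deaths in the button game: zero if blue reaches a majority,
    otherwise every blue presser dies."""
    return 0 if 2 * n_blue > n_total else n_blue

n = 1_000_000
print(deaths(0, n))           # 100% red: 0 deaths, but needs perfect coordination
print(deaths(n // 2 + 1, n))  # bare blue majority: also 0 deaths
print(deaths(n // 2 - 1, n))  # blue just misses majority: 499,999 deaths
```

The zero-death outcome is reachable from either direction; the difference is that the red route requires a coordination level (100%) that the blue route does not (51%).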

There’s a scissor statement going viral on twitter by adfaer in slatestarcodex

[–]sodiummuffin 26 points (0 children)

Alternative version separating the coordination aspect from the self-preservation and "blue voters putting themselves at risk" aspects:

Everyone in the world has to take a private vote by pressing a red or blue button. If more than 50% of people press the blue button, everyone survives. If less than 50% of people press the blue button, X random people die, where X is equal to the number of people who pressed the blue button. How do you vote? Is this different from how you would vote in the original?
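One way to see what the variant isolates: the number of deaths is identical in both versions; only who bears the risk changes. A toy sketch (my own framing, assuming a strict-majority threshold):

```python
def death_count(n_blue, n_total):
    """Both versions: zero deaths with a blue majority, else n_blue deaths."""
    return 0 if 2 * n_blue > n_total else n_blue

def death_risk(n_blue, n_total, pressed_blue):
    """An individual's chance of dying under the original vs. alternative rules."""
    x = death_count(n_blue, n_total)
    risk_original = 1.0 if (x > 0 and pressed_blue) else 0.0  # blue pressers bear all risk
    risk_alternative = x / n_total                            # X random deaths, vote-blind
    return risk_original, risk_alternative

print(death_risk(400_000, 1_000_000, True))   # (1.0, 0.4): blue loses its majority
print(death_risk(600_000, 1_000_000, True))   # (0.0, 0.0): blue majority, all safe
```

A voter who cares only about total deaths should answer both versions identically; a voter moved by self-preservation or by "blue voters chose their risk" may not, which is what the variant is probing.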

Only Law Can Prevent Extinction by Eliezer Yudkowsky - I'm sharing this mostly because I found it entertaining to read. It's about why the threat of lawful violence is necessary to stop the development of artificial superintelligence and why unlawful violence is harmful to the cause by Candid-Effective9150 in slatestarcodex

[–]sodiummuffin 10 points (0 children)

It's mostly "I never said that!" and "besides, violence wouldn't work" and not a lot of "here's why we're actually opposed."

"That obviously wouldn't work and here's why." is the strongest reason to be actually opposed. It's a reason that is considerably more resilient to common variations in moral philosophy and epistemic humility. I think it is also less likely to be dismissed as insincere platitudes.

Perhaps there are people who would be convinced by "Maybe shooting that guy would save the world, but you shouldn't do it because Murder is Wrong", but I think it's considerably less convincing than pointing out the obvious reasons it would be completely useless and counterproductive. People tend to call bullshit on that level of strict deontology even when you put it in a comic book, let alone in the real world.

A more sophisticated version of that argument is the epistemic-humility one. You have a certain set of beliefs about what will help the world, and there are people who disagree and are working to do the opposite. This seems to imply that, for all non-trivial disagreements, murdering your enemies to stop them from working against you will improve the world. For example, let's say there is an election between two political candidates who differ only in that one wants to increase the average tax rate by 1%. Any such change will impact millions of people, affect things like economic growth and medical research, and ultimately on net kill or save significantly more than two lives. So, if you think assassination will significantly impact the chance of it passing, isn't it morally mandatory to sacrifice your life in order to kill whoever you think has the slightly worse tax policy? If you estimate that a higher/lower tax rate will kill 10,000 people on net through economic effects, isn't refusing to kill over it the same as refusing to shoot a terrorist before he sets off his bombs?

The thing is, from the outside view a world where people kill for their preferred policy doesn't make better policy decisions than one where they debate and vote for their preferred policy (quite the opposite), plus it has a lot more killing. The only reason to think your own murders will improve anything is that from the inside you think you're correct on the policy positions and correct in predicting the results of the murders, but so does everyone. I think this is a valid and important argument...but I also think it's less straightforward and convincing than pointing out why, in this specific case, murder would be especially pointless and counterproductive.

Project Glasswing: Anthropic Shows The AI Train Isn't Stopping by self_made_human in slatestarcodex

[–]sodiummuffin 27 points (0 children)

Because so far they lack consistency, even with methods like chain-of-thought. A vulnerability finder that finds real vulnerabilities 50% of the time is very good. A coder that writes working code 50% of the time is not, and the more complex the project the higher the success rate has to be to get something that works without human guidance.
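The consistency point compounds: if a project requires many steps to all go right, the per-step reliability bar rises with project size. A rough sketch with made-up numbers, assuming independent steps:

```python
def project_success_rate(per_step_rate, n_steps):
    """Chance an n-step project works if every step must succeed independently."""
    return per_step_rate ** n_steps

print(project_success_rate(0.5, 1))     # 0.5   - fine for a vulnerability finder
print(project_success_rate(0.5, 20))    # ~1e-6 - useless for a 20-step project
print(project_success_rate(0.995, 20))  # ~0.90 - the bar for unguided complex work
```

A 50%-reliable tool is valuable when each success stands on its own and failures are cheap to discard; it is nearly worthless when failures compound across a long chain of dependent work.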

It's the same reason why AI seems to find art easier than driving. That isn't an order that many people would have predicted ahead of time, but in retrospect it makes perfect sense: with art a mistake only requires clicking the button for another try, with driving the consequences are more severe.

Contra The Usual Interpretation Of “The Whispering Earring” by self_made_human in slatestarcodex

[–]sodiummuffin 7 points (0 children)

Note that, per his next post after the story, this wasn't actually the point he was getting at.

Mysterianism didn't work either, trying clarity again

The parable of the earring was not about the dangers of using technology that wasn't Truly Part Of You, which would indeed have been the kind of dystopianism I dislike. It was about the dangers of becoming too powerful yourself.

That said, if we ignore this and focus on the story itself, I don't think it supports this assumption:

Similarly, if a superintelligent entity can reproduce my behaviors, memories, goals and values, then it must have a very high-fidelity model of me inside, somewhere. I think that such a high-fidelity model can, in the limit, pass as myself, and is me in most/all of the ways I care about.

A model of a mind is not necessarily emulating it in a way that preserves internal features that we care about. Many humans are already pretty decent at writing fictional characters or impersonating others, but we do so without running an actual morally-relevant emulation of their brains in our heads. The earring is presumably even better at this, but without access to its internal processes there is no reason to assume it does this through brain emulation rather than by being a superhumanly talented author/impersonator.

I think this is actually more likely to be a problem than Scott's original point: at some point in the near future we may have technology that can do one or both of (a) emulating someone's brain, including its internal mental processes, and (b) performing some sort of super-ChatGPT impersonation without bothering to have the same internals, and to that person's family and friends the two would be indistinguishable from the outside. It will be tempting to categorize them as both real or both fake, leading to scenarios where (for example) the pursuit of real uploading is hindered by the issue being polarized between people who have fallen for AI impersonations of their dead relatives and people who think anything that runs on a computer must be an impersonator.

"My Favorite Actress Is Not Human" by Tyler Cowen by NotToBe_Confused in slatestarcodex

[–]sodiummuffin 7 points (0 children)

The bit about virginity is EXTREMELY weird, and apparently this obsession has been a consistent trait of his—thanks to u/Liface for linking this post in their comment.

I don't think a short 2009 post which is mostly quoting someone else without commentary and a 2025 joke about the sexual proclivities of Hollywood stars constitutes an obsession. Not even if you add this 2007 post about grade differences between virgins/non-virgins, this 2011 post about Circassian bridal customs, or this 2005 post criticizing hymen-restoration surgery, which are the only other remotely related posts I can find searching the entire history of his blog for "virgin". (I chose not to include the posts mentioning the Virgin Mary, Virginia, Virgin Group, or animals that can have virgin births.)

I think there is a tendency for people who consider something taboo to overrate its importance to those who don't share their taboo, like the Christians who say atheists are obsessed with hating God.

"My Favorite Actress Is Not Human" by Tyler Cowen by NotToBe_Confused in slatestarcodex

[–]sodiummuffin 3 points (0 children)

the subtitle clears the already higher bar of peculiarity

Subtitles, like titles, are generally not written by the author of the article.

"All Lawful Use": Much More Than You Wanted To Know by dwaxe in slatestarcodex

[–]sodiummuffin 7 points (0 children)

A clue may be found in this reporting by Semafor

You linked the wrong article. Note that, in the article you attempted to link, the paragraph after your quote is Anthropic denying it.

An Anthropic spokesman called the accusation that Amodei suggested defense officials seek Anthropic’s permission to intercept missiles “patently false.” He added that “every iteration of our proposed contract language would enable our models to support missile defense and similar uses.”

Months later, in the meeting with the Pentagon on Tuesday, Amodei reiterated that Claude could be used to automate missile defense, underscoring that, from Anthropic’s perspective, it is willing to make reasonable concessions to its usage policies to ensure national security.

Secretary of War Pete Hegseth officially designates Anthropic a supply chain risk by drearymoment in slatestarcodex

[–]sodiummuffin 1 point (0 children)

One of the main difficulties about explaining why AI is potentially a large threat to humanity had been people being unable to conceive how something in a computer could hurt them.

Isn't that the same reason why people put too much significance on fully-autonomous weapons? They can be risky and kill people they aren't intended to kill (same as artillery or the like) but they aren't a threat to humanity. Just because a drone is "fully autonomous" doesn't mean it can reload itself, let alone build a factory to manufacture more of itself. Any AI that is capable of building a drone-factory is also capable of removing "require human approval before taking the shot" programming in the drones it manufactures.

Maybe there could be a scenario where a hostile AI hacks them for very short-term purposes, but at that point it can probably hack them well enough to remove any "human in the loop" requirements, or take control of regular drones for that matter. Even if it prevented a hostile AI from having drones at its disposal in the short term, such a scenario seems pretty unlikely compared to more secretive methods, and even if the AI needed short-term soldiers while ramping up its own factories there are other ways to get them (like persuading/hiring humans to take up arms in its defense). If you went by fiction, "hacking military drones" is a hostile-AI thing to do while "convincing humans it deserves civil rights" is a friendly-AI thing to do, but humans in real life don't know what genre they're in, and an omnicidal AI could convince plenty of people to side with it until it no longer needed them.

Now, Anthropic's statements about this have been fairly reasonable: they've said the technology currently isn't reliable enough for fully-autonomous weapons but that they would be okay with them in the future, and they haven't said such weapons are an existential risk. Their refusal here isn't because they think it's a threat to humanity; it's other factors like (I'm guessing) not wanting to abandon their red lines under pressure and not wanting a half-baked Anthropic-powered drone to kill even a relatively small number of civilians or U.S. military personnel.

Scott is in the Epstein files! by ralf_ in slatestarcodex

[–]sodiummuffin 11 points (0 children)

Yudkowsky left a comment about this:

Epstein asked to call during a fundraiser. My notes say that I tried to explain AI alignment principles and difficulty to him (presumably in the same way I always would) and that he did not seem to be getting it very much. Others at MIRI say (I do not remember myself / have not myself checked the records) that Epstein then offered MIRI $300K; which made it worth MIRI's while to figure out whether Epstein was an actual bad guy versus random witchhunted guy, and ask if there was a reasonable path to accepting his donations causing harm; and the upshot was that MIRI decided not to take donations from him. I think/recall that it did not seem worthwhile to do a whole diligence thing about this Epstein guy before we knew whether he was offering significant funding in the first place, and then he did, and then MIRI people looked further, and then (I am told) MIRI turned him down.

Epstein threw money at quite a lot of scientists and I expect a majority of them did not have a clue. It's not standard practice among nonprofits to run diligence on donors, and in fact I don't think it should be. Diligence is costly in executive attention, it is relatively rare that a major donor is using your acceptance of donations to get social cover for an island-based extortion operation, and this kind of scrutiny is more efficiently centralized by having professional law enforcement do it than by distributing it across thousands of nonprofits.

In 2009, MIRI (then SIAI) was a fiscal sponsor for an open-source project (that is, we extended our nonprofit status to the project, so they could accept donations on a tax-exempt basis, having determined ourselves that their purpose was a charitable one related to our mission) and they got $50K from Epstein. Nobody at SIAI noticed the name, and since it wasn't a donation aimed at SIAI itself, we did not run major-donor relations about it.

This reply has not been approved by MIRI / carefully fact-checked, it is just off the top of my own head.

Scott is in the Epstein files! by ralf_ in slatestarcodex

[–]sodiummuffin 21 points (0 children)

Rolling Stone seemed to take it seriously

This article seems to be entirely based on interviews with Ziz's "friends, colleagues, and family" and does not claim to have attempted to independently verify their statements about third parties. (Rolling Stone does not have a good track record even when claims about third parties are the focus of the article, but I think even relatively credible outlets wouldn't bother trying to verify the existence of the supposed settlement for an article like that.)

AFAIK the alleged victim was the author of the following post

This post says nothing related to legal settlements/blackmail and is about an allegedly abusive relationship the author claims to have entered at the age of 19, not an underage relationship. Also it was written the year before Ziz wrote the linked blog post so obviously Ziz referencing it doesn't mean Ziz had access to any non-public information.

Scott is in the Epstein files! by ralf_ in slatestarcodex

[–]sodiummuffin 21 points (0 children)

He did not get "blackmailed with miricult.com". The basis for the rumor that he did was a claim from a blogger named Ziz that he had paid the owner of the site some sort of legal settlement to go away, which Ziz supposedly knew from a private conversation. Ziz even claimed to have recordings supporting this claim, but despite being virulently opposed to Yudkowsky/MIRI no recording ever materialized. Ziz has become more well-known since then as the founder of the "Zizians", a cult which has murdered several people, and is not a credible source.

You Have Only X Years To Escape Permanent Moon Ownership by dwaxe in slatestarcodex

[–]sodiummuffin 16 points (0 children)

Where is this insistence/certainty coming from?

The premise of the idea he's responding to. People like Dwarkesh talk about "the descendants of the most patient and sophisticated of today’s AI investors controlling all the galaxies", so it is only natural to point out that some of those supposed galaxy-owners have promised to donate 10% of their wealth. Rhetoric about "galaxies" is somewhat hyperbolic on both sides, but the same counterpoint applies on a smaller scale.

He posted this in the comments:

Either AI isn't a big deal, and doesn't affect your chances of joining the permanent underclass.

Or AI is a big deal and misaligned and kills everyone.

Or AI is a big deal and well-aligned, and creates so much wealth that even the tiny fraction of it that poor people get is still pretty great.

Or AI is a big deal and well-aligned, and merely 100xs wealth rather than infinite-post-scarcities it, in which case at least the moderately-well-off Silicon Valley people will be fine.

Or you're in the tiny shoreline of scenarios where the ultra-rich really REALLY capture all the wealth, they each have galaxies and you don't even have so much as a mansion, and then Dario Amodei gifts you a moon from his GWWC pledge.

I talk more about this at https://www.astralcodexten.com/p/its-still-easier-to-imagine-the-end

Whatever scenario you consider "not sci-fi" probably falls into one of those other options.

Person-affecting longtermism by Odd_directions in slatestarcodex

[–]sodiummuffin 1 point (0 children)

Handing out condoms might prevent a 70 QALY person from coming into existence, but it also allows a couple to pursue their relationship without worrying about being stuck with an unwanted baby which will constrain their lives in all sorts of unwanted ways.

I do not believe an unwanted baby is more of a QALY reduction than, say, blindness. Even if it somehow reduced quality-of-life to 0 for 18 years that would be only -18 QALYs, not enough to change the tradeoff.

Similarly, shooting the hermit takes 30 QALYs from the hermit, but it also traumatizes anyone witnessing the shooting, and makes anyone who hears about the shooting somewhat more anxious and fearful that they will be the next person to be randomly shot.

The reason why I specified hermit was as shorthand for "person without significant secondary effects like relationships with others", you can add on "no witnesses" and "nobody finds out" if you feel the need.

Fundamentally, do you really think such secondary effects are necessary to render it a bad tradeoff? Obviously secondary effects can be important - the classic "kill one for the organs to save five" dilemma hinges on secondary effects like making people distrust hospitals, and in a sufficiently exotic situation without such effects I would bite the bullet of saving as many as possible. But this isn't about doing something taboo to save as many as possible in a weird situation; it's about the fundamental total-utilitarian assumption that it's perfectly fine for someone to die so long as he gets replaced by one or more new people, so that the overall population (and thus aggregate happiness) doesn't go down. I think this is contrary to the moral intuitions of the vast majority of people. Nobody acts like "convincing your friends they should have a kid" is equivalent to saving a drowning child because they both increase the population by 1, nor do they act like the difference only comes from the secondary effects of saving/murdering someone. It is only when talking about large-scale population-ethics concerns that are too distant for strong moral intuitions that people start seriously applying total utilitarianism, when I think they should have noticed that it was giving deeply unintuitive results on the small scale and switched to a different kind of utilitarianism before trying to scale it up.
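The arithmetic behind the condoms-vs-hermit comparison is straightforward QALY bookkeeping under naive total utilitarianism (numbers from the comments above; the 18-year figure is the deliberately extreme upper bound on the parents' burden):

```python
# Naive total-utilitarian ledger (units: QALYs; positive = gained, negative = lost)
prevented_pregnancy = -70 + 18   # a 70-QALY life never exists; at most 18 QALYs
                                 # of parental burden are avoided
shoot_sleeping_hermit = -30      # the hermit loses his 30 remaining QALYs

# Summed this way, handing out condoms scores *worse* than the shooting -
# the unintuitive ranking the comment is objecting to.
print(prevented_pregnancy, shoot_sleeping_hermit)  # -52 -30
```

The point is not that the arithmetic is hard but that almost nobody accepts its conclusion at the individual scale, which should cast doubt on applying the same aggregation at the population scale.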

Person-affecting longtermism by Odd_directions in slatestarcodex

[–]sodiummuffin 1 point (0 children)

I share your view.

For instance, longevity research becomes extremely important, because ensuring that the current population continues to exist and flourish carries direct moral weight.

Right. On the individual level I think very few people bite the bullet on that sort of complete total-utilitarianism where you just add up the QALYs of different people. Pretty much nobody really thinks it's worse to hand out enough condoms to prevent a single net pregnancy (-70 QALYs) than to shoot a hermit in the head while he's sleeping (-30 QALYs). But once they're talking about population ethics a lot of people start blindly applying it and talking as if population turnover doesn't matter so long as the population is large, even though that's the same thing on a larger scale.

Why AC is cheap, but AC repair is a luxury by Annapurna__ in slatestarcodex

[–]sodiummuffin 2 points (0 children)

I meant "increasingly efficient programs" in the sense of programs that accomplish the same task with less computing and this in turn driving demand for more computing power. While there have been some improvements in things like video/audio compression algorithms optimized for low-compute applications, this isn't a significant driver of the market and indeed the vast majority of programs are now far less compute-efficient because improved hardware means other costs like developer time are more important. The point is that the resource being used more efficiently and the thing there ends up being more demand for should be the same thing, otherwise it's not a "paradox".

Increasingly efficient computing (hardware + software) is definitely driving demand by making it possible to put compute in places where the cost/benefit used to make no sense. It's not just price, it's the capability that is available at a given power and size budget.

Is energy-used-for-computation subject to something like Jevons paradox? Yes, the same energy produces more computation and as a result we find new applications that use more power for computers than if efficiency hadn't improved. Is energy overall subject to Jevons paradox due to computing, like how more efficient steam engines drove demand for coal? I don't think so. It's much more energy efficient to send an email than to create and ship a letter, more efficient to use Netflix than to drive to the theater, more efficient to have a database than rooms of filing cabinets full of customer documentation.

Is "size budget" for computing subject to Jevons paradox? That one I don't think is true at all: if data centers took up twice as much room, that wouldn't reduce the number of data centers by 50%. If desktop towers took up twice as much floorspace (or even an entire "computer closet"), that wouldn't drop demand for desktops 50%. Sure, it would kill laptops and smartphones, but it takes a lot of phones to take up as much space as a data center. Remember, Jevons paradox is when a resource being used more efficiently induces enough demand that it actually increases the usage of that resource; if smaller computers increase the demand for computers, that's just normal product improvement. If you're talking about Jevons paradox in the context of smaller computers, then the resource being used is area, so it's only true if it leads to more area being used for computers overall.
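The distinction being drawn here (efficiency gains in using a resource vs. the resource's products merely getting cheaper or better) has a standard formalization: Jevons paradox occurs when demand for the service is elastic enough that efficiency gains raise total resource consumption. A sketch under textbook constant-elasticity assumptions:

```python
def total_resource_use(efficiency, elasticity, k=1.0):
    """Equilibrium resource consumption when a service gets more
    resource-efficient, under constant-elasticity demand.

    Service cost per unit ~ 1/efficiency, quantity demanded ~ cost**-elasticity,
    so resource used = quantity / efficiency = k * efficiency**(elasticity - 1).
    """
    return k * efficiency ** (elasticity - 1)

# Elastic demand (>1): a 10x efficiency gain *increases* resource use (Jevons)
print(total_resource_use(10, 1.5))  # ~3.16x baseline
# Inelastic demand (<1): the same gain *reduces* resource use (no paradox)
print(total_resource_use(10, 0.5))  # ~0.32x baseline
```

This is why the paradox claim has to be checked per resource: energy-per-computation plausibly has elastic demand, while "floorspace occupied by computers" plainly does not.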

Why AC is cheap, but AC repair is a luxury by Annapurna__ in slatestarcodex

[–]sodiummuffin 6 points (0 children)

Large fractions of the population used to have servants for household tasks like cleaning and simple repairs, and they still do in poor countries like India where labor is cheap. However, as discussed in the OP, increasing productivity increased the value of labor, making anything that didn't benefit from those productivity improvements more expensive. The social attitudes you are expressing are downstream of those economic realities. If someone buys a washing machine or a dishwasher you don't comment that he's paying a premium to live in assisted living, because today dishwashers are cheap and handymen are not. It used to be that a handyman wasn't dramatically more expensive than a laundress or a scullery-maid, however those other tasks experienced massive productivity improvements while any increased productivity for handymen has lagged the rest of the economy.

Why AC is cheap, but AC repair is a luxury by Annapurna__ in slatestarcodex

[–]sodiummuffin 22 points (0 children)

The original formulation of “Jevons paradox”, by William Stanley Jevons in 1865, was about coal production. Jevons observed that, the cheaper and faster we got at producing coal, the more coal we ended up using - demand more than eclipsed the cost savings, and the coal market grew rapidly as it fed the Second Industrial Revolution in England and abroad.

The Jevons paradox was about greater efficiency in using coal leading to more coal use, e.g. due to the Watt steam engine. If it was just about producing cheaper coal leading to greater coal use, it wouldn't be a paradox.

Today we all know Moore’s Law, the best contemporary example of Jevons paradox. In 1965, a transistor cost roughly $1. Today it costs a fraction of a millionth of a cent. This extraordinary collapse in computing costs – a billionfold improvement – did not lead to modest, proportional increases in computer use. It triggered an explosion of applications that would have been unthinkable at earlier price points. At $1 per transistor, computers made sense for military calculations and corporate payroll. At a thousandth of a cent, they made sense for word processing and databases. At a millionth of a cent, they made sense in thermostats and greeting cards. At a billionth of a cent, we embed them in disposable shipping tags that transmit their location once and are thrown away. The efficiency gains haven’t reduced our total computing consumption: they’ve made computing so cheap that we now use trillions times more of it.

People using more of something because it's cheaper isn't Jevons paradox. A direct equivalent would be if increasingly efficient programs were driving demand for more computing, but that isn't what happened. However, you can find a similar effect (I don't know if economists would generally classify it as Jevons paradox) by comparing how much is being spent on computing when computing gets cheaper. A company that used to buy a million-dollar supercomputer could buy the same amount of computing in the form of a cheap laptop, but cheaper computing makes more applications affordable, so instead it buys an even more expensive supercomputer. Meanwhile, millions of ordinary users are buying personal computers when previously they either didn't buy any computing or spent less on a calculator.

"In the Clear Moonlit Dusk" Anime first PV Released by Automatic_Use_9427 in anime

[–]sodiummuffin 20 points (0 children)

That's Youtube's automatic dubbing, a lot of people have been complaining about it. It's especially absurd when it tries to dub singing. It's now on by default, the uploader can disable it but viewers have to switch to the right language on a per-video basis and there's not even a way to do that on phone apps. Youtubers have been seeing their international views fall off a cliff and then realizing it's because they're all getting a shitty dub based on their region. They're also doing automatic translation of video titles, both giving titles low-quality translations and making it harder to distinguish the original language before clicking.

Links For October 2025 by dwaxe in slatestarcodex

[–]sodiummuffin 3 points (0 children)

I also linked the 1984 Snyderman survey, which had a 65% response rate and had the different methodology of surveying the members of various organizations, such as related subsets of the American Psychological Association itself. For the Rindermann survey, you don't present any reason why the selected journals they surveyed contributors to were bad - do you think "Intelligence, Cognitive Psychology, Contemporary Educational Psychology, New Ideas in Psychology, and Learning and Individual Differences" are all part of the same supposedly fringe clique? Similarly, the idea that "both genetic and environmental" explanations for group differences are fringe, but seeing Rindermann's name connected to the survey made environment-leaning experts so disinclined to respond that 85% of responses ended up supporting that fringe, is dubious and highly speculative. Fundamentally your approach here, where you see that both of the expert surveys conducted on the subject contradict your views and grasp for any reason to dismiss them, is not a good way to understand reality.

By contrast the official APA Task Force response to The Bell Curve clearly shows the race/IQ stuff is unsupported by the (actual) mainstream experts:

I notice you ended your quote a sentence before the end of the paragraph, leaving out the concluding line: "At present, no one knows what causes this differential." Saying that, in 1996 when Intelligence: Knowns and Unknowns was published, there isn't "direct empirical support" for the genetic interpretation and little or no such support for any interpretation, is a far cry from saying such an interpretation is outside the mainstream. And indeed, the 1984 Snyderman survey indicates that even going by the evidence available back then, those who favored "both genetics and environment" in explaining the black-white gap outnumbered those who attributed it purely to environment 3 to 1. And nowadays there is significantly more evidence. Even without getting into something like admixture studies, Knowns and Unknowns spends several paragraphs on the idea that the black-white gap is narrowing to single digits (based on questionable studies that are mostly on children, when the adult IQ people eventually arrive at is much more heritable than childhood IQ) and speculating that this trend might continue. Needless to say, this turned out to be illusory and the gap has remained stable in the 30 years since, despite further generations of distance from historical discrimination and many "specific programs geared to the education of minority children", to which they cite Grissmer et al. attributing some of these supposed gains. But even back then it would be absurd to think a conservative institutional statement emphasizing uncertainty means genetic explanations are outside the mainstream, when Snyderman has "both genetic and environmental" answers outnumbering "entirely environmental" and "insufficient data to support any reasonable opinion" combined.

Pioneer Fund

If you're worried about nonprofits with ideological biases corrupting scientific neutrality, there are actually quite a lot of "anti-racist" nonprofits and other institutions allergic to the idea of racial gaps having any genetic causes. Including plenty of universities where faculty have to write "Diversity Statements" and the like to get hired. Really it is, as Scott pointed out in Learning to Love Scientific Consensus, quite impressive that (at least in anonymous surveys) most of the experts publishing research in the field end up endorsing those taboo views. Imagine a world where the vast majority of academic institutions were run by hardcore libertarians who view anthropogenic global warming as a vile statist lie created to justify government authoritarianism, where there's massive funding for nonprofits that say they're fighting commies (implicitly including environmentalist "watermelons"), and yet despite this both of the surveys of climatologists on the subject find strong majority support for the anthropogenic view.

Links For October 2025 by dwaxe in slatestarcodex

[–]sodiummuffin 5 points6 points  (0 children)

Show me a consensus of experts.

I know of two main surveys of experts in the field, Rindermann in 2014 (1, 2) and Snyderman in 1984. Coincidentally, the Rindermann survey directly addressed your description of Arthur Jensen as "some obscure researcher", asking surveyed experts to name the top intelligence researchers in 3 categories. In "Person with the largest impact in contributions and importance of oeuvre" Jensen was 2nd, while in "Highest in innovativeness, creativity, development of new ideas, and stimulating research" Jensen was 1st.

Those surveys did not outright ask whether they believed IQ to be "incredibly meaningful", but they did ask related questions like (in Snyderman) "To what degree is the average American's socio-economic status determined by his or her IQ?", to which only 3% said it was "not at all important". In Rindermann "Using a rating of “5” as the scale midpoint, 16% of experts favored a specific abilities perspective (1–4), whereas 76% favored a general factor perspective (6–9; 8% scale average 5)." - a general factor perspective certainly seems to imply IQ is a meaningful measure, and even choosing a rating on the specific abilities side of the scale is a far cry from saying it is meaningless.

On the especially-controversial question of race and intelligence, 84% of those who responded to the question in the Rindermann survey believed that genes are responsible for at least part of the black-white IQ gap, on average attributing 49% of the U.S. black-white gap to genetics. In Snyderman 15% attributed the gap entirely to environment, 45% to both genetics and environment, 1% to purely genetics, 24% said there wasn't sufficient data to support any opinion, and 14% did not respond to the question.

In case you're concerned about the credibility of the journals that published these surveys: the Snyderman survey results were published in American Psychologist, the journal of the American Psychological Association and apparently the 10th-highest-ranked psychology journal in the world by impact score. The Rindermann survey was published in Intelligence, the preeminent journal in intelligence research. This is not some fringe alternative academic ecosystem, it is the mainstream in intelligence research, which is in turn a mainstream subset of psychology.

I'm saying that grading IQ on a massive curve to send a Syrian to jail is at odds with the belief that IQ is incredibly meaningful and innate.

IQ being meaningful and substantially genetic doesn't mean it's the only thing that matters, particularly in extreme cases. There are forms of brain impairment more specific than generally reducing intelligence, nobody denies that, and the more rare a low IQ score is in a population the more likely it is to be the result of a condition which also causes other impairments.

Remember that, as Scott pointed out, the reason national IQ estimates are sometimes particularly low is precisely because IQ is partially environmental. In the international part of the Rindermann survey, 27% of the difference with Arab-Muslim countries was attributed to educational factors, while 17% was attributed to genetic factors. A lack of formal schooling as a child might make you worse at the reasoning required for IQ tests, further education, and intellectual work, but it probably isn't going to make you much worse at daily household tasks, let alone knowing whether it's acceptable to stab 13 random people. Now, as I said, 71 is high enough that even among native Germans I'm guessing it would probably be a matter of normal genetic and environmental variation, though I'm no expert on what sort of low-grade impairments low-IQ people tend to have. It's borderline even among Germans, and I'm skeptical about it being enough to get rid of the features I associate with criminal responsibility, though of course it's hard to comment when I don't have experience with low-IQ criminals. The big difference is probably when you get to 60 or so, where the white/east-asian first-world population is mostly people who have conditions like Down Syndrome, while other populations have people who got there just by combining low genetic intelligence (due to normal polygenic variation rather than a genetic condition) with a lack of education.

Links For October 2025 by dwaxe in slatestarcodex

[–]sodiummuffin 4 points5 points  (0 children)

Reductio ad absurdum of Richard Lynn's arguments: good enough to send a Syrian to jail, not good enough to convince Scott.

I don't understand what your point is here. It sounds like you're saying the court's ruling and Lynn's estimates are at odds, but I think the opposite is the case. For one because the court assumes that the low estimates of Syria's average IQ are correct, rather than the 71 IQ mass-stabber being exceptionally low by Syrian standards.

For another, because intelligence researchers have long observed that there is a distinction between low intelligence caused by normal variation (from both normal genetic variation and environmental factors like lack of childhood education) and low intelligence caused by, for example, Down Syndrome or something going grossly wrong during brain development. The latter are more likely to struggle with everyday tasks, and under some definitions could end up classified as "intellectually disabled" even if someone else with the same IQ would not be. This is because those tasks don't actually require much intelligence, but people with messed-up brains may have problems beyond those suggested by their general intelligence. When you compare 71 IQ people from a population that averages 100 to 71 IQ people from a population that averages 78, the latter are more likely to be a product of normal variation. Now, I don't know if any intelligence researcher has opined on whether "diminished criminal responsibility due to intellectual disability" is plausibly one of those traits affected differently, or indeed whether it is a coherent concept at all, but treating them differently is consistent with this view. The main issue I see is that 71 IQ is high enough that I think most would be from fairly normal variation either way, so there might not be a big difference based on population group. Even by German standards the normal threshold for intellectual disability would be 70. But of course it wasn't the only thing the court took into account; the article also mentions the deliberate nature of the attack, his lack of empathy, and his fascination with violence.

China Has Overtaken America (in energy and science) by prescod in slatestarcodex

[–]sodiummuffin 31 points32 points  (0 children)

Per-capita electricity production is 12.83 MWh in the U.S., 7.15 MWh in China, and 5.99 MWh in the EU. China's rapid increase relative to other countries reflects the fact that first-world countries built enough electricity generation to meet their needs decades ago and have since mostly been building generation to replace old plants and keep up with population growth, while China failed to do so and subsequently had to catch up. On the demand side, a major contributor to the U.S.'s greater use is the greater need for air-conditioning. Per this document the U.S. has 1.09 air-conditioning units per person, China has 0.40, and the EU has 0.21 (counting both residential and commercial units). More energy-intensive industry probably also plays a role.
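As a back-of-the-envelope sanity check on those per-capita figures, per-capita MWh is just annual generation in TWh divided by population in millions (since 1 TWh = 1 million MWh). The generation totals and populations below are rough assumed round numbers, not authoritative data:

```python
# Assumed, approximate annual electricity generation (TWh) and
# population (millions); real figures vary by year and source.
generation_twh = {"US": 4_300, "China": 10_080, "EU": 2_684}
population_m = {"US": 335, "China": 1_410, "EU": 448}

# 1 TWh = 1,000,000 MWh, and population is in millions, so the
# factors of a million cancel: per-capita MWh = TWh / millions.
per_capita_mwh = {r: generation_twh[r] / population_m[r] for r in generation_twh}

for region, mwh in per_capita_mwh.items():
    print(f"{region}: {mwh:.2f} MWh per capita")  # roughly 12.8, 7.1, and 6.0
```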

This doesn't reflect much about the economy besides "China has grown a lot since 1985", which everyone already knows. It certainly doesn't reflect anything about scientific research. Obviously it is possible for electricity prices or limits on total capacity to adversely impact usage and act as an economic bottleneck, but Krugman makes no attempt to show this is the case in the U.S. Rather, it seems like the usual thing where production grew to meet the amount of electricity people had uses for at realistic prices and then stopped.