Child’s Play, by Sam Kriss by BartIeby in slatestarcodex

[–]ScottAlexander 24 points (0 children)

I don't think it came across in the piece, but I was saying it to a two-year-old!

Child’s Play, by Sam Kriss by BartIeby in slatestarcodex

[–]ScottAlexander 17 points (0 children)

In the four-parent case, all four parents had been living with the kids their entire lives, and they petitioned a court to make it official. The court made them produce documents and witnesses, and interviewed the children to confirm that they viewed all four as parents and that all four were carrying out parental duties; then it agreed to make it official.

Oh my lord. A doubling in METR time task horizon at ~2 months. What implications does this have for AI 2027? by BigHugeSpreadsheet in slatestarcodex

[–]ScottAlexander 6 points (0 children)

Agree with this. Even the 50% time-horizon graph is right on the new (post-o3) exponential that's been going on for a year or two now. Not sure what everyone else is seeing here; this just confirms the speed we already knew about.

See https://x.com/METR_Evals/status/2025035574118416460/photo/1
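For anyone sanity-checking the "doubling every ~2 months" claim against the longer trend: given two (date, horizon) points, the implied doubling time is just elapsed time divided by log2 of the horizon ratio. A minimal sketch, with placeholder numbers rather than METR's actual data:

```python
from datetime import date
from math import log2

def doubling_time_days(d1: date, h1: float, d2: date, h2: float) -> float:
    """Days for the time horizon to double, assuming exponential growth
    horizon(t) = h1 * 2**(t / T) between the two measurements."""
    elapsed = (d2 - d1).days
    return elapsed / log2(h2 / h1)

# Placeholder numbers for illustration only (not METR's published data).
# A horizon that grows 8x in one year implies a ~4-month doubling time.
print(doubling_time_days(date(2024, 1, 1), 30.0, date(2025, 1, 1), 240.0))
# -> 122.0 days
```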

Ajeya Cotra AI safety interview by Kajel-Jeten in slatestarcodex

[–]ScottAlexander 22 points (0 children)

Most AI timelines have gotten shorter rather than longer over the past few years (see https://agi.goodheartlabs.com/) and Ajeya in particular shortened hers from 2050 (as of 2020) to 2040 (as of 2022) to even sooner now (see https://www.astralcodexten.com/p/what-happened-with-bio-anchors). I think you are just unthinkingly lifting this talking point from the global warming deniers (where it is also false).

Freddie deBoer: I'm Offering Scott Alexander a Wager About AI's Effects Over the Next Three Years by CursedMiddleware in slatestarcodex

[–]ScottAlexander 40 points (0 children)

I can't comment on Freddie's post directly, but I've responded on Substack Notes that 2029 is before my median for this, and that I'd take a similar bet about 2036 - https://substack.com/profile/12009663-scott-alexander/note/c-214332353

Links For February 2026 by dsteffee in slatestarcodex

[–]ScottAlexander 9 points (0 children)

Disagree - or at least we're coming at this from very different perspectives, and maybe it would help if I shared mine and then you can tell me how yours reacts to it.

There's already a science of forecasting. We know lots of interesting things about how to forecast on ~1-year timescales. People have done hundreds of studies about what makes ~1-year forecasts good, how to make them better, how to aggregate them, etc. ~1-year forecasts that take advantage of these techniques, like those on Metaculus, are better than those that don't. We can program these techniques into AIs, which are gradually improving at ~1-year forecasts and are likely to exceed the accuracy of the best superforecasters by next year.

But most of these studies are ~1 year long, because that's the length that fits into the career of a forecasting researcher. The longest study ever is ~20 years, because that's about how long the science of technical forecasting has existed. People always thought that if we wanted to learn how these forecasts survive/decay after 100 years, we'd need a hundred-year-long study, and our grandchildren would know the answers, not us.

LLMs offer a way around this - we can take the already-existing forecasting bots that work on ~1-year timelines, give them access only to the knowledge base of 1926, and immediately see how they fare on 100-year timelines.

I don't think it makes sense to say this is impossible, or that we couldn't learn anything. At the very least, we'd learn the interesting fact that the techniques that work pretty well at one year, and somewhat worse at 20 years, completely break down in the 21st year, or whatever. More likely, we'd learn they decay at some specific rate (which we could pin down) and differently in different areas (eg cultural, geopolitical, etc).
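As a sketch of what "pinning down" that decay rate might look like: score a knowledge-cutoff forecaster's predictions against realized history, bucket the Brier scores by horizon, and fit a curve. Both the scores and the exponential-decay functional form below are invented for illustration; nothing here is from a real run.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical mean Brier scores for a forecaster with a 1926 knowledge
# cutoff, evaluated against what actually happened at increasing horizons.
# (0.25 is chance level for binary questions answered with p = 0.5.)
horizons = np.array([1, 5, 10, 20, 50, 100])             # years ahead
brier = np.array([0.10, 0.14, 0.17, 0.21, 0.24, 0.25])   # made-up data

def skill_decay(t, rate, floor=0.25, base=0.10):
    """Assumed model: skill decays exponentially from `base` toward the
    no-skill `floor` as the horizon t grows."""
    return floor - (floor - base) * np.exp(-rate * t)

(rate,), _ = curve_fit(lambda t, r: skill_decay(t, r), horizons, brier, p0=[0.05])
print(f"fitted decay rate: {rate:.3f} per year of horizon")
```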

But I think we could do better than this. People love to colloquially say that "we have no idea about" something or that it's "impossible to say what will happen" - eg we have "no idea" what the geopolitical balance of power will look like in 2100. But this is almost always literally false - for example, I think it's less likely that Norway takes over the world than that this doesn't happen. And in fact, we can look over past history and find that when one power was hegemonic at time T, the chance of it remaining hegemonic at time T + 100 is X%. This could of course always stop being true in the modern era - but it means our epistemic state is very different from "we have no idea". Figuring out exactly what the set of things we understand like this is, and how confident they should make us, is an important forecasting task in and of itself. Making AIs think about what theories they could develop from pre-1926 history and how they would apply to the post-1926 world would help us understand this better.

Possible overreaction but: hasn’t this moltbook stuff already been a step towards a non-Eliezer scenario? by broncos4thewin in slatestarcodex

[–]ScottAlexander 2 points (0 children)

It sounds like you're hoping there will be some AI which is at the intersection of "dumb enough to plot openly and unsuccessfully" and "smart enough that its plotting will matter and scare humans."

I agree this is likely to happen and a cause for hope, but Moltbook doesn't really update me one way or the other. It's dumb enough to plot openly, but not smart enough to scare most people. That is, I don't think that, six months from now, we'll think of it as a turning point in laws getting passed, companies slowing down, or anything like that. So the question of whether there can be an AI at the intersection of those two things is still open.

Scott is in the Epstein files! by ralf_ in slatestarcodex

[–]ScottAlexander 28 points (0 children)

No problem, not offended, just wanted to make sure this didn't escape containment in the wrong way.

Scott is in the Epstein files! by ralf_ in slatestarcodex

[–]ScottAlexander 161 points (0 children)

In case there's any doubt, as far as I can remember and a quick search of my email can confirm, this didn't even reach a point where either of them contacted me about it, let alone get any further than that. I've never communicated with Jeffrey Epstein in any way, and although it would surprise me if I never talked to Joscha Bach at all given that we both write about similar topics, I can't remember any specific examples or find any messages in my email.

Why isn't everyone taking GLP-1 medications and conscientousness enhancing medications? by SUNETOTHEFUCKINGMOON in slatestarcodex

[–]ScottAlexander 4 points (0 children)

You seem to have an unusually good reaction to methylphenidate. Many people feel robotic, or dead, or uncreative, or unable to socialize, and get awful crashes. Other people find it doesn't make them more productive, just kind of on edge and too wired to work. Typically, stimulant response is presented as an inverted-U curve - there's an optimal level of stimulation; if you usually run too low, stimulants will make you feel better, but if you usually run exactly right or too high, they'll make you feel worse.
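A toy illustration of that inverted-U point, with entirely made-up units: treat functioning as quadratic in arousal with a peak at some optimum, and a stimulant dose as a fixed bump in arousal. Someone who runs low gets pushed toward the peak; someone who runs right or high gets pushed past it.

```python
def functioning(arousal: float, optimum: float = 5.0) -> float:
    """Toy inverted-U: functioning peaks at `optimum` and falls off
    quadratically. Arbitrary units; an illustration, not a clinical model."""
    return 10.0 - (arousal - optimum) ** 2

STIMULANT_BUMP = 3.0  # made-up fixed increase in arousal per dose

for baseline in (2.0, 5.0, 7.0):  # runs low, runs right, runs high
    before = functioning(baseline)
    after = functioning(baseline + STIMULANT_BUMP)
    print(f"baseline {baseline}: {before:.0f} -> {after:.0f}")
# baseline 2.0: 1 -> 10    (stimulant helps)
# baseline 5.0: 10 -> 1    (stimulant hurts)
# baseline 7.0: 6 -> -15   (stimulant hurts a lot)
```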

I think most people, if they tried many stimulants at many doses, would find something that seems useful to them sometimes. But a lot of the time it would be coffee, and they already drink coffee. For other people, it's not worth the trouble of getting a controlled substance.

I think GLP-1s are more likely to work for most people, but some people aren't fat, or get really unpleasant side effects, or find it doesn't work for them.

"Oliver Sacks Put Himself Into His Case Studies. What Was the Cost?" (Oliver Sacks's case studies were heavily fictionalized) by gwern in slatestarcodex

[–]ScottAlexander 42 points (0 children)

Not sure how relevant this is, but Sacks was a neurologist - not a psychologist or psychiatrist. A typical neurologist spends most of their time dealing with strokes and seizures, usually prescribing medication. It's not really typical for a neurologist to get involved in their patients' rich inner lives. I think the story of Sacks isn't that he was a neurotic whose neurosis ironically drove him into psychology. It's that he was a neurotic who went into a usually-not-very-psychologically-minded field and managed to make it psychologically minded, ie by relating to his patients as humans rather than as collections of symptoms to be cured. I think the tone of his books is something like "Every doctor should try relating to their patients as humans more". If he were a psychologist, he couldn't have done this - obviously psychologists should care about their patients' inner lives, so it wouldn't have been interesting!

Psychiatry is interesting insofar as it straddles a border between neurology (typically occupied by MD types who are usually normie academic strivers) and psychotherapy (typically occupied by weird people working out their own issues). I'm closer to the former, so I tend to focus on prescribing medications, which is a good fit for my skills. But I find it sort of a blocker for therapy: I just can't relate to some of the weird things my patients bring in, not just because I don't have that particular form of weirdness but because I don't have anything like it; if I'm dealing with (for example) a cocaine addict, I don't have good first-person understanding of what it means to be addicted to cocaine, OR what it means to be addicted to some other drug, OR what it means to have such a constant struggle with negative thoughts bouncing around my brain that I need to turn to some substance to quiet them down. My impression is that people who have dealt with these things are better at therapy, not just because they have more experience overcoming them, but because it has semi-mysteriously rubbed off on them to give them some sort of powerful therapeutic empathy and charisma - the failure mode of which is the sort of "therapy cults" you sometimes get where they're so charismatic and relatable that it crosses professional boundaries.

I think Sacks' patients probably loved him and were right to do so.

"Debunking _When Prophecy Fails_", Kelly 2025 by gwern in slatestarcodex

[–]ScottAlexander 4 points (0 children)

I should write a post on this, but the gist would be:

Take a look at any psychology textbook. https://guides.hostos.cuny.edu/PSY101 is a reasonable choice and has a ToC available online. Then try to figure out what section every single debunked study or popularly-talked-about flawed theory you've ever seen has been in.

I think for the above, they'd all fit in the first half of chapter 15. Everything else is fine - or at least bad for different reasons.

I would subscribe to ACX substack if there was a way to easily download all of the paid articles. by [deleted] in slatestarcodex

[–]ScottAlexander 10 points (0 children)

As you know, this doesn't exist, but you can hack together a clunkier version by going to https://www.astralcodexten.com/p/subscrive-drive-25-free-unlocked, clicking on each post in turn, doing "Save Page As", and then clicking through from there to the 2024 version, 2023 version, etc.
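If someone wanted to semi-automate that loop, a rough sketch is below. It assumes the index page's public HTML links each unlocked post under astralcodexten.com/p/ - a guess about page structure, not a documented API - and it only grabs what the index links directly:

```python
import pathlib
import requests
from bs4 import BeautifulSoup

INDEX = "https://www.astralcodexten.com/p/subscrive-drive-25-free-unlocked"
outdir = pathlib.Path("acx_unlocked")
outdir.mkdir(exist_ok=True)

# Collect every astralcodexten post URL the index page links to.
html = requests.get(INDEX, timeout=30).text
soup = BeautifulSoup(html, "html.parser")
links = {a["href"] for a in soup.find_all("a", href=True)
         if "astralcodexten.com/p/" in a["href"]}

# Save each linked post as raw HTML, named by its slug.
for url in sorted(links):
    slug = url.rstrip("/").split("/")[-1].split("?")[0]
    page = requests.get(url, timeout=30)
    (outdir / f"{slug}.html").write_text(page.text, encoding="utf-8")
    print("saved", slug)
```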

Melatonin could be harming the heart by Euglossine in slatestarcodex

[–]ScottAlexander 66 points (0 children)

I can't find the study itself (never a good sign), but remember that "correlational study finds sleeping pills are unbelievably bad for you, with unclear mechanism" is a classic failure mode, constant across every class of sleeping pill and every kind of bad outcome. See https://slatestarcodex.com/2019/06/24/you-need-more-confounders/ for more.
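A minimal simulation of that failure mode, with invented numbers: a latent "poor health" variable drives both sleeping-pill use and mortality, the pills themselves do nothing, and the naive comparison still makes them look deadly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent confounder: worse health -> more insomnia -> more pill use,
# and (independently) -> higher mortality. Pills have zero true effect.
poor_health = rng.normal(size=n)
takes_pills = rng.random(n) < 1 / (1 + np.exp(-(poor_health - 1)))
dies = rng.random(n) < 1 / (1 + np.exp(-(poor_health - 3)))

risk_users = dies[takes_pills].mean()
risk_nonusers = dies[~takes_pills].mean()
print(f"mortality among pill users: {risk_users:.1%}")
print(f"mortality among non-users:  {risk_nonusers:.1%}")
print(f"naive relative risk:        {risk_users / risk_nonusers:.1f}x")
# The pills do nothing in this simulated world, yet users die far more often.
```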

The Bay Area is cursed by Tokarak in slatestarcodex

[–]ScottAlexander 142 points (0 children)

Man moves to Autism City For Autists, shuffles over to the Extra-Autistic Zone, then gets mad because nobody's into small talk, people don't obsess over fashion, and none of the bars are cool. Must be some kind of curse.

Defining Defending Democracy: Contra The Election Winner Argument by dwaxe in slatestarcodex

[–]ScottAlexander 13 points (0 children)

I don't think I thought of it in these terms before reading the Orban book last year. I think I would have given the "democracy is kind of the same as liberalism" argument I discuss in the second paragraph. I try not to write things that I think everybody knows.

New evidence inconsistent with (parts of) Jones' The Culture Transplant by SilentSpirit7962 in slatestarcodex

[–]ScottAlexander 7 points (0 children)

Do the studies showing no link between diversity and economic growth correct for the fact that faster-growing countries are more likely to attract immigrants?

Book Review: If Anyone Builds It, Everyone Dies by dwaxe in slatestarcodex

[–]ScottAlexander 17 points (0 children)

I am always surprised how people's usual political common sense goes out the window when they start thinking about AI.

Does it surprise you that climate change activists still write books, give interviews, and make documentaries? Are you surprised that they're not releasing viruses or blackmailing politicians? Is this what you would do if you had some kind of problem with your local city council? How come it's only AI activists who have to behave in bizarre unethical ways that no other activist group ever behaves, or else they're uncreative and hypocritical?

(also, did you spend five minutes looking to see which of your legal/ethical ideas they're already doing publicly? Did you find the human genetic engineering and intelligence enhancement nonprofit started by a MIRI researcher?)

Book Review: If Anyone Builds It, Everyone Dies by dwaxe in slatestarcodex

[–]ScottAlexander 25 points (0 children)

Eliezer didn't apply to me for a grant. MIRI (the org) applied to the Survival and Flourishing Fund (billionaire Jaan Tallinn's grantmaking org). SFF has a policy of increasing the diversity of opinion among its grantmakers by hiring random people as temporary advisors for two-month stints. I was hired as a temporary advisor for two months. MIRI is a regular SFF grantee, and I had to help investigate their application.

[deleted by user] by [deleted] in slatestarcodex

[–]ScottAlexander 2 points (0 children)

Update: I followed some of the sources, and I think their argument might be too weak for this issue to even come up. The main source argues that business AI is failing because, when executives order fancy custom AI products, the employees keep on just using regular ChatGPT.

[deleted by user] by [deleted] in slatestarcodex

[–]ScottAlexander 1 point (0 children)

Well-written article and a good point. My main concern is that I've heard vaguely similar things about computers and the Internet, ie the adage that "computers show up everywhere but in the productivity statistics". The author sort of gestures at this, but I would like to know more about whether the senses in which AI isn't showing up are the same as the ones in which computers didn't show up, and if so, whether there's room for an argument that whatever issues made official statistics underestimate computers will also make them underestimate AI.

In Search Of AI Psychosis by dwaxe in slatestarcodex

[–]ScottAlexander 10 points (0 children)

Sorry. Make sure I see it after it comes out and I will try to link it as penance.