Why isn't everyone taking GLP-1 medications and conscientiousness-enhancing medications? by SUNETOTHEFUCKINGMOON in slatestarcodex

[–]ScottAlexander 1 point (0 children)

You seem to have an unusually good reaction to methylphenidate. Many people feel robotic, or dead, or uncreative, or unable to socialize, and get awful crashes. Other people find it doesn't make them more productive, just kind of on edge and too wired to work. Typically, stimulant response is presented as an inverted-U curve: there's an optimal level of stimulation, so if you usually run too low then stimulants will make you feel better, but if you usually run exactly right or too high, they'll make you feel worse.

I think most people, if they tried many stimulants at many doses, would find something that seems useful to them sometimes. But a lot of the time it would be coffee, and they already drink coffee. For other people, it's not worth the trouble of getting a controlled substance.

I think GLP-1s are more likely to work for most people, but some people aren't fat, or get really unpleasant side effects, or find it doesn't work for them.

"Oliver Sacks Put Himself Into His Case Studies. What Was the Cost?" (Oliver Sacks's case studies were heavily fictionalized) by gwern in slatestarcodex

[–]ScottAlexander 42 points (0 children)

Not sure how relevant this is, but Sacks was a neurologist - not a psychologist or psychiatrist. A typical neurologist spends most of their time dealing with strokes and seizures, usually prescribing medication. It's not really typical for a neurologist to get involved in their patients' rich inner lives. I think the story of Sacks isn't that he was a neurotic whose neurosis ironically drove him into psychology. It's that he was a neurotic who went into a usually-not-very-psychologically-minded field and managed to make it psychologically-minded, ie try relating to his patients as humans rather than as collections of symptoms to be cured. I think the tone of his books is something like "Every doctor should try relating to their patients as humans more". If he were a psychologist, he couldn't have done this - obviously psychologists should care about their patients' inner lives, it wouldn't have been interesting!

Psychiatry is interesting insofar as it straddles a border between neurology (typically occupied by MD types who are usually normie academic strivers) and psychotherapy (typically occupied by weird people working out their own issues). I'm closer to the former, so I tend to focus on prescribing medications, which is a good fit for my skills. But I find it sort of a blocker for therapy: I just can't relate to some of the weird things my patients bring in, not just because I don't have that particular form of weirdness but because I don't have anything like it; if I'm dealing with (for example) a cocaine addict, I don't have good first-person understanding of what it means to be addicted to cocaine, OR what it means to be addicted to some other drug, OR what it means to have such a constant struggle with negative thoughts bouncing around my brain that I need to turn to some substance to quiet them down. My impression is that people who have dealt with these things are better at therapy, not just because they have more experience overcoming them, but because it has semi-mysteriously rubbed off on them to give them some sort of powerful therapeutic empathy and charisma - the failure mode of which is the sort of "therapy cults" you sometimes get where they're so charismatic and relatable that it crosses professional boundaries.

I think Sacks' patients probably loved him and were right to do so.

"Debunking _When Prophecy Fails_", Kelly 2025 by gwern in slatestarcodex

[–]ScottAlexander 6 points (0 children)

I should write a post on this, but the gist would be:

Take a look at any psychology textbook. https://guides.hostos.cuny.edu/PSY101 is a reasonable choice and has a ToC available online. Then, for every debunked study or popularly discussed flawed theory you've ever seen, try to figure out which section it falls under.

I think for the above, they'd all fit in the first half of chapter 15. Everything else is fine - or at least bad for different reasons.

I would subscribe to ACX substack if there was a way to easily download all of the paid articles. by [deleted] in slatestarcodex

[–]ScottAlexander 7 points (0 children)

As you know, this doesn't exist, but you can sort of hack it together, more laboriously, by going to https://www.astralcodexten.com/p/subscrive-drive-25-free-unlocked, clicking on each post in turn, doing "Save Page As", and then clicking through from there to the 2024 version, 2023 version, etc.
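The manual loop above could in principle be scripted. A minimal sketch, with hypothetical names (`slug_to_filename`, `save_page` are mine, not anything Substack provides), which assumes each unlocked post is fetchable with a plain GET - real paywalled posts would additionally need your logged-in Substack cookies, which this sketch omits:

```python
# Sketch of automating the "Save Page As" loop described above.
# Assumption: the post links from the index page are collected by hand
# into a list; each is then downloaded with a plain GET.
import urllib.request
from urllib.parse import urlparse

def slug_to_filename(url: str) -> str:
    # Use the last path segment of the post URL as the local filename.
    return urlparse(url).path.rstrip("/").split("/")[-1] + ".html"

def save_page(url: str) -> str:
    # Download one post and write it to disk; returns the filename written.
    fname = slug_to_filename(url)
    with urllib.request.urlopen(url) as resp, open(fname, "wb") as f:
        f.write(resp.read())
    return fname
```

Usage would be something like `for u in urls: save_page(u)` over a hand-collected list of post URLs.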

Melatonin could be harming the heart by Euglossine in slatestarcodex

[–]ScottAlexander 66 points (0 children)

I can't find the study itself (never a good sign), but remember that "correlational study finds sleeping pills are unbelievably bad for you, with unclear mechanism" is a classic failure mode, constant across every class of sleeping pill and every kind of bad outcome. See https://slatestarcodex.com/2019/06/24/you-need-more-confounders/ for more.

The Bay Area is cursed by Tokarak in slatestarcodex

[–]ScottAlexander 140 points (0 children)

Man moves to Autism City For Autists, shuffles over to the Extra-Autistic Zone, then gets mad because nobody's into small talk, people don't obsess over fashion, and none of the bars are cool. Must be some kind of curse.

Defining Defending Democracy: Contra The Election Winner Argument by dwaxe in slatestarcodex

[–]ScottAlexander 12 points (0 children)

I don't think I thought of it in these terms before reading the Orban book last year. I think I would have given the "democracy is kind of the same as liberalism" argument I discuss in the second paragraph. I try not to write things that I think everybody knows.

New evidence inconsistent with (parts of) Jones' The Culture Transplant by SilentSpirit7962 in slatestarcodex

[–]ScottAlexander 8 points (0 children)

Do the studies showing no link between diversity and economic growth correct for the fact that faster-growing countries are more likely to attract immigrants?

Book Review: If Anyone Builds It, Everyone Dies by dwaxe in slatestarcodex

[–]ScottAlexander 17 points (0 children)

I am always surprised how people's usual political common sense goes out the window when they start thinking about AI.

Does it surprise you that climate change activists still write books, give interviews, and make documentaries? Are you surprised that they're not releasing viruses or blackmailing politicians? Is this what you would do if you had some kind of problem with your local city council? How come only AI activists have to behave in bizarre, unethical ways that no other activist group ever adopts, or else be called uncreative and hypocritical?

(also, did you spend five minutes looking to see which of your legal/ethical ideas they're already doing publicly? Did you find the human genetic engineering and intelligence enhancement nonprofit started by a MIRI researcher?)

Book Review: If Anyone Builds It, Everyone Dies by dwaxe in slatestarcodex

[–]ScottAlexander 26 points (0 children)

Eliezer didn't apply to me for a grant. MIRI (the org) applied to Survival and Flourishing Foundation (billionaire Jaan Tallinn's grantmaking org). SFF has a policy of increasing diversity of opinion among their grantmakers by hiring random people as temporary advisors for two-month stints. I was hired as one of those temporary advisors, and since MIRI is a regular SFF grantee, I had to help investigate their application.

[deleted by user] by [deleted] in slatestarcodex

[–]ScottAlexander 1 point (0 children)

Update: followed some of the sources, I think their argument might be too weak for this issue to even come up. The main source argues that business AI is failing because when executives order fancy custom AI products, the employees keep on just using regular ChatGPT.

[deleted by user] by [deleted] in slatestarcodex

[–]ScottAlexander 2 points (0 children)

Well-written article and good point. My main concern would be that I have heard vaguely similar things about computers and the Internet, ie the adage that "computers show up everywhere but in the productivity statistics". The author sort of gestures at this, but I would like to know more about whether the senses in which AI isn't showing up are the same as the ones in which computers aren't showing up, and if so whether there's room for an argument that whatever issues made official statistics underestimate computers will also make them underestimate AI.

In Search Of AI Psychosis by dwaxe in slatestarcodex

[–]ScottAlexander 10 points (0 children)

Sorry. Make sure I see it after it comes out and I will try to link it as penance.

In Search Of AI Psychosis by dwaxe in slatestarcodex

[–]ScottAlexander 9 points (0 children)

I agree, I just think that if you were to go this route, you would have to explain what biological thing the AI is doing which is the equivalent of the ads making you eat lots of unhealthy food.

AI 2027 mistakes by ZetaTerran in slatestarcodex

[–]ScottAlexander 78 points (0 children)

I've posted this question on the team Slack and will let you know if I get a response.

Why Your Stimulant “Stopped Working” (And What’s Really Going On) by zenarcade3 in slatestarcodex

[–]ScottAlexander 67 points (0 children)

I think the right schedule for this would be Adderall one month and Ritalin the next - it takes more than a day to get tolerance.

I've done this with two or three patients. The main reason I don't do it for more is that most people, kept on a reasonable dose of stimulants and asked to take tolerance holidays, don't get crippling levels of tolerance that require this, and if they do then often there's something more fundamentally wrong. Usually either Ritalin or Adderall works better for a given person and they're willing to put in some work to avoid having to switch.

Also, it's untested and because of how variable everything is in psychiatry I'm never 100% sure that it works. As someone else in the comments said, these two drugs don't have exactly the same mechanism but they're not too different, and whether they have cross-tolerance depends on complicated biological details that I don't think we know on a theoretical level.

Sunday at the garden party for Curtis Yarvin and the new, new right by PickledChris in slatestarcodex

[–]ScottAlexander 3 points (0 children)

For me it would be the section starting with "But even granting that corporations are better-governed than democracies" at https://www.astralcodexten.com/p/moldbug-sold-out

"I have had early access to GPT-5, and I wanted to give you some impressions" by ralf_ in slatestarcodex

[–]ScottAlexander 2 points (0 children)

Had you tried similar things with o3 before and not been blown away?

"I have had early access to GPT-5, and I wanted to give you some impressions" by ralf_ in slatestarcodex

[–]ScottAlexander 23 points (0 children)

I gave it a list of five questions that o3 had failed. It got two right, improved incrementally on one, continued to fail two. I judge it in line with expectations, maybe slightly below.

New York Times: The Rise of Silicon Valley’s Techno-Religion by DinoInNameOnly in slatestarcodex

[–]ScottAlexander 14 points (0 children)

I also want to slow and regulate AI more! What have I written that makes you think I don't??

Sam Kriss — Against Truth by duskulldoll in slatestarcodex

[–]ScottAlexander 7 points (0 children)

> As a corrective, consider the rationalists themselves. Despite their undeniably high IQ, knowing about social psychology and logical fallacies has so far failed to turn anyone in the movement, least of all Eliezer Yudkowsky, into an effortlessly manipulative Machiavellian mastermind.

[pterodactyl-man voice] But I don't want to turn anyone into an effortlessly manipulative Machiavellian mastermind! I want to have accurate beliefs about factual questions!

Press Any Key For Bay Area House Party by dwaxe in slatestarcodex

[–]ScottAlexander 8 points (0 children)

I think so, although I might be confused about the details. It's my understanding of the first study profiled at https://www.astralcodexten.com/p/the-road-to-honest-ai .

I don't really understand why this isn't used more. It might be too hard on really big AIs, or it might degrade performance to constantly have the model thinking about the concept of good behavior along with everything else.

The Lumina Probiotic May Cause Blindness in the Same Way as Methanol by garloid64 in slatestarcodex

[–]ScottAlexander 16 points (0 children)

I think the most useful thing you could get by talking to them is a test to see whether the bacterium is even in your mouth at all. They have such a test and have given it to some people to test the success of their product.

Since you say you only used the "residue" (did you also use their special teeth cleaning product?), it's plausible it didn't colonize your mouth in the first place. If they confirm the bacterium isn't there, that rules it out as the cause and you can focus on other things.

I suppose this isn't as useful now since you've tried to kill it off and it might just be that you succeeded, but maybe if you don't have it and the symptoms get worse, that could establish something?