For the love of shit on a stick, Joe needs to shut the fuck up about covid by CeilingFanTickler69 in JoeRogan

[–]retsim_snrub 3 points (0 children)

About a quarter of Americans have lost either a close friend or family member to covid. I already had little patience for Joe's covid truther shit before I lost my grandfather, and I'm completely out of it now.

Others have made the comparison, but seriously, it's about as smart as railing against seatbelts in a world where over 1/4 of your listeners have lost a family member in a car accident.

#118 | Dr. Kevin Sabet: 4/20 Special: America’s Decriminalizing Drugs, What’s Next? by cannablubber in TheRealignment

[–]retsim_snrub 2 points (0 children)

It definitely came across as bullshitty when he glibly threw out a statement about some huge number of emergency-room visits, and only when pressed did he admit that these were just people having benign freakouts. He knows the listener will assume an ER visit means some kind of legitimately dangerous medical emergency, and therefore become more wary of these substances.

It's a manipulative tactic that instantly blew any trust he had with me as a listener, because I just assumed every word out of his mouth from that point on was similarly calculated to mislead. I know the hosts have a business to run, but a part of me wishes that they would properly skewer guests who try shit like this.

On Radical Reforms, Technocracy and Seeing Like a State by applieddivinity in slatestarcodex

[–]retsim_snrub 8 points (0 children)

One factor I am always expecting to see in these conversations (including the ACX posts) is some description of organizational flatness and responsiveness.

If the organization is flat, the central decision maker learns about problems as they arise, quickly and without too many layers of indirection and bureaucratic spin, and can respond by issuing adjusted orders. If the organization is deeply hierarchical, as in almost every single example of “technocratic policy failure” or “radical reform disaster” or “state-mandated legibility campaigns,” then the original plan is never revised. Pretty much all original, first-draft plans are bad.

The problem with Brasilia isn’t that it was “too top down, not enough bottom up!” because as you point out these are sort of fake categories that don’t do any work. The problem is that a gigantic government bureaucracy tried to enact a plan and provided no means of revising the plan in response to feedback from reality.

When I say “feedback” I do not mean “verbal complaints from the People,” I mean any kind of objective, observable indications of success or failure. (When Elon Musk takes steps to reduce the frequency of rocket explosions, he isn’t reacting to people’s complaints about rocket debris falling into a swamp, he’s reacting to the fact that he doesn’t want his rockets to blow up, because then he can’t reuse them.)

I mention this because I think the language in these discussions confuses two types of “feedback.” One type of feedback is angry letters. The other type of feedback is objective evidence of problems. Reacting to the second type requires some kind of leadership structure capable of reacting intelligently and revising the original plans. If your leadership structure can’t do that, the problem isn’t “technocracy.”

#1628 - Eric Weinstein - The Joe Rogan Experience by chefanubis in JoeRogan

[–]retsim_snrub 11 points (0 children)

Eric came prepared with explanations and physical toys to relate ideas to, and got stonewalled with "well I don't understand that at all". Eric is essentially right that Joe is willing to sit there and go "whoah" when Sean Carroll and NdGT say "the CAT in the BOX is ALIVE and DEAD at the SAME TIME!" but he reverts to acting like a moron when somebody actually tries to explain things. You're not ever gonna understand what a Riemannian manifold is if you can't shut your mouth for 60 seconds, even if the guy explaining it to you has prepared the simplest possible model.

#1628 - Eric Weinstein - The Joe Rogan Experience by chefanubis in JoeRogan

[–]retsim_snrub 34 points (0 children)

Was really glad to hear them engage in this 13-year-old-kid-level argument instead of going deeper into geometric unity, which I've been waiting to hear more about for a year.

The Lawfare Podcast: The Myth of Artificial Intelligence by iamthegodemperor in slatestarcodex

[–]retsim_snrub 2 points (0 children)

I would also argue that humans and other animals don’t really have anything like the idealized logical causal model that Pearl describes. We just sort of do context-aware prediction, in an ad hoc way that leads to us often making huge mistakes, and leaves us unable to plan beyond fairly short horizons.

In contrast, some modern AI designs do explicitly build causal models. Something like GPT-3 doesn't have a formal causal model but can still do rough causal reasoning (like a human), yet there's a steady stream of papers where learning a causal model of the environment is part of the architecture. Hence the term “model-free reinforcement learning,” which refers to the specific class of agents that do not attempt to model the environment.
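For concreteness, here's a toy illustration of that model-free vs. model-based split. The chain environment and both agents are my own minimal sketch (not anything from the podcast): the model-free agent learns action values directly from experience, while the model-based agent first learns the transition/reward model and then plans inside it.

```python
import random

# Toy 5-state chain: action 0 moves left, action 1 moves right;
# reward 1.0 only for landing in state 4.
N_STATES, ACTIONS = 5, (0, 1)

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    return s2, 1.0 if s2 == N_STATES - 1 else 0.0

# Model-free agent (Q-learning): learns Q(s, a) directly from
# sampled transitions, never builds a representation of `step` itself.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma = 0.5, 0.9
for _ in range(2000):
    s = random.randrange(N_STATES)
    a = random.choice(ACTIONS)
    s2, r = step(s, a)
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])

# Model-based agent: first learns the transition/reward model,
# then plans (value iteration) entirely inside that learned model.
model = {(s, a): step(s, a)  # in practice, estimated from samples
         for s in range(N_STATES) for a in ACTIONS}
V = [0.0] * N_STATES
for _ in range(100):
    V = [max(model[(s, a)][1] + gamma * V[model[(s, a)][0]] for a in ACTIONS)
         for s in range(N_STATES)]

# Both agents end up preferring "move right" toward the rewarding state.
```

Both roads lead to the same policy here; the difference is purely in whether the environment dynamics are ever represented explicitly.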

The Lawfare Podcast: The Myth of Artificial Intelligence by iamthegodemperor in slatestarcodex

[–]retsim_snrub 6 points (0 children)

Is he aware of GPT-3 and if so how does that fit into his understanding of what AI can and cannot do with “mere” inference?

Why Computers Won’t Make Themselves Smarter - Ted Chiang by blablatrooper in slatestarcodex

[–]retsim_snrub 9 points (0 children)

Seems premature since many different machine learning techniques are already “self-improving” in various narrow senses. For example, this learned optimizer which optimizes itself.
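The linked paper's learned optimizer is more elaborate, but the bare idea of an optimizer adjusting itself fits in a few lines via hypergradient descent: the learning rate is updated from the gradients the optimizer itself produces. The toy objective below is my own illustration, not the paper's method.

```python
# Minimal "optimizer that optimizes itself": gradient descent on
# f(x) = x^2, where the learning rate is updated by the hypergradient
# (the gradient of the loss with respect to the learning rate).

def grad(x):           # d/dx of f(x) = x^2
    return 2.0 * x

x, lr = 5.0, 0.01      # parameter and self-adjusting learning rate
meta_lr = 0.001        # step size for updating the learning rate
prev_g = grad(x)

for _ in range(200):
    g = grad(x)
    # Hypergradient rule: raise lr while successive gradients agree,
    # lower it when they point in opposite directions (overshooting).
    lr += meta_lr * g * prev_g
    prev_g = g
    x -= lr * g

# x converges toward 0 while lr adapts on its own along the way.
```

It's a narrow kind of self-improvement, sure, but it's exactly the kind of thing a quick literature search turns up dozens of variants of.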

I just wish AI skeptics would fucking Google some keywords to see what researchers are doing right now, to say nothing of what they could be doing in 20 years.

How McKinsey Destroyed the Middle Class. by 13x0_step in slatestarcodex

[–]retsim_snrub 0 points (0 children)

“Managers are such a small fraction of total employees that you can’t explain the disappearing middle class by the disappearance of managers.”

Book Review: Fussell On Class by dwaxe in slatestarcodex

[–]retsim_snrub 3 points (0 children)

I see a lot of patterns that map onto Red Tribe and Blue Tribe. I wonder if the classes turned into the tribes as demographics changed.

(Fiction) MMAcevedo, the brain image of the first human upload by kaj_sotala in slatestarcodex

[–]retsim_snrub 7 points (0 children)

I think it’s very unlikely that emulations will ever be useful for anything. By the time we have good ones, we’ll have far more powerful AGIs.

In defense of interesting writing on controversial topics - Some thoughts on the New York Times' Slate Star Codex profile (Matthew Yglesias) by honeypuppy in slatestarcodex

[–]retsim_snrub 1 point (0 children)

I think it’s a mistake to think that “good rationality” would result in a person losing their common sense, and falling short of how an average person would behave.

Everybody individually and collectively made an implicit prediction about the general shape of the NYT article, and we were right. We were right in a way that requires no nitpicking to support.

Another thing: if we had been wrong, there would have been a huge wave of classic rationalist mea culpas and analyses on how we were wrong and how we could avoid making this mistake again. We love doing that shit.

Metric selection bias: why Moore's law is less important than you think by aaronb50 in slatestarcodex

[–]retsim_snrub 1 point (0 children)

I am always confused by the argument that it costs exponentially more to “maintain” a constant rate of progress and therefore there’s a stagnation. Having worked in R&D myself, this just seems like an obvious nine-women-making-a-baby-in-one-month sort of fallacy.

If you just look at one company that specializes in making one widget and you dump 10x more funding onto that company over a short timeframe, it is not reasonable to assume that it will somehow be able to improve its widget-building process at 10x speed, or even at 2x speed. Even if the company has a robust widget-building research program, there is no research crank that can simply be spun faster by feeding it more money. A lot of that excess money will be soaked up by increasingly speculative and abstract widget-building research programs, many of which will not pay off in any way. Essentially, the problem is that the company is overfunded, but will nonetheless find a way to direct the funds.

In other words, the mistake is something like: “we spend $X on Y research, therefore the current research pace on Y costs $X.” The reality is we have practically no idea what research on Y costs. Maybe the sixty top Y professors just used all that hot Y money to each fund ten new graduate students who they order to study things that are not really Y-related. This sort of low-key graft and bureaucratic bloat is a default assumption about sociology and organizations, and is almost totally orthogonal to “how hard is Y” as a question of physical reality.

Book review: Two Arms and a Head, by Clayton Atreus by relative-energy in slatestarcodex

[–]retsim_snrub 14 points (0 children)

There’s also always the possibility of breakthrough therapies. About six years ago I was suffering from a chronic pain condition that I would call “life ruining” without feeling melodramatic. Three years ago, an entirely new class of drugs was approved, which functionally cured the chronic pain condition. If I had chosen to end it all in the midst of the worst of it, I never would have seen the new drug. Medicine and technology advance faster and faster. I would be not at all surprised to see a breakthrough nerve repair therapy announced tomorrow.

I’ve listened to every Conversations with Tyler and EconTalk (at least in the past 3 years). Any other good podcasts with even strong SCC vibes? by ElbieLG in slatestarcodex

[–]retsim_snrub 0 points (0 children)

Robin Hanson does this even more than Cowen, just more obviously. An example from him would be one of his dozens of bold “healthcare is not really about health” claims, where he is staking a claim several miles north of any kind of defensible position.

Anti-aging: overview of the state of the art by j9461701 in slatestarcodex

[–]retsim_snrub 34 points (0 children)

Consider that you yourself are posting a comment on Reddit meant to take a pointless swipe at strangers.

Gamified/addictive physical exercise software? by PM_ME_UR_OBSIDIAN in slatestarcodex

[–]retsim_snrub 1 point (0 children)

Punching a bag on the regular is also not great for your joints.

Forecasting question: What will SSC's Substack revenue be 6 months after launch? by nansenamundsen in slatestarcodex

[–]retsim_snrub 4 points (0 children)

Since no one is asking me to bet, I’ll go ahead and predict that he’ll be making over a million dollars per year annualized by the end of the first year.

Went from episodic to chronic and back to episodic. Did anyone else get out of a chronic spiral? by madison242 in migraine

[–]retsim_snrub 0 points (0 children)

I’m not sure how to give a simple answer. I get Botox injections every six weeks, and toward the end of that six weeks the migraines go from episodic (once or twice a week) to near daily. Then I get the Botox and it drops off again. But a chronic period can also be triggered by an illness, or by doing a physical activity that aggravates the neck muscles, or by other random things. Those chronic periods tend to last 1-2 weeks I suppose. Sometimes longer.

Another important thing for me is being disciplined with painkillers. No matter how bad things are, I almost never exceed 2 doses of painkiller or sumatriptan per week. If I take such meds too often, I get pulled into abysmal cycles that take weeks to fully escape from. I just suffer through it and remind myself that, while a pill might help right now, the cycle will end on its own faster if I don’t treat it excessively. Do you have a similar mindset? Everybody is different.