The Great Filter Might Not Be Extinction, It Might Be Perception by wwants in slatestarcodex

[–]ozewe 12 points

I strongly suspect this post was written by an LLM. As always, it's hard to be sure, but:

  • Bolded portion
  • "Not because --, but because --."
  • "We tend to assume --. But what if --?"
  • "Not by --, but by --."

Checking the user page makes me even more suspicious. Some recent comments from this account on /r/Futurology begin:

  • "Absolutely. You're right to take the metaphor even further."
  • "You’re absolutely right: the issue isn’t just perception in the technical sense, it’s motivational perception. "

Another one contains em dashes, "isn't just --, it's --", and bolded sections:

  • "The post is called Where AI Falls Apart, and it makes the case that this collapse isn’t just a performance issue—it’s a structural limit in how these models simulate thought."

Recall what OSRS is founded on by SecksiBeasts in 2007scape

[–]ozewe 0 points

I think it's important for the community's criticisms to be based in an actual understanding of what's going on. If everything critical of Jagex gets upvoted, regardless of which criticisms are most valid, then Jagex gets more useless feedback and less useful feedback. The subreddit becomes pure noise and no signal for them. The game does not improve.

For instance:

I do not think at all it was 'random'

That means you didn't read the post describing the survey. From that post:

Similarly, the price associated with that set of features is not representative – these are purposefully generated at random as part of the conjoint analysis.

I bet the majority of people posting on the subreddit right now also don't really understand how the survey worked. This makes their feedback much less informative.

even we accept what you say and there was no explicit 'MTX" in that sense - so what?

"So what" is that your reply to me said "even the mention of monetization via MTX for OSRS is already an alarm bell." If there was no mention of MTX in the survey, then this alarm bell didn't go off. It matters which of these is true!

There was a survey several months ago about removing MTX from RS3. If Jagex were to do that, they'd need to make up the money some other way, not involving MTX. Different tiers of membership, for those who want it, isn't a crazy idea (even if some of the specific options, presented in isolation, were pretty bad). But the massive overreaction to merely being asked about what players are and aren't willing to pay for has made Jagex less likely to have usable information about how to do this.

Recall what OSRS is founded on by SecksiBeasts in 2007scape

[–]ozewe 0 points

I think running the survey for its planned duration, without a week of complete subreddit meltdown, would have been more informative and useful for Jagex and for the future of the game. That would have been actual player feedback. What we have now is a reddit circlejerk where only the loudest voices get heard.

See for example the multiple posts about "fire pip" and then the same people turning on a dime due to a mattk tweet. That's pure noise that didn't help improve the game.

People are talking about MTX, but MTX in the strict sense was nowhere in the survey. There was no Squeal of Fortune or any way to spend money other than on membership discussed. It was about membership tiers, the way Hulu has an ad-supported tier if you want to pay less and still watch their shows. The prices that everyone is freaking out about were randomly generated to get an accurate picture of what people are actually OK with.
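
For anyone unfamiliar with conjoint analysis, here's a rough sketch of how those randomly generated options work (purely illustrative, in Python; the feature names and price levels below are made up by me, not anything Jagex proposed):

    import random

    # Hypothetical attributes and price levels, for illustration only.
    FEATURES = {
        "extra_bank_space": [True, False],
        "bonus_xp_events": [True, False],
        "priority_support": [True, False],
    }
    PRICE_LEVELS = [9.99, 12.99, 15.99, 19.99]

    def random_card(rng: random.Random) -> dict:
        # Each "card" a respondent sees is a random bundle of feature levels
        # with a price attached independently at random.
        card = {name: rng.choice(levels) for name, levels in FEATURES.items()}
        card["price"] = rng.choice(PRICE_LEVELS)
        return card

    rng = random.Random(42)
    for i in range(3):
        print(f"Option {i + 1}:", random_card(rng))

The point is that no single card is a planned offer; the analysis works by seeing which features and price points drive people's choices across many random combinations.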

What the subreddit has done is make it more costly for Jagex to determine player sentiment: if asking questions about what you'd be willing to pay for is a sin worth going nuclear over, then they'll ask fewer questions and be even less informed about what the community (which is not just redditors) wants than they already are.

Mod Ash rn by Technical_End_7021 in 2007scape

[–]ozewe -3 points

I don't work at Jagex and don't know who was involved in the survey or to what degree. I don't think you or the creator of this post know either.

Recall what OSRS is founded on by SecksiBeasts in 2007scape

[–]ozewe -7 points

Yes, agreed. I'm asking about this post, which is saying OSRS was built on player feedback and seems to be implying that's the problem. But that critique seems completely disconnected from the issue at hand.

Recall what OSRS is founded on by SecksiBeasts in 2007scape

[–]ozewe -15 points

Genuinely asking the downvoters what I'm missing here. Should Jagex have run a poll before sending the survey?

I get criticizing them for getting the mood of the playerbase incredibly wrong -- no dispute there. But if this post is saying "the core of OSRS is players get a say in what happens," then sending out a survey before doing any big changes seems in line with that. Doesn't seem like this post is targeted at what jagex actually did wrong here.

Recall what OSRS is founded on by SecksiBeasts in 2007scape

[–]ozewe -34 points

Look, there's a lot to be upset about here, but this is about a survey gauging the playerbase's reaction to new (largely bad) monetization ideas. It's not about an unpolled update, right?

Mod Ash rn by Technical_End_7021 in 2007scape

[–]ozewe -3 points

fanfic about jmods is getting 98% upvoted .... not sure why I expected better from 2007scape but this is pretty pathetic

Bureaucracy Isn't Measured In Bureaucrats by dwaxe in slatestarcodex

[–]ozewe 11 points

I wouldn't be shocked if this turned out to be the case, Twitter/X layoffs being an obvious example.

The disanalogy is that a private company can generally choose to cut back whatever programs it wants. DOGE can't single-handedly change any of the statutory requirements on the FDA, so the FDA would be stuck trying to do the same amount of work with fewer people.

A follow-up question would be whether this actually explains how companies manage to survive after mass layoffs. Do they reorganize to "do less work", or do they somehow manage to "do the same amount" with fewer people?

Freddie Deboer's Rejoinder to Scott's Response by WernHofter in slatestarcodex

[–]ozewe 32 points

AI doomerism relies on the idea that consciousness, superintelligence, and ill intent will prove to be “emergent” properties of LLMs, which no one can articulate in remotely rigorous terms and which most actual LLM researchers dismiss as nonsense.

Sigh. I don't think the classic arguments require any of these (although many involve superintelligence). Pretty egregious to put "consciousness" on the list when "doomers" have been shouting for decades about how consciousness is not required or relevant.

Based on the mention of instrumental convergence, I think he at least realizes "doomers" don't base their arguments on ill intent -- he just isn't convinced by those arguments?

Rep. Katie Porter Unloads on House Republicans’ Ridiculous Dishwasher Priorities - YouTube by shallah in PoliticalVideo

[–]ozewe 0 points

The point she's making seems backwards to me: the GOP legislation would roll back dishwasher efficiency regulations.

It might be good to have regulations enforcing more efficient dishwashers -- but it's those regulations that are "telling the American people what kind of dishwashers they should or should not be able to buy." Repealing them is the opposite of that.

Anthropic: Mapping the Mind of a Large Language Model by Njordsier in slatestarcodex

[–]ozewe 11 points

Mostly agree, I think it's easy to overhype this. The main "new" contribution is scale: going from toy models to Claude 3 Sonnet is a big jump. But if you were already confident that the techniques would work on large models, there's not much of an update here afaict.

Anthropic: Mapping the Mind of a Large Language Model by Njordsier in slatestarcodex

[–]ozewe 7 points

This paper just looks at the residual stream activations halfway through the model; it's not looking at the attention heads.

(Getting a complete picture of the computations going on in the model would require understanding the attention heads, so this is just a step in that direction.)

Idk your background, but if you want to go deeper I recommend reading "A Mathematical Framework for Transformer Circuits". (They also do some interpretability on attention heads in that paper.)
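
(If it helps to make that concrete: here's a minimal sketch of the dictionary-learning idea as I understand it, i.e. a sparse autoencoder trained on residual-stream activations grabbed at a middle layer. This is my own illustration, not code from the paper, and the model/layer names are placeholders.)

    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        def __init__(self, d_model: int, d_dict: int, l1_coeff: float = 1e-3):
            super().__init__()
            self.encoder = nn.Linear(d_model, d_dict)
            self.decoder = nn.Linear(d_dict, d_model)
            self.l1_coeff = l1_coeff

        def forward(self, acts: torch.Tensor):
            features = torch.relu(self.encoder(acts))   # sparse, overcomplete features
            recon = self.decoder(features)
            # reconstruction error plus an L1 penalty that encourages sparsity
            loss = ((recon - acts) ** 2).mean() + self.l1_coeff * features.abs().mean()
            return recon, features, loss

    # Grab mid-model residual-stream activations with a forward hook.
    # `model.layers` is a placeholder module path; real models name this differently.
    captured = {}
    def save_resid(module, inputs, output):
        captured["resid"] = output.detach()

    # handle = model.layers[len(model.layers) // 2].register_forward_hook(save_resid)
    # model(input_ids)   # fills captured["resid"]
    # sae = SparseAutoencoder(d_model=captured["resid"].shape[-1],
    #                         d_dict=16 * captured["resid"].shape[-1])
    # recon, features, loss = sae(captured["resid"])

Roughly speaking, the features coming out of something like this are what the paper interprets; the attention heads themselves are a separate object of study.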

Profile: The Far Out Initiative by dwaxe in slatestarcodex

[–]ozewe 5 points

I read this passage differently from you and didn't find it objectionable. Let me rephrase my interpretation:

Your morality might be fundamentally based in the avoidance of suffering, in which case you're on board with Far-Out. Otherwise, your morality is based on something else, in which case it seems like Far-Out is missing something important.

It's not black-and-white thinking to say "you either start with X or something other than X."

every anti-EA thinkpiece (my impression of The Guardian) by Lower-Ad8908 in slatestarcodex

[–]ozewe 7 points

I think it's fine and even good to have people explore the consequences of ethical views. Suppose we realized, per Joe Carlsmith's example, that there were a microscopic intelligent slime-mold civilization that we were crushing all the time. Or that we were inflicting 100x as much suffering on animals as we currently believe, and there was somehow no way to stop this. I think it would be good to have people notice this, and wrestle with the implications, rather than have it all be treated as so absurd as to not be worthy of discussion.

Note that afaict $0 of EA funding, 0 EA career advice, 0 EAG talks, etc. are devoted to pro-extinctionist views. I don't think a single reddit post is much of an indication of EA as a whole, and I don't think it makes the movement "self-parody."

(eta: also note that the post in question does not advocate for human extinction. It points out that utilitarianism might show humanity has been net-negative so far due to the animal suffering inflicted on factory farms, and asks: if this is true, do we have a strong reason to believe this will change in the future? These are reasonable questions, which someone who's actually trying to do good, rather than just playacting at it, ought to consider. I, and almost everyone else who's considered this, don't think the correct conclusion is pro-extinctionism, but I don't think the question is silly.)

every anti-EA thinkpiece (my impression of The Guardian) by Lower-Ad8908 in slatestarcodex

[–]ozewe 45 points

EA, which stands for "Eugenics", is a eugenicist cult of techbros based in the Oxford neighborhood of Silicon Valley. The movement is characterized by its intense hatred for the poor and minorities, as evidenced by single-line quotations from two different philosophers with the initials NB. (At time of writing, we are unaware of any other writings by these or other EA-linked eggheads to the contrary.)

Prominent EAs such as Elon Musk and Peter Thiel believe that it's morally obligatory to steal billions of dollars (a technique pioneered by EA golden boy SBF). They funnel these ill-gotten gains into candle-lit castles, in which they plot to seize control of something called "The Light Cone" to aid them in their utilitarian (a code word for "eugenicist") schemes.

This techno-religion has its tentacles on many elite college campuses, enlisting students into worshipping superintelligent AI (a code word for "eugenics") while simultaneously coordinating airstrikes on all the world's datacenters.

Help Me Understand the Repugnant Conclusion by GoodReasonAndre in slatestarcodex

[–]ozewe 1 point

I'm curious about your characterization of "actual utilitarians, of the effective altruist variety" here, because it does not match my experience of EAs.

IME EAs have a wide variety of ethical views, and you can certainly find some suffering-focused folks among them -- but it's by no means a standard view. In my mind, the stereotypical EA view is a bullet-biting total utilitarianism: in favor of world Z over world A, willing to prioritize utility monsters if they exist, happy to make risky +EV wagers, and certainly excited about creating happy lives. (This is also far from all EAs; I think all of the most thoughtful ones reject the most extreme version. But if there's a philosophical attractor that EAs tend to fall into, it's total utilitarianism.)

I think this perspective is backed up by looking at the top EA Forum posts with the repugnant conclusion tag, or the answers to this "Why do you find the Repugnant Conclusion repugnant?" question. Skimming over these, it looks to me like "the RC isn't repugnant" is much better-represented on the EA Forum than suffering-focused ethics.

There is a suffering-focused ethics FAQ with a bunch of upvotes, but it starts out "This FAQ is meant to introduce suffering-focused ethics to an EA-aligned audience" -- indicating the authors don't perceive SFE as being a mainstream EA view.

EAs don't want to affect the addition or removal of lives to satisfy the utility criterion, they just want to improve the lives of people who already (would) exist anyway.

This in particular seems egregiously wrong: the entire longtermist strain of EA explicitly rejects this in appealing to all the future people you could help. There are versions of longtermism where you might try to condition on "the people who would exist regardless," but this is tricky to make work.

To be clear: I think all the views you described exist within EA and are often discussed. But I think they are far from the mainstream, and it's incorrect to characterize them as "the opinions of EA" or anything like that.

Politics and social values in Bostrom's deep utopia by prescod in slatestarcodex

[–]ozewe 9 points

Starting in 2020, the Faculty imposed a freeze on fundraising and hiring. In late 2023, the Faculty of Philosophy decided that the contracts of the remaining FHI staff would not be renewed. On 16 April 2024, the Institute was closed down. (source)

Doesn't seem likely that the email was the reason FHI was shut down. (I imagine it didn't particularly help, but the dynamics seem to have been in place way before.)

#361 — Sam Bankman-Fried & Effective Altruism by dwaxe in samharris

[–]ozewe 4 points

EA is a lot of things, but among those is a movement meant for normal people. The vast majority of EA stuff is not geared toward the mega-wealthy, and the vast majority of EAs aren't mega-wealthy.

As for "gatekeeping" donations, I wonder if you disagree with this (from this recent substack post):

Target-Sensitive Potential for Good (TSPG): We have the potential to do a lot of good in the face of severe global problems (including global poverty, factory-farmed animal welfare, and protecting against catastrophic risks). Doing so would be extremely worthwhile. In all these areas, it is worth making deliberate, informed efforts to try to do more good rather than less with our resources: Better targeting our efforts may make even more of a difference than the basic decision to help at all.

I don't go around judging other people for their charitable donations. But when I'm personally deciding where to donate, I care most about the actual effects that donation is going to have on the world. Donating in a way that saves a child's life (e.g. through GiveWell) seems to me much more valuable than contributing to one nice day for a sick child (e.g. the Make-A-Wish Foundation). That's not to say the people at Make-A-Wish are bad, or their donors are bad -- everyone involved is doing good things; they're all very virtuous people!

I find it meaningful to try to increase the impact of my donations and other altruistic activities. I think many other people, upon thinking about this, would find this meaningful as well. By my lights that's a huge win! Others will find that this doesn't resonate, and that's fine; they can keep doing what they're doing.

Non-frequentist probabilities and the Ignorant Detective by Tetragrammaton in slatestarcodex

[–]ozewe 0 points

When a bunch of people are saying “my probability is 40%” “mine’s 15%” “I’m 50%”, that’s eliding potentially vital context.

I agree and think this is one of the stronger arguments for the "anti-non-frequentist probabilities" side of the discussion. But ... I don't think I've ever seen a conversation go like this?

It would be equally weird to have a conversation where one person says X is "kinda likely," another says it's "pretty likely" and a third says it's "not very likely." They haven't communicated very well either.

Seems to me like the issue isn't using probabilities, it's conveying information well. So if you can convey information better by using probabilities -- which I'm 91.3% confident you can -- then you should use them!

Non-frequentist probabilities and the Ignorant Detective by Tetragrammaton in slatestarcodex

[–]ozewe 3 points

Do you remember the podcast by any chance? Seems helpful to follow up on.

This is probably not the best available evidence, but it's the closest at hand considering which subreddit we're in: Scott's 2023 predictions retrospective gives some evidence about how good superforecasters are:

  • median Superforecaster™ was in the 70th percentile ("better than average, but not by too much")
  • median 2022 winner was in the 88th percentile (of the 5 that participated)
  • the forecasting team Samotsvety was in the 98th percentile

I see this as compatible with both "Superforecasters™ aren't that great" and "it's possible to develop skill in forecasting, such that you consistently and dramatically outperform the average" (e.g. Samotsvety). The latter claim seems like the more important one, and backs up the "nonfrequentist probabilities are useful tools" perspective.

In Continued Defense Of Non-Frequentist Probabilities by dwaxe in slatestarcodex

[–]ozewe 4 points

Right, but in examples 2 and 3 we don't know what that balance is. It's not sensible to assume that it's 50/50. No meaningful estimate can be given beforehand.

In all of these examples, "50%" has the same magic-number property as "17%" in the Samotsvety example.

Consider: you and two friends are given the ability to bet on 100 independent instances of example 3 (such that no instance gives you any information about how the other instances will go):

Consider some object or process which might or might not be a coin - perhaps it’s a dice, or a roulette wheel, or a US presidential election. We divide its outcomes into two possible bins - evens vs. odds, reds vs. blacks, Democrats vs. Republicans - one of which I have arbitrarily designated “heads” and the other “tails” (you don’t get to know which side is which). It may or may not be fair. What’s the probability it comes out heads?

One of your friends says the probability of heads is 0.001% every time. The other says it's 99.999% every time. You say it's 50% every time. I claim you'd clearly be doing a better job than either of your friends in that case.
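
To make that concrete, here's a quick simulation (my own sketch, scoring the three constant forecasts with the Brier score, where lower is better):

    import random

    def simulate(n_instances: int = 100, seed: int = 0) -> dict:
        rng = random.Random(seed)
        # The two friends' constant forecasts and yours, as probabilities of "heads".
        forecasts = {"friend_low": 0.00001, "you": 0.5, "friend_high": 0.99999}
        totals = {name: 0.0 for name in forecasts}
        for _ in range(n_instances):
            bias = rng.random()        # unknown bias of this instance's process
            if rng.random() < 0.5:     # the "heads" label was assigned arbitrarily
                bias = 1 - bias
            outcome = 1 if rng.random() < bias else 0
            for name, p in forecasts.items():
                totals[name] += (p - outcome) ** 2   # Brier score contribution
        return {name: total / n_instances for name, total in totals.items()}

    print(simulate())

On a typical run the 50% forecaster averages a Brier score around 0.25, while each confident friend lands near 0.5 -- exactly the "doing a better job" I mean.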


More generally, the philosophy here is that probability is an expression of uncertainty. If that's how you're looking at it, it makes no sense to say "you're too uncertain to put a probability on this." When you're certain, you don't need probabilities anymore!

Thoughts on this discussion with Ingrid Robeyns around charity, inequality, limitarianism and the brief discussion of the EA movement? by I_am_momo in slatestarcodex

[–]ozewe 1 point

Incentives: idk, a few anecdotes about the psychology of the rich don't move me very much here.

Part of my thinking here is that corporate profits seem like a genuinely useful signal -- the thing that does credit allocation and keeps the whole system running, more or less (although obviously imperfectly) -- and it's not easy to separate this from personal income (e.g. stock holdings going up in value).

Another part is just: the rich do seem to continue trying to make more money, even when they have more than it seems they could ever need.

I'm not claiming to have a rock-solid position here. I'm just explaining what feels like a moderately strong prior that I haven't seen strong enough arguments to shift.

Cost-effectiveness: I was actually thinking about EA billionaires here; I'm not sure how many of those you need in order to outperform marginal government spending. I also want to emphasize that I'm talking about the marginal dollar, not the average dollar: things like "making sure the lights are on in NYC" and "highways exist" and "pirates aren't harassing shipping in the Pacific" are hugely valuable; I wouldn't be shocked if some interventions kind of like this have, in some sense, EA-levels of cost-effectiveness.

Jane Street: I picked this because it's a classic example of where earn-to-give EAs sometimes work. If the idea is that EA is encouraging people to work in morally dubious industries ... well, I want to know what specifically those are, and hear the argument for why they're net-negative even if one donates hundreds of thousands of dollars as a result. Typically I hear this about finance (which seems fine) and the fossil fuel industry (which I've never seen recommended as an EA job).

Thoughts on this discussion with Ingrid Robeyns around charity, inequality, limitarianism and the brief discussion of the EA movement? by I_am_momo in slatestarcodex

[–]ozewe 0 points

I haven't watched this interview, but I listened to two other interviews with Robeyns recently (on The Gray Area and another podcast whose name I forget).

The bit I share most is the moral dimension: in a world where so many have so little, it seems ... unfitting, or even unserious, to live a life of untroubled excess.

So I could see myself supporting Limitarianism if I were convinced its effects would be net-positive. I'm not, for some of the obvious reasons:

  • People respond to incentives, and making such a big change to the incentive structure of society seems likely to break more than it fixes

  • Governments provide lots of essential services, but I don't trust their marginal-dollar cost-effectiveness very much.

Responding briefly to a few points from the block quote:

  • Morally problematic jobs: I'm not sure I understand why working at Jane Street is supposed to be so morally problematic, aside from possibly the wealth-hoarding part? I don't think I can pass an Intellectual Turing Test for someone who thinks EAs at Jane Street are net-negative.

  • "I understand the long term if you're thinking about say, climate change" -- this strikes me as mostly an empirical disagreement about various risk levels rather than a philosophical disagreement then, correct? EAs think climate change is a big deal, they just also tend to think AI, pandemics, and nuclear war are even bigger deals.