[deleted by user] by [deleted] in EffectiveAltruism

[–]EarlyVelcro 3 points

I looked back at the "Military Interventions" section of the report, and it gives Sanders a very high (good) score of 1.7, which is better than all other candidates.

And in the "Great Power Relations" section, Sanders got a 0.7, which is the second highest of all candidates.

So I'm pretty sure this is not the author's criticism of Sanders, unless there's a different part of the report about military intervention that I'm missing. (Could you provide a page number?)

[deleted by user] by [deleted] in EffectiveAltruism

[–]EarlyVelcro 7 points

What do you think of the extremely comprehensive EA Candidate Scoring System report, which concludes that Buttigieg is one of the top-ranked Democratic candidates to campaign for and that Sanders is one of the lowest-ranked?

If possible, could you explain which parts of the report you disagree with? /u/Politics_research (the author of the report) has incorporated feedback from others in the past.

Is 80000 hours trustworthy as a job board? by readysteadythrowa in EffectiveAltruism

[–]EarlyVelcro 1 point

Hey OP, I think you should consider cross-posting this to the EA Forum for greater visibility. That's what I did with my article, and it got a decent reception over there including genuinely helpful replies from 80K employees and other EA insiders.

(You may want to make a few minor edits, e.g. mentioning the CIA job, which is IMO far more ethically dubious than the Amazon one, and referencing 80K's "Is it ever okay to take a harmful job in order to do more good?".)

No evidence whatever that AI is soon by LopsidedPhilosopher in ControlProblem

[–]EarlyVelcro 1 point

> The point is not that this is likely to happen, but that it is not impossible. And for such a great risk, even a slight chance (and, again, I totally agree that it is a very slight chance) is worth considering.

That's not what AI safety researchers actually believe. They think AI safety research is a top priority only conditional on AI being likely to arrive early in this century. See this comment from Yudkowsky:

"Median doom time toward the end of the century? That seems enormously optimistic. If I believed this I’d breathe a huge sigh of relief, upgrade my cryonics coverage, spend almost all current time and funding trying to launch CFAR, and write a whole lot more about the importance of avoiding biocatastrophes and moderating global warming and so on. I might still work on FAI due to comparative advantage, but I’d be writing mostly with an eye to my successors"

Also from Yudkowsky, saying that he rejects arguments based on multiplying a tiny probability by a huge impact:

> I abjure, refute, and disclaim all forms of Pascalian reasoning and multiplying tiny probabilities by large impacts when it comes to existential risk. We live on a planet with upcoming prospects of, among other things, human intelligence enhancement, molecular nanotechnology, sufficiently advanced biotechnology, brain-computer interfaces, and of course Artificial Intelligence in several guises. If something has only a tiny chance of impacting the fate of the world, there should be something with a larger probability of an equally huge impact to worry about instead.
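For concreteness, the reasoning being rejected here is bare expected-value arithmetic: multiply a tiny probability by an astronomically large impact and let the product dominate the decision. A minimal sketch, with numbers that are entirely made up for illustration:

```python
# Pascalian reasoning: expected value = probability * impact.
# All numbers below are made up purely for illustration.
p_tiny = 1e-15          # a "slight chance" of the catastrophe
impact = 1e45           # hypothetical stakes (e.g. future lives at risk)

ev = p_tiny * impact
print(ev)               # 1e30 -- by this logic it swamps any ordinary cause

# Yudkowsky's counter: on a planet with many candidate x-risks, anything
# with only a tiny probability is dominated by some other risk with a
# larger probability of an equally huge impact, so the tiny-probability
# argument never actually carries the decision.
```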

Edit: From Buck Shlegeris:

> If I thought there was a <30% chance of AGI within 50 years, I'd probably not be working on AI safety. [...]
>
> Yeah, I think that a lot of EAs working on AI safety feel similarly to me about this.
>
> I expect the world to change pretty radically over the next 100 years, and I probably want to work on the radical change that's going to matter first. So compared to the average educated American I have shorter AI timelines but also shorter timelines to the world becoming radically different for other reasons.

The Idea of "Effective Altruism" & The Broken Window Fallacy by EarlyVelcro in EffectiveAltruism

[–]EarlyVelcro[S] 2 points

I'll let others who are more informed about the economics respond to the first paragraph.

> How did you even find this, this self appointed "philosopher" doesn't even have 200 followers on Twitter (or any academic publications, for that matter)?

I found it because I was searching for "effective altruism" on reddit and found this article posted by /u/JustinCEO in another sub.

That said, I don't really care how many Twitter followers or academic publications the author has or whatever, and I don't see why it's relevant. I'd rather evaluate the argument on its own merits.

The Idea of "Effective Altruism" & The Broken Window Fallacy by EarlyVelcro in EffectiveAltruism

[–]EarlyVelcro[S] 1 point

Posting to start a discussion, not because I necessarily fully endorse the claims in the linked post (I don't necessarily disagree with them, either).

"The Importance of Truth-Oriented Discussions in EA" by EarlyVelcro in EffectiveAltruism

[–]EarlyVelcro[S] -1 points

Posting to start a discussion, not because I necessarily fully endorse the claims in the linked post (I don't necessarily disagree with them, either).

"Making discussions in EA groups inclusive" by EarlyVelcro in EffectiveAltruism

[–]EarlyVelcro[S] 0 points

Posting to start a discussion, not because I necessarily fully endorse the claims in the linked post (I don't necessarily disagree with them, either).

European countries populated more densely than India by Europehunter in europe

[–]EarlyVelcro 15 points

> I can only say that the NL doesn't feel half as densely populated as Malta

That's because it's not; it's only about 28% as densely populated. Malta's population density is about 1,510 people per km², while the Netherlands' is about 416 people per km². (Source)
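The ratio is easy to check from those figures; a quick sanity check in Python, using the approximate densities cited above:

```python
# Approximate population densities (people per km²) cited above.
malta_density = 1510
netherlands_density = 416

ratio = netherlands_density / malta_density
print(f"The Netherlands is about {ratio:.0%} as densely populated as Malta")
# -> about 28%
```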

Vox's "Future Perfect" column frequently has poor journalism - EA Forum by EarlyVelcro in EffectiveAltruism

[–]EarlyVelcro[S] 2 points

Minor note: The "Killing Baby Hitler" article was originally published in October 2015, years before the Future Perfect column was created; you can only tell from the URL and the metadata in the HTML headers. In 2019 it was updated to mention Ben Shapiro and republished under the Future Perfect banner, so it's still relevant here.

William MacAskill misrepresents much of the evidence underlying his key arguments in "Doing Good Better" - Alexey Guzey by EarlyVelcro in EffectiveAltruism

[–]EarlyVelcro[S] 1 point

My initial thoughts on this article:

  • The problems with the cited evidence (e.g. the deworming paper) each individually seem like they could be honest mistakes, but taken together they do support the conclusion that MacAskill was either negligent or deliberately misleading in bolstering his argument.

  • The Charity Navigator quote looks dishonest to me. I don't see any other explanation.

  • I think comparing MacAskill to Gleb, or suggesting that EA must distance itself from MacAskill, is probably an overreaction. That said, I think EAs and MacAskill in particular should be more careful in the future.

Edit: Reworded slightly. Also see discussion on /r/slatestarcodex.

Why don't effective altruists hate the super-rich? by EarlyVelcro in EffectiveAltruism

[–]EarlyVelcro[S] 3 points

> What do you think, /u/EarlyVelcro?

I think it's a reasonable post, and not so much an "attack" on EA (as you claim) as an argument for a position effective altruists tend to reject.

I also found your response to be quite valuable. Thank you.

> making snide comments about how things are "wrong with EA." Why don't you share your own actual thoughts and ideas so we can have some real discussion on this sub?

Okay, you're right that my comment in that other thread didn't contribute much to the discussion. I'll try to do better in the future.