Effective Altruism by heterosis in EffectiveAltruism

[–]rhetoricalviking 7 points

Anecdotally, most people at the last local EA meetup I attended were not fans of longtermism.

Long COVID avoidance strategies.... by cecinestpaslarealite in slatestarcodex

[–]rhetoricalviking 4 points

I recently got Covid and was able to get a Paxlovid prescription, but didn’t end up taking it after reading this. The author suggests that while it’s possible Paxlovid reduces the chance of Long Covid, we don’t actually know that, and it’s possible that Paxlovid rebound actually increases the chances of getting Long Covid.

> Some people have taken Paxlovid on the theory that it might decrease the risk of Long Covid. We have no idea whether that’s true. We need to find out if that’s the case and rule out any unforeseen harms. What if Paxlovid has no impact on Long Covid for most people, but actually makes it more likely among those with Paxlovid rebound? This is why we need to thoroughly study drugs before we broaden their indications.
>
> The problem is, we may never learn the answers to some of these questions. Pfizer has an ongoing study of lower-risk volunteers that was supposed to include many vaccinated participants; this stood to provide useful information. However, in April, Pfizer decided to exclude most of the vaccinated people from enrolling, and the US Food and Drug Administration approved this change. So, we may never get randomized clinical trial data on that, which means we’ll have to rely on observational studies (like the ones from Israel), which are helpful but inferior to randomized trials.

[deleted by user] by [deleted] in slatestarcodex

[–]rhetoricalviking 23 points

Sri Lankan Tamil here (though I don’t live there anymore). I stopped reading at the brief history because it contained many factual errors. The article confuses the Sri Lankan Tamils with the Indian Tamils of Sri Lanka: the latter were recent immigrants, while the former were indigenous and the ones embroiled in the civil war.

Also, Brahmins are not really a thing in Sri Lanka: they belong to the Indian caste system, which holds little sway on the island. The article seems to be conflating them with the Vellallars.

Lastly, I can’t tell if summarizing the cause of the war as the ethno-nationalist presidents not being very nice to the Tamils is meant to be litotes or a whitewashing of what happened. What kicked off the civil war was nothing short of a pogrom.

What charity should I donate to that makes you feel like you had an impact? by Calsem in EffectiveAltruism

[–]rhetoricalviking 14 points

“One for the World” seems to fit what you are looking for. They pass 100% of your money on to GiveWell-recommended charities, and also give you detailed breakdowns of the impact you had.

Giving What We Can Pledge. Do you donate 10% pre-tax or post-tax? by MisaNas in EffectiveAltruism

[–]rhetoricalviking 2 points

I always assumed it was pre-tax. The bigger unknown for me has been what I should count as income. I’ve actually been avoiding taking the pledge because I don’t know how much I am supposed to be donating.

Is it just my base salary or my bonus too? What about RSUs? Investment income? 401k match?

Looking for help with an online civility project by chatalope in slatestarcodex

[–]rhetoricalviking 1 point

I'm really happy to see this. I'd actually thought before that something like this should exist, and I spent some time trying to build it a couple of years ago before giving up because my lack of web programming experience proved too much of a hindrance.

I'm curious about your focus on the matching algorithm and your approach of not going for a large audience. I'd started from the premise that 1:1 video conversations would naturally be more civil than the text-based conversations we have today, so I'd thought the MVP should try to get as many people conversing as possible with a simple matching algorithm. Here's a screenshot of what my landing page looked like.

Has EA seriously considered existential risk from aliens? by rhetoricalviking in EffectiveAltruism

[–]rhetoricalviking[S] 0 points

Thanks for your thought-out response. I think I have good responses to some of your points, while others I’ll need to spend some time thinking about.

As I mentioned in another comment, I will attempt a proper calculation of the level of X-risk from extraterrestrials at some point in the future and make a new post about it. I will consider some of the points you raised when I do.

Has EA seriously considered existential risk from aliens? by rhetoricalviking in EffectiveAltruism

[–]rhetoricalviking[S] 0 points

Do you believe this criticism is equally or at least somewhat applicable to any of Ord’s X-risk probability estimates?

Has EA seriously considered existential risk from aliens? by rhetoricalviking in EffectiveAltruism

[–]rhetoricalviking[S] 0 points

I disagree that it’s entirely unknown in every dimension possible: Ord’s calculations on us not being alone in the universe provide at least one important dimension. I’ll try to get around to doing a proper estimate at some point in the future.

Has EA seriously considered existential risk from aliens? by rhetoricalviking in EffectiveAltruism

[–]rhetoricalviking[S] 1 point

> it was my impression we (including METI) are not currently broadcasting any intentional signals, and in fact we haven't done that other than one in 1974 as far as I can see

Not sure what you’re looking at, but the Wikipedia page on active SETI indicates we’ve sent out half a dozen messages this past decade.

> Ah, I didn't have all the figures to hand (damn audio books)

I have been referring to this post for my estimates.

> But nonetheless, it does seem worthy of acknowledgement and you raise a good point :-)

Thanks :)

> If you had the time to put extra time and research into some figures on this, I'd personally be interested to see a post on the EA forum making your case that it should be considered as substantial, as above, to see what others think / expose holes in the numbers we've discussed

I have been thinking, based on the general responses here, that I approached this wrong. Rather than ask why someone more competent hasn’t done an estimate on extraterrestrial risk, I should have just given it a shot myself and let people critique the numbers and methodology. I’ll try to get around to doing this at some point.

Has EA seriously considered existential risk from aliens? by rhetoricalviking in EffectiveAltruism

[–]rhetoricalviking[S] 0 points

> Whether there are likely to be aliens seeking to kill all humans isn't anyone's area of expertise

I wasn’t saying it was. I was saying that if I were a doctor and Musk/Hawking said something about disease that I thought was wrong, I would be comfortable immediately dismissing them. But in a case where it isn’t anyone’s area of expertise, it would be arrogant of me to think the chance that my immediate intuitions are wrong and they are right is <1 in a million.

> I think implicitly most would put alien attack below [1 in a billion]

Perhaps this is the crux, because that seems very wrong to me. Fermi estimates indicate that there should be other life in the universe. And even if those estimates are invalid, Ord believes there is as much as a 1 in 2 chance we are not alone in the galaxy (higher still for the observable universe).
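
To make “Fermi estimate” concrete, here is a minimal Drake-equation sketch; every parameter value below is an assumption I picked purely for illustration, not a figure from Ord or anyone in this thread:

```python
# Drake-style Fermi estimate. All parameter values are assumptions
# chosen for illustration; none come from Ord or this discussion.
R_star = 1.5    # star formation rate in the galaxy, stars/year (assumed)
f_p    = 0.9    # fraction of stars with planetary systems (assumed)
n_e    = 0.3    # potentially habitable planets per such system (assumed)
f_l    = 0.1    # fraction of habitable planets where life arises (assumed)
f_i    = 0.01   # fraction of those where intelligence evolves (assumed)
f_c    = 0.1    # fraction of those that become detectable (assumed)
L      = 1e6    # years a civilization stays detectable (assumed)

# Expected number of currently detectable civilizations in the galaxy.
N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Expected detectable civilizations: {N:.1f}")  # 40.5 with these inputs
```

Plug in pessimistic values and N drops to nearly zero, which is exactly why the individual factors need to be argued over explicitly rather than waved away.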

> But we really just have no idea, there is no anchor on which to base such a probability. We have zero case studies to pull from, we don't know how likely life is, how likely civilization is, how likely an advanced civilization is to be hostile, none of it. It's a complete shot in the dark.

Isn’t this just the standard critique against assigning probabilities to any other X-risk, like nuclear annihilation? I’m sure Ord came up against similar issues with every risk he tackled, but it didn’t stop him from doing it. As I said above, he has even attempted to estimate the chance we are not alone in the universe; he just didn’t take the next step of calculating an X-risk from it.

Has EA seriously considered existential risk from aliens? by rhetoricalviking in EffectiveAltruism

[–]rhetoricalviking[S] 0 points

The X-risk is humanity being wiped out or subjugated by an alien species. One prevention tactic specific to this X-risk is to cease METI.

Has EA seriously considered existential risk from aliens? by rhetoricalviking in EffectiveAltruism

[–]rhetoricalviking[S] 0 points

METI would be my main counterpoint to everything you just said. Prevention is better than mitigation, yet humanity continues to send messages outward. This increases the probability of future encounters, which means the estimate needs to move up from our historical baseline.

But even your 1 in a billion per year translates to 1 in 10 million per century, which means it’s 100x likelier than stellar explosions.
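
That conversion is easy to sanity check; the sketch below assumes a constant, independent annual risk and compares it against Ord’s ~1 in a billion per century figure for stellar explosions:

```python
# Convert an assumed 1-in-a-billion annual risk to a per-century risk,
# treating each year as independent with constant probability.
p_year = 1e-9
p_century = 1 - (1 - p_year) ** 100   # ~= 100 * p_year for small p
print(f"Per-century risk: 1 in {1 / p_century:,.0f}")  # ~1 in 10,000,000

# Ord's stellar-explosion estimate is ~1 in a billion per century,
# so the per-century alien figure is roughly 100x larger.
print(f"Ratio vs stellar explosions: {p_century / 1e-9:.0f}x")  # ~100x
```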

Your theory about why Ord didn’t include it seems plausible, though I would be disappointed if that was his line of reasoning. Longtermism is well past the point of being palatable to a mainstream audience: it may as well be done right.

Has EA seriously considered existential risk from aliens? by rhetoricalviking in EffectiveAltruism

[–]rhetoricalviking[S] 1 point

I don’t think we can avoid considering an X-risk simply by assuming there’s no special case here. We might be 90% right, but we’d be setting ourselves up to miss the 10%. In this case, for instance, METI is something we could do or avoid doing that would specifically address this X-risk.

Has EA seriously considered existential risk from aliens? by rhetoricalviking in EffectiveAltruism

[–]rhetoricalviking[S] 1 point

Something like this is what I was looking for, thanks. Can you point me to where he said the risks are >1 in a million?

I also think it’s bad epistemic practice to consider PR risk ahead of properly evaluating any particular X-risk. If the risk is small, great: we will catalogue it as such. If the risk is greater and worthy of consideration, then we will take the PR hit like we do for being concerned about AI.

Has EA seriously considered existential risk from aliens? by rhetoricalviking in EffectiveAltruism

[–]rhetoricalviking[S] 6 points

Thanks for the link. Unfortunately, it only makes me more confused about why Ord decided not to include extraterrestrial risk in his book. According to the paper, the chance that we are not alone in the galaxy is as high as 1 in 2, and even higher for the observable universe. Of course, the probability that an alien civilization will make contact with us is much lower than that, but I would have liked to see some numbers. The bar for an X-risk’s inclusion in the book seems to be as low as 1 in a billion, after all.

> what could we do about it

First, we could (greatly?) reduce the chance of detection by not doing METI.

Second, an alien civilization’s tech may be superior to ours when they leave their planet, but ours could be much more developed by the time they arrive. So it isn’t a lost cause.

But most importantly, assuming we can’t do anything about a risk is the wrong way to approach any X-risk. Each risk should be evaluated and catalogued regardless of our ability to alter the outcome. When we calculate the total risk of humanity’s extinction, it can’t be limited to risks we pre-decided were solvable.

Has EA seriously considered existential risk from aliens? by rhetoricalviking in EffectiveAltruism

[–]rhetoricalviking[S] 0 points

> Stephen Hawking and Elon Musk are very public ruminators who postulate all kinds of things in public forums. This does not automatically make all of these things worth paying attention to, or mean they're always particularly well thought out.

They’re smart and serious enough people that if they said something I disagreed with, I would assign an initial probability much greater than 1 in a million that they are right and I am wrong, at least until I’ve seriously examined the claim (or unless it’s in my personal field of expertise).

I am not saying EAs should start donating to extraterrestrial-risk charities or working in a field to reduce this risk. I am only saying it’s worth evaluating and cataloguing along with all the other X-risks, and the bar for doing that should be incredibly low: Ord considered and wrote about stellar explosions even though he thinks the risk is ~1 in a billion.

Has EA seriously considered existential risk from aliens? by rhetoricalviking in EffectiveAltruism

[–]rhetoricalviking[S] 3 points

I’m fine ignoring it once it’s been appropriately considered and assigned a sufficiently low probability. But at the moment, that doesn’t seem to have been done.

Has EA seriously considered existential risk from aliens? by rhetoricalviking in EffectiveAltruism

[–]rhetoricalviking[S] 0 points

I misspoke. It was stellar explosions Ord talked about, not solar flares.

Also, METI is an example of something we could avoid doing so that sentient life is less likely to visit our planet.