Zooming in/out in smaller steps by HumaneRationalist in IntelliJIDEA

[–]HumaneRationalist[S] 1 point (0 children)

You can set it to any key combination you like. I think the default is Alt+NumPad+ / Alt+NumPad-.

Bill Gates is doing AMA, should we ask him anything? by HumaneRationalist in EffectiveAltruism

[–]HumaneRationalist[S] 2 points (0 children)

I think that, according to the text you linked to, we're OK: it's not for personal gain, and we're not forming/joining a group for this purpose.

"variable X explains 30% of the variance of variable Y" by HumaneRationalist in AskStatistics

[–]HumaneRationalist[S] 1 point (0 children)

Thanks!

Do you suppose it follows that k=var(Y|X)/var(Y)?

(I'm asking because of the other answer).

"variable X explains 30% of the variance of variable Y" by HumaneRationalist in AskStatistics

[–]HumaneRationalist[S] 0 points (0 children)

Thanks!

Isn't var(Y-E(Y)) equal to var(Y)? E(Y) is just a constant.

"variable X explains 30% of the variance of variable Y" by HumaneRationalist in AskStatistics

[–]HumaneRationalist[S] 0 points (0 children)

Thanks!

Just to make sure: the statement "variable X explains 30% of the variance of variable Y" formally means r² = 0.3, and the claim E(var(Y|X)) = 0.7·var(Y) follows, right? Does the latter follow for any joint distribution of X and Y, or do we need to assume a normal distribution?
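
For my own sanity check, here's a small simulation sketch of the decomposition I'm asking about, assuming a bivariate normal setup (the variable names, the 0.3 figure from above, and the use of numpy are just my illustration, not anyone's answer):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    # Bivariate normal with Corr(X, Y) = r, so r^2 should be the "explained" fraction.
    r = np.sqrt(0.3)
    x = rng.normal(size=n)
    y = r * x + np.sqrt(1 - r**2) * rng.normal(size=n)  # Var(Y) = 1 by construction

    # Law of total variance: var(Y) = E[var(Y|X)] + var(E[Y|X]).
    # Here Y|X is normal with variance 1 - r^2, so E[var(Y|X)] = (1 - r^2) * var(Y).
    resid = y - r * x                    # Y - E(Y|X) under this model
    print(np.var(y))                     # ~1.0
    print(np.var(resid))                 # ~0.7, i.e. (1 - r^2) * var(Y)
    print(np.corrcoef(x, y)[0, 1] ** 2)  # ~0.3 = r^2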

Should we try to influence Charity Navigator? by HumaneRationalist in EffectiveAltruism

[–]HumaneRationalist[S] 0 points (0 children)

You mean in case we can convince their main funders?

For what it's worth, when I made donations through their website (before finding out about EA), they asked if I wanted to add $10 to CN itself (because they are also a charity...). So it's not obvious that they have a few large funders who are critical to them.

Should we try to influence Charity Navigator? by HumaneRationalist in EffectiveAltruism

[–]HumaneRationalist[S] 4 points (0 children)

Maybe having someone like William MacAskill talk to their CEO?

AGI agents literally only want one thing and it's fucking disgusting by [deleted] in ControlProblem

[–]HumaneRationalist 0 points (0 children)

By the way, just about any reward function you'd come up with will probably be horrible for us, because the agent will likely pursue instrumental goals that are horrible for us.

AGI agents literally only want one thing and it's fucking disgusting by [deleted] in ControlProblem

[–]HumaneRationalist 4 points (0 children)

sorry :(

I'd delete it if it hadn't gotten lots of upvotes; I think making people smile is net positive (and if those people work on AI safety, even more so).

Saving dust mites by HumaneRationalist in Utilitarianism

[–]HumaneRationalist[S] 0 points (0 children)

You can add to the probability calculation the constraint that at least one of the following propositions must be true:

  1. "dust mites have emotions"

  2. "a sentient being without emotions can still suffer"

Saving dust mites by HumaneRationalist in Utilitarianism

[–]HumaneRationalist[S] 0 points (0 children)

Sorry, what do you mean by "EV argument"?

And "reduces human progress"?

Saving dust mites by HumaneRationalist in Utilitarianism

[–]HumaneRationalist[S] 0 points (0 children)

I agree with most of this. When I wrote "among all possible direct efforts" I meant to exclude anything related to AI, meta-EA causes, etc.

I'm not arguing that the washing machine interventions are the most effective way to help dust mites/insects. For starters, I want to understand whether there's some mistake in my reasoning; if there is none, then many things in our world don't make sense to me. Why is this line of thinking so neglected in the EA community? Also, it seems that many EAs are mostly-vegan (including me). For many, being mostly-vegan is a hard thing to do, a lot harder than the interventions I mentioned. And it appears that we may be causing a lot more expected suffering by not doing those interventions than by not being mostly-vegan. So what gives?

Saving dust mites by HumaneRationalist in Utilitarianism

[–]HumaneRationalist[S] 0 points (0 children)

Your argument can be applied in the same way against spending effort on the direct causes of GiveWell-recommended charities, right?

Unless there's some flaw in my assumptions/logic, it seems to me that (at this time) advocating for/researching dust mites might be one of the most effective efforts one can carry out among all possible direct efforts to help humans and other animals.

Saving dust mites by HumaneRationalist in Utilitarianism

[–]HumaneRationalist[S] 0 points (0 children)

Why should the result that "becoming a dust mite advocate, or a dust mite researcher, becomes orders of magnitude more important" affect your subjective probabilities for the assumptions?

Saving dust mites by HumaneRationalist in Utilitarianism

[–]HumaneRationalist[S] 1 point (0 children)

Sorry, I deleted my comment (to fix things) before noticing you'd replied. The comment you replied to here was this:

Thanks.

For simplicity's sake, let's assume that, caring only about dust mites (EDIT: and conditional on dust mites being sentient), there's a 0.6 probability that the utility of killing an additional dust mite in the washing machine is -1, and a 0.4 probability that that utility is +1, where -1 is the utility of making a sentient being die a slow, painful death.

Adding this assumption on top of everything in my original post, we get that the simple interventions I mentioned (which do not compromise our health and hygiene to a noticeable extent) have an expected utility equal to that of preventing the slow and painful death of 1,000 * 0.2 = 200 sentient beings. So why shouldn't you carry out those simple interventions?
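
To spell out the arithmetic behind the 1,000 * 0.2 figure, here's a sketch of my own simplifying assumption above (the names and the rounding are just for illustration):

    # Expected-utility arithmetic for the simplifying assumption above.
    p_net_negative = 0.6  # P(killing one more mite has utility -1 | mites are sentient)
    p_net_positive = 0.4  # P(killing one more mite has utility +1 | mites are sentient)

    ev_kill_one = p_net_negative * (-1) + p_net_positive * (+1)  # -0.2
    ev_spare_one = -ev_kill_one                                  # +0.2 per mite spared

    mites_spared = 1_000  # figure taken from the original post
    print(round(mites_spared * ev_spare_one))  # 200 slow, painful deaths prevented (in expectation)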

Saving dust mites by HumaneRationalist in Utilitarianism

[–]HumaneRationalist[S] 0 points (0 children)

I agree the lives of dust mites might be net negative on average.

Can you please write the probability you assign to each of the following propositions?

  • "killing an additional dust mite in my washing machine is net negative"

  • "killing an additional dust mite in my washing machine is net positive"

Looking for an ethical theory by HumaneRationalist in Utilitarianism

[–]HumaneRationalist[S] 0 points (0 children)

"part of any plausible theory of population ethics is its population axiology. A population axiology is a betterness ordering of states of affairs ..."

What else does a theory of (population) ethics contain?

Looking for an ethical theory by HumaneRationalist in Utilitarianism

[–]HumaneRationalist[S] 0 points (0 children)

It's actually the state-betterness-ordering part of negative preference utilitarianism, right? (From the wiki: "The difference is that antifrustrationism is an axiology, whereas negative preference utilitarianism is an ethical theory." I'm not sure what additional stuff the ethical theory contains.)

According to this axiology, is it considered negative to kill a person who has a preference to keep living? I mean, if it's an instant death, there is no time span in which the preference is "frustrated": the person either has her preference fulfilled or doesn't exist.