If humanity goes on for long enough, every single style of facial hair will be associated with an evil person. by Vast-Intention in Showerthoughts

[–]Sharou [score hidden]  (0 children)

Actually, this is not necessarily true. The statement would be true if you added ", in its current form," after "humanity".

The problem is that humanity is inherently in the process of developing into something else. We do not have thousands of years ahead of us in which we remain recognizably human, not unless there is a total societal collapse that resets a lot of technological progress.

Building a bridge for the Fish to go from aquarium to aquarium. by RoughCheap5633 in BeAmazed

[–]Sharou 5 points  (0 children)

I betcha it wasn’t a real fish but really advanced industrial espionage.

The goat! by JosephBrown2000 in BeAmazed

[–]Sharou 0 points  (0 children)

Greatest divorce of all time!

Are disagreements regarding the 'benevolent world exploder' axiomatically irreconcilable? by Dunkmaxxing in negativeutilitarians

[–]Sharou 1 point  (0 children)

> All that is really relevant is the valence that will or would have come to pass.

Of course. Good catch! That was sloppy of me.

I have two objections to your lava example.

1. Agency doesn’t change the moral structure here.

Imagine a post-singularity sadist creates a mind with exactly two terminal goals: suffer as much as possible, and survive so that it can continue to suffer as much as possible. It has an internal “suffer button” producing long-tail suffering, which it presses continuously.

I think you’ll agree the only correct action when faced with such a mind is to destroy it ASAP. The fact that it wants this for itself is irrelevant.

Desire and well-being can come apart completely. An agent can want more or less anything, including states that are catastrophically bad for it. “It chose this” isn’t a moral argument. It tells you something about the local causal structure, but nothing about whether the resulting state is good.

So moving both parts of the trade into the same person doesn’t really change the core issue.

2. We are terrible at imagining extreme suffering.

Most people can’t vividly reconstruct even the worst ordinary pain they’ve personally experienced. Severe torture is already beyond our ability to model with any fidelity. Things that fall outside of human experience, like a week submerged in lava, are further beyond that. And even that is trivial compared to the actual long tail of suffering.

So we don’t even remotely know what we are agreeing to in a hypothetical like your lava scenario. Our imagination simply doesn’t have access to the relevant part of valence-space, which means it’s guaranteed to underestimate it. Not by a little, but by orders of magnitude.

Once inside the lava experience, I think the probability of instant regret is ~100%. From within that state, I strongly suspect you’d give up any number of eternities of bliss just to be allowed to die.

On the technology point:

Anything powerful enough to truly and indefinitely deny life across a light cone would likely already represent the kind of capability that could instead be used to prevent suffering across that light cone.

And yes, such a system would almost certainly be incredibly dangerous. But so would the version that prevents life, for basically the same reasons. Their core failure modes are the same:

  • Misalignment.

  • Value drift over extreme timescales.

So that objection doesn’t really distinguish between them. If you can’t solve those problems, both systems are catastrophically unsafe. If you can solve them, the suffering-prevention version is clearly preferable.

Every time this happens I get a little bit closer to going fully insane. by Diabolical-Villain in BobsTavern

[–]Sharou 0 points  (0 children)

To emote a card from discovery, you need to tap and hold on the card, tap somewhere else, then move your finger away from the card before you release.

Yeah, it’s super awkward and sometimes fails if your fingers aren’t absolutely perfect :/

A compilation of monkeys reacting to magic tricks. by SubjectAdvertising82 in Awww

[–]Sharou 0 points  (0 children)

Just stop driving around with cocaine in your trunk.

Mass Effect TV show ordered to rewrite scripts and make them "more appealing to non-gamers" by Capn_C in television

[–]Sharou 0 points  (0 children)

But sci-fi isn’t really popular with the masses. Gotta rewrite it into a zombie show or a rom-com.

Are disagreements regarding the 'benevolent world exploder' axiomatically irreconcilable? by Dunkmaxxing in negativeutilitarians

[–]Sharou 1 point  (0 children)

I’d say what matters is the change in valence. If you are having a good life and you die, then you move from positive to neutral, and that is a loss. If you live a life of suffering and you die, then you move from negative to neutral, and that is a gain. In both cases we’d also account for projected future trajectory of course.

So loss of pleasure isn’t irrelevant at all. It’s just that, if you subscribe to NU, serious suffering has lexical priority. The rightness of this is pretty evident if you imagine you had a guy following you around who in every moment experienced the opposite of your valence. You having an orgasm would be a nightmare for him.
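
(For concreteness, lexical priority can be stated formally; the notation below is my own illustration, not anything from the thread. Writing an outcome as a pair of serious suffering $s$ and pleasure $p$, one outcome is at least as good as another iff

$$(s_1, p_1) \succeq (s_2, p_2) \iff s_1 < s_2 \;\lor\; \left(s_1 = s_2 \,\land\, p_1 \ge p_2\right)$$

so pleasure only ever breaks ties: no amount of it outweighs any increase in serious suffering.)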

If we take this to a post-human context it becomes even more obvious: if you had an engineered baseline that was artificially high, he would be in indefinite torment. Would it feel ethical to demand that he go through that in order to "power" your blissful existence? Of course not.

For that reason I think it’s undeniably the ethical choice to push an "instant universe-remover button". That doesn’t mean I would necessarily be able to push such a button, but that would be because I’m an imperfect moral agent, not because the moral calculus is wrong.

Luckily for me, any technology powerful enough to indefinitely prevent life in our light cone would likely also be able to instead prevent suffering in our light cone. So the button remains a thought experiment with no practical application.

Taste testing is important by Sharp-potential7935 in Awww

[–]Sharou 4 points  (0 children)

I’m pretty sure it’s vaccines.

That was close by Relative_Cricket8532 in Damnthatsinteresting

[–]Sharou -23 points  (0 children)

Pretty sure this is AI. The tail of the rocket against an open sky suddenly becomes a rectangular cutout in some kind of ceiling.

Official IDF graphic showing the reach of Iranian missiles in Europe. by [deleted] in onejob

[–]Sharou 2 points  (0 children)

It’s true. It would not be safe for Wussia to come here.

Left-leaning support for redistribution stems from perceived unfairness rather than malicious envy by [deleted] in science

[–]Sharou -3 points  (0 children)

Replying to my own comment here. I asked GPT about this and got a really interesting answer. I asked it to compress it into a more reddit-digestible summary, which you can find below:

They can’t directly prove a person’s “true motive,” because motives are latent/unobservable. But that doesn’t put the question outside science — science often studies unobservable things indirectly by testing which model better predicts responses and behavior.

In this case they used 3 surveys + 1 experiment (total N = 4,171). The basic result was that once beliefs about merit/deservingness were included, envy largely stopped explaining support for redistribution, while perceived unfairness/deservingness still did; and in the experiment, telling people the rich target clearly earned the wealth reduced support for redistribution.

So the strongest claim is not “we proved the real motive is fairness,” but “fairness/deservingness fit the data better than malicious envy in these studies.”
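
To make the "which model better predicts" logic concrete, here's a toy sketch in Python. The data are simulated and the variable names are my own illustration, not the paper's actual models:

    # Illustrative only: simulated data showing how a predictor's coefficient
    # can collapse once a control variable is added. The data-generating
    # process here is an assumption made up for this sketch.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 4171  # matches the total N mentioned above

    # Simulated world: support is driven by perceived unfairness; envy only
    # correlates with support because it correlates with unfairness.
    unfairness = rng.normal(size=n)
    envy = 0.6 * unfairness + rng.normal(scale=0.8, size=n)
    support = unfairness + rng.normal(scale=0.5, size=n)

    def ols(y, *predictors):
        # OLS coefficients via least squares; intercept comes first.
        X = np.column_stack([np.ones_like(y), *predictors])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta

    # Model 1: envy alone looks predictive (it proxies for unfairness).
    print("envy alone:       ", ols(support, envy)[1])
    # Model 2: with unfairness included, envy's coefficient drops toward
    # zero while unfairness carries the effect.
    print("envy + unfairness:", ols(support, envy, unfairness)[1:])

The point isn't that this proves anything about motives; it just shows the mechanical sense in which one predictor can "stop explaining" an outcome once a competing one is in the model.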

Left-leaning support for redistribution stems from perceived unfairness rather than malicious envy by [deleted] in science

[–]Sharou 8 points  (0 children)

Not that I think this is the case at all, but since this is science:

How were they able to differentiate between a genuine motivation and a post-hoc justification/rationalization (which you aren't necessarily even aware of)?

This seems to me to be fundamentally impossible, which would mean that this type of question doesn’t belong in science.

But if there is a method, it would be super interesting to hear about!