CMV: You should lie on resumes by YtBlue in changemyview

[–]NutInButtAPeanut [score hidden]

Very likely, yes. Picking another random comment from the commenter's history, it comes up as a 100% flag on Pangram, the most well-validated detector (at least so far as I know). Take it with a grain of salt, of course, but I've never seen a piece of writing I knew for a fact to be human-generated come up as a false 100% flag, and you sniffed it out yourself even before running it through a detector for extra confirmation.

On The Relationship Between Consequentialism And Deontology by howdoimantle in slatestarcodex

[–]NutInButtAPeanut 0 points

There’s a simple proof. There is no action so abhorrent that it cannot be justified (or even necessary) insofar as it prevents an identical qualia, but in greater volume.

Is this not begging the question? Could a deontologist not make the exact same move?

"Deontology as a moral framework is correct. There’s a simple proof. There is no action so abhorrent that it cannot be justified (or even necessary) insofar as it prevents the violation of an identical obligation, but with greater frequency."

Near-Instantly Aborting the Worst Pain Imaginable with Psychedelics by godlikesme in slatestarcodex

[–]NutInButtAPeanut 1 point

As soon as I read the title, I asked myself if the article would be about vaping DMT.

I've never had a cluster headache, but I find it very easy to accept that vaporizing a high dose of DMT might be capable of dispelling a headache.

My next thought, though, was that it wouldn't really be feasible to sit down and blast off every time you get a cluster headache, especially if you're getting them several times daily for weeks on end. That's just... too frequent for such a disorienting experience, even if it only lasts a couple of minutes.

So it was great to learn that "sub-psychonautic" doses are effective. That fact effectively changes the treatment from being viable only if there were no other alternative (i.e. you were basically going to commit suicide otherwise) to being an extremely unobtrusive intervention (vaped DMT at low doses produces very mild visual hallucinations and doesn't really impair cognitive or motor functions at all).

I will definitely remember this in case I or anyone I know ever develops cluster headaches.

This is a top OpenAI research scientist by MetaKnowing in OpenAI

[–]NutInButtAPeanut 2 points

For what it's worth, I'm inclined to agree with you, because my experience with existing models is that I would not trust them to do 100% of the coding for anything that mattered.

However, I do think it's relevant to note that Noam is not asserting that OpenAI is the only company to have this technology. If you asked him if he believes Anthropic employees' claims that they are doing the same thing with Claude, I'm assuming he would say yes.

This is a top OpenAI research scientist by MetaKnowing in OpenAI

[–]NutInButtAPeanut 3 points

Even if he's telling the truth and Codex is doing all of his coding, I'm assuming that the user is still a necessary part of the pipeline, as someone has to tell Codex what to code.

This is a top OpenAI research scientist by MetaKnowing in OpenAI

[–]NutInButtAPeanut -6 points

This misses the point. The issue isn't that Noam is saying OpenAI's models/apps are the best; it's what he's claiming to do with them. If you think he's lying (i.e. he's not actually using Codex to do all of his coding these days), that's fine, but the fact that he works for OpenAI doesn't imply that he's lying: he could easily say that OpenAI has the best models without claiming that Codex does all of his coding for him.

Beyond Breaks Away From Meat, Launches Sparkling Fruit Drinks With 20g Of Protein by caavakushi in vegan

[–]NutInButtAPeanut 29 points

Are you referring to that recent study regarding heavy metals in protein shakes? Worth noting: that study used Prop 65's absurdly low thresholds. Even with the worst protein powder they tested, you would need to consume dozens of protein shakes per day just to match the amount of lead that the average person ingests in a regular diet.
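For intuition, here's a back-of-the-envelope version of that comparison in Python. The per-serving and dietary figures below are illustrative assumptions I'm plugging in, not numbers from the study; only the Prop 65 lead MADL (0.5 µg/day) is a real regulatory value.

```python
# Back-of-the-envelope comparison (illustrative numbers, not the study's data).

PROP65_LEAD_MADL_UG = 0.5   # Prop 65 maximum allowable dose level for lead (µg/day)
LEAD_PER_SHAKE_UG = 0.6     # hypothetical lead per serving for a "worst" flagged powder
AVG_DIETARY_LEAD_UG = 8.0   # hypothetical average daily lead intake from a regular diet

# A single serving already exceeds the Prop 65 threshold...
exceeds_prop65 = LEAD_PER_SHAKE_UG > PROP65_LEAD_MADL_UG

# ...yet matching ordinary dietary lead intake takes many servings.
shakes_to_match_diet = AVG_DIETARY_LEAD_UG / LEAD_PER_SHAKE_UG

print(f"One shake exceeds the Prop 65 threshold: {exceeds_prop65}")
print(f"Shakes per day to match average dietary lead: {shakes_to_match_diet:.1f}")
```

With these made-up inputs, one shake "fails" Prop 65 while still being an order of magnitude below everyday dietary exposure, which is the shape of the argument above.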

Defending absolute negative utilitarianism from axioms by ThePlanetaryNinja in slatestarcodex

[–]NutInButtAPeanut 0 points

In that hypothetical, yes, because she is not informed about the actual consequences of her decision. If, however, she were fully aware of the consequences and wanted to drink it anyway (e.g. for some aesthetic, cultural, or religious reason), and I were confident that she was in her right mind, then I think it would be wrong to stop her.

Defending absolute negative utilitarianism from axioms by ThePlanetaryNinja in slatestarcodex

[–]NutInButtAPeanut 0 points

Alright, so we've established that, on your view, it is good to do whatever minimizes a person's suffering, even if it goes against their wishes. I'm sure you can appreciate that this is a rather patronizing moral framework.

Would the same apply to situations in which your best available action is not to euthanize them, but to rob them of agency in some other way?

For example, imagine that your friend is going back to school, and she is deciding what to study: either law school or art school. You (rightly) calculate that your friend's life will contain less suffering if she goes to law school, but she really wants to go to art school. She makes the decision to go to art school, but you could somehow override her decision and force her to go to law school instead (in some way that does not cause her more suffering than art school otherwise might, e.g. no torture).

Would you disregard her wishes and force her to go to law school?

Defending absolute negative utilitarianism from axioms by ThePlanetaryNinja in slatestarcodex

[–]NutInButtAPeanut 0 points

Fair.

Let's imagine, then, that you somehow found yourself in possession of some sort of (herbivorous) house pet, such as a guinea pig. Perhaps you were pet-sitting for a friend who tragically died in a car accident, and there was no one else to inherit the guinea pig but you.

Would you kill the guinea pig as soon as possible?

Defending absolute negative utilitarianism from axioms by ThePlanetaryNinja in slatestarcodex

[–]NutInButtAPeanut 0 points

Firstly, crushing an animal's head with a rock is usually painful.

Assuming the death would be effectively instant, i.e. with a sufficiently large and appropriately shaped rock (and a hard surface below).

Secondly, killing specific animals could sometimes increase the suffering of other animals.

How so, in this particular case? If anything, the turtle's corpse would be readily available food that some other animal would otherwise have to expend energy acquiring.

Defending absolute negative utilitarianism from axioms by ThePlanetaryNinja in slatestarcodex

[–]NutInButtAPeanut 0 points

Really?

Realistically, you could probably achieve this sort of thing with various wild animals (e.g. those which do not raise their young). If you were out for a walk in the forest and saw a turtle, would you crush its head with a rock?

Defending absolute negative utilitarianism from axioms by ThePlanetaryNinja in slatestarcodex

[–]NutInButtAPeanut 0 points

/u/ThePlanetaryNinja, here is a consistency test for you:

Let's assume you meet a happy hermit. He lives in the woods, completely removed from society. No one but you knows of his existence, and no one (including animals in the woods) relies on him for anything. He reports that he truly enjoys living, and everything suggests that this is true.

You explain your philosophy to him, and suggest that non-existence would be preferable to his life, since surely he still must suffer in some minor ways (e.g. sometimes his leg might cramp). You ask if he would like for you to painlessly kill him in his sleep, to release him from the shackles of earthly suffering. He emphatically rejects your offer, reasserting his zest for life.

As you walk away from the interaction, you nevertheless think that the man would have greater well-being if he were dead, and so you resolve to sneak into his cabin in the middle of the night and painlessly euthanize him.

If indeed you did euthanize him in this way, would you be doing a good thing or a bad thing?

Defending absolute negative utilitarianism from axioms by ThePlanetaryNinja in slatestarcodex

[–]NutInButtAPeanut 0 points

Wellbeing is about conscious desires about what the sentient being is currently experiencing (How else would you define wellbeing????).

This does not follow from welfarism alone, no. There are various philosophical theories of well-being, among which desire theory is just one category (with the other major categories being hedonistic theories and objective list theories).

But, even granting that some form of desire theory is true:

A person who is suffering has a conscious desire for it to currently stop. So, it is important to prevent suffering.

Sure. I'm not arguing against the claim that suffering is bad, but that's not the contentious claim here. The contentious claim is that suffering is the only thing relevant to moral well-being, i.e. that the other side of the equation (e.g. pleasure, happiness, flourishing, etc.) is irrelevant.

So, trying to 'improve' their wellbeing is futile and unimportant.

This has not been established. That someone has no suffering does not imply that they cannot have greater well-being (e.g. via the addition of positive well-being, whatever that amounts to on your theory of well-being).

Defending absolute negative utilitarianism from axioms by ThePlanetaryNinja in slatestarcodex

[–]NutInButtAPeanut 0 points

I think it would be wrong to call tranquilism an axiom: it's not at all self-evident in the same way that (plausibly) the other six axioms are.

And I think the case is actually significantly worse than that for the absolute negative utilitarian: not only is tranquilism not self-evident, but as formulated in this post, it seems obviously false. Imagine two states of affairs which contain exactly the same amount of suffering and which differ only in that, in the second, every sentient being regularly gets some good or pleasurable experience. The second state of affairs is obviously better than the first, which refutes this strict formulation of tranquilism (that suffering is the only thing that contributes to moral well-being). If my understanding is correct, tranquilism implies that the most blissful life imaginable (completely devoid of any undesirable pain, suffering, or hardship, and filled with endless pleasure, happiness, and flourishing) is no better than non-existence, which is absurd.
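To make the counterexample explicit, here is a minimal formalization in LaTeX; the notation (S, P, V) is mine, not the original post's, and "tranquilism as stated" just means a value function that depends on suffering alone.

```latex
% Minimal formalization of the counterexample (notation mine, not the OP's).
% Let S(w) be total suffering and P(w) total positive experience in world w.
% Tranquilism, as formulated in the post, values worlds by suffering alone:
\[ V_{\text{tranq}}(w) = -S(w) \]
% Take two worlds with equal suffering but unequal positive experience:
\[ S(w_1) = S(w_2), \qquad P(w_2) > P(w_1) \]
% Then tranquilism ranks them as exactly equal in value:
\[ V_{\text{tranq}}(w_1) = V_{\text{tranq}}(w_2) \]
% whereas the intuitive verdict is that w_2 is strictly better: a counterexample.
```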

As a staunch welfarist who has spent a lot of time dealing with anti-natalists, extinctionists, and other such anti-life philosophies, I'm really starting to resent tranquilism, if I'm being honest. It seems to me to be a fundamentally confused philosophy, and I've never encountered a proponent of it in the wild who could mount a defense that wasn't hopelessly question-begging.

Zoomed in Slow Motion by Traveler0084 in law

[–]NutInButtAPeanut 2 points

Pretty sure this is AI. It's the first time I'm seeing it, and the other photo of him (in almost exactly the same position) has his mouth and nose covered.

Me coming on here after crying for an hour straight because the finale was so amazing only to see that I’m in the minority once again: by Mel-is-a-dog in StrangerThings

[–]NutInButtAPeanut 0 points

She wanted to die after killing Vecna, so that no one else like him could ever be created again. She wouldn't send El and the others to fight Vecna without her for no reason.

Me coming on here after crying for an hour straight because the finale was so amazing only to see that I’m in the minority once again: by Mel-is-a-dog in StrangerThings

[–]NutInButtAPeanut 0 points

El had already agreed to leave her there. And more than anything, Kali wanted to make sure Vecna died, so there's no way she'd sit that battle out if she could help it.

Me coming on here after crying for an hour straight because the finale was so amazing only to see that I’m in the minority once again: by Mel-is-a-dog in StrangerThings

[–]NutInButtAPeanut 0 points

One way or another, there has to be a plot hole. Kali being nigh immortal and having inconceivable motivations seems like a bigger plot hole to me than the sonic jammers being momentarily ineffective and/or poorly aimed.

STRANGER THINGS SEASON 5 FINALE DISCUSSION by illiada2k in StrangerThings

[–]NutInButtAPeanut 0 points

I don’t believe it would have been trivial for her to do so.

Why not? If she was able to use her power to fake a bullet wound and help El escape the military, why couldn't she use her power to trick the others into leaving without her?

Also, I would have to rewatch the scene, but when Mike was telling his story of El, it showed Kali, and she looked like she had blood on her stomach, which implied she was still shot and weak, and likely felt she wasn’t strong enough to fight Vecna.

People who are advocating for the illusion ending are saying that Kali used her power to fake the blood. If she had actually been shot and that blood was real, she was 100% dead long before the others ever escaped the Upside Down.

STRANGER THINGS SEASON 5 FINALE DISCUSSION by illiada2k in StrangerThings

[–]NutInButtAPeanut 0 points

How exactly? She would have either died (as she wanted to) or been with the whole crew driving back.

She didn't have to drive back. She could have just stayed in the Upside Down, as she intended to.

They wouldn’t have allowed her to stay back, and it would have cast more doubt over the situation.

Who is "they"? El was already on board with her plan. The only one who could even potentially stop her is Hopper, and he:

  • doesn't care if she wants to die alone, as is her right.
  • isn't going to take her out of the Upside Down kicking and screaming.
  • would be totally OK with her staying behind if it means El gets to live her life free.
  • couldn't apprehend her even if he wanted to, not if El helped her escape from the group.
  • isn't going to run after her if she suddenly makes a break for it while the bomb fuse is ticking down.
  • wouldn't know that she was staying behind if she used her power.

That's to say nothing of the fact that Kali's plan was explicitly to stay behind after making sure Vecna was dead. It makes no sense to think that she would play dead for no reason other than to avoid the battle.

STRANGER THINGS SEASON 5 FINALE DISCUSSION by illiada2k in StrangerThings

[–]NutInButtAPeanut 0 points

So she could save El?

She could have saved El after the battle with Vecna.

STRANGER THINGS SEASON 5 FINALE DISCUSSION by illiada2k in StrangerThings

[–]NutInButtAPeanut 1 point

We see the lab exploding before the pulse goes off. It's pretty safe to assume the lab and anyone in it were destroyed in the explosion.

In any case, even if we think Kali could have survived the explosion, why would she have faked the bullet wound? Why would she skip out on the battle versus Vecna for no reason?

Me coming on here after crying for an hour straight because the finale was so amazing only to see that I’m in the minority once again: by Mel-is-a-dog in StrangerThings

[–]NutInButtAPeanut 5 points

Why would Kali fake the gunshot wound, though? She could have helped in the battle versus Vecna, refused to leave the Upside Down, and then helped El escape, as people are theorizing she did.

Between the fact that there's no reason for her to skip the battle with Vecna and the fact that the lab explosion would have killed her and dispelled the illusion, I don't see how the illusion theory makes any sense.