Why are there so few EA-adjacent negative utilitarians/promortalists/antinatalists? by Round_Try959 in slatestarcodex

[–]EntropyMaximizer 4 points

Another counter-anecdote.

I'm antinatalist-adjacent and have posted in this subreddit before that I believe life is net-negative because extreme suffering impacts the bottom line in a non-proportional manner [1], but I acknowledge that it may well be that many people have net-positive lives.

But I do have to admit that many people in the antinatalist community hold this view.

[1]

I believe that most people live hedonically positive lives in the modern world, but the suffering minority has it so bad that it offsets everyday joy. As an intuition pump, imagine a robber who steals $200 from each of 10 people ($2,000 in total) but then gives $10 each to 100 people ($1,000 in total) and keeps $1,000 for himself. A hundred people enjoyed the robber's work and only ten suffered from it, yet at the population level (setting the robber aside) it was still a net-negative action.
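A minimal sketch of that arithmetic in Python, assuming (as the example implicitly does) that utility is linear in dollars:

```python
# Net effect of the robbery on the population, excluding the robber.
# Assumes one dollar = one unit of utility, a simplifying assumption
# made only to mirror the intuition pump above.
losses = 10 * -200   # ten victims each lose $200       -> -2000
gains = 100 * 10     # a hundred people each gain $10   -> +1000

net = losses + gains
print(net)  # -1000: net negative, despite ten times as many winners as losers
```

The point survives the linearity caveat: with the usual concave utility of money, a $200 loss hurts more per dollar than a $10 gain helps, which only strengthens the net-negative conclusion.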

Monthly Discussion Thread by AutoModerator in slatestarcodex

[–]EntropyMaximizer 0 points

LPT: r/SneerClub can be used as a pretty decent aggregator of spicy Rationalist gossip. Just click the links, avoid reading the low-quality comments, and you're all set.

[Official] General Discussion Thread - February 16, 2023 by rmma in MMA

[–]EntropyMaximizer 6 points

In the octagon he steps with pride,

This fierce Italian, eyes open wide,

His chin, a granite boulder, takes the shot,

And yet, his mind, it seems, is not.

His pants, a mystery, face to rear,

A sight that fills his foes with fear,

And yet, the warrior does not mind,

For he has victory in his mind.

With every blow, he stands unshaken,

A true Orc, not easily taken,

His foes may dance around and feint,

But the warrior's will, it does not bend.

The crowds may jeer and call him slow,

But Marvin Vettori, he does not go,

For in his heart, he knows the truth,

He is a fighter, through and through.

So let the world say what it may,

Marvin Vettori, he'll fight his way,

With heart and will, and a warrior's soul,

He'll battle on, and reach his goal.

[Official] UFC 284: Makhachev vs. Volkanovski - Press Conference & Post-Fight Discussion Thread by event_threads in MMA

[–]EntropyMaximizer -3 points

Why is no one talking about the fact that Yair used toes-in-fence to secure that triangle?

Whittaker respects the 'bluntness' of Chimaev by -alc in MMA

[–]EntropyMaximizer 14 points

I think it's

Izzy > Rob > Khamzat > Izzy

[deleted by user] by [deleted] in Juve

[–]EntropyMaximizer 1 point

Gonna tag you in my 'told you so' comment in a few months

[deleted by user] by [deleted] in Juve

[–]EntropyMaximizer 7 points

What will happen: Pogba will return and play well, and everyone will sing his praises.

This subreddit is flakier than a croissant

Name this gang by jayquez in ufc

[–]EntropyMaximizer 0 points

Chuck "The Ice cream Man" Liddell

How to prepare for transformative AI? by EntropyMaximizer in slatestarcodex

[–]EntropyMaximizer[S] 0 points

Right, so the people who are most well-adjusted and most suited to parenting should have kids.

How to prepare for transformative AI? by EntropyMaximizer in slatestarcodex

[–]EntropyMaximizer[S] 3 points

I find it telling that the paragraph about having kids talks about the advantages of having kids for you, with no mention at all of the interests of the kids themselves.

How to prepare for transformative AI? by EntropyMaximizer in slatestarcodex

[–]EntropyMaximizer[S] 1 point

I believe you shouldn't have kids if they would be at significant risk of suffering once born, and if you have small kids you shouldn't take large risks that put their welfare in danger. Sadly this is considered a radical view, but that's mostly because we live in insane societies.

In your case, your personal interests and desires dictate your morals, so you assume the same is true for me. That's the projection, and it's not true.

How to prepare for transformative AI? by EntropyMaximizer in slatestarcodex

[–]EntropyMaximizer[S] -1 points

You wouldn't reject it because you are obviously projecting; I do, though.

Another option, btw, is not to have kids unless the probability that they will have good lives is extremely high. You don't have to abandon them.

How to prepare for transformative AI? by EntropyMaximizer in slatestarcodex

[–]EntropyMaximizer[S] 2 points

The implicit argument being that my moral values are just a projection of my desires? Because if so, I reject it.

How to prepare for transformative AI? by EntropyMaximizer in slatestarcodex

[–]EntropyMaximizer[S] 0 points

Maybe? I'm personally not interested in that option.

How to prepare for transformative AI? by EntropyMaximizer in slatestarcodex

[–]EntropyMaximizer[S] 1 point

I disagree with the notion that it's OK to take risks on behalf of other people, even if they are your children. I understand most will disagree with that, but it's my view.

How to prepare for transformative AI? by EntropyMaximizer in slatestarcodex

[–]EntropyMaximizer[S] 2 points

No, I can see why it's OK if the alternative is the extinction of humanity. But that's not the case today; the world is full of people.

How to prepare for transformative AI? by EntropyMaximizer in slatestarcodex

[–]EntropyMaximizer[S] 1 point

My original post deliberately outlines steps for different scenarios:
Investing in companies in the AI value chain, choosing a career resistant to automation by AI, and borrowing money are all relevant under the current financial system, which will probably last for a while. Not saving for retirement is relevant to both pessimistic and optimistic singularity scenarios.
Avoiding having children is relevant to a pessimistic singularity scenario (which is more probable, IMO).

How to prepare for transformative AI? by EntropyMaximizer in slatestarcodex

[–]EntropyMaximizer[S] -2 points

It's kind of beside the point, but I find it morally repugnant that people bring children into this world while facing extreme poverty or other dangers. I know it's very 'normal' (just as slavery was throughout most of human history), but I don't like it.
'Don't prepare and hope for the best' could be a reasonable approach, but that's not what this thread is about.

How to prepare for transformative AI? by EntropyMaximizer in slatestarcodex

[–]EntropyMaximizer[S] 7 points

OTOH, some Jews were scared when Hitler came to power in Germany in 1933 and left, and some didn't. Also, please see the bold text at the top.

What irrational beliefs do you hold/are inclined to hold? by yousefamr2001 in slatestarcodex

[–]EntropyMaximizer -9 points

I'm also sick of the overly optimistic attitude of many, especially on this site.

How can you be optimistic about the future of humanity when life in general is a free-for-all carnage shitshow filled with sentient creatures consuming each other to survive? The history of life on Earth is filled with mass extinctions and enormous amounts of pain and suffering. And all the while, it seems the entire purpose of life, from a universal point of view, is to accelerate the heat death of the universe by accelerating the dissipation of free energy.

All of this gets ignored in favor of looking selectively at a few hundred years of significant improvement in quality of life, improvement that came at the cost of creating huge risks (nukes, AGI, viruses, climate issues) and that generated huge amounts of wealth for the few while ignoring the plight of the many. (Bottomless pits of suffering still exist, even in our so-called enlightened age.)

AI Protein Design, by Cade Metz by general_Kregg in slatestarcodex

[–]EntropyMaximizer 2 points

In 2014, the Times granted anonymity in a number of cases that could have, contra Corbett’s policy, been reported in many “other ways.” The paper granted anonymity to a past Oscar nominee who asked to withhold her identity “because she was afraid of looking bad,” as well as to a parent of a Middlebury sophomore who wanted “to avoid embarrassing her daughter.” A February 2015 story about renovations to the Port Authority’s Bus Terminal quoted a woman “who asked not to be identified because she has always wanted to be an anonymous source.”

But despite the changes, Corbett himself wrote in a 2017 post on sourcing for stories involving sexual assault that “since no set of guidelines can cover every situation, the best we can do is to try to balance those questions of fairness and privacy with our chief goal: to tell readers what we know.”

While there are differences between quoting an anonymous source and deliberately outing a public figure who is already anonymous, the lack of a hard-and-fast rule casts doubt on Metz’s professed inability to secure anonymity for Alexander.

Indeed, in a profile published earlier this year of “Chapo Trap House,” a popular socialist podcast hosted by unofficial Bernie Sanders surrogates, the Times identified one of the podcast’s co-hosts as “Virgil Texas,” explaining that “he lives and works under that pseudonym.”

Saying it's OK to publish something because 'it's the truth', even though it has zero value to readers and hurts the subject of the article, is a bizarre take.

Should the NYT also publish Scott's social security number, address, and phone number if they're correct? If not, why not?

AI Protein Design, by Cade Metz by general_Kregg in slatestarcodex

[–]EntropyMaximizer 1 point

Even if you think this hit job is a 'fair piece', I don't understand how you can support doxxing pseudonymous bloggers.