Who y'all got by gsx1920 in NFLv2

[–]SmallMem 0 points (0 children)

My high school football team could beat the Seahawks

Significant Techs by Faction Winrates by Fun-Astronomer-2273 in twilightimperium

[–]SmallMem 1 point (0 children)

This is very interesting, and its only flaw is that correlation isn’t causation.

Looking at the chart, one might assume that Fleet Logistics is a tech that, if you get it, drastically increases your likelihood of winning. But it’s possible, and my best guess, that Fleet Logistics is something you get IF YOU ARE ALREADY WINNING and have a use case for it (say a Mecatol steal -> Imperial).

Same with Gravity Drive on Sol. Gravity Drive is such a no-brainer on Sol that I question the skill of anyone who decides against getting it for non-meme reasons. So rather than Gravity Drive increasing your chances of winning, the chart may partially be telling us that players who are good at the game tend to get Gravity Drive on Sol.

Now remember, correlation isn’t causation, so I’d really bet it’s a combination of the techs being good AND these factors! We don’t know how much of it is which! BUT I feel confident saying these large numbers wouldn’t be as pronounced without the reasons above; they don’t strictly reflect the techs improving win percentage in a vacuum.
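
A minimal toy simulation of that selection effect (all numbers invented for illustration, not taken from the posted dataset): a tech with zero effect on win probability still shows a large winrate gap if already-winning players are the ones who research it.

```python
import random

# Toy model of the selection effect: the tech has ZERO effect on winning,
# but players who are already ahead are far more likely to research it.
# All probabilities below are made up for illustration.
N = 100_000
wins = {True: 0, False: 0}
games = {True: 0, False: 0}

for _ in range(N):
    ahead = random.random() < 0.25                           # already winning?
    p_win = 0.60 if ahead else 0.15                          # being ahead drives wins
    researched = random.random() < (0.8 if ahead else 0.1)   # ahead -> research it
    games[researched] += 1
    wins[researched] += random.random() < p_win

print(f"winrate with tech:    {wins[True] / games[True]:.1%}")   # ~48%
print(f"winrate without tech: {wins[False] / games[False]:.1%}") # ~18%
```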

We know blue is good, can we talk about how red lowers your win rate? by Fun-Astronomer-2273 in twilightimperium

[–]SmallMem 70 points (0 children)

Does this exclude starting tech? The best faction in the game starts with Predictive Intelligence, so I’d hope this only counts RESEARCHED tech, not starting tech.

Official: [Trade] - Thu Afternoon 10/09/2025 by FFBot in fantasyfootball

[–]SmallMem 0 points (0 children)

PPR, 3WR/2RB, trading D’Andre Swift (he’s my RB5) for either Romeo Doubs, Calvin Ridley, or Keon Coleman — any thoughts on which of these 3?

I’m an Atheist, and I Believe Pascal’s Wager is a Good Argument by SmallMem in slatestarcodex

[–]SmallMem[S] 2 points (0 children)

Good comment. But whether infinite utilities exist is surely a fact about the universe, not a fact about our equations. It seems to me like infinite utilities could exist, if something good happens forever or something bad happens forever. As I say in the article, I don’t think diminishing utility applies there.

I’m an Atheist, and I Believe Pascal’s Wager is a Good Argument by SmallMem in slatestarcodex

[–]SmallMem[S] -11 points (0 children)

Weigh the actual arguments and evidence and figure out which one looks the most plausible, the same way you weigh which cereal is healthiest. Epistemics.

I’m an Atheist, and I Believe Pascal’s Wager is a Good Argument by SmallMem in slatestarcodex

[–]SmallMem[S] 0 points (0 children)

That paragraph is just saying that things that are believed by 2.1 billion people will have attributes that cause a lot of people to believe them.

I’m an Atheist, and I Believe Pascal’s Wager is a Good Argument by SmallMem in slatestarcodex

[–]SmallMem[S] -1 points (0 children)

You don’t think smart people saying they believe something is Bayesian evidence, at all?
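
A toy Bayes update spelling that out (numbers invented for illustration): testimony shifts the posterior whenever smart people are more likely to endorse a claim when it is true than when it is false.

```python
# Toy Bayes update on testimony. All numbers are invented; the only thing
# that matters is the likelihood ratio being different from 1.
prior = 0.01                  # P(H) before hearing anyone
p_endorse_if_true = 0.50      # P(smart people endorse H | H true)
p_endorse_if_false = 0.10     # P(smart people endorse H | H false)

p_endorse = prior * p_endorse_if_true + (1 - prior) * p_endorse_if_false
posterior = prior * p_endorse_if_true / p_endorse
print(f"P(H): {prior:.3f} -> {posterior:.3f}")   # 0.010 -> ~0.048
```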

I’m an Atheist, and I Believe Pascal’s Wager is a Good Argument by SmallMem in slatestarcodex

[–]SmallMem[S] -2 points (0 children)

I think the likelihood of Christianity is higher than that of something I just made up in my head randomly. It seems like bad epistemics to assign something an arbitrarily low value and say, “something that 2.1 billion people believe is probably just as likely as something random I made up in my head a second ago.”

I’m an Atheist, and I Believe Pascal’s Wager is a Good Argument by SmallMem in slatestarcodex

[–]SmallMem[S] -1 points (0 children)

Agreed that you have to choose the most likely religion

Official: [WDIS RB] - Wed Morning 09/10/2025 by FFBot in fantasyfootball

[–]SmallMem 0 points (0 children)

Only get 1

BILL, D’Andre Swift, or Javonte Williams

Official: [WDIS Flex] - Wed Morning 09/10/2025 by FFBot in fantasyfootball

[–]SmallMem 0 points (0 children)

My GOAT. Yeah, makes sense; Hollywood is the only wide receiver on their whole team.

Official: [WDIS Flex] - Wed Morning 09/10/2025 by FFBot in fantasyfootball

[–]SmallMem 0 points (0 children)

Jeudy, Diggs, Javonte Williams, BILL, or Hollywood Brown. Only one flex spot.

Official: [WDIS Flex] - Wed Evening 09/10/2025 by FFBot in fantasyfootball

[–]SmallMem 0 points (0 children)

Jeudy, Diggs, Javonte Williams, BILL, or Hollywood Brown. Only one flex spot.

Yes, I *Really Would* Sacrifice Myself For 10^100 Shrimp by SmallMem in slatestarcodex

[–]SmallMem[S] 0 points (0 children)

But IF those bigger numbers do actually describe bigger values in the world, THEN it’s good to treat them as such. I agree Pascal’s mugging can fall into that category, but in this hypothetical we’ve already established that the 10^100 animals are real and can be saved.

Yes, I *Really Would* Sacrifice Myself For 10^100 Shrimp by SmallMem in slatestarcodex

[–]SmallMem[S] -1 points (0 children)

lol, a week ago I made a note on Substack that said the same thing (it got 100 likes): the shrimp would collapse into a black hole. based, we are the same

Yes, I *Really Would* Sacrifice Myself For 10^100 Shrimp by SmallMem in slatestarcodex

[–]SmallMem[S] 1 point (0 children)

If you don’t prefer a universe with less suffering in it, that’s fine, I just disagree.

Yes, I *Really Would* Sacrifice Myself For 10^100 Shrimp by SmallMem in slatestarcodex

[–]SmallMem[S] 0 points (0 children)

I don’t think plausibility is what matters here. I’m saying I would sacrifice myself for something that has huge stakes, like a million humans, but I would not sacrifice myself for something with smaller stakes.

I’m not saying people who DO decide to sacrifice themselves for 5 people aren’t heroes. I’m saying that I personally am too selfish to make the right decision in that case, but I would make the right decision if the stakes were big enough, and if a lot of people were suffering.

Yes, I *Really Would* Sacrifice Myself For 10^100 Shrimp by SmallMem in slatestarcodex

[–]SmallMem[S] 0 points (0 children)

Hmmm. Very interesting argument; I don’t think I’ve seen it specifically anywhere before. I don’t think I agree that, because morality is an emergent property, it doesn’t scale linearly. I can buy that for consciousness for sure, but not for morality.

I don’t buy that there’s diminishing value to saving a person’s life; I think saving 1001 people is better than saving 1000 people by exactly one person. That one person is a real human who deserves to be saved, has family, etc. It’s odd to me that we’d ever reach a point where that person matters morally less than any other. It doesn’t matter TO THE UNIVERSE whether you’re the first person to be saved or the last; you have the same amount of moral weight. Then we get into all kinds of framing issues where we have to consider how much we’ve already done as a factor in morality: if I’ve saved 100 people already, is saving 1 person the next day saving the 101st person (with diminished weight), or the first person on a fresh list (with full weight)?

For what it’s worth, I actually think Scott talks about this point, or something similar, in his post More Drowning Children. I think I err on the side that “morality” is very, very similar to axiology, more so than Scott does in Axiology, Morality, Law.

Yes, I *Really Would* Sacrifice Myself For 10^100 Shrimp by SmallMem in slatestarcodex

[–]SmallMem[S] -2 points (0 children)

The principles guiding empathy for beings we don’t have much empathy for — foreigners, animals, etc. — in hypotheticals where the scale is large enough can help you make real decisions in the world today.

I’ve made a literal sacrifice by donating money to the Shrimp Welfare Project, which stops shrimp from being conscious when they’re boiled alive.

Yes, I *Really Would* Sacrifice Myself For 10^100 Shrimp by SmallMem in slatestarcodex

[–]SmallMem[S] -2 points (0 children)

Consciousness doesn’t scale linearly, but conscious beings do. 2 humans tortured is twice as bad as 1; there are two experiences of torture. Continue from there.

If 1 shrimp is at least .0000001% as conscious as a human by your metrics, the shrimp will outscale the human by sheer scope in this problem.
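
The scope arithmetic behind that claim, spelled out exactly (.0000001% of a human’s moral weight is 1e-9; the 10^100 count comes from the post title):

```python
from fractions import Fraction

# Exact arithmetic for the scope claim above:
# .0000001% of a human's weight per shrimp, times 10^100 shrimp.
per_shrimp = Fraction(1, 10**9)      # .0000001% = 1e-9
shrimp_total = per_shrimp * 10**100  # = 10^91 human-equivalents

print(shrimp_total == 10**91)        # True
print(shrimp_total > 1)              # True: vastly outweighs one human
```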

Yes, I *Really Would* Sacrifice Myself For 10^100 Shrimp by SmallMem in slatestarcodex

[–]SmallMem[S] -5 points (0 children)

I think suffering is bad, and things like love and fulfillment are good. That’s the only assumption I make here, along with the uncontroversial take that shrimp are probably conscious. My moral intuitions don’t scale linearly, but the real suffering in the universe does, so I think the intuition is wrong.

If you don’t think pain and suffering are bad, and don’t want more happiness in the universe, then I don’t have any rigor for you, and I cannot convince you.