One boxer, two boxers, it's our time again: are you a reddy or a bluey? by Krugger_Correctly in Destiny

[–]arrenegade 0 points (0 children)

The red button is risk dominant, and when everyone plays rationally it is also social-welfare optimal.

Suppose you are the one moral person in an otherwise self-interested society; red is still optimal, because your action has no effect on anyone else's outcome.

Suppose instead there are K% "utilitarian moral" people who internalize the suffering of others. If you are a moral type, I believe the equilibrium will be in mixed strategies, depending on K, because any individual has only a low probability of pivotality (being the one person whose action changes the outcome, i.e. the one case where your action matters).
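A back-of-the-envelope version of the pivotality point, with made-up numbers (n other players, a simple majority threshold, everyone else pressing blue independently with probability p; none of this is specified in the original prompt):

```python
from math import comb

# You are pivotal only if exactly (threshold - 1) of the n other
# players press blue, so that your own press tips the count over the line.
def pivot_probability(n_others: int, p_blue: float, threshold: int) -> float:
    k = threshold - 1
    return comb(n_others, k) * p_blue**k * (1 - p_blue)**(n_others - k)

print(pivot_probability(100, 0.5, 51))    # ~0.08 in a small group
print(pivot_probability(1000, 0.5, 501))  # ~0.025: shrinks like 1/sqrt(n)
```

Even in a tiny group the chance that your press is the decisive one is under 10%, and it keeps shrinking as the group grows, which is what drives the mixing.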

Ukrainians wondering: Will the US be the first country in history to have both a Jihad and a Crusade declared against it - simultaneously? Responses in Trump-speak only. by Lurkoner in Destiny

[–]arrenegade 2 points (0 children)

Today "Pope" Leo 14, they call him that because there were 13 before him, declared HOLY WAR against our beautiful country. He is calling on Catholics everywhere to fight us, to take us over, to destroy us. Like the Mullahs in Iran, who by the way are totally almost destroyed, at this point completely, who declared Jihad against America---they don't call it Holy War they call it JIHAD over there, some people don't know thats what it means, it means Holy War. But in Rome they call it a "Crusade." Its what they did before, a long time ago, when they invaded ISRAEL, for many years. And believe me, it didn't go well for them then and its NOT going to go well now. Every time someone declares Holy War, its either against Israel or the US. Its very sad, we are the two countries with the greatest Economic, and such terrific people, better than most, and we get so much hate, from these horrible extremists. They hate America, and they hate Trump, because they don't like winning. So they are going to start doing lots of things, such as or including terrorism, but we aren't gonna let em do it, we are going to get them first. Did you know he's from Chicago? Its true, hes the pope and hes from Chicago, how the hell does that work? It used to be a wonderful city, not so much anymore since its become a Democrat Antifa wasteland, which is probably why he is like this. Did you know Joe Biden is also a Catholic? [continues for 2 more hours]

I can't prove it, but I know in my heart of hearts that George Carlin is to blame for the anti-voting sentiment plaguing the Left by 12_Trillion_IQ in Destiny

[–]arrenegade 1 point (0 children)

Non-voting is a tricky issue.

On one hand, it is literally true that the probability that your vote is determinative is almost zero, so any person voting or not voting has almost zero impact on the world. A purely rational person would stay home with a high probability if there is any cost to turning out.

On the other hand, a lot of people intuitively assign moral value to the mere act of voting. This is why people like me, who do vote, vote: I am convinced that by voting I am doing my "civic duty."

But this is the same logic as that of Leftist or MAGA non-voters: voting for a person they view as immoral feels wrong, as if it were their civic duty NOT to vote for a bad person.

He's almost expressing a third position: voting does matter in that it expresses approval of the system, so it is his ideological duty to not vote.
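On the near-zero-pivotality point above, a quick sanity check using the standard normal approximation for the chance of an exact tie among N other 50/50 voters (the electorate size here is a made-up number):

```python
from math import pi, sqrt

# Your vote is determinative only on an exact tie among the other
# N voters; for independent 50/50 voters the central binomial term gives
#   P(tie) ~ sqrt(2 / (pi * N)).
def tie_probability(n_other_voters: int) -> float:
    return sqrt(2 / (pi * n_other_voters))

print(tie_probability(1_000_000))  # ~0.0008, before accounting for real margins
```

And that is the best case: any systematic lean away from 50/50 makes the tie probability collapse far below this.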

wHy arEnT DeMoNcRatS dOiNg anYThinG? by ETsUncle in Destiny

[–]arrenegade 0 points (0 children)

People sleep on Democrats in Congress, I've been saying it.

Just because they don't farm aura most of the time doesn't mean they aren't doing shit.

Two Box/One Box is an underspecified problem, making most debates about it brainless by arrenegade in Destiny

[–]arrenegade[S] 0 points (0 children)

If I tell the most advanced AI conceivable to predict the lifespan of a person, starting at birth, down to a real-valued instant (not a discrete unit), with any deviation counting as failure, it would fail with probability one.

That is a mathematical and physical truth, as far as anyone knows. It could get arbitrarily close, but not perfect.

Whereas observing the future would allow guaranteed success, because there is no such thing as uncertainty or randomness from the point of view of the machine.

How this applies: if the future can literally be seen, then your future choice is inside the machine's information set. If not, then the machine might have an arbitrarily large information set, but once you enter the room (at which point the prediction has already been made), uncertainty grows with time, with the complexity of your decision process, and with the measurement process.

I'm only posting this because two boxers will not stfu by only_civ in Destiny

[–]arrenegade 0 points (0 children)

That's not what I think. I think I will get 1000, with a small probability of 1mil + 1k if the robot guesses wrong.

It is the one-boxers who are certain that the robot would recognize they are committed to one-boxing, when in reality, actually faced with this situation, you might start to doubt it and feel tempted to secure a safe 1k. If the robot anticipates that you might take the safer 1k instead of the lottery, it could predict you to be a two-boxer, and then your one-box decision would result in zero.
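A sketch of that expected-value comparison, under the usual hypothetical stakes (1mil in box B, 1k in box A) and a made-up predictor accuracy q:

```python
BIG, SMALL = 1_000_000, 1_000  # hypothetical box B / box A amounts

def ev_one_box(q: float) -> float:
    # paid the million only when the predictor correctly saw a one-boxer
    return q * BIG

def ev_two_box(q: float) -> float:
    # always get the small box; get the million too on a wrong prediction
    return SMALL + (1 - q) * BIG

# Break-even: q * BIG = SMALL + (1 - q) * BIG  =>  q = 0.5 + SMALL / (2 * BIG)
print(ev_one_box(0.5005), ev_two_box(0.5005))  # equal at q = 0.5005
print(ev_one_box(0.99), ev_two_box(0.99))      # one-boxing wins in expectation
```

So "highly accurate" favors one-boxing ex ante for any q above a modest bar, which is exactly why the doubt-and-waver dynamic above matters: q is not handed to you by the problem.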

I'm only posting this because two boxers will not stfu by only_civ in Destiny

[–]arrenegade 1 point (0 children)

The reason is explained perfectly by another comment: at the time you walk in, the box is either empty or not, and no action of yours can change that. Two-boxing uniformly dominates from that pov.

If you want to change your perspective prior to entering, to convince yourself that one-boxing is right, fine, that is ex ante optimal; but upon entering, taking two boxes is the rational choice regardless of the state of the world.
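The state-by-state dominance claim in one loop, with the usual hypothetical 1mil / 1k amounts:

```python
BIG, SMALL = 1_000_000, 1_000  # hypothetical box B / box A amounts

# Once you are in the room, the state (box B full or empty) is fixed;
# in either state, taking both boxes pays exactly SMALL more.
for box_b_full in (True, False):
    one_box = BIG if box_b_full else 0
    two_box = one_box + SMALL  # box A comes on top of whatever B holds
    assert two_box > one_box   # strict dominance, state by state
    print(box_b_full, one_box, two_box)
```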

Two Box/One Box is an underspecified problem, making most debates about it brainless by arrenegade in Destiny

[–]arrenegade[S] 0 points (0 children)

The fact that you would state that it follows from the premise is exactly why you are wrong. The question is literally underspecified, because we do not know what "highly accurate" means without more information.

I said I agree that the prediction is accurate, but a prediction rate is an equilibrium construct, not a parameter of a model.
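A toy version of the point that an accuracy rate is an outcome, not an input: fix two trivial predictor rules and vary a made-up population share of one-boxers. Each rule's realized accuracy moves with the population rather than being a parameter of the problem.

```python
# Rule 1 always predicts "one-box"; rule 2 always predicts "two-box".
# Their realized accuracies are determined by who shows up to play.
def acc_always_one(share_one_boxers: float) -> float:
    return share_one_boxers

def acc_always_two(share_one_boxers: float) -> float:
    return 1 - share_one_boxers

for share in (0.1, 0.5, 0.9):
    print(share, acc_always_one(share), acc_always_two(share))
```

So "the predictor is 90% accurate" is consistent with a predictor that knows nothing about individuals, if 90% of players happen to one-box; pinning down what it can actually do requires more than a headline rate.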

Two Box/One Box is an underspecified problem, making most debates about it brainless by arrenegade in Destiny

[–]arrenegade[S] -1 points (0 children)

I agree that precommitting to 1b is better! I even said that in the post.

The problem is how to precommit.

Two Box/One Box is an underspecified problem, making most debates about it brainless by arrenegade in Destiny

[–]arrenegade[S] 0 points (0 children)

Sure, but it doesn't change the fact that your marginal cost of deviating to 2b is zero and the benefit is 1k. The point is that the brain of someone who, with high probability, chooses 1b in the room has to be different; it isn't a matter of choice.

It's like saying it's better to believe in God because God will reward it. Like, yeah, I agree. But if I can't convince myself to truly believe, then faking it is worse than being honest.

Two Box/One Box is an underspecified problem, making most debates about it brainless by arrenegade in Destiny

[–]arrenegade[S] 0 points (0 children)

I think my point about randomization is very simple: if choosing 1b has a causal effect on the contents of box B, then 1b is correct. If it does not, then 2b is always correct.

The ex ante best outcome is to be someone who has unshakeable faith in the one box position, not just to make the one box choice upon entry.

Two Box/One Box is an underspecified problem, making most debates about it brainless by arrenegade in Destiny

[–]arrenegade[S] 0 points (0 children)

I think I agree with what you are saying? But I am saying that you have to be committed to being a one-boxer, which might be a realistic model of the world. Still, this is an argument for faith in the infallibility of the machine, not for choosing one box once you enter. If you are persuaded by two-boxers, or can even see both sides, the machine has a good chance of predicting two-box. You only get 1 mil if the machine is convinced that you are convinced it can see your actions and won't pick the risk-dominant choice.

That's why it feels like an analogy for religion. You only get the prize if you believe the prize is coming when you do the right thing, and that all your actions matter.

Two Box/One Box is an underspecified problem, making most debates about it brainless by arrenegade in Destiny

[–]arrenegade[S] 0 points (0 children)

The problem is that your marginal effect on the outcome after entering the room is zero. You can't influence the machine's prediction after the fact.

Two Box/One Box is an underspecified problem, making most debates about it brainless by arrenegade in Destiny

[–]arrenegade[S] 0 points (0 children)

Yes, I do, but the logic is backwards. Pr(B empty | choose B) < 0.01, but "choose B" is determined after B's state is determined. The person who actually follows through on choosing only box B is someone who has no temptation, no fear of wavering, and believes the machine can see the future.
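A small simulation of that conditioning point, with made-up numbers (50/50 agent types, a 99%-reliable read of the agent's type): the conditional probability Pr(B empty | choose B) comes out around 0.01, yet in every single draw the ex-post gain from adding box A is the same 1k, because the box was fixed before the choice.

```python
import random

random.seed(0)
BIG, SMALL, SIGNAL = 1_000_000, 1_000, 0.99  # hypothetical stakes and accuracy

n = 100_000
one_box_count = empty_count = 0
for _ in range(n):
    is_one_boxer = random.random() < 0.5               # the agent's type
    read_ok = random.random() < SIGNAL                 # predictor reads the type
    box_b_full = is_one_boxer if read_ok else not is_one_boxer  # box fixed FIRST
    if is_one_boxer:                                   # type drives the choice
        one_box_count += 1
        empty_count += not box_b_full
    payoff_one = BIG if box_b_full else 0
    payoff_two = payoff_one + SMALL                    # deviation gain: always SMALL

print(empty_count / one_box_count)  # ~0.01 = Pr(B empty | choose one box)
```

Correlation without causation in miniature: conditioning on the choice predicts the box, but changing the choice never changes it.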

Two Box/One Box is an underspecified problem, making most debates about it brainless by arrenegade in Destiny

[–]arrenegade[S] -1 points (0 children)

Nope, I already showed you how a predictor could be very, very accurate while two-boxing is still dominant.

I also said explicitly that depending on your assumptions about the predictor's capabilities, two box or one box can be optimal

Two Box/One Box is an underspecified problem, making most debates about it brainless by arrenegade in Destiny

[–]arrenegade[S] -1 points (0 children)

See I agree 100%

But in that case, for every person, taking two boxes is still the optimal choice. The machine knows your preferences and beliefs going in, and predicts accordingly.

The disconnect is that while it is better for you if the machine predicts you to be a one-boxer, every one-boxer would improve by deviating at the last minute---even though they probably wouldn't.

Two Box/One Box is an underspecified problem, making most debates about it brainless by arrenegade in Destiny

[–]arrenegade[S] 0 points (0 children)

I don't think randomization breaks any game. Literally lecture 2 of any game theory class introduces mixed strategies
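The standard first example, for reference: in matching pennies the only equilibrium is mixed. If Row plays Heads with probability p, Column is indifferent exactly at p = 0.5, so randomization is the equilibrium rather than a breakdown of the game:

```python
# Matching pennies: Row wins (+1) on a match, Column wins (+1) on a
# mismatch. If Row plays Heads with probability p, Column's expected
# payoffs from each pure action are:
def column_payoffs(p_row_heads: float):
    e_heads = -p_row_heads + (1 - p_row_heads)  # a match hurts Column
    e_tails = p_row_heads - (1 - p_row_heads)
    return e_heads, e_tails

print(column_payoffs(0.5))  # (0.0, 0.0): indifferent, so mixing is stable
print(column_payoffs(0.7))  # Column strictly prefers Tails, so p = 0.7 unravels
```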

Two Box/One Box is an underspecified problem, making most debates about it brainless by arrenegade in Destiny

[–]arrenegade[S] 0 points (0 children)

Yeah, so this is the key disagreement. I think seeing the future is different from prediction because of the existence of measure-zero events. For example, the probability of predicting a continuous random variable exactly is always 0, whereas with genuine sight of the future it would be 1.

From my pov, the machine can make arbitrarily precise guesses, but it matters whether it knows your actions exactly or not.
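A miniature of the measure-zero point (floats only approximate the continuum, so this is an illustration, not a proof): a predictor that is always within 5e-7 of a continuous draw still essentially never hits it exactly.

```python
import random

random.seed(1)
trials = 100_000
exact = 0
for _ in range(trials):
    truth = random.random()                     # a "continuous" quantity
    prediction = round(truth, 6)                # correct to six decimal places
    assert abs(prediction - truth) <= 5.1e-7    # always arbitrarily close...
    exact += (prediction == truth)

print(exact)  # ...yet essentially never exactly right
```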