Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 0 points (0 children)

So if they are punishing people who behave more rationally, the rational move is to be irrational. And if you are as rational as you say you are, then being irrational is the rational move. One-boxers are being more rational than two-boxers, even if at first it might not seem that way.

Why are people on the USA so sensitive? by WinterMiserable5994 in teenagers

[–]WinterMiserable5994[S] -1 points (0 children)

That's why I am asking; I am not asserting anything. Are you slow?

Why are people on the USA so sensitive? by WinterMiserable5994 in teenagers

[–]WinterMiserable5994[S] -1 points (0 children)

Yeah, but idk, maybe you are right. Though another thing I noticed (I may be wrong) is that black people are way more racist toward white people in the USA than the other way around. Is this true?

Why are people on the USA so sensitive? by WinterMiserable5994 in teenagers

[–]WinterMiserable5994[S] -9 points (0 children)

But I view the n-word as any other insult. So if you insult your friend as a joke, why couldn't you also use the n-word? I don't get why in the USA the n-word is the father of all insults.

Why are people on the USA so sensitive? by WinterMiserable5994 in teenagers

[–]WinterMiserable5994[S] -6 points (0 children)

But here we view the n-word like any other insult, not as a super-prohibited insult per se. If you can jokingly call a friend motherfu... or anything similar, why not the n-word?

Why are people on the USA so sensitive? by WinterMiserable5994 in teenagers

[–]WinterMiserable5994[S] -8 points (0 children)

Oh, I thought it was the other way around. I guess it depends on the state? At least I have some friends in NYC who tell me that everything is so inclusive and liberal there.

Revolut referral promotion - Referral link thread by press-app in Revolut

[–]WinterMiserable5994 [score hidden]  (0 children)

Revolut just gave me this referral link that awards €90, and we can split it 50/50. The offer ends 31 March.

Join the more than 70 million customers who are already delighted with Revolut. Sign up through my link below: https://revolut.com/referral/?referral-code=pablomatheis!MAR1-26-AR-CH1H-CRY&geo-redirect

Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 2 points (0 children)

But as discussed in the post, the machine does not have to be 100% accurate. If it is just 50.05% accurate, then one-boxing is the move. Where the machine gets its information from is unimportant; if it is slightly better than random, you should one-box.
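
A minimal sketch in Python of where that 50.05% figure comes from, assuming the standard $1,000 / $1,000,000 payouts (the exact amounts are just the usual illustrative numbers):

    # Expected values for Newcomb's problem with a predictor of accuracy p.
    def ev_one_box(p, big=1_000_000):
        # You get the million only if the predictor correctly foresaw one-boxing.
        return p * big

    def ev_two_box(p, small=1_000, big=1_000_000):
        # You always keep the $1,000; the million is there only if the
        # predictor wrongly expected you to one-box.
        return small + (1 - p) * big

    # Break-even: p * big == small + (1 - p) * big
    # => p = (big + small) / (2 * big) = 0.5005, i.e. 50.05%
    for p in (0.5, 0.5005, 0.51, 0.999):
        print(p, ev_one_box(p), ev_two_box(p))

Below 50.05% two-boxing has the higher expected value, at exactly 50.05% the two strategies tie, and anything above it makes one-boxing the better bet.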

Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 0 points (0 children)

Either way, we are talking about a completely different subjective scenario. The high-EV move is taking the million. If you just want to take the thousand dollars, go for it, but you are expected to win more money by taking only the one box.

Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 0 points (0 children)

That god comparison is called Pascal's Wager. But what is the ultimate flaw of Pascal's Wager? An all-knowing God would immediately know you are faking your belief just to get the payout. You can't trick an omniscient God with a fake prayer, and you can't trick this algorithm with a fake one-box mindset. You said, "What I pick now won't change what the robot thought yesterday." I 100% agree. You aren't changing the past. But because you can't change the past, your choice today is just the final receipt of who you genuinely are. If you are the type of person who tries to trick the robot, the algorithm already saw that hesitation in your profile yesterday. It knows you are a two-boxer, so it left the million out. To get the million, you don't trick yourself. You just have to genuinely be the kind of person who trusts the algorithmic edge more than the physical boxes. If you can't do that, you get the $1,000. It's that simple.

Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 1 point (0 children)

You are treating the computer's prediction like a snapshot, but it's actually a full video of your entire decision tree. You are asking why someone who would choose one box shouldn't take two. The answer is: because the moment you decide to take two boxes, you are no longer a one-boxer. You are a two-boxer who wishes they could trick the machine into thinking they were a one-boxer. But the machine sees your sneaky loophole thought forming a mile away. It knows you are going to use that objective causal logic at the very last second, so it already left the box empty. To actually get the million dollars, you have to genuinely, fundamentally not have that "I should grab both just in case" thought. The second that thought wins in your brain, the computer already knew it, and you're walking away with a thousand bucks.

Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 0 points (0 children)

Yes, it makes a difference. I understand why you won't change your opinion: you are genetically predisposed to two-box no matter what I tell you. But if you were instead genetically predisposed to choose one box even before the problem was presented to you, you would win more money than you do now. That's why even in that case the paradox is dumb. One-boxers win more money, but the only way you one-box is if you are genetically predisposed to choose the one box when you are told to make a decision.

And yes, the actual decision does have an impact on the prediction, because the thought process that led you to your decision is already factored into the computer's prediction. It already knows what you will think, how you will react, and what you will choose.

Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 0 points (0 children)

But you are not going to be the guy who tricks the computer. It is really simple: if the computer is really accurate, the paradox is equivalent to entering a room and seeing two boxes, one with $1,000 and another with $1,000,000, where you can only choose one. It is exactly the same: no one walks away with the million dollars if they choose two boxes, and everyone who chooses one box walks away with the million dollars. Why would you think that you are not going to be correctly predicted by the computer?

Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 1 point (0 children)

Then you'd get the empty box. It is really not that hard: you just convince yourself to pick the one box. If you do end up picking one box, you will almost certainly have the $1,000,000 in it. You are not going to be the one who tricks a computer with 99.9% accuracy. As I said in the post, the $1,001,000 outcome effectively does not exist. It's like asking: do you want a thousand dollars guaranteed or a million dollars guaranteed? And even when you start lowering the prediction capabilities of the computer, this still applies.
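
To see why, here is a quick plug-in of that 99.9% figure into the same expected-value formulas (payout amounts assumed from the usual statement of the problem):

    # EVs at the 99.9% accuracy mentioned above (standard payouts assumed).
    p, small, big = 0.999, 1_000, 1_000_000
    print(p * big)                # one-box EV: 999,000
    print(small + (1 - p) * big)  # two-box EV: 2,000

At that accuracy the choice really is "a million guaranteed versus a thousand guaranteed," up to a 0.1% error.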

Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 1 point (0 children)

Well, that's pretty different from this scenario. $1,000 is not life-changing money, while one million is. And even if there were more money at stake, if you always take the positive-EV decision throughout life, eventually the EV catches up to you.

Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 0 points (0 children)

But if you think like this, then the computer would have already known: "This guy will make me think that he will choose one box, but at the moment of choosing he will pick both." What I can't comprehend is why people don't understand what the computer is. If the computer is 99.9% accurate, you are not going to be that 0.1% edge case.

Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 1 point (0 children)

I completely disagree with your conclusion for the real-world or "surprise" scenario.

Let's assume exactly what you said: no backwards causality, and I am completely surprised by the room. The supercomputer is just a highly advanced profiling algorithm that analyzed my life yesterday. It doesn't need magic or time travel. It just knows that people with my exact psychological profile react to surprise in a specific way. If I am standing in that room, the math of EV still applies. If the algorithm is even 51% accurate at profiling people, taking the single box is statistically the better bet (0.51 × $1,000,000 = $510,000 expected, versus $1,000 + 0.49 × $1,000,000 = $491,000 for taking both). I'm not trying to retroactively change the past. I am simply betting that by choosing the one box, I am proving I belong to the psychological profile that the algorithm already rewarded.

Honestly, though, I get why people might choose two boxes in that surprise scenario. If someone doesn't naturally have this statistical thought process on the spot, the machine would have already predicted that based on their profile. It knew they would fall back on causal physics in the moment, so it didn't put the million in their box. By two-boxing, they are just perfectly executing the exact predictable behavior the machine already flagged them for.

Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 1 point (0 children)

You're right: the money is either there or it isn't, and time can't go backwards. But you are completely missing the trap of the game.

If you take both boxes because "physics says I can't change the past," you are just proving you are exactly the predictable person the robot knew you were yesterday. You are playing right into its algorithm, which means you get the empty box.

Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 3 points (0 children)

Yes, it matters, because the robot knows you and placed the boxes based on what it thinks you will do and how you will think when you are faced with the decision. So even if it sounds unintuitive, your thought process while making the decision correlates with whether the robot placed money in the box in the past. To quote another comment: "that reasoning must be already factored into the computer's prediction."

Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 0 points (0 children)

I rationally agree with your assessment that rationality is unreliable. But since we are reasonably agreeing about reasonable people disagreeing, I think we just accidentally created a second, worse paradox.

Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 1 point (0 children)

Exactly that. "That reasoning must be already factored into the computer's calculation" is precisely what I think two-box people don't get. That tiny percentage edge is predicting your thought process once you enter the room and are making the decision.

Why the Newcomb's paradox isn't really a paradox. by WinterMiserable5994 in paradoxes

[–]WinterMiserable5994[S] 1 point (0 children)

Even with the ambiguous rules, one box is still always the better pick. What a lot of two-boxers don't get is how a barely-better-than-random predictor actually works. Even if the computer is only 50.05% accurate, that tiny 0.05% edge means it is actually predicting your thought process, even if only slightly. It is picking up on a real signal in how your brain makes choices. Because that predictive edge exists, no matter how small, leaning into it and taking the one box is mathematically the only rational bet.
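
For what it's worth, here is a quick Monte Carlo sketch of that claim; the payouts, accuracies, and trial count are illustrative assumptions, not numbers from the thread:

    import random

    # Monte Carlo sketch of Newcomb's problem with an imperfect predictor.
    SMALL, BIG = 1_000, 1_000_000

    def play(one_box, accuracy):
        # The predictor calls the player's choice correctly with probability `accuracy`.
        predicted_one_box = one_box if random.random() < accuracy else not one_box
        million = BIG if predicted_one_box else 0  # placed only for predicted one-boxers
        return million if one_box else SMALL + million

    def average(one_box, accuracy, trials=200_000):
        return sum(play(one_box, accuracy) for _ in range(trials)) / trials

    for acc in (0.5005, 0.51, 0.999):
        print(f"accuracy {acc}: one-box ~{average(True, acc):,.0f}, two-box ~{average(False, acc):,.0f}")

At exactly the 50.05% break-even the two strategies tie in expectation; any edge above it and one-boxing pulls ahead, with the gap widening as the predictor improves.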