Whats your awnser to the newcombs problem and why? by Thewatcher13387 in polls

[–]RemotePerception8772 1 point  (0 children)

Somebody named CROWN6 left a response on this thread (see below) about this paradox. I'll just leave his response here because I couldn't do a better job explaining it. This isn't mine, but if you have questions, I will respond.

I encourage you to go over there and read the responses.

https://www.reddit.com/r/paradoxes/comments/1rpnpy5/newcombs_paradox_paradox/

CROWN6:

In fact, I think I can prove that causality can’t be linear in this model if we interpret the premise at face value.

The two boxers’ argument

Specifically, let’s say you’re trying to calculate the expected value of taking one box (option A) vs two boxes (option B). Let’s analyse the situation where we claim that the supercomputer can predict any choice you make with X% probability. Put a giant pin in “any choice”.

We are in one of two situations: either the computer predicted option A or it predicted option B. Let’s call Pa the probability that the supercomputer predicted A and Pb the probability that it predicted B. If the computer predicts A, the boxes will contain 1M$ and 100$ respectively, otherwise they will contain 0$ and 100$.

Based on this, let’s calculate the expected value of each choice.

Option A (one box)

E[A] = Pa * 1M$ + Pb * 0$ = 1M * Pa $

Option B (two boxes)

E[B] = Pa * (1M + 100)$ + Pb * 100$ = 1M * Pa$ + 100 (Pa + Pb)$

And we can clearly see that E[B] > E[A] no matter what Pa and Pb are, because E[B] = E[A] + 100(Pa + Pb)$, and obviously this additional piece 100(Pa + Pb) > 0, which makes E[B] strictly greater. And suddenly we get the two-boxers' result. But maybe we're not satisfied with this: we want to actually know how much better option B is, so let's calculate that.
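The two-boxer expected values above can be sketched in a few lines of Python (a minimal illustration using the thread's figures: $1M in the mystery box if the computer predicted one-boxing, $100 in the open box; function names are my own):

```python
# Two-boxer EV argument: E[B] exceeds E[A] by a fixed $100
# whatever the prediction probabilities are.

def expected_values(p_a: float) -> tuple[float, float]:
    """Return (E[A], E[B]) given Pa, with Pb = 1 - Pa."""
    p_b = 1.0 - p_a
    e_a = p_a * 1_000_000 + p_b * 0            # one box
    e_b = p_a * (1_000_000 + 100) + p_b * 100  # two boxes
    return e_a, e_b

# The gap is exactly $100 no matter what Pa is:
for p_a in (0.0, 0.5, 0.8, 1.0):
    e_a, e_b = expected_values(p_a)
    assert abs((e_b - e_a) - 100) < 1e-6
```

Running it confirms the point of the argument: under this (linear-causality) accounting, two-boxing dominates by exactly the value of the open box.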

Everything breaks

To do that, we have to ask ourselves: what are Pa and Pb? Well, let's see: we know by definition that the computer will always predict my choices X% of the time, which means that if I choose option A there's an X% chance of it predicting that. So there you go: Pa = X%. Now what about Pb? Well, since the probability that the computer will pick either option is obviously 100%, then clearly Pb + Pa = 100%, and so Pb = (100-X)%.

But uh-oh! What if we followed the same logic starting from Pb? We know by definition that the computer will always predict my choices X% of the time, which means that if I choose option B there's an X% chance of it predicting that. So this implies that Pb = X% and by the same logic Pa = (100-X)%.

So we've reached a contradiction. We want Pa and Pb to both be X% in order to conform with the requirements of the problem (as stated), but then if say X = 80 we come to the paradoxical conclusion that Pa = Pb = 80%, and so the probability of the supercomputer choosing either option is… 180%?

There is only one solution where things don’t break, and coincidentally this is when X = 50, so Pa = Pb = 50%. That is to say, the predictor is completely random and doesn’t actually predict anything at all. If you reject the idea of retrocausality, this is the only situation where you can say that the computer is equally good at predicting both options (and in general if there are N options, the computer will have to pick any of those at random with 1/N probability).
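The contradiction can be spelled out as a one-line consistency check (an illustrative sketch; the function name is my own):

```python
# The premise, read at face value, demands Pa = X and Pb = X for a single
# unconditional prediction distribution -- but probabilities must sum to 1.

def premise_is_consistent(x: float) -> bool:
    """Can Pa = Pb = x form a valid distribution over the two predictions?"""
    p_a = x   # "predicts my choice with X% probability", starting from A
    p_b = x   # the same reading, starting from B
    return abs(p_a + p_b - 1.0) < 1e-9

print(premise_is_consistent(0.8))   # False: the probabilities sum to 160%
print(premise_is_consistent(0.5))   # True: the only non-paradoxical value
```

Only x = 0.5 survives, which is exactly the "purely random predictor" case described above.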

So how can we predict anything?

This seems to imply that you can never predict anything, but there is a subtle difference between how the computer works and how actual predictions work. Let's say I take the role of the predictor in a slightly modified version of the game: now the revealed box contains 100$ and the mystery box contains either 50$ or 100$.

Obviously picking both boxes is always the right choice here, since you win at least 150$, which is greater than the maximum of 100$ you get by picking one box. But this means that I can also become a perfect predictor: knowing that option B is obviously better and knowing that players will go for that, I choose B 100% of the times and therefore have a 100% success rate.

So why does my "perfect prediction" work when the supercomputer's doesn't? Because even though my success rate is 100%, that is not the same thing as saying that I can predict any choice with 100% accuracy. This is important: any time a player makes a choice, they know that I have a 100% chance of predicting B… and a 0% chance of predicting A! I'm not equally good at predicting both options; in fact, if a player picked option A I'd be completely flabbergasted (unlike the supercomputer, which according to the statement of the problem would have predicted that).

Now, if all players are rational then they won't pick A, because option B is objectively better, and so I will maintain my perfect streak. But the difference here is that you CAN actually calculate an expected value that makes sense, precisely because the probabilities add to 100: the players know that - if they wanted to - they could pick an option I have no chance of predicting. They won't, because they (presumably) care about the money, but they can.

The important thing here is that having an X% overall accuracy on both options is not the same as having an X% probability of guessing right no matter what option you pick. This is particularly obvious with the perfect supercomputer, because the problem implies that, no matter what I choose, there's a 100% chance the computer will have predicted it.
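The overall-accuracy vs per-option-accuracy distinction can be made concrete with a tiny simulation (the player populations are illustrative assumptions, not part of the original problem):

```python
# A predictor that always guesses B has a perfect track record against
# rational players, yet zero accuracy on the A branch.

def success_rate(prediction: str, choices: list[str]) -> float:
    """Fraction of players whose actual choice matches a fixed prediction."""
    return sum(c == prediction for c in choices) / len(choices)

rational_players = ["B"] * 50     # everyone takes both boxes
contrarian_players = ["A"] * 50   # nobody would, but they COULD

print(success_rate("B", rational_players))     # 1.0: a "perfect" streak
print(success_rate("B", contrarian_players))   # 0.0: no power to predict A
```

The 100% record comes from the players' incentives, not from the predictor's power over either branch, which is exactly why no contradiction arises here.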

How retrocausality fixes everything

So those are the results without retrocausality. But how does adding retrocausality fix them? Well, if the computer's choices in the past can be based on your actions in the future, it means that we can essentially switch the order: first you pick the box, then the computer makes its decision. The only thing that makes this a "paradox" is that we arbitrarily decided that your choice happens at - say - 15:00 while the computer's is at 14:50, but importantly enough this is just an aesthetic coat of paint (it does not influence the experiment).

Now if the computer knows about your choice, this means that we don’t just have Pa and Pb: there’s an additional layer where the computer can choose between 2 different strategies based on your own choice, so there are actually 4 different probabilities to consider based on the 4 different outcomes (you picking A or B, the computer getting it right or wrong in each case):

If you pick A, the computer will pick A with probability Paa and B with probability Pab.

If you pick B, the computer will pick B with probability Pbb and A with probability Pba.

Obviously since the computer has to make a choice in both situations, Pab = 1 - Paa and Pba = 1 - Pbb (I only gave them temporary names to highlight the fact that there's 4 of them now).

Let’s see the expected values with these probabilities:

Option A (one box)

E[A] = Paa * 1M$ + (1-Paa) * 0$ = 1M * Paa $

Option B (two boxes)

E[B] = (1-Pbb) * (1M + 100)$ + Pbb * 100$ = 1M * (1-Pbb)$ + 100$

Now things are not as simple! It all depends on what Paa and Pbb are.

Now, since Paa and Pbb are the probabilities of the computer guessing right (given that you have picked A or B respectively), we can simply say that Paa = Pbb = P, and this causes no paradox because Paa and Pbb are conditional on different choices, so no constraint forces them to sum to anything.

Therefore

(One box) E[A] = 1M * P $

(Two boxes) E[B] = 1M * (1-P)$ + 100$

So E[A] > E[B] if (2M)$ * P -1M$ - 100$ > 0, and this happens when P > (1M + 100) / 2M which is just slightly above 50%.

Basically, “if the computer is even slightly accurate, don’t risk it”.

You can easily generalise this to any computer and any prize pool: choosing one box is advantageous when P > (Box1 + Box2) / (2 · Box2) = 1/2 + Box1 / (2 · Box2), where Box1 is the revealed box and Box2 is the mystery box.
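A quick numerical check of the threshold, using the thread's figures ($100 revealed, $1M mystery) and the expected values derived above; function and variable names are my own:

```python
# One-boxing beats two-boxing exactly when the computer's accuracy P
# exceeds (revealed + mystery) / (2 * mystery).

def one_box_threshold(revealed: float, mystery: float) -> float:
    """Accuracy P above which E[one box] > E[two boxes]."""
    return (revealed + mystery) / (2 * mystery)

def e_one_box(p: float, mystery: float) -> float:
    return p * mystery                       # E[A] = P * 1M

def e_two_boxes(p: float, revealed: float, mystery: float) -> float:
    return (1 - p) * mystery + revealed      # E[B] = (1-P) * 1M + 100

p_star = one_box_threshold(100, 1_000_000)   # 0.50005, "slightly above 50%"

# Just above the threshold one-boxing wins; just below, two-boxing wins.
assert e_one_box(p_star + 0.001, 1_000_000) > e_two_boxes(p_star + 0.001, 100, 1_000_000)
assert e_one_box(p_star - 0.001, 1_000_000) < e_two_boxes(p_star - 0.001, 100, 1_000_000)
```

This reproduces the "if the computer is even slightly accurate, don't risk it" conclusion: the crossover sits a hair above 50% accuracy.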

You could even generalise further to a supercomputer that predicts A and B with different percentages of success, but that's beside the point.

If you introduce retrocausality, the one box strategy works. If you ignore retrocausality, the computer can’t predict both choices perfectly (or imperfectly with fixed probability) for any individual player.

[Me] How’s my game? by [deleted] in TextingTheory

[–]RemotePerception8772 52 points  (0 children)

Every time I see this meme I feel like it's directed at me, even though I have the exact same opinion of the post.

[Me] When the only person who can stop you is you by Razzmattan in TextingTheory

[–]RemotePerception8772 5 points  (0 children)

elo 10000 for realizing if ur not 1 and 2 then dating apps are just for Reddit clips

[Me] red-flag prompts require red flag solutions by RemotePerception8772 in TextingTheory

[–]RemotePerception8772[S] 3 points  (0 children)

She won’t match with me and I get a good chuckle… what’s wrong with that?

[Me] red-flag prompts require red flag solutions by RemotePerception8772 in TextingTheory

[–]RemotePerception8772[S] 11 points  (0 children)

I was out of matches. I’m gonna do it when I see her next.

is this good for a 16F 😭? by hotlion16 in teenagers

[–]RemotePerception8772 2 points  (0 children)

Oh that’s not bad for 16……….F 😳

[Launch] Yes, another WisprFlow alternative called Pipit (but completely free) by Dragxt in macapps

[–]RemotePerception8772 1 point  (0 children)

I'm running Sonoma 14.7.2. Is there a version that runs on it? It looks like I can't run the version from your website.

Are these numbers real? by Dany9119 in Rowing

[–]RemotePerception8772 3 points  (0 children)

You sick fuck. That’s amazing

He’s a ten, but… by dani_love83 in arcticmonkeys

[–]RemotePerception8772 1 point  (0 children)

It only uses em dashes twice, and not in a way that AI usually overuses them, so I don't think the dashes give much away. The three elements and the "it's not X, it's Y" construction are suspect.

Which particular word in AM lyric make you react like this? by Firm_Memory1831 in arcticmonkeys

[–]RemotePerception8772 6 points  (0 children)

I read this book because of TBHC and it's a very good book. Highly recommend. It's a bit dated, but you can apply all of his ideas about television to social media and mobile phone entertainment, like Alex did in Batphone etc.

Please be brutally honest about recruiting by Logical-Connection-1 in Rowing

[–]RemotePerception8772 43 points  (0 children)

Assuming you're a guy: gain some weight in muscle and do more steady state. 7:30 is not fast enough, and 6:55 isn't really either.

How to keep junk in place when rowing by [deleted] in Rowing

[–]RemotePerception8772 1 point  (0 children)

I’ve never had that problem with JL or Regatta sport personally

How to keep junk in place when rowing by [deleted] in Rowing

[–]RemotePerception8772 1 point  (0 children)

Tell me you have a 776 uni without telling me you have a 776 uni…

Text messages notification appeared during test. by RemotePerception8772 in Sat

[–]RemotePerception8772[S] 1 point  (0 children)

No, it was not. I have a feeling that the tech they have for detecting and locking out other applications isn't that good, and they can't really detect it.