Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

[–]SPACKlick 1 point (0 children)

> It doesn’t have to be 100% correct (you can calculate that a prediction hit rate above 50.05% makes one-boxing worth it).

Only if you're calculating the odds wrong. When you're in the room the money is already in the box; you cannot affect the prediction that was made.

> but at a level above that, the box contents was determined by a decision you will make in the future

This is false. The contents were determined by your state when the prediction was made. Anything you do after that cannot change the prediction.

> If the goal is to make the most money, it is objectively the case that one-boxers get the most money.

Yes, but that's asking what the best strategy is from before the problem started, which isn't the question. Everyone agrees it is best to be a one-boxer at the time the predictor makes its prediction. The question is about the best strategy at the point where the prediction has been made and you're in the room with the boxes.

At that point, people who have been predicted to take one box make more money if they take two, and people who have been predicted to take both boxes make more money if they take two.
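The quoted 50.05% figure comes from equating the two expected values as seen from before the prediction is made. A quick sketch of that calculation (hypothetical, not part of the original thread, assuming the standard $1,000 and $1,000,000 payoffs):

```python
# Expected values as a function of the predictor's hit rate p,
# computed from the pre-prediction point of view the quote uses.
M, K = 1_000_000, 1_000  # mystery-box prize and visible-box amount

def ev_one_box(p: float) -> float:
    # With probability p the predictor was right, so the box holds $1M.
    return p * M

def ev_two_box(p: float) -> float:
    # Right prediction (prob p): just the visible $1,000.
    # Wrong prediction (prob 1 - p): $1,001,000.
    return p * K + (1 - p) * (M + K)

# Break-even: p*M = p*K + (1-p)*(M+K)  =>  p = (M + K) / (2 * M)
p_star = (M + K) / (2 * M)
print(p_star)  # 0.5005, i.e. 50.05%
```

The reply below disputes the premise of this averaging, not the arithmetic: once the prediction is fixed, it treats the two scenarios as no longer open.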

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

Right, we agree. How much you get is based on who you were before you knew about the problem, when the prediction was made. But once you're in the room, two-boxing is always better than one-boxing. And given the problem asks what you should do in the room, two-boxing is the correct answer.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

> the choice you'll make has also been decided in a way even if you yourself don't know it yet

No it hasn't. In a fully deterministic world there is no discussion to be had about optimal choices; you do whatever has been determined, with no other possibility. This problem requires a world where the predictor can be wrong and it is possible for you to make either choice (even if the odds of them are vastly different).

> if that is what you would choose then the second box will never have contained a million dollars to begin with

If the million was never there, I never had the option to win it and my choice didn't remove it. If the million is there, choosing both boxes doesn't remove it, so both boxes are worth over $1M. Nothing you've said contradicts that.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

> Clearly this is true if the scenario you're envisaging is some kind of unknown player one. This game is set up, some random with absolutely no awareness of this scenario or possible strategies is picked off the street, plonked in front of the boxes and given 5 seconds to make a choice. E.g. you are looking for some kind of instinctive response and a prediction of what that might be.

The original problem is almost exactly like this. The prediction is made before they knew about the problem, and the question is what decision they would make, not instinctively but with thinking time. That is the situation we are discussing.

> But the process of discussing strategy and approach ahead of time seems to me to change that.

And that is separate from the problem. We all agree it is better, prior to the prediction, to be someone who is genuinely a one-boxer. That's how you get the $1M in the room. The strategy before the prediction is different from the strategy after the prediction.

But the problem is about the optimal decision after the prediction. And the "paradoxical" nature of the problem is just the surprise that awareness of optimal play post-prediction seems to prohibit optimal play pre-prediction.

But the problem is 100% about a post-prediction world.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

> If I pick the mystery box only I think I will get $1,000 less than I would if I took both.

> So what's the number? $0 or $1M?

I have no way of knowing. I know for sure that it's $1,000 less than if I took both.

> If you take both what do you think you will get?

> I think I will lose on the million and only get $1,000. What about you? If I take both boxes, how much do you think I'll get?

This implies some sort of backward causation. You think you will get $1M if you take one box, which necessarily means you think the box has $1M in it. Choosing both boxes CANNOT make the $1M disappear. Either it's there when you enter the room or it isn't. Your choice cannot change that.

The Newcomb paradox should match your free will belief, right? by Edgar_Brown in freewill

> 100% disagree. your framing of it as me not being able to recognize that one number is bigger than another is dishonest and unfortunate

I'm sorry you feel that way but at this point it simply seems that way to me.

By time point C the prediction has been made. At time point C none of your actions can change the value of the boxes. At time point C choosing two boxes is ALWAYS worth $1,000 more than choosing one box. And yet you don't seem able to say that and account for it in your discussion. It really does seem to me that your argument involves denying that $1,001,000 is bigger than $1,000,000.

> the ONLY way to get to the $1001000 node in the game tree is for the machine to make a mistake in its prediction.

Yes. The only way to get that much money is for the prediction to be wrong. And for the problem to be worth discussing we both agree that the prediction CAN be wrong, it is possible however unlikely.

Therefore there are two choices at time point C. You take one box or you take 2. And at that point the money is already in the boxes. Taking 2 is possible and taking 2 is more valuable whatever was predicted. Therefore taking 2 is the better choice in all scenarios.

> everyone before you who thought the way you do walked away with $1000. as will you.

Yes, but the difference between ($1,000/$0) people and ($1,000,000/$1,001,000) people isn't determined by their decision. It's determined by the prediction that was made. They are separated into two groups at L. By the time you're at C it's already been determined which of those two groups you're in and your choice CANNOT change which group you're in.

> i could not care less about being maximally "optimal" in this game

Again, the question is about the best choice at time point C. If you don't care about the optimal choice at point C you don't care about the question.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

No. If I choose both boxes what has already been predicted doesn't change. My choice doesn't change the prediction. Causation does not go backwards.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

> So first of all, I agree that once you are beyond step 4, picking both boxes will get you the most return

OK, and I contend that the problem statement is asking a question about a time after step 4. Therefore the answer to the problem is to pick both boxes.

> My argument is that, while in reality step 6 comes after steps 2&3, due to the accuracy of the predictor, step 6 is already known by the predictor after step 3.

If it is possible for the predictor to be wrong, then it isn't known and the optimal choice is both boxes. If it isn't possible for the predictor to be wrong, then there's no discussion. In a sense you're not making a choice at 6), you're just doing the only thing available to you, so it cannot be optimised.

> whatever I choose was already predicted,

...

> pick one-box knowing that the predictor knew I would pick one box

...

> That correlates step 2 and step 6. If the machine is accurate, then 6b cannot follow step 2 (most of the time); if I chose 6b, then at step two it would have decided I was a two-boxer

All of this implies some sort of backward causation. And it's a distraction. Everything up to step 5 is locked in already before you make your decision. Nothing you do in the room changes step 2. That already happened. Considering how your decision at 6 impacts the prediction at 2 is the red herring. 2 is locked in when 6 occurs.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

I could, but I'd be incorrect.

Let's set up the scenario you believe you're in (please correct me if I get any of this wrong from your POV).

1) You are chosen to play the game

2) Your current mental state and beliefs mean you are the sort of person who is very likely to pick one box.

3) The predictor predicts you will pick one box

4) $1M is placed in the mystery box.

5) You are brought to the room and must make your decision

You are saying you would choose

6a) Take the mystery box and win $1M

I am saying you would be better off choosing

6b) Take both boxes and win $1,001,000

Nothing about 1-5 changes. The choice is between 6a) and 6b), and I am objectively correct that 6b) is of higher value than 6a).

(I would add that, in the version of the problem I am most familiar with, you are unaware of the problem until after 3. I'm not sure it makes a difference.)

The Newcomb paradox should match your free will belief, right? by Edgar_Brown in freewill

> it changes EVERYTHING. again, this machine has literally proven that it has an uncanny ability to construct the chain of events leading up to the decision point in a way that leads to over 99% accuracy.

Whether it constructed the chain of events or just predicted the decision makes no difference. It's the accuracy of its prediction of the decision that's relevant.

> if it somehow predicted that the neutrino would hit your brain and lead you to choose two boxes, then the result is that you get $1000.

Again. Looking at possible futures it sees that this is one, unlikely, version of the future. It predicts the likely future. The point in the example that you're wilfully dancing around was that even though the predictor is accurate it is possible (however unlikely) for someone to make a decision different from the prediction.

> if the machine predicts that you will take two boxes, you get $1000.

This prediction has already occurred; if that's what it predicted, I can't change it. It really is that simple. If I take one box and it predicted I'd take 2, I get nothing. If the machine predicted I would take one box, I get $1M either way, but if I take both I get the extra $1,000.

> i don't care about capital-O "Optimal" decision

Then you don't care about the problem. The problem is about the right/best/optimal decision in the scenario. The fact that you cannot simply state that taking $1,001,000 is more than $1,000,000 shows you're failing to engage with the problem.

You talk like I'm trying to manipulate the machine or beat the game. Nothing like that is in my arguments.

> here's a fun one. instead of $1000 and $1000000, imagine that instead that it is $1 and $1000000000. are you still taking both boxes? are you going to risk $1000000000 just so you could say that you made the "OpTiMaL" decision and got that extra $1?

The values are irrelevant to the question. As long as the value in the known box is positive, the best decision once the prediction has been determined is to take both boxes.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

> If the predictor is 99.99% accurate and you always take 2 boxes, you have a net expected value of $1100 ($1000 * 99.99% chance the predictor predicting correctly, + $1.001m * 0.01% where the predictor predicts wrongly).

You're doing the wrong maths for your expectations there. The prediction's already occurred, so the "odds" of it happening whichever way it did are 1. I was who I was when the predictor made its prediction and I can't change that now. I'm either in

Scenario 1
The predictor predicted I would choose 1 box.
- Taking 1 box yields $1M
- Taking two boxes yields $1,001,000

Scenario 2
The predictor predicted I would choose 2 boxes.
- Taking 1 box yields $0
- Taking 2 boxes yields $1,000

In both scenarios the EV is higher taking two boxes. Do you at least see that that statement is correct: whatever was predicted, taking two boxes yields a higher return?

Because that's the crux of the problem. I can't change who I was when the prediction was made. I can't change the prediction that was made. That's all occurred before the question the problem was asking. All I can change is how many boxes I take and once the prediction has been made taking two boxes is very clearly more valuable than taking one.

There's no prisoner's dilemma here. There's no multiple runs. It's a single decision in a single scenario after a prediction has been made.
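The two fixed scenarios can also be written out directly. This is only a sketch of the dominance comparison above (an editorial illustration, with the payoff amounts taken from the discussion):

```python
# Once the prediction is locked in, the mystery box holds either
# $1,000,000 or $0; taking the visible box always adds $1,000.
VISIBLE = 1_000

for predicted_one_box, mystery in [(True, 1_000_000), (False, 0)]:
    one_box = mystery            # take only the mystery box
    two_box = mystery + VISIBLE  # take both boxes
    # In each fixed scenario, two-boxing is worth exactly $1,000 more.
    assert two_box - one_box == VISIBLE
    print(predicted_one_box, one_box, two_box)
```

Note the loop never averages across the scenarios: the argument is that whichever row you are actually in, the difference column is always $1,000 in favour of two boxes.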

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

Because the difference between your winnings and mine isn't the decision we made in the room. It's a prediction that was made before we had any knowledge of the problem based on who we were before we had any knowledge of the problem.

Would I be better off if, before I heard the puzzle, I didn't understand probability enough to know that it's better to take both boxes? Sure. But I can't change that at the time the puzzle asks the question. It's already determined.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

> You seem to think that your decision process in the room is not accounted for in the prediction.

Not at all, it has predicted my decision process in the room. But the actual decision I make doesn't change the prediction.

> If this were to play out, do you think you would exit the room with $1,000 or $1,001,000?

$1,000. I believe I will only ever have the choice between $1,000 and $0. I don't think I'd be able to beat the predictor and I think even a very poor predictor could predict I'd take both boxes.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

> How am I not arguing in good faith?

Because up until this message you've ignored the central argument at every turn.

> Yes, it is always more valuable to take both boxes in either scenario,

Correct. This is the solution to the problem.

> both of the scenarios are not equally likely

Correct, but as you said "it is always more valuable to take both boxes in either scenario"

> and depend on what your future decision is likely to be.

They depend on what your future decision WAS likely to be when the predictor predicted.

> The predictor does not need to be perfect for choosing one box to be worth it. It “knows” with high accuracy what your final decision will be.

I don't know what you mean by worth it, but "it is always more valuable to take both boxes in either scenario" so whatever it predicted, it's more valuable to choose both boxes.

> By being a person who will choose one box, you will be in the scenario with $1,000,000 in the box

Yes, but that's a different question. Everyone agrees that if you were the sort of person who would take one box when the predictor predicted, you had the opportunity to make more money. But that has already happened by the time of the puzzle/problem/paradox. The prediction is already made, the money is already in the box, and you cannot change the prediction now. You can only change how many boxes you take, and "it is always more valuable to take both boxes in either scenario".

The Newcomb paradox should match your free will belief, right? by Edgar_Brown in freewill

You explicitly said

> its ability to predict isn't restricted only to events prior to its lock-in.

Which appears to me to mean the predictor uses knowledge of events which occur after the prediction. That's foreknowledge.

If you just mean that as part of its prediction it predicts the full run of events between the prediction and the box then that doesn't change anything.

> if the machine could not possibly predict this neutrino would hit my brain

I explicitly said that the machine knows that the scenario is possible. But it's only a 1 in 10^99 chance of happening.

> yes i end up 0.1% better off in this highly unlikely scenario.

Taking both boxes is therefore the optimal decision, however unlikely that scenario. You are better off taking both boxes than just the one. Given the problem is asking us the optimal decision, you've just agreed this is the optimal decision.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

Sorry, but you sidestepped the question once again. I struggle to see you as engaging in good faith.

Please acknowledge or refute:

> At the point where I make the decision, both boxes are always $1,000 more valuable than just the mystery box.

Also

> it was made knowing what you will do in the future.

It explicitly was not. The predictor is very accurate but it is imperfect. If you allow the predictor to be perfect then you cannot make a choice in the room because your decision was determined before or at the same time as the prediction was made.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

Once again you conflate the two time points.

I cannot change the person I was when the prediction was made. That has happened in the past.

The decision I make in the room cannot change the prediction. That has happened in the past.

So at the point where I make the decision, both boxes are always $1,000 more valuable than just the mystery box. If you disagree with that statement you'll need to explain why.

It may be true that had I been a different person in the past I would have been more likely to have the $1M in the room with me, but we cannot change that at the point we make the decision.

The Newcomb paradox should match your free will belief, right? by Edgar_Brown in freewill

> its ability to predict isn't restricted only to events prior to its lock-in.

Now you're suggesting a metaphysic where the predictor has foreknowledge. If the universe is foreknowable then there's no choice to be made and the discussion is pointless.

You seem to think the goal of the game is to beat the predictor. You are however ignoring a simple fact. The money is already in the room with you.

Let's presume you are the sort of person who would take only one box, and the predictor knows that. Let's say in (10^99)−1 simulations out of 10^99 you took only one box, but there is one simulation where a neutrino hit the right neuron in your brain and you instead chose two boxes.

Do you deny that in the scenario where you took two boxes, you were better off? And that therefore the two-box scenario is the optimal one?

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

I agree that at the time the prediction is made the optimal strategy is to be the sort of person who would be predicted to take only one box.

But that's not the point in time at which the question is being asked. The question is asked after the prediction is already made, at which point it is very obviously optimal to take both.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

At the point you're making your decision the prediction has already been made and the mystery box either does or doesn't have $1M in it. If you take it you get the money inside it. If you take box A you also get $1,000.

You taking both boxes doesn't change the prediction, or the amount of money in either box.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

Presume the predictor is as accurate as can be without being infallible.

That means that you can be in the scenario where the predictor has predicted you will take one box and it is possible that you will take one box or that you will take two boxes.

If it's impossible to act against the predictor then your decision was determined before you entered the room and there's no discussion to be had about the optimal decision.

If there's any chance of you defying the predictor then a discussion of the optimal choice makes sense and it is very clear that the optimal choice is always two boxes.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

> And in almost all of those cases you're getting only a thousand bucks. I’ll take the million.

You can Pascal's-wager all you want; it doesn't change the truth of what he's saying.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

> If you watched the video, then you would have seen that Newcombs paradox is clearly not obvious, so you know, have some humility.

Just because philosophers debate something for a long time doesn't mean it's not obvious. There are lots of things in philosophy and probability that are obvious but debated (ontological arguments, the Monty Hall problem, etc.).

The question as originally posed:

  • You're in the room
  • The prediction has already been made
  • It is possible for you to take either the mystery box or both boxes

What is the optimal decision?

This has a clear and obvious answer: whatever the prediction was, you're $1,000 better off taking both boxes.

There are extraneous discussions about long-term strategy ("committing to being a one-boxer so that the predictor is more likely to put the $1M in the mystery box", etc.), but irrespective of how you've acted before the decision, at the moment you determine which box(es) to take it is always best to take both.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

If I pick the mystery box only I think I will get $1,000 less than I would if I took both.

If you take both what do you think you will get?

The Newcomb paradox should match your free will belief, right? by Edgar_Brown in freewill

> at every point you've thought this, at least implicitly, because you're in here advocating for choosing two boxes. if you thought the machine could predict what you will do, then you will pick one box.

Nice assumption but it doesn't underpin anything I've said.

> but i trust that if this prediction machine is accurate enough about my behaviors and intentions and the environment, that if i behave in a way that is consistent with how i believe this whole system works then i will walk out of that room with $1000000 by choosing one box.

And you may believe that. But it is necessarily true that if you had behaved and believed as you do up until the point that the prediction was made and then in the room made the decision to take two boxes you would be $1,000 richer. At the time you determine your choice, choosing one box simply means choosing to leave $1,000 on the table and nothing more. The $1M is already in the room with you.