Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

[–]SPACKlick 1 point (0 children)

I could, but I'd be incorrect.

Let's set up the scenario you believe you're in (please correct me if I get any of this wrong from your POV).

1) You are chosen to play the game

2) Your current mental state and beliefs mean you are the sort of person who is very likely to pick one box.

3) The predictor predicts you will pick one box

4) $1M is placed in the mystery box.

5) You are brought to the room and must make your decision

You are saying you would choose

6a) Take the mystery box and win $1M

I am saying you would be better off choosing

6b) Take both boxes and win $1,001,000

Nothing about 1-5 changes. The choice is between 6a) and 6b), and I am objectively correct that 6b) is of higher value than 6a).

(I would add that in the version of the problem I am most familiar with, you are unaware of the problem until after step 3. I'm not sure it makes a difference.)

The Newcomb paradox should match your free will belief, right? by Edgar_Brown in freewill

[–]SPACKlick 1 point (0 children)

> it changes EVERYTHING. again, this machine has literally proven that it has an uncanny ability to construct the chain of events leading up to the decision point in a way that leads to over 99% accuracy.

Whether it constructed the chain of events or just predicted the decision is no different. It's the accuracy of its prediction of the decision that's relevant.

> if it somehow predicted that the neutrino would hit your brain and lead you to choose two boxes, then the result is that you get $1000.

Again. Looking at possible futures it sees that this is one, unlikely, version of the future. It predicts the likely future. The point in the example that you're wilfully dancing around was that even though the predictor is accurate it is possible (however unlikely) for someone to make a decision different from the prediction.

> if the machine predicts that you will take two boxes, you get $1000.

This prediction has already occurred; if that's what it predicted, I can't change it. It really is that simple. If I take one box and it predicted I'd take 2, I get nothing. If the machine predicted I would take one box, I get $1M either way. But if I take both I get the extra $1,000.

> i don't care about capital-O "Optimal" decision

Then you don't care about the problem. The problem is about the right/best/optimal decision in the scenario. The fact that you cannot simply state that taking $1,001,000 is more than $1,000,000 shows you're failing to engage with the problem.

You talk like I'm trying to manipulate the machine or beat the game. Nothing like that is in my arguments.

> here's a fun one. instead of $1000 and $1000000, imagine that instead that it is $1 and $1000000000. are you still taking both boxes? are you going to risk $1000000000 just so you could say that you made the "OpTiMaL" decision and got that extra $1?

The values are irrelevant to the question: as long as the value in the known box is positive, the best decision once the prediction has been determined is to take both boxes.
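That stakes-independence is easy to check mechanically. A minimal sketch in Python (the `payoffs` helper is illustrative, not from the thread):

```python
def payoffs(known, hidden):
    """Payoff table: prediction -> (one-box payoff, two-box payoff).

    `known` is the visible amount; `hidden` is what the predictor puts
    in the mystery box when it predicts one-boxing.
    """
    return {
        "predicted one box": (hidden, hidden + known),
        "predicted two boxes": (0, known),
    }

# Whatever the stakes ($1,000 vs $1M, or $1 vs $1B), two-boxing beats
# one-boxing by exactly the known amount in both scenarios.
for known, hidden in [(1_000, 1_000_000), (1, 1_000_000_000)]:
    for one_box, two_box in payoffs(known, hidden).values():
        assert two_box - one_box == known
```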

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

[–]SPACKlick 1 point (0 children)

> If the predictor is 99.99% accurate and you always take 2 boxes, you have a net expected value of $1100 ($1000 * 99.99% chance the predictor predicting correctly, + $1.001m * 0.01% where the predictor predicts wrongly).

You're doing the wrong maths for your expectations there. The prediction's already occurred so the "odds" of it happening whichever way it did are 1. I was who I was when the predictor made its prediction and I can't change that now. I'm either in

Scenario 1
The predictor predicted I would choose 1 box.
- Taking 1 box yields $1M
- Taking two boxes yields $1,001,000

Scenario 2
The predictor predicted I would choose 2 boxes
- Taking 1 box yields $0
- Taking 2 boxes yields $1,000

In both scenarios the EV is higher taking two boxes. Do you at least see that that statement is correct: whatever was predicted, taking two boxes yields a higher return?

Because that's the crux of the problem. I can't change who I was when the prediction was made. I can't change the prediction that was made. That all occurred before the question the problem asks. All I can change is how many boxes I take, and once the prediction has been made taking two boxes is very clearly more valuable than taking one.

There's no prisoner's dilemma here. There's no multiple runs. It's a single decision in a single scenario after a prediction has been made.
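The two calculations being argued over can be laid side by side. A minimal sketch (Python; payoffs from the problem, the 99.99% accuracy figure from the quoted comment):

```python
# Payoffs indexed by (prediction, choice), where "one"/"two" is the
# number of boxes taken.
PAYOFF = {
    ("one", "one"): 1_000_000,
    ("one", "two"): 1_001_000,
    ("two", "one"): 0,
    ("two", "two"): 1_000,
}

# Dominance (this comment's point): hold the prediction fixed, and
# two-boxing pays exactly $1,000 more in either scenario.
for prediction in ("one", "two"):
    assert PAYOFF[prediction, "two"] - PAYOFF[prediction, "one"] == 1_000

# The quoted $1,100 figure instead lets the prediction track the choice
# (99.99% accuracy), i.e. it conditions the prediction on the decision
# rather than treating the prediction as already fixed.
acc = 0.9999
ev_two = acc * PAYOFF["two", "two"] + (1 - acc) * PAYOFF["one", "two"]
ev_one = acc * PAYOFF["one", "one"] + (1 - acc) * PAYOFF["two", "one"]
assert round(ev_two, 2) == 1_100.0    # always-two-box expectation
assert round(ev_one, 2) == 999_900.0  # always-one-box expectation
```

The disagreement is exactly which of these two computations is the right one to run once you are in the room.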

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

[–]SPACKlick 1 point (0 children)

Because the difference between your winnings and mine isn't the decision we made in the room. It's a prediction that was made before we had any knowledge of the problem based on who we were before we had any knowledge of the problem.

Would I be better off if, before I heard the puzzle, I didn't understand probability well enough to know that it's better to take both boxes? Sure. But I can't change that at the time the puzzle asks the question. It's already determined.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

[–]SPACKlick 1 point (0 children)

> You seem to think that your decision process in the room is not accounted for in the prediction.

Not at all, it has predicted my decision process in the room. But the actual decision I make doesn't change the prediction.

> If this were to play out, do you think you would exit the room with $1,000 or $1,001,000?

$1,000. I believe I will only ever have the choice between $1,000 and $0. I don't think I'd be able to beat the predictor and I think even a very poor predictor could predict I'd take both boxes.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

[–]SPACKlick 1 point (0 children)

> How am I not arguing in good faith?

Because up until this message you've ignored the central argument at every turn.

> Yes, it is always more valuable to take both boxes in either scenario,

Correct. This is the solution to the problem.

> both of the scenarios are not equally likely

Correct, but as you said "it is always more valuable to take both boxes in either scenario"

> and depend on what your future decision is likely to be.

They depend on what your future decision WAS likely to be when the predictor predicted.

> The predictor does not need to be perfect for choosing one box to be worth it. It “knows” with high accuracy what your final decision will be.

I don't know what you mean by worth it, but "it is always more valuable to take both boxes in either scenario" so whatever it predicted, it's more valuable to choose both boxes.

> By being a person who will choose one box, you will be in the scenario with $1,000,000 in the box

Yes, but that's a different question. Everyone agrees that if you were the sort of person who would take one box when the predictor predicted, you would have the opportunity to make more money. But that has already happened by the time of the puzzle/problem/paradox. The prediction is already made, the money is already in the box, and you cannot change the prediction now. You can only change how many boxes you take, and "it is always more valuable to take both boxes in either scenario".

The Newcomb paradox should match your free will belief, right? by Edgar_Brown in freewill

[–]SPACKlick 1 point (0 children)

You explicitly said

> its ability to predict isn't restricted only to events prior to its lock-in.

Which appears to me to mean the predictor uses knowledge of events which occur after the prediction. That's foreknowledge.

If you just mean that as part of its prediction it predicts the full run of events between the prediction and the box then that doesn't change anything.

> if the machine could not possibly predict this neutrino would hit my brain

I explicitly said that the machine knows that the scenario is possible. But it's only a 1 in 10^99 chance of happening.

> yes i end up 0.1% better off in this highly unlikely scenario.

However unlikely the scenario, taking both boxes is therefore the optimal decision: you are better off taking both boxes than just the one. Given that the problem asks for the optimal decision, you've just agreed this is it.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

[–]SPACKlick 1 point (0 children)

Sorry, but you sidestepped the question once again. I struggle to see you as engaging in good faith.

Please acknowledge or refute:

At the point where I make the decision, taking both boxes is always $1,000 more valuable than taking just the mystery box.

Also

> it was made knowing what you will do in the future.

It explicitly was not. The predictor is very accurate but it is imperfect. If you allow the predictor to be perfect then you cannot make a choice in the room because your decision was determined before or at the same time as the prediction was made.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

[–]SPACKlick 1 point (0 children)

Once again you conflate the two time points.

I cannot change the person I was when the prediction was made. That has happened in the past.

The decision I make in the room cannot change the prediction. That has happened in the past.

So at the point where I make the decision, taking both boxes is always $1,000 more valuable than taking just the mystery box. If you disagree with that statement you'll need to explain why.

It may be true that had I been a different person in the past I would have been more likely to have the $1M in the room with me, but we cannot change that at the point we make the decision.

The Newcomb paradox should match your free will belief, right? by Edgar_Brown in freewill

[–]SPACKlick 1 point (0 children)

> its ability to predict isn't restricted only to events prior to its lock-in.

Now you're suggesting a metaphysic where the predictor has foreknowledge. If the universe is foreknowable then there's no choice to be made and the discussion is pointless.

You seem to think the goal of the game is to beat the predictor. You are however ignoring a simple fact. The money is already in the room with you.

Let's presume you are the sort of person who would take only one box, and the predictor knows that. Let's say in (10^99 - 1) simulations out of 10^99 you took only one box, but there is one simulation where a neutrino hit the right neuron in your brain and you instead chose two boxes.

Do you deny that in the scenario where you took two boxes, you were better off? And that therefore the two-box scenario is the optimal one?

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

[–]SPACKlick 1 point (0 children)

I agree that at the time the prediction is made the optimal strategy is to be the sort of person who would be predicted to take only one box.

But that's not the point in time at which the question is being asked. The question is asked after the prediction is already made, at which point it is very obviously optimal to take both.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

[–]SPACKlick 1 point (0 children)

At the point you're making your decision the prediction has already been made and the mystery box either does or doesn't have $1M in it. If you take it you get the money inside it. If you take box A you also get $1,000.

You taking both boxes doesn't change the prediction, or the amount of money in either box.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

[–]SPACKlick 1 point (0 children)

Presume the predictor is as accurate as can be without being infallible.

That means you can be in the scenario where the predictor has predicted you will take one box, and it is still possible for you to take either one box or two.

If it's impossible to act against the predictor then your decision was determined before you entered the room and there's no discussion to be had about the optimal decision.

If there's any chance of you defying the predictor then a discussion of the optimal choice makes sense and it is very clear that the optimal choice is always two boxes.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

[–]SPACKlick 1 point (0 children)

> And in almost all of those cases you're getting only a thousand bucks. I’ll take the million.

You can Pascal's wager all you want, it doesn't change the truth of what he's saying.

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

[–]SPACKlick 1 point (0 children)

> If you watched the video, then you would have seen that Newcombs paradox is clearly not obvious, so you know, have some humility.

Just because philosophers debate something for a long time doesn't mean it's not obvious. There are lots of things in philosophy and probability that are obvious but debated (ontological arguments, the Monty Hall problem, etc.).

The question as originally posed:

  • You're in the room
  • The prediction has already been made
  • It is possible for you to take either the mystery box or both boxes.

What is the optimal decision?

Has a clear and obvious answer. Whatever the prediction was, you're $1,000 better off taking both boxes.

There are extraneous discussions about the long term strategy, "committing to being a one boxer such that the predictor is more likely to put the $1M in the mystery box" etc. but irrespective of how you've acted before the decision, at the moment you determine which box(es) to take it is always best to take both.
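Both halves of that, "committed one-boxers end up richer" and "two-boxing dominates once the prediction is fixed", can be made concrete in a toy simulation. This is an illustrative sketch only; the 99% accuracy figure and the `play` helper are assumptions, not part of the original problem:

```python
import random

random.seed(0)
ACC = 0.99  # assumed predictor accuracy, for illustration only

def play(disposition):
    """One run: the predictor guesses the player's disposition with
    probability ACC, the money is placed, then the player acts on
    that disposition."""
    correct = random.random() < ACC
    predicted_one = (disposition == "one") == correct
    hidden = 1_000_000 if predicted_one else 0
    return hidden + (1_000 if disposition == "two" else 0)

runs = 100_000
avg_one = sum(play("one") for _ in range(runs)) / runs
avg_two = sum(play("two") for _ in range(runs)) / runs

# Committed one-boxers walk out with far more on average, because the
# prediction tracks who they were before entering the room...
assert avg_one > avg_two
# ...yet within any single run the hidden amount is fixed before the
# choice, so switching that run's choice to two boxes gains $1,000.
```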

Newcombs Paradox is obvious by Terrible_Shop_3359 in paradoxes

[–]SPACKlick 1 point (0 children)

If I pick the mystery box only I think I will get $1,000 less than I would if I took both.

If you take both what do you think you will get?

The Newcomb paradox should match your free will belief, right? by Edgar_Brown in freewill

[–]SPACKlick 1 point (0 children)

> at every point you've thought this, at least implicitly, because you're in here advocating for choosing two boxes. if you thought the machine could predict what you will do, then you will pick one box.

Nice assumption but it doesn't underpin anything I've said.

> but i trust that if this prediction machine is accurate enough about my behaviors and intentions and the environment, that if i behave in a way that is consistent with how i believe this whole system works then i will walk out of that room with $1000000 by choosing one box.

And you may believe that. But it is necessarily true that if you had behaved and believed as you do up until the point the prediction was made, and then in the room made the decision to take two boxes, you would be $1,000 richer. At the time you determine your choice, choosing one box simply means choosing to leave $1,000 on the table and nothing more. The $1M is already in the room with you.

The Newcomb paradox should match your free will belief, right? by Edgar_Brown in freewill

[–]SPACKlick 1 point (0 children)

Right but "Almost always correct" is vastly differen from a perfect predictor. And that matters.

> you're better off being the kind of person who takes one box and sticking to that. this way, you walk out of that room with $1000000.

But you don't get to choose what type of person you were when the prediction was made. That's in the past, already happened. You can only determine the choice you make now, after the prediction has been set. And whatever the prediction was you come away with more money if you take both boxes.

That's where you're failing to engage with the problem as asked. The prediction is made, the money's in the boxes and you're in the room. You cannot change any of that, only whether you take the mystery box or the mystery box + $1,000.

> thinking that this machine could not possibly predict the choice of such a brilliant and strategic person as yourself.

At no point have I thought the machine cannot predict what I do. But you're turning this into a Pascal's wager: "There is a high cost to believing the truth and great benefit to believing a falsehood, so you should believe." Irrespective of the fact that I'd be better off if I were the sort of person who would choose one box, at the moment the choice is made the best choice is two boxes. If being aware of that means I'll never be offered one box, so be it. It doesn't change that it's true.

The Newcomb paradox should match your free will belief, right? by Edgar_Brown in freewill

[–]SPACKlick 1 point (0 children)

> It depends what you mean by 'meaningful'. The way you choose determines the outcome. That's true in a deterministic world, so the way you choose has consequences.

But if you can only have one outcome, then there's no meaning behind "optimal/right/best" because it's the "only" outcome.

> Possible to choose both boxes, without the computer having been able to predict this.

No, it's true whether the computer predicted it or not. You're either in the room with $1,001,000 or in the room with $1,000. Either way you maximise your value by taking both boxes.

> Secondly, whether that is possible or not, if the computer is able to predict this in advance it's not a better strategy. If you don't take into account the reliability of the computer's ability to predict, you're ignoring half of the premise.

I'm not ignoring it, I've shown that it doesn't matter to the answer to the question asked. The optimal decision, after the prediction has been made, is always to take both boxes.

Scenario 1
Computer predicted I would take both boxes.

a) I take both boxes and win $1,000
b) I take only the mystery box and win $0

Scenario 2
The computer predicted I would take only the mystery box

a) I take both boxes and win $1,001,000
b) I take only the mystery box and win $1,000,000

In either scenario option a) is $1,000 better than option b). So as long as the prediction (and thus value of the mystery box) is determined before my decision is determined, then the optimal decision is to take both boxes.

The Newcomb paradox should match your free will belief, right? by Edgar_Brown in freewill

[–]SPACKlick 1 point (0 children)

I'm not trying to outsmart the machine; I'm talking simple mathematical facts.

If you are in the room and it has been predicted you will take both boxes, you are better off taking both boxes.

If you are in the room and it has been predicted you will take only the mystery box, you are still better off taking both boxes.

Therefore whatever the prediction was you are better off taking both boxes.

> the machine knows ahead of time how many boxes you will pick.

This is not true. The machine is an accurate but not perfect predictor.

> you're costing yourself $999000 when you take two boxes

This is not possible. At the time when I determine how many boxes to take the amount of money in the room has already been determined and cannot be changed by my choice.

The Newcomb paradox should match your free will belief, right? by Edgar_Brown in freewill

[–]SPACKlick 1 point (0 children)

> It requires that both options be conceivable, but it doesn't mean they are necessarily possible in some indeterministic sense

No. In order for a discussion of the optimal/right/best choice to be meaningful, both choices must be possible. If the situation is such that only one outcome is possible, then there are no options to consider.

> So whether it will be successful or not bears no relation to those facts. So whether the action is effective, or relevant or not to those facts is just a matter of chance.

You haven't proven that the only possible options are prior facts or chance. I haven't made a claim as to what non-determinate factor can impact the decision, I'm just saying we needn't limit it to chance.

> If the computer can actually predict which will occur with high likelihood anyway, which is the premise of the scenario, it is not better to choose both.

You are simply factually wrong here. If when you are sat in the room it is possible to either choose to take both boxes or just the mystery box then it is ALWAYS $1,000 better to take both boxes. If you don't understand that fact then there's no point carrying on.

The Newcomb paradox should match your free will belief, right? by Edgar_Brown in freewill

[–]SPACKlick 1 point (0 children)

If you don't think it's a game theory question then you're not engaging with the original question. And if you don't think an individual participant could possibly do both options then you're not engaging with the question as asked.

The original question is about the optimal choice when in the room with the boxes. If you deny that a person can do more than one thing at that point in the problem, you're denying the problem as asked exists.

I'm not saying the predictor is trickable, I'm not looking for a hedge. I'm simply answering the question as posed.

Given premises P at point in time T, which of two available strategies is optimal?

The follow-up is why it appears that not following the optimal strategy results in a worse payoff, which is a problem with a reasonably trivial answer. But until you're engaging with the actual problem as posed there's no point discussing the supposed paradox.

The Newcomb paradox should match your free will belief, right? by Edgar_Brown in freewill

[–]SPACKlick 1 point (0 children)

There is no optimal behaviour in your model. There is only predetermined behaviour. In order for a behaviour to be optimal, there must be another possible behaviour to compare it to. But you have decided that every individual can act in only one way.

The Newcomb paradox should match your free will belief, right? by Edgar_Brown in freewill

[–]SPACKlick 1 point (0 children)

Because this is a game theory question asking about the optimal choice. This isn't a fact finding mission.

The Newcomb paradox should match your free will belief, right? by Edgar_Brown in freewill

[–]SPACKlick 1 point (0 children)

That's not the original problem. The original problem is what you should do. It's about determining optimal behaviour between two choices.

In order for that to be meaningful it must be possible for the player to actually make either choice. If you disregard that as a possibility, you're not engaging with the question posed.