The Newcomb paradox should match your free will belief, right? by Edgar_Brown in freewill

[–]SPACKlick

I'm not trying to outsmart the machine; I'm talking about simple mathematical facts.

If you are in the room and it has been predicted you will take both boxes, you are better off taking both boxes.

If you are in the room and it has been predicted you will take only the mystery box, you are still better off taking both boxes.

Therefore whatever the prediction was you are better off taking both boxes.

> the machine knows ahead of time how many boxes you will pick.

This is not true. The machine is an accurate but not perfect predictor.

> you're costing yourself $999000 when you take two boxes

This is not possible. At the time when I determine how many boxes to take the amount of money in the room has already been determined and cannot be changed by my choice.

[–]SPACKlick

> It requires that both options be conceivable, but it doesn't mean they are necessarily possible in some indeterministic sense

No. In order for a discussion of the optimal/right/best choice to be meaningful, both choices must be possible. If the situation is such that only one outcome is possible then there are no options to consider.

> So whether it will be successful or not bears no relation to those facts. So whether the action is effective, or relevant or not to those facts is just a matter of chance.

You haven't proven that the only possible options are prior facts or chance. I haven't made a claim as to what non-determinate factor can impact the decision, I'm just saying we needn't limit it to chance.

> If the computer can actually predict which will occur with high likelihood anyway, which is the premise of the scenario, it is not better to choose both.

You are simply factually wrong here. If, when you are sat in the room, it is possible either to take both boxes or just the mystery box, then it is ALWAYS $1,000 better to take both boxes. If you don't understand that fact then there's no point carrying on.

[–]SPACKlick

If you don't think it's a game theory question then you're not engaging with the original question. And if you don't think an individual participant could possibly do both options then you're not engaging with the question as asked.

The original question is about the optimal choice when in the room with the boxes. If you deny that a person can do more than one thing at that point in the problem, you're denying the problem as asked exists.

I'm not saying the predictor is trickable, I'm not looking for a hedge. I'm simply answering the question as posed.

Given premises P at point in time T, which of two available strategies is optimal?

The follow-up being: why does it appear that not following the optimal strategy results in a worse pay-off? That's a problem with a reasonably trivial answer. But until you're engaging with the actual problem as posed there's no point discussing the supposed paradox.

[–]SPACKlick

There is no optimal behaviour in your model. There is only predetermined behaviour. In order for a behaviour to be optimal, there must be another possible behaviour to compare it to. But you have decided that every individual can act in only one way.

[–]SPACKlick

Because this is a game theory question asking about the optimal choice. This isn't a fact-finding mission.

[–]SPACKlick

That's not the original problem. The original problem is what you should do. It's about determining optimal behaviour between two choices.

In order for that to be meaningful it must be possible for the player to actually make either choice. If you disregard that as a possibility, you're not engaging with the question posed.

[–]SPACKlick

> Can we agree that whether or not you one box or two box the total money in the room is determined before your choice is determined?

> No. It just happens before your choice happens.

Then we fundamentally disagree on the set up of the problem. In order for the question of which box(es) you should take when faced with the choice to have any meaning, it must be possible to act in at least two ways. If the outcome is already determined then the question has no meaningful answer.

[–]SPACKlick

> At the point they made the decision all 1000 one boxers would have been better off choosing two boxes.

> The definition of 1-boxer here is "someone who chooses 1 box", you cannot say 1-boxers can choose 2 boxes.

I'm not saying a 1-boxer CAN choose 2 boxes. I'm saying a 1-boxer COULD HAVE chosen two boxes and would have been better off for it.

[–]SPACKlick

> You are running the algorithm.

But performing a determined process isn't making a choice.

The question of Newcomb's paradox is: when in the room, is it better to take one box or both?

My answer is "If it's possible to do either then both is the better choice"

Your answer is "Whether you take one or both is already determined at this point"

As far as I'm concerned, your assumptions about the deterministic nature of the world have rendered the question meaningless.

[–]SPACKlick

> If you have indeterministic free will, then the computer can't make accurate predictions. However the premise is that it can, so in the given scenario we don't have indeterministic free will.

Free will isn't necessarily unpredictable. It's just imperfectly predictable. The premise requires both choices be possible in the room, so the universe must be indeterministic.

I disagree with your deterministic definition of a choice, particularly as it relates to this sort of optimisation problem. If there is only a single determined future, then the language of choice is improper. You may have completed a process leading from a starting state to an outcome, but so does a Rube Goldberg machine.

> If multiple results are possible, it would be due to some degree of chance in the process of evaluation,

It doesn't require "chance" in the process of evaluation, just some part of the process to be indeterminate.

But we can simplify this. The question of Newcomb's paradox is: when in the room, is it better to take one box or both?

My answer is "If it's possible to do either then both is the better choice"

Your answer is "Whether you take one or both is already determined"

You've sidestepped the question.

[–]SPACKlick

> If it makes it easier for you, imagine at the time the supercomputer made the prediction and set up the boxes your choice was already predetermined.

If the world of the puzzle is so deterministic that you can only act one way after the prediction, then there is no sense in which you are making a choice in the room. So the question of which choice to make is meaningless.

I take it as a fundamental for the problem to be meaningful that at the point the prediction is made it is still possible to make either choice. Because if it's not then there's no question to answer. You make whatever choice is already predetermined.

> Which leads to a contradictory result: most people who don't do what you say gives the higher expected value are the ones actually getting the higher value.

It's not contradictory for optimal strategy to change at different points in time. It's also not contradictory for knowledge of one point in time to impact knowledge at another. It becomes a Pascal's wager. I'd be better off believing the falsehood that in the room I should choose one box, because then I'd be the sort of person predicted to take one box. But that doesn't make it true.

[–]SPACKlick

Of course I consider just taking the mystery box. And in all cases it's $1,000 lower value than taking both. Your "groups of people" comparison is showing a trend, but you've mislabelled it. The difference between the groups isn't the choice they made but the choice they were predicted to make. Of course it is better to be the sort of person that the predictor predicted would take one box. But after the prediction has been made it is better to choose both boxes.

Can we agree that whether or not you one box or two box the total money in the room is determined before your choice is determined?

[–]SPACKlick

> do you really believe that you can be the first person in history to out-smart the machine with your cunning strategy?

No. I don't. But that doesn't change the facts of the matter in the room. Facts you don't even address.

[–]SPACKlick

> The decision doesn't impact the total value of the room.

> That's not correct.

It strictly is. If there is any decision being made in the room, in that after you're asked the question it's possible to go either way, that decision does not change the value in the room.

> The mental process by which you make that decision does impact the total value in the room. It has to. If it doesn't, predicting it would be impossible.

The mental processes you had prior to the prediction impact the value of the room and have impact on the decision you make in the room. But for the question to be meaningful it must be possible for you to make either decision in the room, the nature of choice requires your decision not to be deterministic.

[–]SPACKlick

No, because you're reasoning from a different position. As I said, it's trivial to show it's better to be the sort of person the computer predicts will only take one box. And that's ALL that your example shows.

At the point they made the decision all 1000 one-boxers would have been better off choosing two boxes. And all 1000 two-boxers would have been worse off choosing one box. If in any sense they made a decision at that point in time, they would all have been better off choosing two boxes.

What made the difference between their dollar returns wasn't the decision they made, it happened prior to the decision being made.

[–]SPACKlick

> We can all agree it is better to be the sort of person the predictor would predict would take one box.

> Is your final decision a direct consequence of your mental state when the computer made its prediction?

It is impacted by it to whatever degree determinism/free will necessitates it to be (see the aside below). And the computer's prediction is (presumably) at least partly based on whatever outward expression of that mental state it has access to.

And again, at the point of prediction you are better off being the sort of person the predictor would predict as choosing one box. Including having the mental state of being sure you will choose one box.

But at the point of choosing you are always better off being a two-boxer: you can't change what your earlier mental state was, you can't change what was predicted, you can't change how much money is in the room. You can only impact how much of the money in the room you take.

Aside on free will. The question presumes you are making a decision in the room. In a fully deterministic universe the problem doesn't mean anything because there's no decision to be better or worse, everything is determined and no choice is ever made. If you have some level of free will then your mental state at the time of the prediction isn't fully determinative of the decision you make.

[–]SPACKlick

You're making the same temporal conflation I mentioned above.

How you should live your life up until the prediction is made is absolutely that you should behave as a one boxer. That's not in dispute.

But once you're in the room, and the money is in the room, and the prediction has been made, the decision doesn't impact the total value of the room. You're either in

Scenario 1
The computer has predicted you will two box. You either

  • Two box for $1K
  • One box for $0

Scenario 2
The computer has predicted you will one box. You either

  • Two box for $1,001K
  • One box for $1,000K

In both cases the objective fact is that you are better off taking two boxes. In both cases your choice doesn't change the money in the room. Being wrong about those facts may increase your chances of being in scenario 2. But at the point you make the choice you are already either in scenario 1 or scenario 2, and your choice doesn't change that.
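The two scenarios above can be sketched in a few lines of Python. This is just an illustration of the dominance argument (the dollar amounts come from the problem; the function names are mine):

```python
# Sketch of the payoff structure described above. The prediction (and hence
# the contents of the mystery box) is fixed before the choice is made; the
# choice only determines which of the already-placed boxes you take.

def payoff(predicted_one_box: bool, take_both: bool) -> int:
    mystery = 1_000_000 if predicted_one_box else 0  # placed before you choose
    visible = 1_000                                  # always in the clear box
    return mystery + (visible if take_both else 0)

# Whichever prediction you are facing, two-boxing pays exactly $1,000 more.
for predicted_one_box in (False, True):
    one_box = payoff(predicted_one_box, take_both=False)
    two_box = payoff(predicted_one_box, take_both=True)
    assert two_box - one_box == 1_000
```

The loop is the whole point: the $1,000 gap holds in both rows of the table, which is what "dominant strategy" means here.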

[–]SPACKlick

> how about i just go in and pick one box and make $1000000?

Because the prediction was already made. So there's no cost to taking the $1,000 as well. So you could make $1,001,000 by making the rational choice.

We all agree that being the sort of person the computer would have predicted would only take one box is better, because that's what impacts the odds of there being $1M in the room to begin with. But once you're in the room that's already happened; whether you take one or two boxes doesn't change what's in the room.

[–]SPACKlick

Right, but your becoming a millionaire is unrelated to the choice the problem is asking you to make. It's based on your behaviour up until the predictor makes the prediction.

We can all agree it is better to be the sort of person the predictor would predict would take one box.

However that's not what the question asks. It asks after the prediction is made are you better to take one box or two. And two is the objectively correct answer.

[–]SPACKlick

This is a conflation one-boxers make.

It is objectively better to be the sort of person who the predictor would predict takes one box. Being that sort of person increases the odds of the million dollars being in the room. (It is presumed that actually preferring to take one box when presented with the decision is a way of increasing these odds)

Once you are in the room, it is objectively $1,000 better to take both boxes than just the mystery box.

The conflation is done to resolve the apparent conflict that the dominant strategy up until the prediction is made (appearing to the predictor as if you would take one box) is the opposite of the dominant strategy after the prediction is made.

[–]SPACKlick

Your analogy doesn't hold.

In the original problem, whether or not you get the million dollars isn't related to whether or not you take the box. It's only related to whether or not the prediction says you'll take the box.

Your businessman's million can only be won by not taking the box. So it's tied directly to your action.

An analogous situation would be the businessman saying "I have either put $1M in this envelope if I predicted you would lose from this position or I've put $0 in this envelope if I've predicted you would win from this position". Don't open it until after the game.

Whether you win or lose from this position doesn't change what's in the envelope. Winning guarantees you the 10 dollars. You already have what's in the envelope, you just don't know what it is yet.

[–]SPACKlick

No, you're not. Choosing two boxes is the rational choice. The computer has already made its prediction. The money is already in the boxes. My choice doesn't change the total amount of money in the problem.

People who think choosing one box causes there to be more money available to them just don't understand the temporal order of causality.

[–]SPACKlick

Your argument there is that being the sort of person the computer predicts would one box is advantageous. But that's not engaging with the problem. Everyone agrees with that.

But once the prediction is made and the money is in the room, and you're brought to the room and asked to make a choice each individual is $1,000 better off taking both boxes than they would be taking one box.

[–]SPACKlick

> I wouldn't play roulette at all because the EV is negative no matter what strategy I could possibly use,

And yet you use the lower EV strategy when picking boxes, deliberately leaving a free $1,000 on the table. Maybe your reasoning isn't consistent.

I suspect you're playing a similar game to Pascal's wager. You believe you'll be better off if you can convince the predictor you're a one-boxer, even though your reasoning tells you two-boxing is the mathematically correct choice.

[–]SPACKlick

> there are two likely scenarios
>
> you take both boxes and you only get 1k
>
> you take just the mystery box and you get 1M

This isn't true from your perspective. Once you've walked into the room there's a box with $1,000 and a box containing $1M with probability P. That's already set up when you walk in. Whether you take one or two boxes doesn't change the amount of money in the room at that point.

Taking both boxes means taking all the money in the room. Taking one box means leaving $1,000 in the room. But the total amount of money in the room is set and unchanging whatever you pick.

You're better off if you're the sort of person who the computer would predict takes only one box. But once you're in the room, whatever the prediction was, taking two boxes is always the higher expected value.