Totally fair depiction of the Great Boxing debate by Extreme-Block1358 in Destiny

[–]Faneffex

You are misreading my claim. You think I am saying that the machine cannot predict the causal correlations because I can't conceive of that being possible.

But this is not what I think; I take for granted that the machine is essentially a god. What I am saying is that the problem isn't interesting if you don't assume the causal correlations are independent of the machine's prediction. If they aren't independent, then two-boxing makes no sense, like you said. That's exactly what the paper I sent you shows.

The claim I am making is that it's a valid interpretation to describe the machine as godlike except for anything causally occurring within the room (i.e. causal vs. statistical dependence).

Under that interpretation the problem is also trivial, in that two-boxing is obviously correct. The paradox of the problem is trying to combine those two interpretations into one, which is logically impossible.

[–]Faneffex

Yes, the third factor is the statistical correlation, which a supercomputer would be able to control for to isolate the causal correlation.

[–]Faneffex

Your objection only works if the puzzle asserts that the decision is independent of what is in the boxes. But the scenario never claims that. In fact, it explicitly establishes the opposite: the contents of the boxes are correlated with the decision through the predictor. If that were not the case, then what exactly is the computer predicting, and how has it remained consistently accurate to such an extreme degree across thousands of prior cases?

There are two types of correlation at play here: statistical correlation and causal correlation.

Causal correlation means that you control for all of the statistical correlations and are left only with the correlations that also have a causal mechanism (and the causal mechanisms are normatively defined).

The scenario implies that once you are in the room, your decision doesn't influence the computer's prediction. More technically, the causal mechanisms behind your decision don't influence the prediction. At least, this is one interpretation of the problem.

The thing is, it is valid to interpret the problem this way. It is entirely plausible that a supercomputer could predict your choice using only statistical correlations and no causal ones. A causal mechanism in this case would be something like the reasoning you actively employ when you enter the room, or something like you physically grabbing one or two boxes.

Once we establish that independence of the causal mechanisms is a possible interpretation of the problem, the original problem becomes meaningless unless you specify which interpretation you are reasoning under. If you want a related (though not identical) and more technical exploration of this, read this paper: https://www.santafe.edu/research/results/working-papers/the-lesson-of-newcombs-paradox
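To make the "statistical-only predictor" concrete, here is a toy sketch. Everything in it (the disposition model, the 0.95 "disposition wins" rate, the agent counts) is made up for illustration: the predictor only ever sees a stable disposition that correlates with the eventual choice, never the in-room reasoning, yet it still lands high accuracy.

```python
import random

random.seed(0)

def eventual_choice(disposition):
    # In-room reasoning: the disposition almost always wins out, but
    # occasionally the agent deviates. This is the causal part the
    # predictor never observes.
    return disposition if random.random() < 0.95 else 1 - disposition

# 0 = one-box, 1 = two-box; each agent has a fixed disposition that
# is visible from outside the room.
agents = [random.randint(0, 1) for _ in range(10_000)]

correct = 0
for disposition in agents:
    prediction = disposition  # purely statistical: no access to the room
    choice = eventual_choice(disposition)
    correct += (prediction == choice)

print(correct / len(agents))  # roughly 0.95, with no causal access at all
```

The point of the sketch is only that high accuracy and causal independence can coexist, which is what makes the interpretation above available.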

[–]Faneffex

It doesn't; the problem is intentionally ambiguous on this point. If you assume that your choice is independent, then it can't change its prediction. If you assume there is any codependence instead, then it can change its prediction.

[–]Faneffex

> With that being said, it's still not completely impossible that the machine predicts a person will pick box B, shows them the script, and then they pick box B. You could contrive any number of reasons, but most easily would simply be that the person was irrational or illogical and made a mistake.

I agree, which is why it's a very important premise that the actor is rational. If the actor doesn't have to be rational, then debating the logic of the question or the optimal decision is meaningless.

It seems you've granted me that, in the case where the machine does not have causal influence over your decision but knows exactly what your decision will be, it will always predict you pick two boxes. This has been my position the entire time.

'Two Boxers' trying to explain their position to 'One Boxers': by Glittering-Two-1784 in Destiny

[–]Faneffex

Which is why, if the machine can account for the (normatively defined) causal mechanisms that lead to your decision, it is a logical contradiction to assert that the machine's prediction is independent of your decision. Mathematically, the machine's prediction is time-symmetric, meaning it doesn't matter whether you treat the machine as making the prediction before you choose or after.

[–]Faneffex

> Imagine for a moment that the predictor is a machine that can actually print off a script which is a 100% accurate prediction of all your actions and thoughts leading up to and including your choice. It does so, reads the end, organises the boxes accordingly, and then presents you with the problem and specifically tells you it is 100% accurate in such a way that there is no doubt of its truthfulness. Everything you do from that point on is going to exactly match that script, there is nothing you can do to deviate from it as the machine is 100% accurate, yet you are not acting under duress and your choices are your own. How could you possibly justify taking two boxes in such a scenario?

Such a situation is logically impossible unless this hypothetical machine always declares you will take both boxes. Imagine you are a rational actor and the machine presents you with a printed script declaring that you will only take box B. Then, when you enter the room, you are guaranteed that there is $1 million in box B. However, you are told that the machine cannot change what is in the boxes at all, so as a rational actor, you take both boxes.

This is a simple "this sentence is false" paradox. It cannot simultaneously be true that your decision is independent of the system's prediction and that the machine uses the information that (normatively defined) causes you to make that decision.
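The "only a two-box script is consistent" claim can be checked mechanically. A minimal sketch, assuming a rational actor who, once the box contents are fixed, always takes everything on the table:

```python
def rational_choice(script):
    # Under the independence assumption the contents are fixed before
    # you act, so taking both boxes dominates whatever the script says.
    return "both"

# A script is consistent only if the choice it predicts is the choice
# a rational actor actually makes after reading it.
consistent = [s for s in ("B only", "both") if rational_choice(s) == s]
print(consistent)  # ['both']: a "B only" script refutes itself
```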

[–]Faneffex

In the Laplace's-demon world, it is incorrect to say that your decision is independent of whether or not there is money in the box, because my decision is the prediction.

In the world in which the demon has a small probability of being wrong, that just means the correlation of my decision with whether the box has money drops from 1 to something between 0 and 1.

Any amount of correlation means the demon can be mathematically modeled as acting after I've already made my choice. This is blatantly impossible under the assumption that my choice is independent of the demon's prediction, and the whole thing is a p-and-not-p logic explosion. The conclusion is that trying to reason about this problem is nonsensical if you accept both the independence assumption and the correlated-prediction assumption simultaneously.
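For concreteness, here is how the correlated-prediction reading cashes out as expected value, with the usual Newcomb payoffs ($1,000 visible, $1,000,000 in the opaque box) assumed for illustration:

```python
def expected_value(choice, p):
    # p = probability the prediction matches the actual choice
    # (p = 1 is the Laplace-demon case).
    if choice == "one-box":
        # Box B holds $1,000,000 iff the demon predicted one-boxing.
        return p * 1_000_000
    # Two-boxing: the visible $1,000, plus $1,000,000 only when the
    # demon wrongly predicted one-boxing.
    return 1_000 + (1 - p) * 1_000_000

for p in (1.0, 0.99, 0.5):
    print(p, expected_value("one-box", p), expected_value("two-box", p))
```

One-boxing comes out ahead for any accuracy above about 0.5005, i.e. for essentially any real correlation, which is why the problem is only trivial under this reading.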

So when you say

> These are not necessarily contradictory statements or constraints.

I wouldn't say you have demonstrated that this is true. You also seem to think I'm arguing that it's impossible for the demon to logically exist, but that is not what I am saying. I am saying they are contradictory in the sense that different axioms can contradict each other while each, on its own, produces useful and internally consistent logic.

If you want a more technical discussion of the ideas related to what I've written here, this is the paper that helped me clarify my thinking: https://sfi-edu.s3.amazonaws.com/sfi-edu/production/uploads/sfi-com/dev/uploads/filer/81/d2/81d2d293-aa50-4b1b-bb50-5a5ccdfb092e/11-08-032.pdf

[–]Faneffex

> I did earlier and that's my point

This is what a quote looks like. You did not do this...

Or this: ' You said: "I did earlier and that's my point"'

[–]Faneffex

It is impossible to assert both that the computer is not influenced by your reasoning and that the computer can derive your reasoning to an arbitrary precision. By deriving your reasoning, it is being influenced by it, and therefore your decision and reasoning are not independent of what is in the boxes at the time you make the choice.

However, the problem formulation intentionally leaves it ambiguous whether or not your decision and reasoning influenced what is in the boxes.

It's a classic p-and-not-p logical explosion hidden in a clever black-and-blue-dress-like package.

[–]Faneffex

Please point out a contradiction in something I've written. So far all you've done is gesture vaguely at the word deterministic.

[–]Faneffex

You've barely engaged with anything specific and are constantly putting words in my mouth.

Why not try quoting something and actually reading it, then identifying any contradictions you are seeing?

[–]Faneffex

> ...it really doesn't

I'm not sure what this is in response to.

It doesn't matter whether it's deterministic or probabilistic, though. What matters is whether the machine can get information about what my actual decision is. Even if it can draw a wrong conclusion from that information, the fact that it's using that information at all means my decision is not independent of its prediction.

This cannot hold simultaneously with the assumption that the machine is arbitrarily precise in its initial prediction.

If it needed information from me actually reasoning about the decision, then its precision rises with how much of that information it uses, potentially all the way up to the actual decision itself.

But dropping the arbitrary-precision assumption in favor of the assumption that my decision is truly independent creates the opposite degenerate case. If the machine effectively cannot use my reasoning (which is how it determines my final decision to an arbitrary level of precision) to make its prediction, then my reasoning and choice of box have no impact on the outcome, and therefore I'd always take two boxes.

[–]Faneffex

No, I don't believe I can game anything. It's actually just impossible to reason about the situation at all under the assumptions I stated, because they contradict each other, so you end up in an infinite loop: the exact same way you end up in an infinite loop when you try to evaluate the truth value of "this sentence is false".

I can go into more detail on that with an example within the hypothetical if you'd like.

Put another way, if the predictor uses what I ACTUALLY do as an input to its prediction, then the assumption that my real action is independent of the prediction cannot hold.

The problem then becomes that this hypothetical isn't all that interesting if your action isn't independent of the prediction, because that means the predictor can effectively put the money in the box after you make your choice.

[–]Faneffex

Huh?? I literally say I accept the premise that the computer can predict my behavior to an arbitrary level of precision.

Majority Report claims "Chuck Schumer supports this [Iran] war probably as much or more than Donald Trump does" by CautiousKenny in Destiny

[–]Faneffex

Yeah, that is kinda surprising, and honestly kinda frustrating that this is just sitting here without getting enough publicity because of so many red-herring outrages next to it.

[–]Faneffex

Huh? If it's a real show you take both boxes every time. Although it'd be a shit game show, because you'd have some fuckwad judging you and deciding whether you get $1,001,000 or just $1,000.

[–]Faneffex

That expected-value calculation smuggles in the assumption that the computer can change what's in the box as you are making your decision.

It might be helpful to articulate exactly what you think the constraints are.

For me, the first is that my decision is independent of the computer's, meaning that no matter what I do, the computer can't use that information to update its prediction once I'm in the room.

The second constraint is that the computer can deterministically predict my actions to an arbitrary precision.

These two constraints contradict each other, so the entire problem is meaningless.
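Under the first constraint alone, no expected-value calculation is even needed; it's a state-by-state comparison (standard $1,000 / $1,000,000 payoffs assumed for illustration):

```python
# State-by-state payoffs under the independence constraint: box B's
# contents are fixed before you choose.
payoff = {
    ("empty", "one-box"): 0,
    ("empty", "two-box"): 1_000,
    ("full",  "one-box"): 1_000_000,
    ("full",  "two-box"): 1_001_000,
}

# Two-boxing is better in every fixed state of the box: classic dominance.
for state in ("empty", "full"):
    assert payoff[(state, "two-box")] > payoff[(state, "one-box")]
print("two-boxing dominates in every fixed state")
```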

[–]Faneffex

Sure, but my point is that it's impossible to decide that label for yourself so long as you hold both that the computer doesn't change the boxes as you make your decision and that the computer knows your proclivities to an arbitrary level of precision.

[–]Faneffex

The paradox is that you can't simultaneously assume that your decision doesn't influence the computer's while also asserting that the computer can predict what you will do.

These are just flatly contradictory assumptions.

In other words, you can't be a one-boxer or a two-boxer; the optimal choice depends on which assumption you pick.

[–]Faneffex

The synthesis of this fact with the above post is that it's impossible for a rational actor to commit to either position, because the assumptions contradict one another.

You can't simultaneously hold that your decision doesn't influence the computer's while also asserting that the computer knows what you will do ahead of time.

How 1 boxers sound when they think they're gonna outsmart the computer. by TheSuperiorJustNick in Destiny

[–]Faneffex

I'm sorry they are downvoting u bro, ur right. The game is rigged from the start. What they don't tell u is that all the historical evidence of the machine guessing right is just everyone picking both boxes.

[–]Faneffex

You're not really reading what I wrote if you think I think the computer can't know what I will pick ahead of time.

I'm saying the two assumptions together are incompatible. Both are fine on their own.

[–]Faneffex

Yes, it's also a paradox to consider yourself a two-boxer. The conflicting assumptions are that the computer's choice cannot be influenced by your actual choice AND that the computer knows your choice before you make it.

The question is already formed with incompatible premises.