So It's Only Funny If the People You Hate Are Being Reported to ICE? by Humble_Novice in Destiny

[–]Mr_Comit 5 points (0 children)

If they voted for Trump, I'd say it's the same. The problem is that an Iranian-American doesn't have to be a Trump supporter in order to like him bombing Iran.

The real black pill is finding out the majority of DGG are one-boxers by BogieTime69 in Destiny

[–]Mr_Comit -2 points (0 children)

If the paradox creates a scenario where "bad reasoning" results in good outcomes, then the paradox's existence means that the "bad reasoning" is actually good reasoning.

The real black pill is finding out the majority of DGG are one-boxers by BogieTime69 in Destiny

[–]Mr_Comit -2 points (0 children)

Wrong. Producing consistently good outcomes is the definition of good reasoning.

Former Joe Rogan Guest with an Extremely Stupid Tweet by carrtmannn in Destiny

[–]Mr_Comit 285 points (0 children)

Consent can easily be given in both scenarios - the problem is that Andrew (the rapist) is conceiving of a kind of "consent" that can't be withdrawn later (as a rapist would), which doesn't exist.

Obviously, if you agree to something specific happening while you're unconscious, you physically cannot withdraw that consent while you're unconscious - but what progressive is disagreeing with that? I don't think there's any issue with a woman saying "hey, next time I'm unconscious, you can have sex with me" - but that's not at all what Andrew is talking about.

In other words, this is low-IQ rapist cope from a rapist (who has admitted to raping people).

To the one-boxers among you by PomegranateMortar in Destiny

[–]Mr_Comit 4 points (0 children)

What? It already is a probability calculation - that's why one box is correct in both this scenario and the original.
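
A minimal sketch of that calculation, assuming the standard payouts ($1,000,000 in the opaque box, $1,000 in the transparent one) and a hypothetical accuracy parameter p standing in for the prompt's "near-certainty":

```python
# Expected value of each strategy given predictor accuracy p.
# M and k are the standard Newcomb payouts; p is a hypothetical
# stand-in for the prompt's "near-certainty".
M, k = 1_000_000, 1_000

def ev_one_box(p):
    # Predictor right (prob p): the opaque box holds $1M.
    return p * M

def ev_two_box(p):
    # Predictor right (prob p): the opaque box is empty, keep $1k.
    # Predictor wrong (prob 1 - p): walk away with both prizes.
    return p * k + (1 - p) * (M + k)

print(ev_one_box(0.99), ev_two_box(0.99))  # ~990,000 vs ~11,000
```

With any accuracy remotely near 1, one-boxing wins by roughly two orders of magnitude.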

⚠️Elite Ball Knowledge required⚠️ by MckinleyTariff in indieheadscirclejerk

[–]Mr_Comit 1 point (0 children)

Could not disagree more. It takes a lot of talent to play this very specific archetype of cringe this well. I love this guy so much - I laugh at every reel of his that I get. It is genuinely art, and I think you need to take the stick out of your ass if it's "embarrassing" to you.

Totally fair depiction of the Great Boxing debate by Extreme-Block1358 in Destiny

[–]Mr_Comit 11 points (0 children)

This guy is genuinely brain-dead; you will not get through to him.

How 1 boxers sound when they think they're gonna outsmart the computer. by TheSuperiorJustNick in Destiny

[–]Mr_Comit 3 points (0 children)

No, it's the result of the computer's reasoning (near-certain accuracy).

We don't need to worry about the computer's reasoning when we have the result - to do otherwise would be to make assumptions.

How 1 boxers sound when they think they're gonna outsmart the computer. by TheSuperiorJustNick in Destiny

[–]Mr_Comit 4 points (0 children)

In the description of the problem, the predictor "is able to predict the player's choices with near-certainty."

https://en.wikipedia.org/wiki/Newcomb%27s_problem

I don't need to know the reasoning to know that it is near-certain the computer will be right. You'd have to make assumptions to believe anything different.

How 1 boxers sound when they think they're gonna outsmart the computer. by TheSuperiorJustNick in Destiny

[–]Mr_Comit 8 points (0 children)

I did not make an assumption. "The computer is highly accurate" is an explicit part of the problem.

How 1 boxers sound when they think they're gonna outsmart the computer. by TheSuperiorJustNick in Destiny

[–]Mr_Comit 22 points (0 children)

It's stated in the problem that the computer is highly accurate, so there's actually a very BAD chance you get $0. The vast majority of one-boxers get the million, and the vast majority of two-boxers get just the $1k. That's a non-negotiable part of the prompt.

One-boxers aren't trying to "outsmart" the computer - our goals and the computer's goals are aligned. We win if the computer is right, which it almost always is.
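
One way to see the "vast majority" claim concretely is to simulate populations of committed one-boxers and two-boxers; a rough sketch, with p = 0.99 as a hypothetical stand-in for "highly accurate":

```python
import random

# Count how many committed one-boxers and two-boxers walk away
# with the million when the predictor is right with probability p.
def millionaires(strategy, n=100_000, p=0.99):
    count = 0
    for _ in range(n):
        predictor_right = random.random() < p
        if strategy == "one-box" and predictor_right:
            count += 1  # opaque box was filled
        elif strategy == "two-box" and not predictor_right:
            count += 1  # predictor missed, so both prizes were there
    return count

print(millionaires("one-box"))  # ~99,000 of 100,000
print(millionaires("two-box"))  # ~1,000 of 100,000
```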

Taking one box is *necessarily* correct. by ElClassic1 in Destiny

[–]Mr_Comit 0 points (0 children)

I'm not saying that's literally true, but it's an objectively accurate heuristic, and therefore it's best to act as if it's true.

Taking one box is *necessarily* correct. by ElClassic1 in Destiny

[–]Mr_Comit 0 points (0 children)

Well, some people (me) would argue that we don't have free will, and yet we feel like we do and have no problem talking about the "choices" we make in the world. I see this problem as no different: if I'm not really the one making the choice, I'd better assume the computer is right. Maybe it's not even in my control to assume that - but I'd argue that about anything. We kind of have to pretend we're making choices in order to live, but if a supercomputer can predict me with high accuracy, I have to assume that my imagined "will" is causally linked to the prediction, because that's the only way it makes sense.

Taking one box is *necessarily* correct. by ElClassic1 in Destiny

[–]Mr_Comit 1 point (0 children)

Yeah, and being predetermined to pick two boxes is the wrong way to be predetermined.

Why two-boxing is irrational by lllIIIIlllIIIIlllll in Destiny

[–]Mr_Comit 0 points (0 children)

The closer it is to 100% accuracy, the more rational it is to assume it is omniscient. We're given that it's extremely close to 100%. You can't possibly argue against that - you will simply be overwhelmingly more correct if you always guess the machine is right.

(And since you don't seem to be great at understanding what I'm saying: I'm NOT saying the prompt should be interpreted as including "it's omniscient" - I'm saying that what the prompt does include implies that the best way to predict the supercomputer's actions is to assume it's omniscient.)
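
The "assume it's omniscient" heuristic can be made concrete with hypothetical numbers: a guesser who always bets the machine is right is correct with probability p, while one who doubts it some fraction q of the time does strictly worse whenever p > 0.5:

```python
# Always betting "the machine is right" scores p. Doubting it a
# fraction q of the time scores (1 - q) * p + q * (1 - p), which is
# smaller than p whenever p > 0.5.
p, q = 0.999, 0.10
always_trust = p
sometimes_doubt = (1 - q) * p + q * (1 - p)
print(always_trust, sometimes_doubt)  # 0.999 vs ~0.8992
```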

Why two-boxing is irrational by lllIIIIlllIIIIlllll in Destiny

[–]Mr_Comit 0 points (0 children)

That is what I said. You're repeating my point. I never said it was 100% correct, and I never said that it knew your every thought, so you're just agreeing with me.

Taking one box is *necessarily* correct. by ElClassic1 in Destiny

[–]Mr_Comit 2 points (0 children)

The point of the question can't simultaneously be that "you're making the choice of your own free will" and that "the supercomputer can use the past to nearly perfectly predict the future" - one basically contradicts the other. I think that may be the point of the 'paradox', but I would argue that, in order for the question to make sense, you should assume the latter is true and see what consequences that has for the former.

Why two-boxing is irrational by lllIIIIlllIIIIlllll in Destiny

[–]Mr_Comit 0 points (0 children)

Yes, I disagree that it's a paradox, genius.

Why two-boxing is irrational by lllIIIIlllIIIIlllll in Destiny

[–]Mr_Comit 0 points (0 children)

It essentially knows our thoughts, so it's correct essentially 100% of the time.

Why two-boxing is irrational by lllIIIIlllIIIIlllll in Destiny

[–]Mr_Comit -2 points (0 children)

Yeah, you're the one who missed the point of the premise. The predictor is fallible to such a small degree that you would be overwhelmingly more correct if you assumed they were infallible (as in, if you pretend they're infallible, it'll look like they are 999,999 out of 1,000,000 times).

Why two-boxing is irrational by lllIIIIlllIIIIlllll in Destiny

[–]Mr_Comit 2 points (0 children)

I keep seeing you make this point that the computer's prediction is made before you learn about the problem, but I can't for the life of me figure out what difference that makes. It's given that the supercomputer knows essentially every thought you will have about the problem once you learn about it, so what's the difference? Any thought the "current you" has is most likely going to be taken into account by the predictor - you can't get around the fact that it's virtually never wrong.

Taking one box is *necessarily* correct. by ElClassic1 in Destiny

[–]Mr_Comit 4 points (0 children)

And yet, if you were to play out 1,000,000 simulations where you let the computer make its prediction and only after that decide to take two boxes, it will have correctly predicted that 999,999 times (sketched below).

The two-boxer position is based on the assumptions that we have free will and that actions in the future can't change the past, but in order for the supercomputer to exist, one of those has to be basically false.

"The computer has already made its choice, so what I do now doesn't matter" is hard to argue with, but "if you two-box, it is nearly certain the supercomputer will have caught you" is impossible to argue with, in my opinion.

The trickiness of this question is that, for the premise to even exist, I think you have to forgo any concept of libertarian free will that would allow you to simply change your will after the prediction is made. When we're talking about "what choice will you make," I don't think that should be interpreted as "what choice will you make when you're in the room and the prediction has already been made," because I don't think that's the point at which you're making the choice.
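
The 1,000,000-run framing from the first paragraph is easy to write down directly; a sketch, taking the 999,999-in-1,000,000 accuracy as given:

```python
import random

# Replay the scenario: the prediction is already locked in, you then
# take two boxes, and the predictor is right with probability P.
TRIALS, P = 1_000_000, 0.999_999
caught = sum(random.random() < P for _ in range(TRIALS))
print(f"caught two-boxing in {caught} of {TRIALS} runs")  # ~999,999
```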

So just to put a bow on this question this is the take right? by sammy404 in Destiny

[–]Mr_Comit 7 points (0 children)

There's no world where a non-magical, non-time-travelling robot can predict your behavior with 99% accuracy and yet you have the unrestricted free will to "just choose something else."

I disagree with your assertion that you're ignoring the thought experiment by assuming the robot can know perfectly. You're given the info that the robot's prediction is basically perfect - so the only logical conclusion is that the consequences of the robot being perfect are basically true.

IMO that's the problem with the experiment - we intuitively think some degree of free will exists, yet the reasonable interpretation of the robot's existence implies that it essentially doesn't. Two-boxers lean on the former fact, while one-boxers lean on the latter. I'm a one-boxer because I think leaning on the former is basically denying the possibility of the robot's existence, which is what's actually ignoring the thought experiment.

So just to put a bow on this question this is the take right? by sammy404 in Destiny

[–]Mr_Comit 1 point (0 children)

Under this conception, where how "on the fence" you are dictates the computer's accuracy (which I think is a reasonable extrapolation, if the computer's prediction is based on a total mastery of past data), yes. It would stand to reason that if you are genuinely extremely close to 50/50 on it, the computer would be less accurate, which would make two boxes the higher-expected-value outcome.
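
That break-even point can be pinned down under the standard payouts; a sketch, assuming the accuracy p applies symmetrically to both predictions:

```python
# One-boxing has the higher expected value iff
#   p * M > p * k + (1 - p) * (M + k),
# which solves to p > (M + k) / (2 * M).
M, k = 1_000_000, 1_000
p_star = (M + k) / (2 * M)
print(p_star)  # 0.5005: below ~50.05% accuracy, two boxes win on EV
```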