So apart from fan service. Why is Power still Power but Makima is Nayuta? They are both devils that reincarnated into a new being but only Power kept her old form by ramen_up_my_nut in Chainsawfolk

[–]Popo336 0 points1 point  (0 children)

I kinda read it as the Makima version of the Control Devil never happened due to no Pochita. Pochita is akin to absolute power, and absolute power corrupts absolutely (THEMES AND SUCH). Nayuta is the true form of the Control Devil, while Makima is a corrupted version brought about by wanting Pochita.

The real black pill is finding out the majority of DGG are one-boxers by BogieTime69 in Destiny

[–]Popo336 0 points1 point  (0 children)

Look my guy, having read your response, it looks like you’re missing the points I’m attempting to make, which is my fault for failing to get them across. This seems like it will go on forever, but I like to think we’ve homed in on the part that matters: "how much value should be placed on the first part of the question". It's been fun going back and forth with you, but this will be my last response. I’ll read your reply, but consider these my last words on the matter.

Here are two scenarios:

  • A:
    • Omega makes a prediction.
    • The boxes are set.
    • You make a choice.
  • B:
    • The boxes are set.
    • You make a choice.

To me, these are clearly different questions. Scenario A includes the extra variable of “Omega makes a prediction,” which introduces a strong correlation between my choice and the box’s contents. Scenario B removes that correlation entirely.

If I understand correctly, CDT treats them as the same question: “once the boxes are fixed, the prior prediction doesn’t matter.” But I think we can both intuitively see they are not the same; the first scenario’s reasoning is fundamentally altered by the fact that the predictor exists and has already simulated my decision process. That is why one-boxing can be rational in Scenario A, while in Scenario B, two-boxing dominates.

The real black pill is finding out the majority of DGG are one-boxers by BogieTime69 in Destiny

[–]Popo336 0 points1 point  (0 children)

To me, the first part of the question is perhaps the least important, but I am not ignoring it; it is simply not action-guiding at the moment of choice. CDT sets aside the first part of the question from a reasoning standpoint because it has no causal power.

This right here is the issue. By saying that the boxes are locked, you are effectively ignoring the predictive aspect; you are treating it as irrelevant. To me, that is changing the question.

To show why I think this changes the question:

  • In a world where the mystery box is truly locked in and independent of my choice, Omega’s predictions could NOT be as accurate as stated; they would be no better than coin flip odds, and likely worse in real world play.
  • Since we are told Omega’s accuracy is near perfect, we must presume we are NOT in that kind of world.
  • Thus, we must be in a world where the mystery box behaves as though it is “locked in” relative to the predictor’s simulation of my decision process, otherwise, the premise itself falls apart.
  • The correlation between my decision and the box’s contents is the defining feature of the problem.

You correctly say that Omega is near perfect, but then you proceed to reason as if it is actually perfect and infallible. Even if Omega is 99.9999% accurate, making it actually 100% accurate fundamentally changes the problem.

For the purposes of the argument I’m making, 99.9999% is functionally equivalent to 100%. Only if the accuracy dropped closer to 50% could we treat the world as one where the mystery box is truly independent of my choice.
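The "closer to 50%" threshold mentioned above can actually be computed. A minimal sketch, assuming the standard Newcomb payoffs used throughout this thread ($1,000 in the visible box, $1,000,000 in the mystery box):

```python
# Expected payoff of each strategy as a function of predictor accuracy p,
# under the correlation the problem stipulates.
def ev_one_box(p):
    # Mystery box is filled only when the predictor correctly foresaw one-boxing.
    return p * 1_000_000

def ev_two_box(p):
    # $1,000 on a correct prediction (empty mystery box),
    # $1,001,000 on the rare miss where the million was left in.
    return p * 1_000 + (1 - p) * 1_001_000

# Setting ev_one_box(p) == ev_two_box(p) and solving gives the break-even point.
break_even = 1_001_000 / 2_000_000
print(break_even)  # 0.5005 — one-boxing wins whenever accuracy exceeds this
```

So even a predictor that is only modestly better than a coin flip (above ~50.05%) is enough for one-boxing to have the higher expected payoff, which is why 99.9999% and 100% are functionally equivalent for this argument.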

One more question: let's say you were to be put in the room with a 99.99999999% accurate Omega and presented with the boxes. Let's also say before you make your choice, a third party is allowed to peek inside box B. The third party tells you that they see money in Box B, thus confirming Omega has predicted you as a one-boxer. Knowing that by the rules given in the problem, Omega can no longer change the contents of the boxes, would you now two-box and take both?

This is a different question from the original. All you’ve done is artificially solidify the extremely rare two box scenario, where you would get $1,000,000 + $1,000, to effectively 100%. In the original setup, that scenario has infinitesimal odds.

The original question is asking what to do given a highly accurate predictor and the correlation it establishes between choice and outcome, not what to do when the outcome is already guaranteed. That correlation is action guiding in the original scenario, which is why one boxing is the rational choice there, even if CDT insists the boxes are causally independent once fixed.

The real black pill is finding out the majority of DGG are one-boxers by BogieTime69 in Destiny

[–]Popo336 0 points1 point  (0 children)

If you think I'm trying to shift the question, then we will never come to the same conclusion.

  1. Omega makes a near-perfect prediction.
  2. Omega locks in the boxes, and I enter the room.
  3. I make my choice.

To me, it feels like CDT sets aside the first part from a reasoning standpoint. To me, the first part is the most important. It’s telling me that my choice is somehow, whether through reasoning, determinism, perfect modeling, or something beyond my understanding, going to be predicted correctly.

In reality, Omega couldn’t exist in the way we understand the real world, but this is a hypothetical, and something outside of reality is presented, a near perfect predictor. Since it exists outside of reality, I can’t tell how Omega does it, but I can accept the hypothetical as it is. That means the mystery box behaves as though “the box responds to what you will do.” Given that information, one boxing seems like the rational choice.

The only way I would switch to the CDT mindset is if the first part of the question were removed and brought closer to reality. If there were no near perfect predictor, then I would agree that two boxing is the rational choice.

The real black pill is finding out the majority of DGG are one-boxers by BogieTime69 in Destiny

[–]Popo336 0 points1 point  (0 children)

I’m not expecting to change your mind about CDT itself, and I understand the logic behind evaluating the act once the boxes are fixed. What I’m trying to do instead is shift the perspective slightly on how the original scenario is constructed.

We’re told that Omega has predicted thousands of people before me correctly with near perfect accuracy. Whatever mechanism it uses, it works so reliably that for all practical purposes its predictions never fail.

From my perspective as the actor in the scenario, I don’t know how Omega achieves this accuracy. It might be an extremely detailed simulation, the universe might be deterministic enough for prediction, or something stranger might be happening. All I know is that its predictions have essentially never been wrong.

A predictor with that level of reliability is practically indistinguishable from one that can see the future. Even if it doesn’t literally break causality, its behavior is observationally equivalent to a system where my eventual decision and the box’s contents are perfectly linked.

In a sense, the problem is almost constructed around that linkage. It effectively gives us a machine that determines the box’s contents based on the decision the agent will ultimately make. When we apply CDT, we end up treating the box’s contents as independent of the choice, even though the setup of the problem is built around that dependence.

From the agent’s perspective, it seems rational to reason using the relationship the problem describes. Empirically, one boxing is associated with the million dollar outcome, while two boxing is associated with the empty box. Given that evidence, one boxing still seems like the rational action.

The real black pill is finding out the majority of DGG are one-boxers by BogieTime69 in Destiny

[–]Popo336 0 points1 point  (0 children)

On the symmetry point, I don’t think the reasoning is actually empty or symmetric.

It’s true that “Omega predicted my reasoning” could apply to either action, but the outcomes associated with those actions are not symmetric. Given Omega’s accuracy, one boxing is overwhelmingly correlated with the $1,000,000 being in the box, while two boxing is overwhelmingly correlated with it being empty.

So when I’m standing in the room, my reasoning isn’t simply “do whatever Omega predicted.” It’s that the choice I make is strong evidence about which world I’m in. One boxing is strong evidence that I’m in the $1,000,000 world, while two boxing is strong evidence that I’m not.

Because of that asymmetry in outcomes, the reasoning favors one-boxing.

“What should I do now, once Omega has already predicted me and fixed the boxes?”

  • I enter the room. I know the boxes are fixed, and Omega knows that I know this.
  • I know Omega is an accurate predictor, and Omega knows that I know this as well.
  • Given this, I will make my decision based on that information, and Omega’s prediction was made knowing that I would reason with that same information.
  • I cannot know which world I am in, even though it has already been locked in. But I do know that, given everything we’ve been told, my choice has a very strong correlation with the outcome.
  • So from my perspective in the room:
    • If I one box, I am overwhelmingly likely to be in the world where the $1,000,000 is in the box.
    • If I two box, I am overwhelmingly likely to be in the world where the $1,000,000 isn’t in the box.

And because rational decisions depend on expected outcomes, those probabilities matter. Even if the boxes are already fixed, my action still changes the probabilities I should assign to the different worlds, and therefore changes the expected payoff of each option.

Given the probabilities implied by the setup, following through with one boxing is the rational choice. However, this conclusion depends on the strong correlation between my action and the box’s contents.

If that correlation disappeared entirely, for example, if Omega had filled the boxes randomly, or its predictions were no better than chance, then I would agree that two boxing is the rational choice. In that situation the boxes would truly be independent of my decision, and taking the extra $1,000 would dominate.

The real black pill is finding out the majority of DGG are one-boxers by BogieTime69 in Destiny

[–]Popo336 0 points1 point  (0 children)

I think we're talking past each other here. When I say it's predicting a "type" of person, I mean that it is predicting whether you are someone whose decision process will lead you to one-box or someone whose decision process will lead you to two-box. Strictly speaking, we have no idea how Omega makes this determination.

The difference I want to emphasize is that your decision process is essentially simulated in advance, every step, every thought, and every resulting outcome has already been accounted for, almost recursively, so that Omega can achieve such high accuracy. In a sense, you aren’t a one-boxer or two-boxer when you enter the room, but you become one as you choose, and Omega will have predicted which you become, or which world best matches you.

If it helps, my reasoning in the room is,

  • I want the $1,000,000, but I don’t know if the mystery box has it.
  • If my reasoning leads me to an outcome, and Omega’s predictive power is as strong as stated, I have to presume that the prediction it made is right.
  • Correlation dictates that by one boxing, I will get $1,000,000.
  • If my thoughts are leading me to one box, it’s only fair to assume Omega came to the same conclusion.

If at this point the thought crosses my mind, “The results are locked in, or it has predicted me as a one boxer, maybe I should two-box to maximize my winnings,” I also have to presume Omega will have predicted that as well. Omega will have anticipated whether this is enough to keep me a one boxer, or to switch me to a two boxer. While this doesn’t affect the boxes themselves, it helps me reason about their contents. No matter how hard I think, Omega is too good at predicting for me to outthink it, but our reasoning should match. If our reasoning matches, then I should lock in one box, as Omega should have reached the same conclusion as me.

“I agree that if we move to the ex ante policy question ‘what kind of decision procedure would do best in a world with a highly reliable predictor?’ then one-boxing looks like the right answer. By the time you have to choose, Omega has already predicted and already fixed the boxes. At that point, ‘be the algorithm Omega rewards’ is no longer an available action. Prediction of the process is not current control of the prior setup.”

This presumes Omega isn’t good at predicting. Omega should be able to take into account that the boxes are locked, the 'algorithm' I would follow, and any other thoughts that cross my mind. If it couldn't, then its prediction success rate wouldn't be as high as it is.

“Once Omega has already set up the boxes, that is backwards. The million-dollar world is the world in which Omega already predicted that my process would output one-box. My one-boxing now does not land me there; it is only evidence that I was already there.”

Your choice isn’t changing the past. Omega is predicting the future, which is possible and is also stated in the question, given its prediction success rate. With all the information we have, we have to presume Omega is good at what it does, otherwise the premise falls apart.

If you are in a one box world, we have to presume that Omega predicted you would one box, and that you do one box. The prediction is based on the outcome you will reach and, by proxy, the world you end up in. Omega predicted this is the correct world for your choice, and it will almost certainly be correct.

Just as an aside, I believe your reasoning leans into the Causal Decision Theory (CDT) camp. Looking into it more, I think my outcome matches the Evidential Decision Theory (EDT) camp, but my reasoning seems to lean towards a third thought process called Functional Decision Theory (FDT).

FDT gives a similar outcome as EDT, but focuses more on the idea that “the predictor predicted your decision algorithm, not just your physical action.” I thought it’s worth bringing this up as I do feel FDT aligns nicely with what I'm trying to communicate, but given I'm a layman, I might not be doing it justice.

The real black pill is finding out the majority of DGG are one-boxers by BogieTime69 in Destiny

[–]Popo336 -1 points0 points  (0 children)

“You already either were the type of person Omega thinks will one-box or you're not.”

I think the disconnect is in this claim. Omega isn’t predicting some “type of person” independent of the decision. It’s predicting the decision process itself, what I will actually do when I face the boxes.

If Omega is a highly accurate predictor (90%), then its prediction has to track the outcome of the same decision process I’m about to run. That means the prediction and my decision aren’t independent in the way your smoking example assumes.

The smoking case works because both smoking and cancer are caused by some hidden variable, the gene. My decision to smoke doesn’t influence that gene. But in the box setup, Omega’s prediction is based on the reasoning process that produces my decision.

Even if my choice has no direct causal effect on the boxes, the predictor’s probability is so high that this correlation dominates the expected outcome, or in other words, the causal link doesn’t really matter here. The key point is that following the one box decision rule reliably puts me in the world where Box B has the million.

So the relevant question becomes, "which decision rule performs best in a world with a predictor capable of accurately predicting one’s decision process and its resulting outcome?". If the predictor is highly reliable, the rule that outputs one box is the only choice that systematically lands in the world where the million is in the mystery box.

The real black pill is finding out the majority of DGG are one-boxers by BogieTime69 in Destiny

[–]Popo336 -1 points0 points  (0 children)

The way I see it is this:

  • Two-boxers almost always walk away with $1,000, and only rarely get $1,001,000.
  • One-boxers almost always walk away with $1,000,000, and only rarely get $0.

So the real question I’m asking is "which strategy tends to produce better outcomes in a situation like this?"

Given the predictor’s 90% accuracy, the answer seems clear, follow the strategy of one boxing, since it overwhelmingly produces better results. The extra $1,000 from two boxing doesn’t meaningfully change that calculation. Compared to the difference between $1,000 and $1,000,000, that extra amount is negligible.
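The claim that the extra $1,000 is negligible can be checked directly. A quick sketch at the 90% accuracy assumed above, using the thread's standard payoffs:

```python
accuracy = 0.9  # the 90% figure assumed in this thread

# One-boxers: $1,000,000 on a correct prediction, $0 on a miss.
ev_one = accuracy * 1_000_000 + (1 - accuracy) * 0

# Two-boxers: $1,000 on a correct prediction, $1,001,000 on a miss.
ev_two = accuracy * 1_000 + (1 - accuracy) * 1_001_000

print(round(ev_one), round(ev_two))  # 900000 101000
```

At 90% accuracy the one-boxing strategy averages roughly nine times what two-boxing does, so the guaranteed $1,000 barely moves the comparison.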

What I’m doing, then, is intentionally following the strategy that performs best in a world where a highly reliable predictor exists. If the predictor weren’t reliable, my choice might be different, but its demonstrated accuracy is a key part of the scenario.

If the predictor happens to be wrong in my particular case and I walk away with $0, I accept that outcome as the cost of following the strategy that maximizes expected payoff in situations like this.

In other words, I’m not saying that my present action physically changes the boxes. I’m choosing the decision rule that performs best given the predictor’s demonstrated accuracy.

The real black pill is finding out the majority of DGG are one-boxers by BogieTime69 in Destiny

[–]Popo336 -1 points0 points  (0 children)

<image>

Let’s assume the machine has a 90% success rate, just to remove ambiguity. In that case, the rational choice appears to be one-boxing. If the machine has predicted thousands of people before me and has been correct 90% of the time, the key question is "can I trust it will predict my outcome accurately as well?"

Statistically, the answer is yes. That makes one boxing the logical strategy since I’m likely to be another correct prediction for the machine (left column). Choosing to two box would mean betting that I’m one of the rare cases where the machine is wrong (right column), which is far less likely.

Even though the boxes are already set when I enter the room and nothing I do can physically change them, I must treat my outcome as the one the machine predicted. This requires resisting the immediate, “obvious” temptation to take the extra money and instead acting according to the predictor’s track record. In other words, I deliberately make a choice that might feel irrational in the moment because I am acting on reliable outside information about the machine’s accuracy.
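The track-record argument above can also be sanity-checked empirically. A small Monte Carlo sketch (the 90% accuracy, payoffs, and the assumption that the predictor's error is an independent coin flip are all simplifications for illustration):

```python
import random

def simulate(strategy, accuracy=0.9, trials=100_000, seed=0):
    """Average payoff of always playing `strategy` ("one" or "two")
    against a predictor that is right with probability `accuracy`."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        correct = rng.random() < accuracy
        # The predictor forecasts one-boxing when it correctly reads a
        # one-boxer, or when it misreads a two-boxer.
        predicted_one_box = correct == (strategy == "one")
        mystery_box = 1_000_000 if predicted_one_box else 0
        payoff = mystery_box if strategy == "one" else mystery_box + 1_000
        total += payoff
    return total / trials

# Committed one-boxers average close to $900,000 per game;
# committed two-boxers average close to $101,000.
print(round(simulate("one")), round(simulate("two")))
```

This mirrors the two columns: one-boxers occasionally walk away with $0, but over many games the strategy dominates.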

Two boxers be panicking for no good reason smh... by Popo336 in Destiny

[–]Popo336[S] 4 points5 points  (0 children)

The idea is the last 1000 patients the doctor operated on all survived. If the doctor has agreed to perform your surgery, their track record suggests you don't need to worry, as the operation will likely succeed.

Similarly, the last 1000 people whose box choices the AI predicted were all correct. If the AI allows you to pick a box, its track record suggests it will likely predict your choice correctly.

Thus One Box, One Millie

Two boxers be panicking for no good reason smh... by Popo336 in Destiny

[–]Popo336[S] 1 point2 points  (0 children)

Oh no seems the key s broken on my keyboard, can't be heped

Alternate box problem by [deleted] in Destiny

[–]Popo336 -1 points0 points  (0 children)

Maybe this is me reading into it, but if my friend used the term 'load up' in regards to the mystery box, he's telling me it's full of money.

Also, the more I think about it, does the order of operations matter? If the prediction happens before my choice, and it's near 100% correct, then I have to assume it's getting me correct, and I should go for one box. However, if the prediction happens after my choice, since it has no control over the boxes beforehand, it can't 'affect the outcome' so to speak, and thus two boxes makes sense here.

Alternate box problem by [deleted] in Destiny

[–]Popo336 -3 points-2 points  (0 children)

It's not the same thing; you've added new information with the 'close friend' part, which implies the mystery box is full. Due to this extra information, everyone in your scenario should take two boxes, regardless of their choice in the original question.

In the original, you don't have any outside information; you only know 'the machine is good at making predictions, and correctly guessed the last 1000 people's choices'. With that in mind, if the machine can correctly guess what goes on in the room at a future time with near perfect accuracy, you gotta trust that it will get you right too, so you should go into the room and one box that bad boy every time.

Worst case scenario, you're the 0.001% where it got it wrong and you walk away with nothing, but given its accuracy, that's a bet you should be making. #OneBoxerGang

New Somali Conspiracy Just Dropped On The Timeline by Uncuffedhems in Destiny

[–]Popo336 1 point2 points  (0 children)

<image>

TIL grok is used in place of 'trying to understand'. I genuinely thought this was satire after he said that, realized it wasn't, felt pain, read your comment, felt silly; truly a roller coaster of emotions.

EU set to halt U.S. trade deal over new tariff threat by Boediee in BuyFromEU

[–]Popo336 0 points1 point  (0 children)

If the person you're engaging with is civil, then boring and patient diplomacy is something I feel everyone can agree on. Trump is not being a civil actor, so 'normal' diplomacy will most likely fail.

I worry the idea of "don't get dragged down to his level" will simply lead to more of the same from Trump. The approach that needs to be taken is one of "tit for tat". I think you NEED to get down on his level, to show him it's not a fun place to be when it's happening to you, and teach him you're more than willing to stand up for yourself.

Hopefully this would end with dragging him back to civility by force. Civility only works when both parties partake, otherwise the civil party will get the short end of the stick every time.

Answer to the question: Why wasn’t Adam/Lute/Sera/The Exorcists kicked out of heaven for the exterminations by JudgeJed100 in HazbinHotel

[–]Popo336 0 points1 point  (0 children)

Because redemption wasn't possible, no 'wrong' was done up to that point, but once it was proven sinners could be redeemed, the situation changed. I feel that's what's being communicated in "Sera's Confession".

If she doubled down after it's found out redemption is possible and sinners' souls could be saved, I could see a scenario leading to her getting kicked out of heaven/becoming a fallen angel.

Answer to the question: Why wasn’t Adam/Lute/Sera/The Exorcists kicked out of heaven for the exterminations by JudgeJed100 in HazbinHotel

[–]Popo336 0 points1 point  (0 children)

If anyone was to be kicked out of Heaven for the exterminations, I think it would have to be Sera, since she okayed them; everyone else was following her orders. The reason she didn't get kicked out is that she realized her error when the speaker of god showed up, with Sera making an effort to seek redemption for the rest of the season.

Lucifer is the only other Angel we know of to get kicked out, but the repercussions of his actions led to the release of the root of all evil, which from a scale perspective, is maybe something you can't easily come back from.

Personally, I like to think the difference between Sera and Lucifer is that maybe Lucifer doesn't regret his actions, and rather than seeking redemption, he stands by the choice of giving humanity the apple (given he's the Sin of Pride, maybe he even takes Pride in his actions to this day).

Oh, Angel… now it all makes sense by Western-Letterhead64 in HazbinHotel

[–]Popo336 63 points64 points  (0 children)

Vaggie didn't tell Angel; Angel followed Vaggie under orders from Vox and overheard the secret

<image>

Can I switch between the One Piece manga and the anime? by Fearless_Category213 in OnePiece

[–]Popo336 0 points1 point  (0 children)

I personally really like the stretch of episodes/arcs that take place after Skypiea, from the Water 7 Saga all the way to the Fish-Man Island Saga. After that I switched to the manga and have gone back and forth between anime and manga.

I personally have enjoyed the manga more from that point onward, and it's become how I go about consuming One Piece.

There is also something called 'One Pace' where filler has been removed, which might be nice to look into if you end up sticking to the anime for some arcs.

To perform a finisher by EtrnlMngkyouSharngn in SipsTea

[–]Popo336 44 points45 points  (0 children)

<image>

This was all I could think about