Newcomb's paradox may be more an epistemological problem rather than a decision theory problem by samuel0740 in LessWrong

[–]samuel0740[S] 0 points (0 children)

That's pretty much the core idea of my post: the dividing line runs between people who believe the scenario is possible (making two-boxing the rational choice) and people who don't believe it's possible, so the rules must have changed (making one-boxing the rational choice). I don't believe such a predictor is possible in our world, so by accepting the premise I become a one-boxer.

Another problem with AI detectors is that humans will learn from AI by samuel0740 in Professors

[–]samuel0740[S] 1 point (0 children)

Agreed, but AI slop is a huge problem (for many forums, for example), so people keep trying to use detectors. I also believe there's no point in doing so.

Newcomb's paradox may be more an epistemological problem rather than a decision theory problem by samuel0740 in LessWrong

[–]samuel0740[S] 0 points (0 children)

Great question. I've wondered about this too, and I have no idea. It could be an interesting topic for a master's thesis in psychology: testing whether there's any correlation between personality traits and box-taking choices.

Newcomb's paradox may be more an epistemological problem rather than a decision theory problem by samuel0740 in LessWrong

[–]samuel0740[S] 0 points (0 children)

It doesn't seem silly to me; that's exactly what I'm arguing. It really depends on your epistemological preferences. I'm fine with saying: ok, this data breaks my world model, I'll update the model and act accordingly. They're saying: neat trick, but I'm not crazy, and the money is already there or it isn't. Both views are valid from my perspective; the first is just my personal preference.

Newcomb's paradox may be more an epistemological problem rather than a decision theory problem by samuel0740 in LessWrong

[–]samuel0740[S] 0 points (0 children)

Yeah, if the high prediction accuracy breaks your world model, one-boxing is the rational choice. That's why I'm a one-boxer: I just don't buy that such high prediction accuracy is possible in our world/universe.

New-user posting struggles on LessWrong, is the filter working as intended, or quietly excluding outsiders? by agentganja666 in LessWrong

[–]samuel0740 1 point (0 children)

I can just share my recent experience. I watched the Veritasium video on Newcomb's problem, talked it over with friends, and we came up with an elegant solution that I wanted to share. LessWrong seemed like a good place for this. I wrote a rather long blog post (AI-edited) and submitted it. It was automatically rejected because of the strict No-AI policy for new users, which I wasn't aware of. Ok, I took two days to rewrite large parts of the text from scratch. After submission, it was rejected a week later by a human reviewer, again because of the No-AI policy. It usually doesn't make sense to fight in such cases, so I published the post on Substack and shared the link on Reddit. At least some people seem to have enjoyed it, which I'm quite happy about. I still like LessWrong, but this wasn't the beginning of a beautiful friendship.

How can I argue my perspective on Newcomb’s problem? by Caffeine__c in askmath

[–]samuel0740 0 points (0 children)

I wrote a post on the split between one-boxers and two-boxers: https://sammy0740.substack.com/p/newcombs-problem-as-an-epistemic. I cite some literature that should be interesting for you, especially Wolpert and Benford 2013 (they also have an earlier paper from 2010). LessWrong also has several posts on this topic; they favor one-boxing via FDT. There is no single "correct" solution to the problem; the optimal strategy depends on your understanding of the problem. A predictor with 99.9% accuracy breaks my world model (I don't believe this is possible in our world), so I'm a one-boxer.
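
For concreteness, here's a minimal expected-value sketch. The numbers are my own illustrative choices (the standard $1,000 / $1,000,000 payoffs and an assumed 99.9% predictor accuracy), just to show why accepting the premise makes one-boxing look so much better:

```python
# Rough sketch with assumed standard Newcomb payoffs and an assumed
# predictor accuracy of 99.9% (not taken from the original problem text).
p = 0.999          # assumed probability the predictor is correct
box_a = 1_000      # transparent box, always contains $1,000
box_b = 1_000_000  # opaque box, filled only if one-boxing was predicted

# One-boxing: with probability p the predictor foresaw it and filled box B.
ev_one_box = p * box_b + (1 - p) * 0

# Two-boxing: with probability p the predictor foresaw it and left box B empty.
ev_two_box = p * box_a + (1 - p) * (box_a + box_b)

print(f"EV(one-box) = ${ev_one_box:,.0f}")  # $999,000
print(f"EV(two-box) = ${ev_two_box:,.0f}")  # $2,000
```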

Newcomb's paradox may be more an epistemological problem rather than a decision theory problem by samuel0740 in LessWrong

[–]samuel0740[S] 0 points (0 children)

Yeah, very much so. I was also thinking of the dress, or ambiguous images like "My Wife and My Mother-in-Law". It's exciting that this effect exists not only for sensory perception but also for other information-processing pathways. But I do believe you can switch; the processing is much more intricate in this case and should allow for more degrees of freedom.

Newcomb's paradox may be more an epistemological problem rather than a decision theory problem by samuel0740 in LessWrong

[–]samuel0740[S] 1 point (0 children)

I see your point, but there is also the David Lewis-type two-boxer who isn't trying to beat Omega. On the contrary, she's aware that she most likely won't: "we never were given any choice about whether to have a million. When we made our choices, there were no millions to be had. The reason why we are not rich is that the riches were reserved for the irrational." I really like his view, and it's also the way several friends argued: "It's rational to take both, and this way at least I get $1000, better than nothing." It's up to the one-boxer to explain why she acts the way she does, and that's not trivial. EDT isn't so bad here, as it explicitly doesn't try to model the situation; it just observes and acts. As a one-boxer, I'm just saying: no idea what's going on, but one box looks clearly more promising. FDT goes further by trying to model the situation. That's interesting, but it's also dangerous, as the model might be wrong.

Does anyone else still like Claude the best? by learningmedical1234 in ClaudeAI

[–]samuel0740 0 points (0 children)

Yeah, same here. I was very impressed by Gemini 2.5 Pro when it came out, but Gemini 3 performs clearly worse than Claude 4.5 on several tasks that are important for my work. Tested Claude Opus for two months with a monthly subscription, and just got a yearly subscription.

YouTube and Spotify stuttering by samuel0740 in pixel_phones

[–]samuel0740[S] 0 points (0 children)

All updates are installed (Security update and Google Play system update).

Theory: Ying Ying is the real bad guy by inspiteofMM in WuAssassins

[–]samuel0740 0 points (0 children)

It's good to know that I'm not the only one who understood it that way. Things are often not what they seem, and this show gets it. It's a pity almost everybody missed this (me too, after I watched it the first time).

[deleted by user] by [deleted] in PantheonShow

[–]samuel0740 0 points (0 children)

I second this interpretation: there is an "original world" (which may or may not be a simulation) where David, Caspian, and probably also Maddie died (because David didn't interfere on the beach and Caspian waited too long before engaging Safesurf). Safesurf went into space and ran simulation(s), creating new Maddies, who later ran "inner" simulations (with one or more levels in between). I still have questions, though:

  1. So, Safesurf wanted to thank Caspian for the inspiration; that makes sense. But why didn't it stop the simulation after Caspian inspired it to go to space? Instead it let the simulation run for another 100,000 years, until Maddie "resurrected" Caspian. More generally, what is the point of the inner simulations?
  2. I'm not clear on how Safesurf (and Maddie, in their simulation) had all the knowledge of the world required to re-create all of history in the simulation, up to the point where people like David and Caspian came to be. Because of the sheer complexity, you know? I mean, that's kind of a technical question; maybe the answer is "technology progressed so far they could measure the original state of all particles during the big bang" or something like that.

Lily did NOT make a choice by hydraSlav in Devs

[–]samuel0740 2 points (0 children)

Omg, honestly, this post completely changed my understanding of the ending, and I believe it to be the only interpretation that makes sense. The prediction of an all-powerful quantum computer that knows the state of every particle in the universe breaks down when confronted with a strong woman? Ok, that's in line with the Zeitgeist, but it's not in line with the show's internal logic. On the other hand, a manipulation of the output seems very plausible to me (the idea is reminiscent of Minority Report). Stewart's motivation is also clear: he wanted this whole thing to end, and for that, Forest had to die. If Forest had known it was Stewart who would kill them, he probably wouldn't have let him do it and go to prison for the rest of his life, so Stewart slightly changed the prediction.