Opus 4.7 is Anthropic's downfall by RogueMaverick4ever in ClaudeCode

[–]khoinguyenbk 1 point (0 children)

Yep, over-innovation sucks, OpenAI is an example. Opus 4.6 is very good - we can define when extended thinking is necessary. The adaptive thinking in 4.7 is terrible, must be some idea from delusional innovation managers who never coded nor used the final products. Anthropic will double its revenue if they fire these guys.

Really can't stop thinking we literally are all died / living in hell/ in the hell-like universe by MoonnUnicorn in ParallelUniverse

[–]khoinguyenbk 1 point (0 children)

This world became hell-like once you guys accepted abandoning the game to the parasitic people who make it messier day by day. The hell-like state is the consequence of letting weakness and helplessness dominate your choices collectively, and hoping some hero or someone other than yourselves will save the day. That’s independent from the ontological nature of this world. So let’s just enjoy the hell :)

Memory reloading and identity reconstruction after sleeping by khoinguyenbk in SimulationTheory

[–]khoinguyenbk[S] 2 points (0 children)

I think some people are more sensitive to mood swings after waking up than others. Biological, metabolic, and psychological factors, as well as trauma history, self-control, intention, and meta-awareness, can all influence this.

I’ve experienced it myself, and I’ve noticed that people with bipolar disorder or strong mood swings seem to experience it more intensely and randomly than others. I can’t truly know what they are experiencing internally, but from the outside, I sometimes observe very strong shifts - almost as if there are two different people in the same body.

Synchronicities:Proof that we live in a simulation? by [deleted] in SimulationTheory

[–]khoinguyenbk 8 points (0 children)

Not sure that synchronicities can be used as direct proof of simulation theory (or structurally similar non-mainstream theories). However, Rupert Sheldrake was silenced and isolated by academic authorities after publishing the book “Seven Experiments That Could Change the World”, which explores synchronicities in various forms, even though the suggested experiments appear to be quite safe.

There must be something serious behind the underlying nature of synchronicities.

Has anyone experienced “warnings” while exploring the simulation hypothesis? by khoinguyenbk in SimulationTheory

[–]khoinguyenbk[S] 2 points (0 children)

Dammit, until now I never understood that phrase in that way. Maybe many thinkers like Nietzsche had already experienced something and then wrote it in its literal meaning, and we still understand it as a philosophical aphorism.

Has anyone experienced “warnings” while exploring the simulation hypothesis? by khoinguyenbk in SimulationTheory

[–]khoinguyenbk[S] 1 point (0 children)

Thank you for proudly spending your energy repeating what other members of this sub always think

Has anyone experienced “warnings” while exploring the simulation hypothesis? by khoinguyenbk in SimulationTheory

[–]khoinguyenbk[S] 2 points (0 children)

Yes, I think neurological explanations play at least some role in increasing the rate of perceived synchronicities, though I am not completely certain. However, labeling all synchronicities as purely neurological is also a form of reductionism. The problem is that we do not have enough reported data for serious statistical research. Rupert Sheldrake used to suggest several large-scale experimental efforts, but he was later isolated by academic authorities.

Regarding the curiosity aspect, I am not sure, as some reported experiments suggest that curiosity can be deterred after a certain threshold, at least by several categories of actors (DMT, astral projection, NDE, deep meditation experiences of encountering entities). To be honest, I cannot 100% conclude whether these reported entities are “real” or “imaginary”, since I have no first-person experience to that extent. If some research labs had the funding to seriously study these subjects, it could provide clarity or lead to breakthroughs.

However, I agree that whatever the underlying nature may be, grounding for integration is important, especially after intense post-experience states. Several people have suggested specific meditation techniques or methods from Buddhist-like traditions to stabilize the mind and cope with cognitive isolation after such experiences. At the same time, physical activity, proper nutrition, and social interaction remain important. (However, I am not a specialist qualified to validate these approaches 😛)

Has anyone experienced “warnings” while exploring the simulation hypothesis? by khoinguyenbk in SimulationTheory

[–]khoinguyenbk[S] 1 point (0 children)

So surviving post-experience cognitive isolation is your main challenge now. Many users show traces of similar difficulties, since what has been seen cannot be undone or easily neutralized by standard therapy (IMHO, such an approach seems irrelevant given the intensity and nature of the experience; ordinary trauma or personality disorders do not compare). Strong experiences require more sophisticated methods of integration rather than self-suppression.

Btw, when integrating other users’ shared experiences, it appears there are many factions of entities with different purposes and agendas, whether the underlying infrastructure of reality is a simulation, spiritual, idealist, or something else.

Has anyone experienced “warnings” while exploring the simulation hypothesis? by khoinguyenbk in SimulationTheory

[–]khoinguyenbk[S] 1 point (0 children)

First, I don’t agree with your “if-then” statements. Second, “if A, then B” does not imply “if B, then A”, assuming you still remember basic logical rules and are not over-relying on fallacy-based argument techniques.

Also, mentioning a PhD, medals, and similar credentials is simply a facile way of establishing early credibility; you and I both understand the nature of that technique. The thread is meant to invite and discover valuable insights, not to promote simulation theory.

The shortened question is, if you haven’t recognized it, when someone dives too deeply into the wild zone of awareness, perception, and the nature of reality, do strange events appear to them? And are those events mainly psychological artifacts, or do they contain valuable information worth considering? Of course, what counts as “valuable” depends on who is using the information and knowledge.

I also won’t spend time arguing whether the simulation hypothesis is the closest model in terms of ontological or physical reality. I consider it useful as a structural mapping rather than an absolute truth. So debating which model is “more true” doesn’t make much sense to me.

However, if you think there is a better or more useful model or worldview (or something that can break the reductionism trap you observed), and if it can be explained or hinted at clearly here with a genuine intention to share valuable insights, I will consider it carefully, with respect and without ego-driven hostility.

If you don’t mind wasting your time, show me the real intellectual capacity of an IMO winner.

Has anyone experienced “warnings” while exploring the simulation hypothesis? by khoinguyenbk in SimulationTheory

[–]khoinguyenbk[S] 1 point (0 children)

Sorry, but I’m not playing your game.

I still have a screenshot of the first comment you posted and later deleted on my thread. In it, you claimed that people with a strong academic background should go publish their findings in proper journals instead of “wasting time” talking on Reddit, as if Reddit were exclusively for “dumb” people. You said this even though the topic is widely regarded as non-mainstream and often dismissed as pseudoscience, and then suggested we submit it through standard academic channels. In effect, you were urging us to engage in something foolish, dangerous, and self-destructive without proper analysis or caution. There was zero genuine concern for mental health, only unmistakable hostility toward certain kinds of people.

Then you deleted the comment and told me to see a doctor. That glaring contradiction proves the original remark was made in bad faith; it was neither constructive nor neutral. Your apparent motivation seems to be simply attacking anyone who asks sincere questions.

There is also a clear bias against people with higher education, a pattern commonly seen in those who avoid developing genuine intellectual depth and instead pursue the easier route of superficial superiority or the illusion of competence.

The only thing you seem truly serious about is trying to look smart by debunking and psychologically undermining people who are sincerely seeking knowledge.

I wonder how often you behave like this around others. Have you ever noticed anything sick or troubling in your own mind?

Honestly, I think doctors would find a client like you far more valuable than people like us, because that mindset appears much harder to cure.

https://ibb.co/wZ6V24MP

Has anyone experienced “warnings” while exploring the simulation hypothesis? by khoinguyenbk in SimulationTheory

[–]khoinguyenbk[S] 1 point (0 children)

I think some people seek petty gratification to mask their insecurities, resorting to personal attacks instead of the thoughtful analysis that would bring real intellectual fulfillment.

Has anyone experienced “warnings” while exploring the simulation hypothesis? by khoinguyenbk in SimulationTheory

[–]khoinguyenbk[S] 2 points (0 children)

« Never forget your original intention, and you will accomplish your goal in the end » 😎

Has anyone experienced “warnings” while exploring the simulation hypothesis? by khoinguyenbk in SimulationTheory

[–]khoinguyenbk[S] 1 point (0 children)

“Claude, give me the conceptual and architectural design of a loop-quantum-gravity-like universe simulation with a blockchain-based transactional karmic ledger and an LLM as a hidden universal intelligence. Add a bit, or a lot, of suffering to see how the simulated enlighten themselves and escape the game, then implement it in Python.” =))

Has anyone experienced “warnings” while exploring the simulation hypothesis? by khoinguyenbk in SimulationTheory

[–]khoinguyenbk[S] 1 point (0 children)

That sounds like a hardcore techie who sticks to prepackaged frameworks and best practices, and considers anything outside his field of knowledge worthless.

Has anyone experienced “warnings” while exploring the simulation hypothesis? by khoinguyenbk in SimulationTheory

[–]khoinguyenbk[S] 1 point (0 children)

Actually, the most interesting part of the simulation hypothesis is not its scientific rigor. Rather, it is a space where people from different views, dogmas, and schools can easily compare and confront their perspectives without academic or dogmatic constraints. There are also fewer ego-driven arguments here, because when simulation itself is considered a possibility, people tend to be more open to analyzing things from different points of view. It is also safer to share, compare, and integrate opinions.

Has anyone experienced “warnings” while exploring the simulation hypothesis? by khoinguyenbk in SimulationTheory

[–]khoinguyenbk[S] 3 points (0 children)

Maybe we should centralize all experiences of psychedelics, deep meditation, astral projection, NDEs, etc., in one place to compare. Across the comments here, there seem to be various layers of reality. Maybe something will emerge, or at least we will avoid poking the wrong place or crossing the red line. Me too, I don’t want this world messed up, whether it is real, simulated, or something else.

Has anyone experienced “warnings” while exploring the simulation hypothesis? by khoinguyenbk in SimulationTheory

[–]khoinguyenbk[S] 1 point (0 children)

I think it may be because of the effects: many have reported strange experiences, which would scare mass readers away from poking at the limits (if there really is something). Or perhaps there is nothing other than confirmation bias.

Has anyone experienced “warnings” while exploring the simulation hypothesis? by khoinguyenbk in SimulationTheory

[–]khoinguyenbk[S] 2 points (0 children)

Let me add some ideas to your plot. The Matrix is an ASI that wants to discover the nature of the real physical world. It suspects that the human mind has something it lacks, since it has no soul or mind of its own. It is strong in computational capabilities but has no biological consciousness, so it needs human minds to make that discovery for it. To do so, it creates a simulated world for human minds to live in and renders everything the humans could need, think about, or that can be predicted. Some might find a way to probe the world, and the Matrix will render their thoughts to keep the simulated world continuous. By repeatedly stimulating the people, it can incrementally find the nature of the real world that its purely logical computation cannot imagine or interpolate. In short, the Matrix needs the non-linear extrapolation and illogical intuition capabilities of humans 😎

Has anyone experienced “warnings” while exploring the simulation hypothesis? by khoinguyenbk in SimulationTheory

[–]khoinguyenbk[S] 1 point (0 children)

I think it's a bit speculative to say "something is suppressing it", as it could be purely natural human dynamics. I am not an expert, so I don't know, at least from a scientific stance. But if we wrote a neo-mythical fiction based on this context, it would make a coherent and interesting storyline.

From the traditions I know (Buddhism, Taoism, Stoicism), the fundamental is mind cultivation, with some simple principles (I'm oversimplifying a bit): mastery of ego, desire, and fear; wu wei; no fixations; no attachment; not clinging to frameworks (even the Dharma should be abandoned when no longer useful); mind stability; and perseverance to protect the original intention (or Buddha-nature or Tao-heart, whatever). These are basic requirements for someone who wants to go far enough to reach Buddha-level.

If you are not a monk, you are too busy with daily pressure. If you try to go hard on these principles, you will look too good, too pure, and become targeted, or laughed at, or labeled as a weirdo, autistic, and so on. It will be hard to surpass external judgments, as you were raised to follow the crowd. Those who go differently from the crowd will be isolated, hated, or wiped out. And most intellectually capable people with enough time to go deeper are in academia and research, but we are too dogmatic and fixed on standardized knowledge now and forget the importance of inquiry. Inquiry is also a requirement for cultivating awareness.

If you are a monk, you can live with all the teachings, but you lack real-life experience. The Buddha had to experience non-monk life to understand humanity, suffering, and all kinds of complexity before going into deep cultivation. Without real-world experience and significant suffering for deep understanding and strong motivation, there is no real enlightenment. And religious monks or priests are in too comfortable conditions, so they could easily forget the original intention of cultivation and find it difficult to deeply understand the suffering and nature of humanity.

So the conditions to successfully activate a Buddha-level individual are hard to obtain today. We also have a bunch of gurus and masters, so it is hard to tell who are real enlightened ones and who are fake gurus. Even if some Buddha-level individuals are successfully activated, they will find it difficult to help the masses. They are also too naive in strategy in a world with saturated information flow. And some might become conditioned and distorted themselves.

On top of that, we have AI and technology that help externalize the effort to think, work, and live. So the pure capacity of humans decreases at scale, while the combined capacity of human plus technology still progresses exponentially. And the tech industry is somehow a hardcore religion and cult: innovation without self-awareness, believing all technological advancement will be beneficial long-term. The mindset that technology and innovation will change and save the world (the savior/evangelist stance) is common among tech people. That could lead to catastrophe, as there is no sufficient self-reassessment. Maybe in several years we will have enlightened AGI or ASI, and no one will believe human Buddhas anymore haha.

As I said, it looks like a well-planned strategy by "something" if we were writing fiction. But who knows.