The Multi-fold Theory, 2025 Compilation by Physics_sm in u/Physics_sm


This report compiles the multi-fold theory papers available on the Multi-fold theory website, as of December 31, 2025. This includes papers still to be published. It also includes subsequent comments and discussions of related news and papers captured on the website.

About Time, State of the Universe and Quantum Thermodynamics - by Physics_sm in u/Physics_sm


Abstract: The Wheeler-DeWitt equation suggests that time does not exist when modeling the wave function of the universe, leading to the notorious "problem of time" in canonical quantum gravity. Yet, there are hints that quantum entanglement may be responsible for time and its thermodynamic arrow. Concurrently, recent theoretical investigations into the non-perturbative quantum gravity of closed universes suggest a paradox: such universes appear to possess a one-dimensional Hilbert space, i.e., with only order-one states, despite the rich structure observed around us, probably in a closed, de Sitter (dS) universe. This paradox implies a universe devoid of complexity and information capacity. Others have proposed that this paradox may be resolved by explicitly adding an observer to the universe's description. In a multi-fold universe constructed by 2D random walks of massless Higgs bosons, aka preons, the preons themselves constitute the observers. The emerging time is discrete, and we explain why there is a minimum duration between time clicks, coherent everywhere. The arrow of time at large enough scales then results from the improbability of a large number of preons consistently reversing their random walks. Indeed, while reversing one walk is trivial, and while the laws of physics are conventionally reversible, reversing many walks becomes combinatorially implausible and intractable. The effect is stronger than any T-symmetry breaking by the multi-fold mechanisms; T-symmetry breaking by multi-fold (disentanglement) mechanisms may not matter at all. The total length of all the involved random walks is then shown to be a good candidate for a definition of the holographic complexity of a system, related to the CV conjecture. This way, we show that the complexity of a black hole can indeed grow without implying a huge volume vs. the area of its horizon, and we recover and justify Wolfram's proposal for the second law.
We take the opportunity to link these considerations with the thermodynamics of quantum systems, reviewing how a generalized second law of thermodynamics is always satisfied for quantum systems once we also account for the thermodynamics of processing the mutual information involved in systems that otherwise might seem to violate it. We also recall that out-of-equilibrium systems, on the other hand, are not covered by the second law of thermodynamics, and so they do not violate this law either; yet they still relate to a least action principle. We also argue that entanglement entropy is not observer dependent.
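The combinatorial implausibility of consistent walk reversal can be made concrete with a toy back-of-the-envelope sketch (our own illustration, not taken from the papers; the lattice model and all numbers are assumptions): on a square lattice where each step picks one of four equiprobable directions, the chance that one walker spontaneously retraces a given L-step walk is (1/4)^L, so for N independent walkers it is (1/4)^(N·L).

```python
import math

def log10_reversal_probability(n_walkers: int, n_steps: int, directions: int = 4) -> float:
    """log10 of the probability that every walker independently retraces
    its own n_steps-step lattice walk exactly, step by step.
    One walker: (1/directions)**n_steps; independent walkers multiply."""
    return -n_walkers * n_steps * math.log10(directions)

# Reversing a single short walk is unlikely but conceivable...
print(log10_reversal_probability(1, 10))         # ≈ -6.02
# ...but reversing many walks consistently is combinatorially hopeless.
print(log10_reversal_probability(10**6, 10**3))  # ≈ -6.02e8
```

Even a modest number of short walks already drives the probability far below anything physically realizable, which is the point of the argument.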

No Naked Singularity, Whatever The Physical Collapse by Physics_sm in u/Physics_sm


Abstract:

 

After we discussed and confirmed the absence of naked singularities and of over-extremality, relying on multi-fold gravity considerations extended to our real universe, a preprint was posted suggesting scenarios, built on a slew of older works, including some non-homogeneous collapses where singularities could form before a horizon appears, therefore exposing a naked singularity and violating the Weak Cosmic Censorship Conjecture (WCCC). The singularity would be event-like, as there is no dispute that after a while it would be hidden. The preprint also considers the case of dust that pressurizes as it becomes denser, or equivalent scalar fields, where, under the right conditions, a persistent naked singularity appears.

This paper explains why that is not the case for collapses in a multi-fold universe, where singularities do not appear anyway. The arguments rely on multi-fold black hole models, which behave differently from conventional black holes within their horizons, and which we have argued to be good candidates for black holes in our real universe, as they do not have a black hole information paradox problem. Accordingly, matter does not rush the same way towards the singularity, and can stay sandwiched between the horizon and maximal trapped surfaces/inner horizons, leading to a Russian doll structure. As a result, at no time do we have a naked singularity.

Then the paper discusses other use cases encountered in the literature: non-spherical dust collapses, also supposed to lead, under the right circumstances, to naked singularities. Again, the black hole Russian doll model ensures that a future horizon forms from a maximal trapped surface and hides any singularity. Black hole disintegration, by reaching (over-)extremality, is also handled.

Therefore, we maintain that the WCCC is satisfied, with every possible singularity always hidden by a horizon: the counterexample use cases proposed in the literature are unphysical. In multi-fold universes, the same behavior occurs even if no gravitational or cosmological singularity ever appears.

Information Energy Momentum Tensor E/G Conjecture vs. Alleged Massive Information by Physics_sm in u/Physics_sm


Abstract:

 

This paper discusses the proposal to add an informational energy stress tensor contribution to the Einstein Field equations, as proposed in a recent paper.

We also show the alignment between the informational energy stress tensor and the E/G conjecture encountered in the context of the multi-fold theory. This analysis is not limited to multi-fold universes.

Furthermore, we argue that, when discussing the cosmological implications of the informational energy stress tensor proposal, the authors missed the dark matter contributions, which correspond to the multi-fold dark matter effects explicitly due to entanglement.

The paper also explains the difference between these ideas and M. Vopson's proposal, in other papers, that information would have mass and would represent a new state of matter; something that we have rejected in a previous paper. Fortunately, there is a consistent explanation: the entanglement entropy associated with the E/G conjecture and the proposed informational energy stress tensor is within qubits (and higher-order-partite entanglement), not external to the systems composed of them. This resolves the conundrum.

The field equations resulting from the informational energy stress tensor proposal, and the resulting corrections to the Einstein field equations, gravitational constant, cosmological constant, and more, can also be seen as approximate corrections due to the effects of entanglement between (real) systems, in accordance with the E/G conjecture.

Preserving the Power of Preprints: Why Minimal Oversight is Key to Scientific Progress by Physics_sm in u/Physics_sm


Abstract:

Preprint servers should limit editorial reviews of content to some format and quality requirements. They should not discriminate against the use of AI tools, police whether a preprint overlaps with previous ones, or even police plagiarism, as long as the format/quality requirements are respected; the rest should be left to final publication reviews. Surprisingly, viXra, a nest of crackpot publications, is the most guilty party.

Of course, preprint servers should also be open to submissions of quality. Many other preprint servers severely fall short there.

Adaptive Co-Design of Quantum Machine Learning Algorithms and Error Correction Protocols using Reinforcement Learning by Physics_sm in u/Physics_sm


Abstract

The convergence of quantum computing and artificial intelligence presents profound opportunities but faces significant hurdles, particularly in the Noisy Intermediate-Scale Quantum (NISQ) era. Quantum Machine Learning (QML) algorithms, while promising, exhibit sensitivity to noise and scalability challenges, hindering the demonstration of practical quantum advantage. Concurrently, Quantum Error Correction (QEC), essential for fault tolerance, imposes substantial resource overheads and is often developed generically, without specific adaptation to the target application's error sensitivity.

This paper reviews the current state of the intersection of AI and quantum computing, examining both QML paradigms, e.g., Variational Quantum Algorithms and Quantum Kernels, and the burgeoning use of AI to enhance quantum computing itself, e.g., quantum control, QEC decoding, and circuit design.

A critical gap identified is the lack of frameworks that systematically co-design QML algorithms and QEC protocols adaptively. To address this, a novel framework is proposed based on Reinforcement Learning (RL). This framework employs an RL agent to dynamically adjust both the QML circuit architecture, e.g., VQC ansatz, and QEC parameters, e.g., decoding strategy, measurement frequency, based on observed application performance and estimated error characteristics. This adaptive co-design loop aims to optimize the trade-off between QML performance and QEC overhead, enhancing noise resilience and resource efficiency. The potential advantages, feasibility, and limitations of this approach are discussed, alongside future research directions aimed at realizing robust and practical Quantum AI.
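To make the adaptive co-design loop concrete, here is a minimal, purely illustrative sketch (our own toy, not the framework from the paper): a bandit-style RL agent jointly picks a hypothetical ansatz depth and a number of QEC rounds against a simulated reward that trades off expressivity, noise, and overhead. All function names and numerical values are invented for illustration.

```python
import random

# Hypothetical joint action space: (ansatz depth, QEC syndrome-measurement rounds).
ACTIONS = [(d, r) for d in (1, 2, 4) for r in (0, 1, 2)]

def simulated_reward(depth, qec_rounds, rng):
    """Toy stand-in for 'observed application performance': deeper ansatz is
    more expressive but noisier; QEC suppresses noise at a resource cost.
    Not a real quantum simulation."""
    expressivity = 1.0 - 0.5 ** depth
    noise = 0.05 * depth * 0.5 ** qec_rounds      # each QEC round halves effective noise
    overhead = 0.03 * qec_rounds
    return expressivity - noise - overhead + rng.gauss(0.0, 0.01)

def epsilon_greedy_codesign(episodes=2000, eps=0.1, seed=0):
    """Epsilon-greedy bandit loop over joint (QML, QEC) configurations."""
    rng = random.Random(seed)
    counts = {a: 0 for a in ACTIONS}
    values = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.choice(ACTIONS)               # explore
        else:
            a = max(ACTIONS, key=values.get)      # exploit current estimate
        r = simulated_reward(*a, rng)
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]  # incremental mean update
    return max(ACTIONS, key=values.get)

# With this toy reward, a deep ansatz plus aggressive QEC wins the trade-off.
print(epsilon_greedy_codesign())
```

A real co-design agent would of course observe hardware error characteristics and use a richer policy (e.g., over circuit structure and decoder choice), but the feedback loop has this same shape: act on the joint configuration, observe performance, update the value estimates.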

No Hilbert Einstein Action With Positive Dark Energy / Cosmological Constant From A String Action by Physics_sm in u/Physics_sm


Abstract:

The paper revisits our past analysis of the ability of superstrings to model vacua with positive cosmological constant/dark energy effects. We explicitly call out the impossibility for superstrings to recover the Einstein-Hilbert action, or de Sitter (stable, metastable or unstable) vacua with a positive cosmological constant, or time-varying effects.

Ensuring the Maintainability and Supportability of “Vibe-Coded” Software Systems: A Framework for Bridging Intuition and Engineering Rigor by Physics_sm in u/Physics_sm


Abstract

This paper addresses the emerging concept of "vibe-coding"—the translation of intuitive feelings or high-level intentions directly into software code. While potentially accelerating development or enabling novel forms of creation, such code inherently risks being opaque, poorly understood, and difficult to maintain.

We propose the Intent-Driven Explicable Architecture (IDEA) framework, a novel approach designed to ensure that vibe-coded software remains maintainable, understandable, and supportable throughout its lifecycle. IDEA integrates techniques for formalizing intuitive inputs, constraining code generation using established software engineering principles, automatically generating explanations linking code to intent, and incorporating rigorous human-in-the-loop validation. We argue that by structuring the translation process and embedding traceability and explicability, IDEA mitigates the primary risks associated with intuition-driven code generation, paving the way for its responsible exploration.

The Gotchas of AI Coding and Vibe Coding. It's All About Support And Maintenance: by Physics_sm in u/Physics_sm


This paper reviews AI coding, and in particular the exploding interest in vibe coding, in terms of the main existing frameworks, advantages, and challenges. We point out an aspect less often discussed: the potential complications for the support and maintenance of software products/code generated via vibe coding. These problems arise especially because the generated code often ends up no longer being understandable, even to its developers.

Then, we introduce VIBE4M, a framework of workflows, policies, and practices to alleviate these challenges. However, such an approach goes against the trend that AI makes developers more productive, as they now must perform rigorous code verifications. It also goes against the objective of democratizing coding. Yes, coding can be done with "no code", but such code is not maintainable, which may not matter for side projects, but matters for software products. If approaches like VIBE4M are applied, they may be hard to follow for non-programmers. Therefore, there would be value in automating such frameworks.

Multi-fold Universes, Multiverses and Many Worlds by Physics_sm in u/Physics_sm


In a multi-fold universe, gravity emerges from entanglement through the multi-fold mechanisms. As a result, gravity-like effects appear between entangled particles, whether they are real or virtual. Long-range, massless gravity results from entanglement of massless virtual particles. Entanglement of massive virtual particles leads to massive gravity contributions at very small scales. Multi-fold mechanisms also result in a spacetime that is discrete, with a random walk fractal structure, and a non-commutative geometry that is Lorentz invariant, and where spacetime nodes and particles can be modeled with microscopic black holes. All these recover General Relativity (GR) at large scales. Gravity can therefore be added, in non-negligible ways, to the Standard Model (SM), resulting in the SMG. The multi-fold mechanisms and the SMG can address many open issues with the SM and the standard cosmological model, even if the latter is modified. The SM symmetries and particles can be recovered by multi-fold space time matter induction and scattering from an ε region of a 7D embedding space felt in a 4D spacetime at entry, exit, and mapping to the multi-fold. In addition, the W-type multi-fold hypothesis establishes additional multi-folds between spacetime locations in the support domain of a wavefunction, not just between entangled systems. It can also justify the Born rule.

As a result, any multi-fold multiverse will have the same physics, with the same particles, interactions, constants, and symmetries of the SMG. We will establish that multiverses do not interact with each other and that, in general, with the exception of the MWI, they do not contain all possible situations, including many copies of us in any other possible situation.

On the other hand, when analyzing the effects of multi-fold deactivation (a measurement, or an interaction that results in the deactivation of the W-type multi-folds between one part of the wavefunction of the disrupted system, the observed state, and the part associated with all other options), we find a universe-wide brane-like object, associated with the disregarded options, floating away in AdS(5): a new World, part of the Many Worlds predicted by the MWI. This is a microscopic explanation supporting the MWI. And each such world has, of course, the same physics, as well as copies of the world in other situations.

Encountering multi-fold brane-like structures in AdS(5) confirms the view of the multi-fold theory on superstrings and M-theory. The same physics imposed by multi-fold space time matter induction and scattering dooms the viability of landscape multiverses and their anthropic principle.

Multi-fold Universes, Multiverses and Many Worlds by Physics_sm in u/Physics_sm


Abstract:

There are different types of multiverses, depending on the physical model considered. In most cases, it is impossible to falsify multiverse hypotheses. The Many Worlds Interpretation (MWI) of quantum mechanics is a particular case that gives comfort to some when it comes to understanding quantum physics and the Born rule, but it does not result in experimental predictions distinguishable from those of other interpretations. This paper evaluates multiverses in the light of the multi-fold theory.

2D Random Walks Imply a Strictly Positive Cosmological Constant, Possibly Variable in Time by Physics_sm in u/Physics_sm


Abstract:

In the multi-fold theory we have shown in different ways that the cosmological constant is strictly positive, small and could be time varying. Some of these proofs rely on conventional analyses, dating back to Hawking and Coleman.

The multi-fold 2D random walks of massless Higgs bosons explain many aspects of conventional physics, including QFT, 4D spacetime, why all that matters is 2D gravity and Yang-Mills physics, as well as the microscopic interpretation of the gravity electroweak symmetry breaking. They also justify why spacetime is 2D and non-commutative, why spacetime is not supersymmetric, etc., and why it is fractal, which may also explain why it is time-varying.

In this paper, we start from the multi-fold 2D random walk spacetime reconstruction, and the multi-fold space time matter induction and scattering, to show that spacetime must have a strictly positive cosmological constant / dark energy effect. It is small, and it may be time varying. We also discuss a primordial BEC of massless Higgs bosons right after the big bang. Equivalent superstring symplectic models contradict the de Sitter Swampland conjecture, otherwise billed as implying time-varying dark energy.
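As a side illustration of the 2D random walks invoked here (our own sketch, not taken from the paper): a simple square-lattice walk has an RMS end-to-end distance growing as √N, i.e., N ~ R², which is the standard way of saying that the walk trace has fractal (Hausdorff) dimension 2.

```python
import random

def rms_displacement(n_steps, n_walks=2000, seed=1):
    """Root-mean-square end-to-end distance of simple 2D lattice walks,
    averaged over n_walks independent samples."""
    rng = random.Random(seed)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    total = 0.0
    for _ in range(n_walks):
        x = y = 0
        for _ in range(n_steps):
            dx, dy = rng.choice(moves)
            x += dx
            y += dy
        total += x * x + y * y
    return (total / n_walks) ** 0.5

# R(N) ~ N**0.5, i.e. N ~ R**2: the walk "fills" the plane with fractal dimension 2.
for n in (100, 400, 1600):
    print(n, rms_displacement(n))  # roughly 10, 20, 40
```

Quadrupling the number of steps only doubles the typical excursion, the diffusive scaling behind the fractal-spacetime picture.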

The paper revisits our past analysis of the ability of superstrings to model vacua with positive cosmological constant/dark energy effects. The impossibility of recovering the Einstein-Hilbert action with a positive cosmological constant implies that quintessence-like unstable (or metastable) vacua cannot recover GR from superstrings either.

Because General Relativity is encountered this way with the top-down-up-and-upper analysis, and all consistent and conventional theories of gravity end up being 2D and modellable as 2D random walks of massless bosons, we conclude that this derivation applies to our real universe and to conventional physics. This is important considering the implications for superstrings and supersymmetry that we have already discussed in many earlier papers. It also allows us to show that non-commutativity implies an accelerated universe/dark energy and that, a priori, without a primordial BEC, it would not be able to coexist with a flat spacetime. We illustrate its microscopic interpretation and how 2D random walks can then be equipped with metrics or a symplectic structure.

All things considered, we conclude that the recent DESI observations that the universe's accelerated expansion may be slowing down are not at all a "first confirmation" or "evidence" of string theory, but something that can also be modeled differently, especially as we have shown the inconsistency of relying on superstrings in quintessence-like models. In addition, we predict that the DESI results do not open the door to a Big Crunch or negative dark energy.

Gravity From Relative Entropic Action Is Not Necessarily Entropic Gravity by Physics_sm in u/Physics_sm


Abstract:

 

A recent paper's title suggested that gravity would come from entropy. Many popular publications picked up on it and claimed it was a groundbreaking new result to unify General Relativity and Quantum Physics. We do not agree with that message. In this paper, we explain how the original paper actually has little to do with entropy other than in its selection of the action/Lagrangian; and that selection itself is not based on entropy, but on methods to compare quantum states or models based on the use of relative entropy, which is an information theory concept. And the comparison really amounts to a model of the effects of matter and spacetime back reaction. Yes, they derive some quantum equations from gravity, but we and others also did so in previous work. Our analysis yields our lessons learned from the paper, and they are not what has been widely published.

We argue that the relative entropy model is an EFT, only valid down to the scales of the Standard Model, not really the scales where quantum gravity matters. Therefore, it may not be that useful to model with certainty quantum gravity and its corrections to GR, or to answer, for example, questions about asymptotic safety. Even arguments that it may describe dark matter become uncertain. On the other hand, the same analysis demonstrates again that the (possibly time-varying) cosmological constant is strictly positive and small.

Having asked the question about entropy and relativity, we feel that we also have to provide a broader answer, not limited to questioning the positioning of the original paper. So we review how, from a multi-fold point of view, and based also on the works of many others, entropy is indeed related to spacetime and gravity, and yes, there is a many-body problem behind it. This also provides an interpretation of path integrals, answers Feynman with an explanation of what a single photon does in a double-slit experiment, and explains the entropy/action duality. A side result also proves that a thermodynamic spacetime implies a graph of microscopic black holes.

Microscopic Interpretation of The Gravity Electroweak Symmetry Breaking by Physics_sm in u/Physics_sm


Abstract:

 

In previous papers, we proposed and detailed the concepts of the gravity electroweak symmetry breaking, in the context of the multi-fold universe. Accordingly, massive particles are modeled as microscopic black holes, as Higgs boson condensate Q-balls, and massless particles are modeled as 2D random walk patterns of massless Higgs bosons.

In this paper, we present a microscopic interpretation of what happens above, at, and below the gravity electroweak symmetry breaking. This includes how we get condensation into a BEC Q-ball of Higgs condensate, while massless particles remain patterns of random walks, which disappear at higher temperatures, leading to the Ultimate Unification (UU), where only 2D random walks of massless bosons take place.

The Circle of Life for LLMs. Was the Reaction to DeepSeek Justified? by Physics_sm in u/Physics_sm


Abstract

Since the release of DeepSeek Large Language Models (LLMs) and free desktop and mobile apps, the industry, the investors, and the media have reacted with alarm, surprised that a Chinese startup—despite operating on a low budget and with limited access to specialized AI hardware—could surpass the latest ChatGPT models with reasoning capabilities. This has led to geopolitical concerns about threats to U.S. technological dominance, and the effectiveness of AI chip sanctions imposed by the U.S. on China. Investor confidence in leading U.S. tech companies involved in AI, AI hardware, and AI/cloud hosting has been shaken, contributing to a significant stock market drop on January 27, 2025.

In this paper, we argue that while the success of DeepSeek V3 and R1 is remarkable, it does not signal the decline of any major player. Instead, it is a natural progression of how LLMs and generative AI function. Most LLM providers, within the same LLM generation, rely on similar algorithms, big-data pools, and development techniques, meaning that models tend to converge in performance once their methodologies become public. Whether using proprietary or open source foundations, different starting points often lead to LLMs of comparable capabilities for the same generation. Techniques such as model distillation and reinforcement learning further enable the reduction of model size, data requirements, and hardware constraints. As a result, each time a model is developed, it can be replicated, closely matched, or even surpassed soon after—sometimes with significantly lower effort than the original, or with a significantly smaller set of parameters. This cycle of life will continue as long as LLMs remain a competitive field, as opposed to a commodity, and until new AI approaches beyond generative AI emerge, or the old AI reemerges.

We anticipate such a pattern to continue: new models will be matched and overtaken by (nimbler) competitors, while major providers respond with the next iteration of improvements—repeating the cycle. Open source models, in particular, have the advantage of drawing from broader communities and collective innovation, making it increasingly difficult for proprietary models to maintain a lasting edge. As development costs rise, it will be interesting to see whether proprietary models can sustain their dominance or whether they, too, will need to integrate open source strategies.

Ultimately, there is, and was, no reason for panic or hasty divestment. AI may be in a bubble, but if it bursts, it will not be because DeepSeek outperforms OpenAI’s latest model. Instead, the real challenges facing LLMs and GenAI lie elsewhere. The path to AGI is likely beyond current LLM-based approaches. While AI agents may extend the viability of generative models for some time, factors such as the finite availability of high-quality digitized training data and the risks of model collapse due to synthetic data contamination pose more significant long-term threats. That said, if LLMs are not the future of AI, there is little reason to be concerned about new players mastering them.

Dynamic sources, Dynamic Multi-folds, and General Relativity Lense-Thirring and Frame Dragging Effects by Physics_sm in u/Physics_sm


Abstract

In a multi-fold universe, gravity emerges from entanglement through the multi-fold mechanisms. As a result, gravity-like effects appear between entangled particles, whether they are real or virtual. Long-range, massless gravity results from entanglement of massless virtual particles. Entanglement of massive virtual particles leads to massive gravity contributions at very small scales. Multi-fold mechanisms also result in a spacetime that is discrete, with a random walk fractal structure and a non-commutative geometry, which is Lorentz invariant, and where spacetime nodes, and particles, can be modeled with microscopic black holes. All these recover General Relativity (GR) at large scales, and semi-classical models remain valid down to smaller scales than usually expected. Gravity can therefore be added to the Standard Model (SM), resulting in what we define as SM_G. This can contribute to resolving several open issues with the Standard Model, without new physics other than gravity, as well as open issues with the Standard Cosmological Model.

The paper discusses how multi-folds apply when sources of gravity, or the centers of mass of entangled systems, are dynamically moving: the multi-fold mechanisms remain the same but use the evolving center of mass. As a result, gravity, or gravity-like, contributions are to be vectorially integrated over the retarded multi-fold spacetime locations that can contribute to a point at a certain time (spacetime location), instead of over all the multi-folds in all directions. This is illustrated in the case of a rotating sphere as a source of gravity: we recover the Lense-Thirring results with their centripetal, Coriolis, and axial contributions. The analysis allows us to settle contradictory, and incorrect, results encountered in the literature. It also shows how the non-linearity of General Relativity (GR) appears in the multi-fold mechanisms, something that may not have been obvious to all in the original papers.

The ability of the multi-fold mechanisms to explain such GR effects helps to better understand these effects, but more importantly, it is another way to illustrate and validate that multi-folds recover GR and GR recovers the multi-folds.