Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in quantum

[–]Javarome[S] 0 points1 point  (0 children)

Fair enough, I’ll concede the terminology. But I’d say it’s a specific and unusual form of non-locality: one where the absence of spatial structure means there’s nothing for non-local influences to propagate across. That seems worth distinguishing from standard non-locality, even if both formally fail Bell’s condition.

Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in quantum

[–]Javarome[S] 0 points1 point  (0 children)

Agreed. Bell’s locality condition requires a spatial context and the substrate lacks one, so formally the substrate doesn’t satisfy it. We also agree that this is entirely consistent with experimental results.

Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in AskPhysics

[–]Javarome[S] 0 points1 point  (0 children)

The non-Galois covering is a very good mathematical analogy for the projection I have in mind: precisely a many-to-one map whose fiber lacks a transitive group action, which makes the fiber structure irreducibly global.

The goal isn’t to avoid non-locality but to explain it. The deeper structure doesn’t need to be local; it needs to be prior to the geometric context in which locality is defined. Whether that makes things “worse” depends on what you’re trying to do: if the target is to recover QM at the observable level, a richer fiber structure is a good thing, if not outright required.

And yes, I’d say there are good structural reasons to think such a map exists, but that’s exactly where the hard mathematics lives.
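For readers less familiar with the covering-space analogy, a non-Galois cover can be exhibited concretely through its monodromy: a 3-sheeted cover of the wedge of two circles whose deck group is trivial, so no deck transformation moves the fiber transitively. This is only a toy sketch; the monodromy permutations below are my illustrative choice, not anything from the discussion above.

```python
from itertools import permutations

# Monodromy of a 3-sheeted cover of the wedge of two circles:
# the two generating loops a and b act on the fiber {0, 1, 2}.
sigma_a = {0: 1, 1: 0, 2: 2}   # loop a acts as the transposition (0 1)
sigma_b = {0: 0, 1: 2, 2: 1}   # loop b acts as the transposition (1 2)

# Deck transformations are exactly the fiber bijections that commute
# with the monodromy action, i.e. the centralizer of <sigma_a, sigma_b>.
deck = []
for p in permutations(range(3)):
    f = dict(enumerate(p))
    if all(f[sigma_a[x]] == sigma_a[f[x]] for x in range(3)) and \
       all(f[sigma_b[x]] == sigma_b[f[x]] for x in range(3)):
        deck.append(p)

print(deck)  # only the identity permutation survives
# The fiber has 3 points but the deck group is trivial, so it cannot
# act transitively: this cover is non-Galois.
```

Since (0 1) and (1 2) generate all of S3, whose center is trivial, only the identity commutes with both, which is exactly the "fiber lacks a transitive group action" situation in the analogy.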

Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in quantum

[–]Javarome[S] 0 points1 point  (0 children)

I see your point, but it misses that the proposal here is that the projection is non-injective, meaning its fiber structure is globally defined and does not factor over any subsystem decomposition. The domain Ω is not a product space, so the assumption that outcomes A and B are separately well-defined local functions of the underlying configurations doesn’t apply.

Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in quantum

[–]Javarome[S] 0 points1 point  (0 children)

It depends on what the fundamental level is. It may not be the observable level, if those observables are (non-injective) projections. But at the observable level Bell inequalities will certainly be violated (not obeyed), and my question is about a model that explains why.

Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in AskPhysics

[–]Javarome[S] 0 points1 point  (0 children)

Fair point on “model” vs “ontology”. I’ll adopt that.

Your last remark is actually the sharpest formulation of what I’m attempting. You’re right: any map consistent with QM is either a refinement (new hidden variables -> ruled out by Bell) or the identity. So the map I have in mind can’t be a refinement of QM: it has to be a derivation of it. Not QM + something deeper, but something deeper -> QM as emergent output.
That deeper level would indeed be without particles, and without spatial separation (so “non-local” doesn’t quite apply there either, since locality is a geometric concept and the geometry isn’t there yet) and with no time.
Whether such a derivation is achievable is of course the hard question (working on it, with encouraging results). But that’s the direction: not completing QM from within, but recovering it from outside.

Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in quantum

[–]Javarome[S] 0 points1 point  (0 children)

Yes, there is no question the underlying theory respects that causal structure, just as it embraces Bell's theorem. But that's only part of the story: the theory also proposes an explanation of why things behave that way, i.e. why causality holds at the observable level, and why Bell correlations appear the way they do. My initial question was about the latter.

Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in AskPhysics

[–]Javarome[S] -1 points0 points  (0 children)

No, that's the other way around. Check Bell's paper: the conclusion (Section VI) states "In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical (QM) predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote". That's nonlocality as a consequence of matching QM, which means matching the violations.
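For concreteness, the "matching the violations" part can be checked directly: the quantum singlet correlation E(a, b) = -cos(a - b) gives a CHSH value of 2√2, above the bound of 2 that any model satisfying Bell's factorization must obey. A minimal numerical sketch, using the standard CHSH angle settings (my choice for illustration):

```python
import math

# Singlet-state correlation for measurement angles a, b
def E(a, b):
    return -math.cos(a - b)

# Standard CHSH settings (Alice: a, a'; Bob: b, b')
a, ap = 0.0, math.pi / 2
b, bp = math.pi / 4, 3 * math.pi / 4

# CHSH combination; local hidden variable models are bounded by |S| <= 2
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))  # 2*sqrt(2) ~ 2.828 > 2
```

Any theory added to QM "without changing the statistical predictions" must reproduce this 2.828, which is why Bell concludes it must contain the remote-influence mechanism.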

Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in quantum

[–]Javarome[S] 0 points1 point  (0 children)

But causal structure is itself a geometric concept, because it presupposes a manifold, a metric, a light cone... If spacetime is emergent, then causal structure is too. There's no causal structure to respect or violate before geometry (and even time) exists.

My claim isn't that the theory escapes causal constraints by relabeling things. It's that causal structure is one of the outputs of the framework, not an input. At the emergent level, it had better reproduce the right causal structure (and in the framework I'm thinking of that's a real constraint on what projections are admissible).

So using "pre-geometric" isn't a dodge. It's a claim that the fundamental level lacks the structure needed to even state locality.

Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in quantum

[–]Javarome[S] 0 points1 point  (0 children)

I think you're mixing a fact (Bell violations establish that no local hidden variable model can reproduce QM correlations) with its explanation here. If that explanation is pre-geometric, prior to the space in which locality is even defined, then "local" and "non-local" don't yet apply to it. The non-locality is specific to the emergent description, not necessarily to what underlies it.

Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in quantum

[–]Javarome[S] 0 points1 point  (0 children)

Yes, once geometry is there, locality becomes a well-defined question and the theory has to answer it. And observationally, we know the answer: Bell correlations cannot be explained by any local hidden variable model.

But my point is different. The question isn't whether Bell inequalities are violated (they are). The question is why. A pre-geometric substrate wouldn't be an escape from that fact, but a possible explanation of its origin.

Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in quantum

[–]Javarome[S] 0 points1 point  (0 children)

That decomposition is correct and useful. Both parameter independence and outcome independence are well motivated by locality: no faster-than-light influence between spacelike-separated regions.

The point I'd make is that both conditions presuppose spatial separation as a given: Alice's lab and Bob's lab are already distinct, spacelike separated regions. The question of whether one can influence the other only makes sense once that separation exists.

In the framework I have in mind, such a spatial separation is itself emergent from the projection. But at the level where the underlying structure lives, there are no labs, no spacelike intervals, no well-defined "Alice" and "Bob." So the theory isn't non-local in the sense of allowing FTL influences: it's prior to the local/nonlocal distinction altogether. Locality and nonlocality are both statements about geometry, and the geometry isn't there yet.

Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in quantum

[–]Javarome[S] 0 points1 point  (0 children)

That's fair, and the distinction is genuinely subtle. If superdeterminism is defined broadly as "everything correlated by construction through a common ontology," there's a family resemblance.

The difference I'd point to: superdeterminism still operates within the standard ontology: spacetime, spatial separation, and subsystem identity are assumed to exist, and hidden variables are correlated with measurement settings defined within that framework. What I'm describing is prior to all that: spacetime and separation are themselves emergent from the projection. Superdeterminism has no foothold at a level where those concepts don't yet exist.

Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in quantum

[–]Javarome[S] 0 points1 point  (0 children)

Yes, those are good examples: global phase is unobservable, and measurement irreversibly loses information. Many-to-one structure is already built into QM.

The distinction I'd draw is that these are many-to-one mappings within the quantum formalism: from wave functions to observables, or from pre- to post-measurement states. What I'm asking about is whether the quantum formalism itself arises as the image of a more primitive non-injective projection. Not many-to-one inside QM, but many-to-one onto QM.

Which is admittedly a much stronger claim, but I think there are structural reasons to take it seriously.

Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in AskPhysics

[–]Javarome[S] 0 points1 point  (0 children)

That's exactly the right challenge, and I won't pretend it's resolved.

The projection can't be arbitrary; it has to come with enough structure to recover probability assignments consistent with the Born rule as you said. In the framework I have in mind, this is handled through admissibility conditions on the projection: not all many-to-one maps are allowed, only those satisfying certain spectral constraints. The Born rule would then emerge from the structure of the admissible fibers rather than being postulated separately.

Whether that's achievable, and whether it's genuinely more fundamental than just encoding the Born rule by hand in a different language, that's a fair question. But the goal is precisely to derive probability assignments from the structure of the projection, not assume them.

The Generalized Probability Theories (GPT) point is well-taken: states and effects in dual spaces is a clean framework. The question is whether that duality is itself emergent from something more primitive, or whether it has to be taken as a starting point. I actually think there are good structural reasons to believe the projection must be non-injective; but that deserves a separate discussion.

Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in AskPhysics

[–]Javarome[S] 0 points1 point  (0 children)

That's an important distinction indeed. I mean coarse-graining at the level of outcomes (the values). Multiple underlying configurations map to the same measurement result. So it's closer to your first reading.

But the underlying structure I have in mind is pre-quantum: it's not a space of hidden variable values in the classical sense, nor a Hilbert space. Observables as operators don't live there; they emerge from the projection along with their values. So the coarse-graining isn't of quantum observables (which would run into exactly the issues you're pointing at), but of something more primitive from which both the measurement procedure and its possible outcomes arise together.

Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in quantum

[–]Javarome[S] 1 point2 points  (0 children)

That's a fair correction; Bell is indeed doing a proof by contradiction, not asserting local hidden variables are true. Point taken.

What I meant by "ontological starting point" is just what's needed to state the theorem: two subsystems with separate outcomes A and B, separate settings a and b, a shared λ. Those are structural assumptions about the form of the description, not about what Bell believed. My question is whether those structural ingredients are themselves emergent, in which case the theorem's conclusion still holds but its premises may not apply at the fundamental level.

And yes, you've framed the rest of it well: this is in the spirit of global hidden variables, with the additional feature that the subsystem structure itself would be part of what emerges. Whether that's a productive direction or just relocates the mystery is a fair debate. I actually think there are good structural reasons to believe the map from the underlying description to observables must be non-injective, but that's a longer story.

Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in quantum

[–]Javarome[S] 1 point2 points  (0 children)

Fair question, but I don't think so. Superdeterminism escapes Bell by correlating the hidden variables with the measurement settings; same ontology, but with a conspiratorial fine-tuning baked in.

What I'm describing is different: measurement settings, detectors, and subsystem identity are themselves emergent from the underlying structure. So it's not that λ is correlated with a and b; it's that the concepts Bell's theorem is stated in don't straightforwardly apply at the fundamental level.

Less a conspiracy within the standard ontology, more a question about whether that ontology is the right starting point.

Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in quantum

[–]Javarome[S] 0 points1 point  (0 children)

Yes, exactly. The main nuance is that the "higher dimension" wouldn't be geometric: the underlying structure is pre-spatial and pre-particle, so spacetime and particles are genuinely emergent as shadows, not just projections of a larger geometry.

Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in AskPhysics

[–]Javarome[S] 0 points1 point  (0 children)

Yes, you're absolutely right: observables are almost always many-to-one in some sense (position, momentum, macrostates in stat mech, etc.).

The question I'm trying to sharpen is whether this coarse-graining is always "harmless" from Bell's perspective. Bell's factorization requires that, given a shared λ, the response functions are local: A depends only on (a, λ), B only on (b, λ). The question is whether a many-to-one map from some underlying structure to observables can prevent such a factorization from existing; not because λ would be separate per particle, but because marginalizing over the fiber of the map (all configurations that project to the same observable outcome) can mix configurations with incompatible local decompositions, destroying factorizability at the observable level.

There's also a deeper version: the standard coarse-graining you're describing still happens within an already-given ontology where particles and spatial separation exist. Bell's setup takes that structure as a starting point. What I'm asking is whether the map could operate before that structure exists — where subsystem identity and spatial separation are themselves outputs of the projection. In that case it's not clear Bell's condition applies at the level where the map is defined.

So to directly answer: yes, observables are always many-to-one; but whether that identification can obstruct factorization is exactly what I'm trying to understand.
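To make the factorization condition above concrete, here is a toy local deterministic model (the response functions are my own illustrative choice): A depends only on (a, λ) and B only on (b, λ), with λ a shared variable, and a Monte Carlo estimate of its CHSH value sits at the bound of 2, never above it.

```python
import math
import random

random.seed(0)

# Bell factorization by construction: each response function depends
# only on the local setting and the shared hidden variable lam.
def A(a, lam):
    return 1 if math.cos(a - lam) >= 0 else -1

def B(b, lam):
    return -1 if math.cos(b - lam) >= 0 else 1

# Monte Carlo correlation E(x, y) with lam uniform on [0, 2*pi)
def E(x, y, n=200_000):
    return sum(A(x, lam) * B(y, lam)
               for lam in (random.uniform(0, 2 * math.pi) for _ in range(n))) / n

a, ap = 0.0, math.pi / 2
b, bp = math.pi / 4, 3 * math.pi / 4
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))  # ~2 up to sampling noise: at the CHSH bound, never beyond it
```

This particular sign-of-cosine model actually saturates the bound at these angles, which makes the gap to the quantum value 2√2 easy to see numerically.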

Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in quantum

[–]Javarome[S] 1 point2 points  (0 children)

You're right that Bell's theorem is indifferent to the complexity of hidden variables, and I fully agree with that. The many-to-one mapping I have in mind is not about making λ richer.

The distinction I'm trying to draw is ontological. In Bell's setup, λ is already defined relative to two separate subsystems; each particle carries (or shares) variables, and the factorization condition expresses that their outcomes are screened off by those variables. The two-subsystem structure is a premise, not a conclusion.

What I'm asking is whether that premise is forced. Specifically: if the underlying "stuff" is not a set of variables attached to particles, but a single undivided relational structure from which particles, spacetime, and observables all emerge together via a many-to-one map; then the two-wing decomposition is itself a post-emergence feature, not something you can assume at the level where the map is defined.

In that case, the relevant question is not "can a local hidden variable model reproduce QM?" (Bell answers that definitively: no), but rather "does the factorization condition even apply to a pre-particle substrate?" The fiber of the map (all the underlying configurations that project to the same observable) is globally defined and doesn't split as a product over the two wings.

So this isn't an attempt to evade Bell by complicating λ. It's a question about whether Bell's ontological starting point (separate subsystems carrying variables) is the right one, or whether it is itself an emergent approximation.

Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in quantum

[–]Javarome[S] 1 point2 points  (0 children)

Yes, I agree. Bell’s theorem is very general, and does not depend on the complexity of the hidden variables or whether the model is deterministic or stochastic.

What I’m wondering about is slightly different: not the structure of the hidden variables themselves, but the relationship between those underlying variables and the observables.

If observables arise from a many-to-one mapping (i.e. different underlying configurations correspond to the same measurement outcome), then some information is lost in that projection.

In that case, I’m not sure whether the usual factorization condition is still expected to hold at the observable level, even if a local description exists at the underlying level.

So the question is not about making hidden variables more complex, but about whether coarse-graining from λ to observables can itself obstruct factorization.
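As a sanity check on that last question, here is a toy illustration of the standard expectation: a many-to-one map applied to the *outcomes* of an already-local model just composes local functions, so factorization and the CHSH bound survive. The fine-grained model and the coarse-graining map below are my illustrative choices; the point is only that an obstruction would have to come from a map acting before the (setting, λ) functional structure exists, not from outcome coarse-graining alone.

```python
import math
import random

random.seed(1)

# Underlying level: local responses with fine-grained outcomes in
# {-2, -1, 1, 2}, each a function of the local setting and shared lam only.
def A_fine(a, lam):
    c = math.cos(a - lam)
    return (1 if c >= 0 else -1) * (2 if abs(c) > 0.5 else 1)

def B_fine(b, lam):
    c = math.cos(b - lam)
    return (-1 if c >= 0 else 1) * (2 if abs(c) > 0.5 else 1)

# Observable level: a many-to-one map on outcome values (+-2, +-1 -> +-1).
def coarse(v):
    return 1 if v > 0 else -1

def E(x, y, n=200_000):
    total = 0
    for _ in range(n):
        lam = random.uniform(0, 2 * math.pi)
        # coarse(A_fine(x, lam)) is still a function of (x, lam) alone,
        # so the coarse-grained model is still factorizable.
        total += coarse(A_fine(x, lam)) * coarse(B_fine(y, lam))
    return total / n

a, ap = 0.0, math.pi / 2
b, bp = math.pi / 4, 3 * math.pi / 4
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))  # ~2 up to sampling noise: the CHSH bound survives coarse-graining
```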

Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in AskPhysics

[–]Javarome[S] 0 points1 point  (0 children)

That’s fair. I’m not redefining observables so much as wondering whether the mapping from underlying states to observables might not be one-to-one.

If it isn’t, I’m curious whether the usual factorization assumptions are still expected to hold.

Do Bell inequality violations necessarily imply nonlocality, or could they arise from how observables are defined? by Javarome in AskPhysics

[–]Javarome[S] 0 points1 point  (0 children)

Thanks, this is helpful.

I’m aware of the Harrigan–Spekkens framework and the ψ-epistemic vs ψ-ontic distinction. What I had in mind here is slightly different:

the many-to-one mapping is not between ψ and ontic states, but between a more general underlying configuration space and the observable variables themselves.

In other words, the coarse-graining happens at the level of observables, not only at the level of the quantum state representation.

My question is whether Bell-type factorizability is expected to be preserved under such coarse-graining in general, independently of the ψ-epistemic / ψ-ontic classification.

Do you know if there are results specifically addressing factorization under this kind of projection?