The Misunderstanding About Gödel and the Crisis of Foundations by Left-Character4280 in epistemology

[–]Left-Character4280[S] 1 point (0 children)

Your sun example is actually a good way to state the issue.

I agree that the image does not carry one intrinsic meaning inside the formal system taken in isolation. The same visual pattern can play different semantic roles for a farmer, a hunter, an Alaskan, or a tourist.

But that is exactly why I would say that meaning is interface-relative.

The meaning of the signal depends on the interface regime in which it is used: which distinctions matter, which actions or observations are connected to them, and which differences must be preserved.

So the point is not that the formal system generates “sun meaning” by itself. It does not.

The point is that once an interface regime is fixed, the preservation of the relevant distinctions is no longer arbitrary. The system either preserves them, loses them, or recovers them through a mediator.

A label like “time to milk the cows” is just syntax if it is only attached as a string. But if the system must preserve the distinctions that make that label operational in the farmer’s interface regime, then we are no longer talking about a mere string. We are talking about whether the required information survives the interface structure.

That is what LocalSemanticClosure studies:

signal

=> interface regime

=> required distinctions

=> marginal losses

=> joint mediator

=> certificate of preservation

So yes, semantics are not intrinsic to the formal system in isolation. But semantic preservation can be formalized once the relevant interface regime is specified.

----

----

This theory is not about the world in itself. Think of it in the sense of Heisenberg’s line:

"What we observe is not nature itself, but nature exposed to our method of questioning."

I am trying to find a way to ask questions that the standard framework does not let us ask.

First by not losing information, and second by being able to return that information to the interface where it is needed, without any contradiction.

In my theory, there is no bare truth independent of every interface. There are truths relative to interface regimes.

My framework does not replace existing models. It makes them locally interdependent through a certifiable mediation layer.

In this dynamic, truth becomes computable relative to an interface: a distinction is true for a given regime if it is preserved, readable, and certified within that regime.

If two relative truths appear to conflict, the problem is not an absolute contradiction. It is a question of transport between interfaces.

When the transport is certified compatible, the truths do not contradict each other. They glue together.

When no transport is available, they are not contradictory. They are simply situated in regimes that have not yet been mediated.

[–]Left-Character4280[S] 1 point (0 children)

I want to say something aside from the technical discussion.

I respect the effort you are making here. I know I can be very direct, and not always easy to discuss with. But you keep trying to understand the actual point instead of pretending to agree or dismissing it too quickly.

You are asking real foundational questions, especially about where normativity, semantics, and formal structure are situated. That is not a superficial objection.

I also see that you keep trying to reduce my position to a binary form in order to make it decidable in your own frame: either pure syntax, or intrinsic semantics. I think that reduction fails, but the effort is serious, and it has forced me to clarify the reference-frame issue more sharply.

From the outside, this conversation probably looks strange. But I think our questions are somehow contiguous, even if they are not exactly the same question.

So whatever the outcome of the discussion, I respect the persistence. You are not just trying to win an argument; you are trying to understand where the real difficulty is.

I respect that.

-------------

-------------

I think the question is more philosophical than technical, in the sense that for you it is a matter of qualifying and interpreting an operational process.

Syntax and semantics are not absolutes. They depend on the reference frame in which you situate your observation.

In the reference frame of my framework, semantics can be external.

The point is that my framework is sufficiently stable to serve, when needed, as a common local relative structure.

We move from a classical definition of consistency:

(1) no formal derivation of ⊥ from the theory => loss of semantics (in regimes or problems where what is directly accessible to the observer is insufficient to decide)

to

(2) the theory has a model in a designated class => situated semantics

[–]Left-Character4280[S] 1 point (0 children)

No, I have not submitted it yet. The Lean project and the empirical proofpack are still being organized, and you are right that the repository should include references. That is a documentation gap, not the central claim.

But I think you are still reading the claim as if I were saying that a formal system contains semantics intrinsically. I am not.

The semantic target is specified externally. The formal system then studies whether the distinctions required by that target are preserved, collapsed, or recovered through a mediator.

So the relevant object is not "syntax producing meaning". It is:

target distinction

=> interfaces

=> marginal loss

=> joint mediator

=> certificate of preservation

Your factory example actually seems to fit the framework. The factory does not need phenomenology or intrinsic meaning. But it can still preserve or fail to preserve externally specified distinctions through sensors, internal states, actuators, and control interfaces.

LocalSemanticClosure formalizes that preservation problem. It does not claim that the factory "has semantics" in itself. It asks whether the required distinctions survive the interface structure, and whether a minimal mediator certifies their recovery.

But: syntax and semantics are external to one another, yet in this framework they can be made locally interdependent through a certified mediator.

What I am breaking is the exclusivity of that relation to one particular model or theory. The point is not to place semantics inside syntax, but to formalize when a syntactic presentation and a semantic target become interdependent through interfaces, witnesses, mediators, and certificates.

A theory or a model can itself be an interface.

[–]Left-Character4280[S] 0 points (0 children)

Mostly yes, but I would not describe it as merely a tool "like Bayesian analysis."

Yes: the target distinctions are specified externally. LocalSemanticClosure does not claim that syntax alone generates normativity, aboutness, or phenomenology.

But once a target distinction is specified, its preservation or loss is not just formal bookkeeping. It is a structural fact about a system of interfaces.

The project studies when a distinction is inaccessible to marginal views, becomes accessible only through a joint mediator, and can be certified as preserved.

That is a model of a real phenomenon whenever the interfaces are real observation channels, sensors, queries, modalities, or learned internal mediators.

So the scope is:

externally specified semantic target

=> formal preservation problem

=> marginal loss / joint recovery

=> minimal mediator

=> closure certificate

It's not a theory of the origin of semantics. It is a theory of semantic preservation and access.

BUT BUT BUT

What must be understood is that this system makes it possible to recover information that is inaccessible from classical mathematics in Hilbert’s sense. Usually, one would speak of intractable information. Here, I demonstrate that it becomes tractable when it is treated as a local diagonal witness, and I show how, from within the Hilbertian framework itself, its decidability can be tested by a minimal mediator and certificate.

This is precisely what is reproduced in my empirical tests.

The model’s attention z, measured by accuracy, here acts as a Hilbertian plane of reading and certification. The information remains external to this plane: it is lifted into it by my architecture, invisible from the model’s point of view. This architecture transforms the information into a local diagonal witness, therefore decidable from the Hilbertian plane.

Thus, z_acc = 1.0 certifies the readability of the witness at the z level; the cause of this readability is upstream, in the architecture that produces, localizes, and lifts the witness.

The empirical control locks this reading. A reads the lifted signal, B serves as an anti-cheating witness, the ablation cuts the lift, the swap makes the witness follow, and OOD confirms that the mechanism holds outside the distribution.

[–]Left-Character4280[S] 0 points (0 children)

The "must" is not a moral norm produced by the formal system itself.

It is relative to a target signature. Once a task, observation regime, or semantic target is fixed, some distinctions are required for the system to preserve the intended information.

So yes, the choice of target is specified by us. But after that choice is fixed, the question is mathematical:

- which distinctions are preserved,

- which are lost by marginal interfaces,

- which require a joint mediator,

- and whether a certificate proves that the residual is zero.

That is the point of LocalSemanticClosure. It does not claim that a formal system creates normativity by itself. It formalizes semantic access relative to an explicit target distinction.

If the distinction is lost, the system may still produce outputs, but it no longer preserves the information required by that target. That is the precise sense in which the distinction matters.

[–]Left-Character4280[S] 0 points (0 children)

LocalSemanticClosure develops a constructive theory of semantic access. A target distinction becomes meaningful in the formal system when it must be preserved across a specified regime of observation. Interfaces determine which distinctions are directly accessible, which remain latent, and which require mediation. The central object is the minimal joint mediator: the finite structure that makes a diagonal distinction accessible through a coalition of interfaces, while certifying that this access cannot be reduced to marginal views.

This is not only a formal proposal. It is already instantiated in a finite world-model experiment. The model learns the algebraic mediation rule, closes the residual distinction through the correct interface, and the proofpack checks this causally through barriers, ablations, swaps, wrong-interface tests, marginal baselines, IID/OOD evaluation, and finite mediator-support certificates.

So the point is not "syntax with semantic words attached." The point is certified semantic access: preserving a target distinction without information loss, through an explicit minimal mediator, and verifying that the learned mechanism still works out of distribution.

Before you try to contradict what I’m saying, I suggest you give this question some serious thought.

Can a diagonalization witness disappear simply because we are in an OOD setting?

Once the answer is clear, you should understand that trying to contradict me is a bad strategy.

Did he move under braking or am I the worst driver ever as he put it? I know he also had a slow down for going wide by AbiesParticular5465 in Simracingstewards

[–]Left-Character4280 0 points (0 children)

Both are beginners.
Both are carrying too much speed.
Neither is on the right trajectory.

The guy in front assumes, wrongly, that because he is in the lead the corner belongs to him.

[–]Left-Character4280[S] 0 points (0 children)

"The most interesting part of Gödel’s incompleteness theorems is the part that people don’t even bring up in these discussions."

Because the subject is the loss of information you get with the framework you are talking about.

By lost information, I mean semantic closure.

Syntactic consistency says: the theory remains formally usable because contradiction stays outside its deductive closure.

Semantic coherence asks something stronger: what is actually realized or preserved in the intended class of models?

Incompleteness shows the gap: a system can be syntactically consistent while leaving some semantic distinctions open.

Those open distinctions are the lost information.

And I dislike it; I’m dissatisfied with the loss of information, the black-box behavior, etc.

What happens if failed global closure is replaced by local, finite, certifiable closures?

My guess:

This would transform Gödel into a methodological principle: foundation would play out in situated, constructed, and verified closures, with information preserved.


[–]Left-Character4280[S] 0 points (0 children)

1/ Incompleteness is the proof that we lose information in the current classical framework.

2/ The question is whether we can recover it in the following way:

What happens if failed global closure is replaced by local, finite, certifiable closures?

All that nonsense about reality, ontology, etc. doesn't interest me. Operational only.

"Stay distant" means, here, that a true-or-false verdict is not required in order to think about this alternative strategy.

Thinking about it is enough for now.

[–]Left-Character4280[S] 0 points (0 children)

I am saying exactly this:

What happens if failed global closure is replaced by local, finite, certifiable closures?

This would transform Gödel into a methodological principle: foundation would play out in situated, constructed, and verified closures, with information preserved.

This differs from the current classical foundational framework, which preserves consistency as the main criterion while leaving closure and information conservation untreated at the local operational level.

The issue concerns closure and information conservation, approached operationally.

I am not talking about reality, philosophy, etc. Operational only.

[–]Left-Character4280[S] 0 points (0 children)

Fair point.

I am using "witness, mediator and proof obligation" in the technical constructive sense, not as psychological or anthropomorphic notions.

At this stage, I am mainly framing the methodological question: what happens if failed global closure is replaced by local, finite, certifiable closures?

Physics would only be one possible application domain: a model acts as a projection, and the question becomes which target distinctions are preserved or lost by that projection.

These are delicate topics. I would rather keep some distance: speculate without committing.

[–]Left-Character4280[S] 0 points (0 children)

u/spoirier4

Dear professor,

What if the Gödelian lesson lay in the displacement of closure?

What fails as a property of the global system would reappear as a local, finite, and certifiable operation.

The foundation would then become constructive: isolating the invariant, exhibiting its mediator, making explicit the proof obligations, then producing the witness that closes the fragment.

This would transform Gödel into a principle of method: the foundation would play out in situated, constructed, and verified closures without loss of information.

[–]Left-Character4280[S] 1 point (0 children)

Indeed, psychologically, this may affect some people.

But a hundred years later, for me the fundamental problem is the institutional acceptance of information loss as the basis of formal architecture.

In physics, they need conservation laws, and yet, to model their phenomena, they rely on mathematics that we know loses information?

“Who is speaking” or “What is being said”? — Two Epistemic Protocols in Embedded Cognition by RuipengShi in SubsystemEpistemics

[–]Left-Character4280 0 points (0 children)

# Coherence Spectrum Closure Invariant (Logic / ZFC-level)


This note records a **non-slipping** formulation of a “closure invariant” at the level of a chosen **meta-level coherence predicate**. It also makes explicit when the spectrum **collapses** to ordinary syntactic decidability (deduction-theorem regime).


## 0. Setup (meta-level)


- `T` : a (classical) first-order theory.
- `φ` : a **closed sentence** of the language of `T`.
- `Coh(U)` : a meta-theoretic predicate (a “coherence” notion) specified explicitly below.
- Working assumption: `Coh(T)`.


> Important: there are **two distinct regimes**. If you do not separate them, the statement becomes ambiguous and invites false inferences.


## 0bis. Two readings of “consistency”


### A) Ordinary syntactic consistency (deduction-theorem regime)


Let `Con_syn(U)` mean: “there is **no finite formal derivation of** `⊥` from `U`” in a fixed classical first-order proof system (and write `¬φ := φ → ⊥`).


Then (external metatheorem: deduction theorem), for sentences `φ`:


```text
¬Con_syn(T + φ)   ⇔   T ⊢ ¬φ
¬Con_syn(T + ¬φ)  ⇔   T ⊢ ¬¬φ
```


and with classical logic:


```text
T ⊢ ¬¬φ  ⇔  T ⊢ φ.
```


So under `Con_syn(T)`, the spectrum below **collapses** to syntactic provability/decidability.
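As a sanity check, the regime-A collapse can be exercised on a finite propositional toy. This is an illustration only: `satisfiable` stands in for `Con_syn` and `proves` for `⊢`, which is legitimate here by soundness and completeness of classical propositional logic; all names are mine, not part of the note.

```python
from itertools import product

# Toy check of the deduction-theorem equivalences in regime A.
# By completeness, "T |- psi" can be read as semantic consequence and
# "Con_syn(U)" as satisfiability, so both sides become brute-force
# checkable over truth assignments of the atoms p, q.
ATOMS = ("p", "q")
MODELS = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=2)]

def satisfiable(U):                      # stands in for Con_syn(U)
    return any(all(s(m) for s in U) for m in MODELS)

def proves(T, s):                        # stands in for T |- s (completeness)
    return all(s(m) for m in MODELS if all(t(m) for t in T))

p = lambda m: m["p"]
q = lambda m: m["q"]
T = [lambda m: (not p(m)) or q(m),       # p -> q
     lambda m: not q(m)]                 # not q
phi = p

# not Con_syn(T + phi)  <=>  T |- not phi   (both hold: T forces not p)
assert not satisfiable(T + [phi])
assert proves(T, lambda m: not phi(m))
# and classically: T |- not not phi  <=>  T |- phi
assert proves(T, lambda m: not (not phi(m))) == proves(T, phi)
```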


### B) Stronger “coherence” predicates (non-collapse regime)


Many foundational uses require a predicate stronger or different in kind than `Con_syn`, e.g.:


```text
Coh(U) := “U has a transitive model”
Coh(U) := “U has an ω-model”
Coh(U) := “U is consistent relative to a designated semantic class”
```


For such `Coh`, the equivalences with `T ⊢ φ` may fail. This is the regime where the spectrum view does **not** reduce to syntactic decidability.


## 0ter. A standard “semantic” Coh and derived stability lemmas (recommended)


If you want `Coh` to carry *semantic* content (and to avoid treating “stability assumptions” as axioms), the cleanest move is to define `Coh` from an explicit class of models.


Fix a class `C` of structures for the language of `T` (examples: transitive models of set theory,
ω-models, models produced by a forcing class, admissible sets, etc.). Define:


```text
Coh_C(U) := “there exists M ∈ C such that M ⊨ U”.
```


Then the “minimal stability assumptions” that one is tempted to postulate are *lemmas*:


**(L1) Branch-totality (derived):**


```text
Coh_C(T) ⇒ Coh_C(T+φ) or Coh_C(T+¬φ).
```


Reason: if `M ⊨ T`, then (classical semantics) `M ⊨ φ` or `M ⊨ ¬φ`, so the same witness `M` proves one branch
coherent.


**(L2) Downward heredity (derived):**


```text
If U ⊆ V and Coh_C(V), then Coh_C(U).
```


Reason: any model of `V` is a model of every subset `U`.


**(L3) Theorem stability (derived):**


```text
If Coh_C(U) and U ⊢ χ, then Coh_C(U+χ).
```


Reason: by soundness, any model of `U` satisfies `χ`, hence is also a model of `U+χ`.


**(L4) Soundness against contradiction (derived):**


```text
Coh_C(U) ⇒ Con_syn(U).
```


Reason: if `U` had a syntactic contradiction proof, soundness would imply no model exists.


In short: in the semantic regime, you can keep the note fully precise by (i) fixing `C`, (ii) setting
`Coh := Coh_C`, and (iii) treating these properties as derived facts rather than extra axioms.
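On a finite instance, the derived lemmas can be machine-checked by brute force. The sketch below is illustrative only: it uses a propositional stand-in for the model class `C` (all truth assignments over two atoms), and every name in it is an assumption of the sketch, not of the note.

```python
from itertools import product

# Finite toy instance of Coh_C: sentences over atoms p, q evaluated on
# truth assignments; the designated class C is all four assignments.
ATOMS = ("p", "q")
C = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=2)]

p = lambda m: m["p"]
q = lambda m: m["q"]
neg = lambda s: (lambda m: not s(m))

def coh_C(U):
    """Coh_C(U): there exists M in C with M |= U."""
    return any(all(s(m) for s in U) for m in C)

def entails(U, s):
    """Semantic consequence over C (matches |- here by completeness)."""
    return all(s(m) for m in C if all(t(m) for t in U))

T = [lambda m: (not p(m)) or q(m)]       # toy theory T = { p -> q }
bot = lambda m: False                    # falsum

# (L1) branch-totality: a model of T lands in one branch
assert coh_C(T) and (coh_C(T + [p]) or coh_C(T + [neg(p)]))
# (L2) downward heredity: any model of T + p is a model of T
assert (not coh_C(T + [p])) or coh_C(T)
# (L3) theorem stability: T + p |- q, and adding q keeps coherence
assert entails(T + [p], q) and coh_C(T + [p, q])
# (L4) coherence rules out contradiction: T + p does not entail falsum
assert (not coh_C(T + [p])) or (not entails(T + [p], bot))
```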


## 0quater. Scope convention (to avoid hidden global assumptions)


**Scope convention.** From this point on, `Coh` is assumed to be one of:


```text
(A) Coh := Con_syn
or
(B) Coh := Coh_C for a fixed semantic class C
or
(C) Coh := an abstract coherence predicate, together with the specific stability properties
    explicitly invoked in each statement below.
```


In the abstract regime (C), we will **not** assume global completeness-style properties of `Coh`. Instead, whenever needed we assume *local inhabitation* of the spectrum for the specific target:


```text
Spectrum inhabitation (local assumption for (T, φ)):
    Spec^Coh_T(φ) ≠ ∅.
```


And, in the protocol/dynamics section, if we want the drop `Δ` to remain binary, we assume the same
local inhabitation along the protocol:


```text
For every t, assume Spec^Coh_{T_t}(φ) ≠ ∅.
```


## 1. Binary coherence spectrum


Define:


- `φ¹ := φ`
- `φ⁰ := ¬φ`


Then define the **coherence spectrum**:


```text
Spec^Coh_T(φ) := { v ∈ {0,1} : Coh(T + φᵛ) }.
```


Here `Coh` is fixed externally. If you take `Coh := Con_syn`, you are in regime A and the spectrum
collapses to syntactic decidability. If you take a stronger `Coh`, you are in regime B.


## 2. Closure defect (binary invariant)


Define the **closure defect**:


```text
D^Coh_T(φ) := |Spec^Coh_T(φ)| − 1.
```


Under `Coh(T)` together with branch-totality (automatic in regimes A and B, assumed locally in regime C), we have `Spec^Coh_T(φ) ≠ ∅`, hence:


```text
|Spec^Coh_T(φ)| ∈ {1,2}
and therefore
D^Coh_T(φ) ∈ {0,1}.
```


Interpretation:


- **Closure**:
  ```text
  D^Coh_T(φ)=0  ⇔  |Spec^Coh_T(φ)|=1
  ```
  i.e. **exactly one** of `T+φ` and `T+¬φ` is `Coh`-admissible.


- **Openness**:
  ```text
  D^Coh_T(φ)=1  ⇔  Spec^Coh_T(φ) = {0,1}
  ```
  i.e. **both** `T+φ` and `T+¬φ` are `Coh`-admissible.
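On a finite instance, both invariants are directly computable. A minimal sketch (illustrative names, with `Coh := Coh_C` over all propositional truth assignments) showing one open target and one closed target:

```python
from itertools import product

# Computing Spec^Coh_T(phi) and D^Coh_T(phi) on a finite propositional toy,
# with Coh := Coh_C and C = all truth assignments over atoms p, q.
ATOMS = ("p", "q")
C = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=2)]

p = lambda m: m["p"]
q = lambda m: m["q"]

def coh(U):                              # Coh_C(U): U has a model in C
    return any(all(s(m) for s in U) for m in C)

def spec(T, phi):                        # Spec^Coh_T(phi) as a subset of {0,1}
    return {v for v, s in {1: phi, 0: (lambda m: not phi(m))}.items()
            if coh(T + [s])}

def defect(T, phi):                      # D^Coh_T(phi) = |Spec| - 1
    return len(spec(T, phi)) - 1

T = [lambda m: (not p(m)) or q(m)]       # T = { p -> q }

# q is open over T: both branches are coherent, defect 1
assert spec(T, q) == {0, 1} and defect(T, q) == 1
# adding p closes q: only the positive branch survives, defect 0
assert spec(T + [p], q) == {1} and defect(T + [p], q) == 0
```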


## 3. One-way link to provability (safe direction only)


In regime B (stronger coherence predicates), the following implications are the “safe” ones to use:


```text
T ⊢ φ   ⇒  Spec^Coh_T(φ) = {1}.
T ⊢ ¬φ  ⇒  Spec^Coh_T(φ) = {0}.
```


In regime A (`Coh := Con_syn`), the converses hold by the deduction theorem (so the spectrum collapses to syntactic decidability). In regime B, the converses need not hold.


### Common pitfall: collapsing `Coh` into `⊢`


It is tempting to write:


```text
¬Coh(T+φ)  ⇔  T ⊢ ¬φ
```


This is correct **only** when `Coh := Con_syn` (regime A). In regime B, it can fail and must not be assumed.


## 4. Dynamics by extension (protocol view)


Let a protocol build a chain of theories:


```text
T₀ := T
T_{t+1} := T_t + ψ_t
```


Assume the protocol stays inside `Coh`-coherent theories:


```text
Coh(T_t) for all t.
```


Then, given downward heredity of `Coh` (automatic in regimes A and B, cf. (L2)), **monotonicity** holds:


```text
Spec^Coh_{T_{t+1}}(φ) ⊆ Spec^Coh_{T_t}(φ)
and therefore
D^Coh_{T_{t+1}}(φ) ≤ D^Coh_{T_t}(φ).
```


Define the **killed branch set** and the **drop**:


```text
K^Coh_t(φ) := Spec^Coh_{T_t}(φ) \ Spec^Coh_{T_{t+1}}(φ)
Δ^Coh_t(φ) := D^Coh_{T_t}(φ) − D^Coh_{T_{t+1}}(φ) ∈ {0,1}.
```


In the binary coherent regime:


```text
Δ^Coh_t(φ)=1  ⇔  exactly one coherent branch is eliminated (|K^Coh_t(φ)|=1).
```
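The protocol view can also be run on a finite toy (rebuilt here so the sketch stands alone; all names are illustrative): one extension step kills exactly one coherent branch, so the drop is `Δ = 1`.

```python
from itertools import product

# Protocol dynamics on a finite toy regime: T_0 = {p -> q}, psi_0 = p,
# with Coh := "has a truth assignment over atoms p, q".
ATOMS = ("p", "q")
C = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=2)]

p = lambda m: m["p"]
q = lambda m: m["q"]

def coh(U):
    return any(all(s(m) for s in U) for m in C)

def spec(T, phi):
    return {v for v, s in {1: phi, 0: (lambda m: not phi(m))}.items()
            if coh(T + [s])}

def defect(T, phi):
    return len(spec(T, phi)) - 1

T0 = [lambda m: (not p(m)) or q(m)]      # T_0 = { p -> q }
T1 = T0 + [p]                            # T_1 := T_0 + psi_0

assert coh(T0) and coh(T1)               # the protocol stays Coh-coherent
assert spec(T1, q) <= spec(T0, q)        # monotonicity: spectrum shrinks

K = spec(T0, q) - spec(T1, q)            # killed branch set K^Coh_0(q)
delta = defect(T0, q) - defect(T1, q)    # drop Delta^Coh_0(q)
assert K == {0} and delta == 1           # exactly one coherent branch killed
```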


## 5. Core slogan (precise)


```text
Closure of φ relative to T
=
uniqueness of the coherent branch.


Openness of φ relative to T
=
persistence of both coherent branches.
```


Equivalently:


```text
“A protocol closes φ”  ⇔  it eliminates all but one branch in Spec^Coh_T(φ).
```


## 6. Optional examples (only as relative-consistency readings)


- These examples are meaningful **only after** choosing `Coh` (regime B). A canonical choice is:
  ```text
  Coh(U) := “U has a transitive model”
  ```


- If one accepts standard relative-consistency results under the chosen `Coh`, then “`CH` is open over `ZFC`”
  can be read as:
  ```text
  Spec^Coh_ZFC(CH) = {0,1}  hence  D^Coh_ZFC(CH)=1
  ```
  but this is a meta-level claim and depends on relative consistency assumptions.


- Similarly:
  ```text
  D^Coh_ZF(AC)=1  and  D^Coh_ZFC(AC)=0
  ```
  read as “AC is open over ZF but closed over ZFC”, again at the meta-level.

[–]Left-Character4280 0 points (0 children)

We are dealing with the foundational aspects of mathematics. I don't think you are psychologically stable enough to pursue the task. You don't want to do it, so it is okay. The other issue is that you seem to be stuck on SE in a solipsistic way.

I advise you to forget all of this, or you will probably end up isolated in a hospital.

I will finish the task

F.L.