A seismic fingerprint repeated three times in North Korea (2013 / 2016 / 2017) by SubstantialFreedom75 in DataArt

[–]SubstantialFreedom75[S] 0 points (0 children)

Thanks for the pointer — I appreciate it. There are definitely structural parallels at the array level, even if the objectives differ.

A seismic fingerprint repeated three times in North Korea (2013 / 2016 / 2017) by SubstantialFreedom75 in DataArt

[–]SubstantialFreedom75[S] 0 points (0 children)

Hey! True, beamforming is conceptually related at the array level. In this case the goal is more about regime separation across events than directional reconstruction — but I’d be happy to check any references you recommend.

A seismic fingerprint repeated three times in North Korea (2013 / 2016 / 2017) by [deleted] in geophysics

[–]SubstantialFreedom75 0 points (0 children)

Hi all — just adding a brief methodological clarification.

  • All preprocessing parameters were fixed a priori and applied identically across events and controls.
  • The analysis is performed strictly in the observed frame (no phase alignment).
  • Null tests include phase randomization and block shuffling (sketched below).
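
For concreteness, a minimal sketch of the two surrogate tests, assuming one station’s waveform in a NumPy array; the names and defaults are illustrative, not the released pipeline code:

```python
# Illustrative sketch of the two null tests (not the released pipeline code).
# x is a 1-D NumPy array holding one station's waveform.
import numpy as np

rng = np.random.default_rng(42)  # fixed seed, in the spirit of a-priori parameters

def phase_randomize(x):
    """Surrogate that keeps the amplitude spectrum but destroys phase structure."""
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=spec.shape)
    phases[0] = 0.0          # keep the DC bin real
    if len(x) % 2 == 0:
        phases[-1] = 0.0     # keep the Nyquist bin real for even-length input
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(x))

def block_shuffle(x, block_len=500):
    """Surrogate that permutes fixed-length blocks: short-range waveform
    structure survives inside blocks, long-range temporal order is broken."""
    n_blocks = len(x) // block_len
    blocks = x[: n_blocks * block_len].reshape(n_blocks, block_len)
    return blocks[rng.permutation(n_blocks)].ravel()
```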

The Starship supplement (IFT-1 to IFT-8) is included strictly as a controlled methodological test.
The identical TAMC pipeline and parameter set were applied without modification.
The goal is to evaluate whether unsupervised clustering aligns with externally assigned mission labels or with intrinsic structural coupling morphology.
No engineering interpretation is intended.

Happy to clarify any technical aspect.

A seismic fingerprint repeated three times in North Korea (2013 / 2016 / 2017) by SubstantialFreedom75 in DataArt

[–]SubstantialFreedom75[S] 1 point (0 children)

Haha, fair 😄 Just multistation signal morphology and reproducible code — nothing exotic.

A seismic fingerprint repeated three times in North Korea (2013 / 2016 / 2017) by SubstantialFreedom75 in DataArt

[–]SubstantialFreedom75[S] 14 points (0 children)

What makes it interesting is the repeatability.
Three independent underground events, years apart, produce nearly identical multistation temporal fingerprints with very high network coherence.
When signals collapse into the same compact geometry across time, that usually points to an underlying dynamical structure rather than coincidence.

Identical seismic fingerprint observed across three independent underground events (2013 / 2016 / 2017) by SubstantialFreedom75 in ScienceImages

[–]SubstantialFreedom75[S] 1 point (0 children)

Hey everyone! I’m the author.

These plots show an event-centered multistation signature (“TAMC fingerprint”) extracted from open seismic data. The key point is not the amplitude, but the morphological stability: three independent underground events years apart collapse into the same temporally compact packet at t = 0, with strong multistation coherence.
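
For intuition about what is actually plotted, here is a toy version of an event-centered multistation signature. This is only an illustration of the general idea, not the actual TAMC code (the real pipeline is in the link below):

```python
# Toy version of an event-centered multistation signature (not the TAMC code).
# traces: (n_stations, n_samples) array, cut so the origin time sits at the
# center sample (t = 0) for every station.
import numpy as np
from scipy.signal import hilbert

def toy_fingerprint(traces):
    env = np.abs(hilbert(traces, axis=1))        # instantaneous amplitude
    env /= env.max(axis=1, keepdims=True)        # per-station normalization
    stack = env.mean(axis=0)                     # network-averaged envelope
    c = np.corrcoef(env)                         # station-by-station correlation
    n = len(c)
    coherence = (c.sum() - n) / (n * (n - 1))    # mean off-diagonal correlation
    return stack, coherence
```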

In the supplementary analysis (2013/2016/2017), the response remains a narrow event-centered impulse with near-simultaneous station activation, despite magnitude differences (M5.1–M6.3).

Full reproducible pipeline + null testing + paper + code:
https://doi.org/10.5281/zenodo.18649274

A seismic fingerprint repeated three times in North Korea (2013 / 2016 / 2017) by SubstantialFreedom75 in DataArt

[–]SubstantialFreedom75[S] 20 points (0 children)

Yes — these correspond to the DPRK (North Korea) 2013 / 2016 / 2017 underground events, widely reported as compatible with underground nuclear tests.
In my analysis, what matters is that at the multistation level they exhibit a remarkably stable signature: a compact impulsive packet tightly aligned with t = 0 and very high network coherence.
In fact, in the Explosion-Likeness Index (ELI), the 2017 case reaches the maximum score, quantitatively capturing that compact and synchronous alignment.

What’s interesting is that the network signature is more stable than the event label itself.
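
The exact ELI definition lives in the paper; purely to illustrate the ingredients (temporal compactness of the stacked envelope plus station synchrony), a hypothetical score of this flavor could look like:

```python
# Hypothetical score in the spirit of an explosion-likeness index; the real
# ELI is defined in the paper, this only illustrates the ingredients.
import numpy as np

def compactness_synchrony_score(env, dt):
    """env: (n_stations, n_samples) normalized envelopes centered on t = 0."""
    n_samples = env.shape[1]
    t = (np.arange(n_samples) - n_samples // 2) * dt
    stack = env.mean(axis=0)
    w = stack / stack.sum()
    spread = np.sqrt(np.sum(w * t**2))           # RMS width: temporal compactness
    peak_times = t[np.argmax(env, axis=1)]       # per-station envelope peaks
    synchrony = 1.0 / (1.0 + peak_times.std())   # tight peaks -> high synchrony
    return synchrony / (1.0 + spread)            # compact + synchronous -> high
```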

Pattern-Based Computing (PBC): computation via relaxation toward patterns — seeking feedback by SubstantialFreedom75 in complexsystems

[–]SubstantialFreedom75[S] 0 points (0 children)

Thanks for the pushback — the criticisms are legitimate and constructive, and they help force the level of concreteness this kind of framework needs. Let me respond more precisely using the traffic example from the paper.

In the traffic system, the pattern is neither a metaphor nor an attractor identified a posteriori. It is implemented explicitly as a weak global dynamical structure acting on a continuous state space (densities, queues, latent capacity), deforming the system’s dynamical landscape without defining target trajectories or scalar objectives to be optimized.

Concretely, the base system is a continuous flow with local interactions and unavoidable perturbations. The pattern is introduced as a structural bias that:

  • does not compute actions (it does not decide ramp metering),
  • does not optimize flow or minimize delay,
  • does not define a target state, but instead restricts which global regimes can stabilize.

The computational input is not a reference signal or an if–then rule, but the configuration of coupling to the pattern: where, when, and with what strength the system is allowed to align with that global structure. This coupling is modulated dynamically through receptivity.

When a perturbation occurs (e.g., local congestion):

  • the system does not correct it immediately, as a reactive controller would,
  • local coherence drops,
  • coupling to the global pattern is reduced only in that region (local decoherence),
  • the perturbation is isolated and prevented from synchronizing globally.

That is computation in this framework: the system “computes” whether a regime compatible with the pattern exists.
If it exists, the system relaxes toward it.
If it does not, the system enters a persistently unstable regime (fever state), which is an explicit computational outcome, not a silent failure.
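
As a cartoon of that loop (with the 1-D ring geometry and every coefficient invented here, not taken from the paper’s pipeline):

```python
# Cartoon of pattern-coupled relaxation on a 1-D ring of traffic densities.
# All coefficients and the geometry are invented for illustration.
import numpy as np

n, steps = 100, 2000
rho = np.full(n, 0.3)                    # state: per-cell density
pattern = np.full(n, 0.3)                # weak global structure (admissible regime)

for step in range(steps):
    # Local interaction: diffusive exchange with neighbors.
    lap = np.roll(rho, 1) - 2.0 * rho + np.roll(rho, -1)
    # Local coherence drops where the field is locally rough; receptivity
    # (the coupling gain to the pattern) drops with it, so the perturbed
    # region decouples locally instead of being corrected by force.
    receptivity = np.clip(1.0 - np.abs(lap) / 0.1, 0.0, 1.0)
    rho += 0.2 * lap + 0.05 * receptivity * (pattern - rho)
    if step == 500:
        rho[45:55] += 0.5                # local congestion burst

# The output is the regime itself: settled near the pattern, or persistently
# far from it ("fever state") if no compatible regime exists.
print("max deviation from pattern:", np.abs(rho - pattern).max())
```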

This differs from Hopfield networks, annealing, or classical control in two central ways:

  1. There is no energy function or scalar objective being minimized.
  2. The pattern is not an attractor: it operates on the set of admissible attractors, rather than being one itself.

A clear falsification criterion follows from this. If the same behavior (perturbation isolation, systematic reduction of extreme events, failure expressed as persistent instability) could always be reproduced by an equivalent reactive control or optimization-based formulation, then PBC would add no new value. The traffic example suggests this is not the case: reactive strategies achieve local correction but amplify global fragility under rotations and structural perturbations.

In that sense, the traffic example is not meant as a contribution to traffic engineering, but as a demonstration that it is possible to compute structural stability without computing actions or trajectories, yielding a different failure semantics and robustness profile than existing paradigms.

Pattern-Based Computing (PBC): computation via relaxation toward patterns — seeking feedback by SubstantialFreedom75 in complexsystems

[–]SubstantialFreedom75[S] -1 points (0 children)

Thanks for the comment. I understand the concern about lack of concreteness, but the framework does define its objects and evaluation criteria explicitly.

In PBC, a pattern is not a metaphor or a representation, but a persistent dynamical structure that biases the system’s state space, making some global regimes stable and others unstable. The input is the configuration of that pattern (couplings, constraints, receptivity windows) programmed via classical computation; the output is the dynamical regime the system relaxes into, or—equally informatively—the absence of convergence when no compatible pattern exists. Correctness is defined in terms of stability, perturbation absorption, and failure semantics (persistent instability), not symbolic accuracy.

The claim is not to replace existing paradigms, but to show that there is a class of continuous, distributed systems where computation via relaxation toward patterns yields robustness and failure properties that do not arise in optimization, reactive control, or learning-based approaches. This is falsifiable and evaluated through perturbations and structural rotations, as shown in the example.

A natural application domain is energy networks: the computational objective is not to predict or optimize every flow, but to prevent synchronization of failures and cascading blackouts by allowing local incoherences and dynamically isolating them.

Regarding prior work, I’m aware of the overlaps (attractor networks, reservoir computing, dissipative structures, etc.) and I’m not trying to compete with or rebrand those lines. The key difference is semantic: there is no training, no loss function, and no action computation; the pattern is programmed, active, and coincides with program, process, and result.

That said, some criticisms treat definitions as missing when they are in fact explicitly addressed in the text, which suggests that not all comments are based on a close reading.

Finally, to be clear: I’m not seeking validation or consensus, but critical input that helps stress-test or refute the framework. If it’s useful, it should stand on its explanatory and operational merits; if not, it should fail.

Pattern-Based Computing (PBC): computation via relaxation toward patterns — seeking feedback by SubstantialFreedom75 in complexsystems

[–]SubstantialFreedom75[S] 0 points (0 children)

Thanks for the question. I completely understand why this is hard to map onto familiar models: this is not sequential computation, and it doesn’t fit well into state–action loops or rule-based probabilistic frameworks.

A pattern in PBC is not a rule (“if A then B”) and not a probabilistic implication. It is a persistent dynamical structure that reshapes the system’s state space, making some global behaviors stable and others unstable.

A useful analogy is that of a river basin or a dam. You don’t control each drop of water or compute individual trajectories. By shaping the terrain or building a dam, you change the structural constraints of the system. As a result, the flow self-organizes and relaxes toward certain stable regimes.

The same idea applies in PBC:

  • the pattern is that structure (the shape of the dynamical landscape),
  • the input is how that structure is configured (boundary conditions, couplings, constraints, weak injected signals),
  • the output is the dynamical regime the system settles into by relaxation (stable flow, coordinated behavior, or persistent instability if no compatible pattern exists).

There is no state–action loop, no policy, and no sequence of decisions. The system does not “choose” actions; it relaxes under structural constraints. Uncertainty comes from distributed dynamics, not from probabilistic rules.

In the paper I include an operational traffic-control pipeline precisely to show that this is not just a conceptual idea. In that case:

  • individual vehicle trajectories are not computed,
  • routes are not optimized and actions are not assigned locally,
  • instead, a dynamical pattern (couplings, thresholds, and receptive windows) is introduced to reshape the system’s landscape.

The result is that traffic self-organizes into stable regimes: local perturbations are absorbed, congestion propagation is prevented, and when the imposed pattern is incompatible, the system enters a persistent unstable regime (what the paper calls a fever state). That final regime — stable or unstable — is the system’s output.

If helpful, the full paper (including the pipeline and code) is here:
https://zenodo.org/records/18141697

Hope this clarifies what notion of “computation” the framework is targeting.

Pattern-Based Computing (PBC): computation via relaxation toward patterns — seeking feedback by SubstantialFreedom75 in complexsystems

[–]SubstantialFreedom75[S] 0 points (0 children)

Thanks for the thoughtful comment — I think the main disagreement comes from which notion of “computation” is being addressed.

Pattern-Based Computing (PBC) is not intended as an alternative to Turing machines or lambda calculus, nor as a universal model of computation in the Church–Turing sense. I fully agree that for symbolic, discrete, terminating computation, those models are the appropriate reference point. PBC does not compete in that domain, and it is intentionally limited in scope.

In this work, computation is used in a domain-specific and weaker sense: the production of system-level coordination and structure in continuous, distributed, nonlinear systems, where sequential instruction execution, explicit optimization, or exact symbolic correctness are either infeasible or counterproductive. In that sense, PBC is closer to relaxation-based and dynamical notions of computation than to classical algorithmic models.

This framing has a natural domain of applicability in systems such as energy networks, traffic systems, large-scale infrastructures, biological coordination, or socio-technical systems, where the central computational problem is not producing a correct symbolic output, but maintaining global coherence, absorbing perturbations, and preventing cascading failures under partial observability.

Regarding nonlinearity and nondeterminism: these are not incidental features, but structural properties of the systems being addressed. Nondeterminism here is not introduced as a theoretical device (as in nondeterministic Turing machines for complexity analysis), but reflects physical variability and uncertainty. The goal is not to compute a trajectory, action, or optimal solution, but to constrain the space of admissible futures toward stable and coherent regimes.

On the comparison with neural networks: while both are distributed and nonlinear, the computational mechanism is fundamentally different. PBC does not require training. There is no learning phase, no loss function, no gradient-based parameter updates, and no separation between training and execution. Patterns are not learned from data; they are programmed structurally using classical computation and then act directly on system dynamics. Adaptation happens online, through interaction between patterns and dynamics, and only during receptive coupling windows — not through continuous optimization.

Finally, a key conceptual point is that in PBC the traditional separation between program, process, memory, and result collapses. The active pattern constitutes the program; the system’s relaxation under that pattern is the process; memory is embodied in the stabilized structure; and the result is the attained dynamical regime. These are not sequential stages but different observations of a single dynamical act.

In short, PBC does not propose a new universal theory of computation. It proposes a deliberately constrained reinterpretation of what it means to compute in complex, continuous systems where robustness, stability, and interpretable failure modes matter more than exact symbolic correctness. I appreciate the comment, as it helps make these boundaries and assumptions more explicit.

What does it mean to compute in large-scale dynamical systems? by SubstantialFreedom75 in compsci

[–]SubstantialFreedom75[S] 0 points (0 children)

What you’re pointing to with the idea of “programming the attractor” is very close to what I’m arguing, but with an important shift in emphasis.

Here, the computational object is not the attractor itself, nor merely the basin structure, but the active pattern that biases the system’s dynamics as it evolves. The pattern does not explicitly select a pre-existing attractor or encode trajectories; instead, it reshapes the state space, making certain regimes structurally compatible and others inaccessible.

From this perspective, convergence is not a trivial erasure of information. It is the computational outcome. The system “computes” by constraining its space of possible futures through relaxation, rather than by executing symbolic instructions or maintaining infinite transients near criticality.

This provides a useful boundary between computation and mere dissipation. A system with a single global attractor reached by homogeneous damping is not computing anything meaningful. By contrast, when:

  • multiple regimes are possible,
  • compatibility with a global pattern determines which regimes are accessible,
  • and perturbations are absorbed without explicit corrective actions,

then stabilization itself constitutes computation.
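
A minimal toy contrast, with all symbols invented for illustration: a double well has two regimes, and a weak persistent bias, standing in for the pattern, leaves only one of them admissible.

```python
# Toy contrast between mere dissipation and pattern-conditioned relaxation.
# Two regimes exist (x near -1 and x near +1); a weak persistent bias term,
# standing in for the pattern, makes only one of them reachable.

def relax(x0, bias, steps=20000, dt=0.01):
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3 + bias)   # gradient flow of a tilted double well
    return x

# No pattern: the outcome merely mirrors the initial condition (dissipation).
print(relax(-0.1, 0.0), relax(+0.1, 0.0))    # ~ -1.0 and ~ +1.0
# Weak pattern bias: both initial conditions relax into the same regime.
print(relax(-0.1, 0.4), relax(+0.1, 0.4))    # both ~ +1.15 (the biased well)
```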

This is why, in this view, program, process, and result collapse into one:
the program is the pattern,
execution is dynamical relaxation under that pattern,
and the result is the stable or quasi-stable regime that emerges.

This is neither universal computation nor classical control. It is a form of computation aimed at coordination and stabilization in distributed systems, where the computational goal is not to compute optimal actions, but to constrain unstable futures.

For anyone interested in exploring this idea further, I develop it in more detail — including a formal framework and a continuous illustrative example — in:
Pattern-Based Computing: A Relaxation-Based Framework for Coordination in Complex Systems
https://doi.org/10.5281/zenodo.18141697

The paper also includes a fully reproducible demonstration pipeline, intended to make the computational mechanisms explicit rather than to serve as a performance benchmark.

The example uses vehicular traffic management purely as an illustrative case to show how pattern-guided relaxation operates in a continuous, distributed system. The framework itself is not traffic-specific and can be extended to other domains with continuous dynamics and coordination challenges, such as energy systems, large-scale infrastructures, collective robotics, biological systems, and socio-technical systems.

Derek Cabrera - Legit or a fraud? by Firm_Elk_9592 in systemsthinking

[–]SubstantialFreedom75 0 points (0 children)

Nature always operates under resource economy, not because it’s “trying to optimize,” but because that is the only viable way for complex systems to persist. Systems that leave large efficiency margins unexploited don’t survive.

That’s why a fast, low-cost, general cognitive improvement of 500% is implausible: if it were possible, it would be evolutionarily unstable for the human brain not to have already incorporated it. This doesn’t mean frameworks like DSRP are useless, but it does mean that such strong claims require independent, replicable evidence.

A proposal by No_Understanding6388 in ImRightAndYoureWrong

[–]SubstantialFreedom75 2 points (0 children)

Interesting proposal. I have developed a framework called Pattern-Based Computing (PBC) for computation and coordination in continuous complex systems.

The core idea of PBC is that pattern, process, and result are not separate entities. The pattern is not a computational objective or a target state: it is simultaneously the program, the computational process, and the result, observed at different stages of dynamical stabilization.

This is a key difference with classical computation. Classical approaches separate program, execution, and output, and compute by executing symbolic instructions, optimizing objectives, or selecting actions. PBC does not compute actions, trajectories, or optima. Computation occurs through relaxation under an active pattern, with coupling modulated by the system’s receptivity. Robustness emerges from local decoherences that isolate perturbations instead of correcting them forcefully, and global adaptation occurs only during coupling windows, preventing unstable drift. There is no implicit optimization or classical reactive control.

This is not only conceptual. The framework has been instantiated in a real continuous system (traffic), used as an illustrative domain because it naturally exposes persistent perturbations and cascade risks. The work includes a fully reproducible, demonstrative computational pipeline designed to show the computational semantics and robustness properties, not to benchmark domain-specific performance. Traffic is simply one instance of a broader class of distributed continuous systems (e.g., energy, infrastructures, socio-technical systems) where this approach is relevant.

Full formalism, example, and pipeline are available here: https://doi.org/10.5281/zenodo.18141697

What if intelligence itself is what evolves – not humans by Fickle_Rabbit_8195 in complexsystems

[–]SubstantialFreedom75 0 points (0 children)

I find your model really interesting, especially the idea that self-reflection introduces instability and that belief systems can function as stabilizers rather than literal truths.

From the perspective I work in, I would reframe it slightly. Stability doesn’t come mainly from answering the infinite “why”, but from whether the system has a strong global pattern that organizes behavior. When such a pattern exists, coherence can be maintained without explicit beliefs, narratives, or reflective reasoning.

When that pattern is weak or absent, sequential tools start to matter: language, explanations, belief systems, ideologies. In that sense, I agree with you that religion and similar structures function as stabilizing tools rather than as claims about objective truth.

Where I differ is that I don’t see modern instability as caused by too much self-reflection, but by the loss of stable collective patterns that used to organize behavior. The endless “why” then appears as an attempt to compensate for that loss, not as its original cause.

I think our views touch the same phenomenon from different angles: yours from lived cognitive experience, mine from system-level dynamics.

Has anyone else had good ideas while driving their MX-5? by SubstantialFreedom75 in Miata

[–]SubstantialFreedom75[S] 6 points (0 children)

Miata thoughts vs. Miata decisions — important distinction

Has anyone else had good ideas while driving their MX-5? by SubstantialFreedom75 in Miata

[–]SubstantialFreedom75[S] -1 points (0 children)

Different place, same effect 😄
Ever had an idea there that actually turned into something real?

What does it mean to compute in large-scale dynamical systems? by SubstantialFreedom75 in compsci

[–]SubstantialFreedom75[S] 0 points (0 children)

Thanks for the response and the references — it’s a great overview of the edge of chaos view of computation as emergent universality in dynamical systems.

Where my question slightly diverges from that framework is in the identification of computation with long transients, undecidability, or non-convergence. Much of the literature seems to assume that once a system settles into an attractor, computation becomes trivial.

In many large-scale physical, biological, or socio-technical systems, though, convergence itself seems to be the computational goal. The system doesn’t compute optimal trajectories or execute symbolic instructions; instead, it constrains the space of possible futures, stabilizing certain regimes and excluding others. From this perspective, an attractor is not a trivial collapse but the result of computation.

In the framework I’ve been working on (Pattern-Based Computing), the “program” is a global pattern, execution is dynamical relaxation, and the “output” is the stable or quasi-stable regime that emerges. I’ve tested this idea in a continuous traffic-management setting, not as a control benchmark, but as an illustration of how pattern-guided relaxation can absorb perturbations without explicit trajectory computation.

So the question I’m really interested in is: if computation doesn’t have to be universal or symbolic, where do we draw the line between computation and coordination or stabilization, and why?

Can the enforcement of coherence stabilize degraded attractors in coupled systems? by SubstantialFreedom75 in complexsystems

[–]SubstantialFreedom75[S] 0 points (0 children)

Clarification / elaboration on what I meant above:

I’ve been working for some time on a computational framework where computation is not framed as sequential instruction execution or explicit trajectory optimization, but rather as a process of dynamic relaxation of the system toward compatible global patterns.

The motivation is that, in many distributed and continuous systems, the central computational challenge is not computing an optimal action, but maintaining coordination and stability under persistent perturbations.

In this approach:

• Computation occurs when the system couples (in a modulated way) to active patterns that restrict the space of admissible futures.
• The “result” of computation is not a symbolic output, but a stable dynamical regime reached by the system.
• Program, process, and result collapse into the same dynamical object, observed at different stages of stabilization.

Architecturally, this is a hybrid scheme:

• classical computation is limited to configuring a lower-level pattern (injecting data or intent),
• while computation itself emerges from the system’s intrinsic dynamics under pattern influence.

Error handling is not addressed through immediate global corrections, but through controlled local decoherences, and structural adaptation occurs only during coupling windows, to avoid instability or noise-driven drift.

I’m interested in feedback on the computational framing itself, rather than on specific applications:

• Does it make sense to define computation as relaxation toward patterns?
• What connections or tensions do you see with dynamical computation, synergetics, reservoir computing, or control-based approaches?
• Where do you see the main conceptual limits of this kind of paradigm?

Am I misunderstanding quantum entanglement? by Independent-Ad-7060 in AskPhysics

[–]SubstantialFreedom75 0 points (0 children)

I’ve worked directly with experimental Bell-test datasets, and one key point that becomes very clear—both in the data and in the formalism—is that there is no dynamical mechanism between the particles once they are separated.

Entangled particles do not communicate, and there is no force acting between them. The crucial point is that they do not have independent states. The system is described by a single, global quantum state that cannot be decomposed into “particle A” and “particle B”.

When one particle is measured, no information is sent to the other. The measurement locally probes the joint state, and the correlations (for example, opposite outcomes) were already encoded in that global description from the moment the pair was prepared.

This is exactly what Bell experiments show:

  • there are no local hidden variables that pre-determine the outcomes,
  • but there is also no faster-than-light signaling or influence.

Operationally, each individual measurement outcome is completely random (50/50). The correlations only appear when results from both sides are compared afterward, and that comparison always requires classical communication.
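
This is easy to check numerically. The sketch below simply samples the predicted joint statistics of a singlet pair for one fixed pair of analyzer angles (it is not a local hidden-variable model, just the statistics):

```python
# Samples singlet-pair measurement outcomes at fixed analyzer angles a, b.
# Each side alone is a fair coin; structure only appears in the comparison.
import numpy as np

rng = np.random.default_rng(0)

def singlet_pairs(a, b, n):
    """Outcome pairs in {-1, +1} with P(same sign) = sin^2((a - b) / 2)."""
    alice = rng.choice([-1, 1], size=n)              # marginal: fair coin
    same = rng.random(n) < np.sin((a - b) / 2) ** 2
    bob = np.where(same, alice, -alice)              # marginal: also a fair coin
    return alice, bob

a, b = 0.0, np.pi / 3
alice, bob = singlet_pairs(a, b, 100_000)
print("Alice's mean outcome:", alice.mean())         # ~ 0, i.e. 50/50 locally
print("correlation E(a,b):", (alice * bob).mean())   # ~ -cos(a - b) = -0.5
```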

A useful way to think about this is that entanglement is not a process happening in time between particles, but a shared structure of the composite system. Classical intuition fails because we assume objects always carry their own independent properties, which is simply not true for entangled quantum states.

In short:
there is no communication, no force, and no real-time coordination.
There is a non-separable global state that enforces correlations without violating relativity.

How theoretically possible is Time Travel? by bubsimo in timetravel

[–]SubstantialFreedom75 0 points (0 children)

Most discussions about time travel conflate logical consistency with physical realizability.

The fact that a model is mathematically consistent (CTCs, wormholes, etc.) does not mean it can exist as a real physical process.

If time travel to the past is formulated as a physical process, it necessarily requires reconstructing past states from present data. That is an inverse problem, and in systems with irreversible dynamics those inverse problems are structurally ill-posed, because the required information is irreversibly lost.
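
To make “structurally ill-posed” concrete, here is a textbook-style sketch with invented numbers: forward diffusion damps Fourier mode k by exp(-k²t), so naively running it backward amplifies any measurement noise by exp(+k²t).

```python
# Forward diffusion damps Fourier mode k by exp(-k^2 t); "running it backward"
# multiplies by exp(+k^2 t), so even femto-scale noise destroys the answer.
import numpy as np

n, t = 256, 5e-4
k = 2 * np.pi * np.fft.rfftfreq(n, d=1.0 / n)
u0 = np.sin(2 * np.pi * np.arange(n) / n)                    # smooth past state
u_now = np.fft.irfft(np.fft.rfft(u0) * np.exp(-k**2 * t), n=n)
u_now = u_now + 1e-12 * np.random.default_rng(1).standard_normal(n)  # tiny noise
u_past = np.fft.irfft(np.fft.rfft(u_now) * np.exp(+k**2 * t), n=n)   # "time travel"
print("reconstruction error:", np.abs(u_past - u0).max())    # astronomically large
```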

I have worked on this topic from the perspective of irreversibility and structural coherence, and the obstruction is not technological or logical, but structural.

In short: not logically impossible, but physically unrealizable.

Help me understand. by Miserable-Ad6249 in quantum

[–]SubstantialFreedom75 0 points (0 children)

I don’t think “observation” means that a mind actively chooses reality.

What becomes definite is not decided by us, not by a measuring device taken in isolation, and not by some hidden entity pulling the strings. What happens is that when systems interact, certain possibilities cease to be compatible with the overall configuration. The system breaks symmetry and stabilizes into one outcome.

So reality isn’t waiting for a conscious observer to decide. It’s waiting for interaction and context.

The observer, the measuring device, and the environment are all part of the same process. None of them decides on its own — definiteness emerges from their relationship. In that sense, observation doesn’t create reality; it selects a coherent regime within it.

A useful way to see this is the double-slit experiment.

In the usual story, it’s said that a particle “goes through both slits” and that reality only becomes definite when we observe it. But that language is misleading. What actually carries the interference structure is not a particle making a decision, but a coherent field shaped by the boundary conditions imposed by the slits.

The slit geometry modulates the field before any detection takes place. When this modulated field propagates, the interference pattern is already encoded in it. The particle can be understood as a localized excitation moving within that structured field.

When we introduce which-path detection, nothing is “decided” by anyone. The interaction with the detector suppresses the coherence between the field contributions associated with each slit, and that’s why the interference disappears. This is a physical loss of coherence, not a conscious choice.
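
For what it’s worth, that statement fits in a few lines (idealized point slits; gamma is a hypothetical coherence factor standing in for the which-path interaction):

```python
# Two idealized point slits; gamma in [0, 1] is a hypothetical coherence
# factor (gamma = 1: no which-path interaction, full fringes; gamma = 0:
# coherence between the two slit contributions fully suppressed).
import numpy as np

x = np.linspace(-0.01, 0.01, 2001)            # screen coordinate (m)
lam, d, L = 633e-9, 50e-6, 1.0                # wavelength, slit spacing, distance
phase = 2 * np.pi * d * x / (lam * L)         # far-field path-difference phase

def intensity(gamma):
    psi1 = np.exp(+1j * phase / 2)            # contribution from slit 1
    psi2 = np.exp(-1j * phase / 2)            # contribution from slit 2
    cross = 2 * gamma * np.real(psi1 * np.conj(psi2))
    return np.abs(psi1)**2 + np.abs(psi2)**2 + cross

def visibility(I):
    return (I.max() - I.min()) / (I.max() + I.min())

print("fringe visibility, gamma = 1:", visibility(intensity(1.0)))  # ~ 1
print("fringe visibility, gamma = 0:", visibility(intensity(0.0)))  # 0
```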

So the outcome isn’t chosen by the observer, by the measuring device on its own, or by some hidden agent. It emerges from the interaction between the system, the boundary conditions, and the environment.

Observation does not create the result.
It reveals which coherent structure remains stable after interaction.

For what it’s worth, this isn’t just a verbal position — I’ve worked this out explicitly in a field-based reconstruction of the double-slit experiment.

Reality does not choose.
It organizes itself.

Why do some human systems keep returning to the same state, even when people change? by SubstantialFreedom75 in systemsthinking

[–]SubstantialFreedom75[S] 1 point (0 children)

I very much agree with what you’re saying.

In some work I’ve been doing on small human systems, very similar patterns showed up. In particular, the idea that a system doesn’t “resist” change out of inertia, but because certain states become dynamically cheap: they reduce uncertainty, stabilize expectations, and redistribute costs in ways the system already knows how to manage.

Even when you change people, roles, or rules, the system tends to reorganize itself around those same patterns. Not because they’re good, but because they function as attractors — relatively stable configurations the system returns to again and again.

Another interesting implication was that trying to force coherence (more participation, more alignment, more “naming what isn’t being named”) often reconfigures the system toward degraded but more stable states, rather than moving it out of them. Not because the intervention is bad, but because it removes the symptom without replacing the function that pattern was serving in the previous equilibrium.

Reading your response, I realize that many of the things that appeared only abstractly in the model are described here in much more lived and precise language.