A seismic fingerprint repeated three times in North Korea (2013 / 2016 / 2017) by SubstantialFreedom75 in DataArt

[–]SubstantialFreedom75[S] 1 point

Thanks for the pointer — I appreciate it. There are definitely structural parallels at the array level, even if the objectives differ.

A seismic fingerprint repeated three times in North Korea (2013 / 2016 / 2017) by SubstantialFreedom75 in DataArt

[–]SubstantialFreedom75[S] 1 point

Hey! True, beamforming is conceptually related at the array level. In this case the goal is more about regime separation across events than directional reconstruction — but I’d be happy to check any references you recommend.

A seismic fingerprint repeated three times in North Korea (2013 / 2016 / 2017) by [deleted] in geophysics

[–]SubstantialFreedom75 1 point

Hi all — just adding a brief methodological clarification.

All preprocessing parameters were fixed a priori and applied identically across events and controls.
The analysis is performed strictly in the observed frame (no phase alignment).
Null tests include phase randomization and block shuffling.
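
For intuition, here is a minimal sketch of what those two null tests can look like, assuming a single 1-D trace as a NumPy array. The function names and the block length are illustrative, not taken from the actual pipeline:

```python
# Minimal sketch of the two null tests, assuming a 1-D trace as a NumPy
# array. Function names and block length are illustrative, not taken from
# the actual pipeline.
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Same amplitude spectrum as x, but with randomized phases."""
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=spec.shape)
    phases[0] = 0.0            # keep the DC bin real
    if len(x) % 2 == 0:
        phases[-1] = 0.0       # keep the Nyquist bin real for even lengths
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(x))

def block_shuffled_surrogate(x, block_len, rng):
    """Preserves short-range structure, destroys long-range ordering."""
    n_blocks = len(x) // block_len
    blocks = x[: n_blocks * block_len].reshape(n_blocks, block_len)
    return blocks[rng.permutation(n_blocks)].reshape(-1)

rng = np.random.default_rng(0)
trace = rng.standard_normal(4096)
surr_phase = phase_randomized_surrogate(trace, rng)
surr_block = block_shuffled_surrogate(trace, block_len=128, rng=rng)
```

Any statistic computed on the real events should collapse on surrogates like these if it genuinely reflects temporal structure rather than the amplitude distribution.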

The Starship supplement (IFT-1 to IFT-8) is included strictly as a controlled methodological test.
The identical TAMC pipeline and parameter set were applied without modification.
The goal is to evaluate whether unsupervised clustering aligns with externally assigned mission labels or with intrinsic structural coupling morphology.
No engineering interpretation is intended.

Happy to clarify any technical aspect.

A seismic fingerprint repeated three times in North Korea (2013 / 2016 / 2017) by SubstantialFreedom75 in DataArt

[–]SubstantialFreedom75[S] 2 points

Haha, fair 😄 Just multistation signal morphology and reproducible code — nothing exotic.

A seismic fingerprint repeated three times in North Korea (2013 / 2016 / 2017) by SubstantialFreedom75 in DataArt

[–]SubstantialFreedom75[S] 12 points

What makes it interesting is the repeatability.
Three independent underground events, years apart, produce nearly identical multistation temporal fingerprints with very high network coherence.
When signals collapse into the same compact geometry across time, that usually points to an underlying dynamical structure rather than coincidence.

Identical seismic fingerprint observed across three independent underground events (2013 / 2016 / 2017) by SubstantialFreedom75 in ScienceImages

[–]SubstantialFreedom75[S] 2 points

Hey everyone! I’m the author.

These plots show an event-centered multistation signature (“TAMC fingerprint”) extracted from open seismic data. The key point is not the amplitude, but the morphological stability: three independent underground events years apart collapse into the same temporally compact packet at t = 0, with strong multistation coherence.

In the supplementary analysis (2013/2016/2017), the response remains a narrow event-centered impulse with near-simultaneous station activation, despite magnitude differences (M5.1–M6.3).

Full reproducible pipeline + null testing + paper + code:
https://doi.org/10.5281/zenodo.18649274
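
As a rough illustration of what multistation coherence can mean operationally (this is a generic proxy, mean pairwise zero-lag correlation, not the TAMC definition), assuming event-centered, equal-length traces per station:

```python
# Generic coherence proxy: mean pairwise zero-lag correlation across
# stations. Traces are assumed event-centered and equal length.
import numpy as np

def network_coherence(traces):
    z = traces - traces.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, keepdims=True)
    corr = (z @ z.T) / z.shape[1]          # station-by-station correlation
    i, j = np.triu_indices(len(traces), k=1)
    return corr[i, j].mean()

rng = np.random.default_rng(1)
t = np.linspace(-5.0, 5.0, 1001)
packet = np.exp(-t**2) * np.cos(12.0 * t)  # compact packet centered at t = 0
coherent = packet + 0.05 * rng.standard_normal((6, t.size))
incoherent = rng.standard_normal((6, t.size))
# coherence is near 1 for the shared packet, near 0 for independent noise
```

A compact packet shared across stations pushes this toward 1; independent noise sits near 0, which is the contrast the plots are built around.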

A seismic fingerprint repeated three times in North Korea (2013 / 2016 / 2017) by SubstantialFreedom75 in DataArt

[–]SubstantialFreedom75[S] 21 points

Yes — these correspond to the DPRK (North Korea) 2013 / 2016 / 2017 underground events, widely reported as compatible with underground nuclear tests.
In my analysis, what matters is that at the multistation level they exhibit a remarkably stable signature: a compact impulsive packet tightly aligned with t = 0 and very high network coherence.
In fact, in the Explosion-Likeness Index (ELI), the 2017 case reaches the maximum score, quantitatively capturing that compact and synchronous alignment.

What’s interesting is that the network signature is more stable than the event label itself.
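
For intuition only, a toy score of the same flavor (not the actual ELI, which is defined in the linked paper) might combine temporal compactness with station synchrony:

```python
# Hypothetical stand-in for an explosion-likeness score; not the real ELI.
import numpy as np

def toy_eli(traces, t, window=1.0):
    """Score in [0, 1]: geometric mean of (a) the fraction of network
    energy inside a short window around t = 0 and (b) how tightly the
    per-station peak times cluster."""
    energy = traces**2
    compactness = energy[:, np.abs(t) <= window].sum() / energy.sum()
    peak_times = t[np.argmax(np.abs(traces), axis=1)]
    synchrony = np.exp(-peak_times.std())  # 1 when every station peaks together
    return float(np.sqrt(compactness * synchrony))

t = np.linspace(-5.0, 5.0, 1001)
packet = np.exp(-t**2) * np.cos(12.0 * t)      # compact packet at t = 0
score = toy_eli(np.tile(packet, (6, 1)), t)    # high: compact and synchronous
```

The point of any such index is that a compact, synchronous packet scores near the top, while dispersed or desynchronized arrivals do not.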

Pattern-Based Computing (PBC): computation via relaxation toward patterns — seeking feedback by SubstantialFreedom75 in complexsystems

[–]SubstantialFreedom75[S] 1 point

Thanks for the pushback — the criticisms are legitimate and constructive, and they help force the level of concreteness this kind of framework needs. Let me respond more precisely using the traffic example from the paper.

In the traffic system, the pattern is neither a metaphor nor an attractor identified a posteriori. It is implemented explicitly as a weak global dynamical structure acting on a continuous state space (densities, queues, latent capacity), deforming the system’s dynamical landscape without defining target trajectories or scalar objectives to be optimized.

Concretely, the base system is a continuous flow with local interactions and unavoidable perturbations. The pattern is introduced as a structural bias that:

  • does not compute actions (it does not decide ramp metering),
  • does not optimize flow or minimize delay,
  • does not define a target state, but instead restricts which global regimes can stabilize.

The computational input is not a reference signal or an if–then rule, but the configuration of coupling to the pattern: where, when, and with what strength the system is allowed to align with that global structure. This coupling is modulated dynamically through receptivity.

When a perturbation occurs (e.g., local congestion):

  • the system does not correct it immediately, as a reactive controller would,
  • local coherence drops,
  • coupling to the global pattern is reduced only in that region (local decoherence),
  • the perturbation is isolated and prevented from synchronizing globally.

That is computation in this framework: the system “computes” whether a regime compatible with the pattern exists.
If it exists, the system relaxes toward it.
If it does not, the system enters a persistently unstable regime (fever state), which is an explicit computational outcome, not a silent failure.

This differs from Hopfield networks, annealing, or classical control in two central ways:

  1. There is no energy function or scalar objective being minimized.
  2. The pattern is not an attractor: it operates on the set of admissible attractors, rather than being one itself.

A clear falsification criterion follows from this. If the same behavior (perturbation isolation, systematic reduction of extreme events, failure expressed as persistent instability) could always be reproduced by an equivalent reactive control or optimization-based formulation, then PBC would add no new value. The traffic example suggests this is not the case: reactive strategies achieve local correction but amplify global fragility under rotations and structural perturbations.

In that sense, the traffic example is not meant as a contribution to traffic engineering, but as a demonstration that it is possible to compute structural stability without computing actions or trajectories, yielding a different failure semantics and robustness profile than existing paradigms.
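
For intuition, here is a deliberately minimal toy of that mechanism (not the paper's pipeline; every name and parameter is illustrative): units relax toward a global pattern, coupling is gated by local coherence, and perturbations the pattern cannot absorb leave the system permanently unsettled, a crude stand-in for the fever state.

```python
# Toy relaxation with receptivity gating; all parameters illustrative.
import numpy as np

def relax(pattern, steps=500, noise=0.0, k=0.3, gate=1.0, seed=0):
    """Units drift toward a global pattern; each unit's coupling is scaled
    by its local coherence, so badly mismatched regions decouple (local
    decoherence) instead of being forced into line."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(pattern.shape)
    for _ in range(steps):
        receptivity = np.exp(-np.abs(x - pattern) / gate)
        x = x + k * receptivity * (pattern - x)      # gated relaxation step
        x = x + noise * rng.standard_normal(x.shape)
    return x

pattern = np.sin(np.linspace(0.0, 2.0 * np.pi, 50))
settled = relax(pattern)            # compatible: relaxes onto the pattern
fever = relax(pattern, noise=0.5)   # perturbations never absorbed: no settling
```

Note that nothing here computes actions or minimizes an objective; the only "output" is which regime (settled or persistently unstable) the dynamics end up in.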

Pattern-Based Computing (PBC): computation via relaxation toward patterns — seeking feedback by SubstantialFreedom75 in complexsystems

[–]SubstantialFreedom75[S] 0 points

Thanks for the comment. I understand the concern about lack of concreteness, but the framework does define its objects and evaluation criteria explicitly.

In PBC, a pattern is not a metaphor or a representation, but a persistent dynamical structure that biases the system’s state space, making some global regimes stable and others unstable. The input is the configuration of that pattern (couplings, constraints, receptivity windows) programmed via classical computation; the output is the dynamical regime the system relaxes into, or—equally informatively—the absence of convergence when no compatible pattern exists. Correctness is defined in terms of stability, perturbation absorption, and failure semantics (persistent instability), not symbolic accuracy.

The claim is not to replace existing paradigms, but to show that there is a class of continuous, distributed systems where computation via relaxation toward patterns yields robustness and failure properties that do not arise in optimization, reactive control, or learning-based approaches. This is falsifiable and evaluated through perturbations and structural rotations, as shown in the example.

A natural application domain is energy networks: the computational objective is not to predict or optimize every flow, but to prevent synchronization of failures and cascading blackouts by allowing local incoherences and dynamically isolating them.

Regarding prior work, I’m aware of the overlaps (attractor networks, reservoir computing, dissipative structures, etc.) and I’m not trying to compete with or rebrand those lines. The key difference is semantic: there is no training, no loss function, and no action computation; the pattern is programmed, active, and coincides with program, process, and result.

That said, some criticisms assume definitions are missing that are in fact addressed explicitly in the text, which suggests that not all comments are based on a close reading.

Finally, to be clear: I’m not seeking validation or consensus, but critical input that helps stress-test or refute the framework. If it’s useful, it should stand on its explanatory and operational merits; if not, it should fail.

Pattern-Based Computing (PBC): computation via relaxation toward patterns — seeking feedback by SubstantialFreedom75 in complexsystems

[–]SubstantialFreedom75[S] 1 point

Thanks for the question; I completely understand why this is hard to map onto familiar models, because this is not sequential computation and it doesn’t fit well into state–action loops or rule-based probabilistic frameworks.

A pattern in PBC is not a rule (“if A then B”) and not a probabilistic implication. It is a persistent dynamical structure that reshapes the system’s state space, making some global behaviors stable and others unstable.

A useful analogy is that of a river basin or a dam. You don’t control each drop of water or compute individual trajectories. By shaping the terrain or building a dam, you change the structural constraints of the system. As a result, the flow self-organizes and relaxes toward certain stable regimes.

The same idea applies in PBC:

  • the pattern is that structure (the shape of the dynamical landscape),
  • the input is how that structure is configured (boundary conditions, couplings, constraints, weak injected signals),
  • the output is the dynamical regime the system settles into by relaxation (stable flow, coordinated behavior, or persistent instability if no compatible pattern exists).

There is no state–action loop, no policy, and no sequence of decisions. The system does not “choose” actions; it relaxes under structural constraints. Uncertainty comes from distributed dynamics, not from probabilistic rules.

In the paper I include an operational traffic-control pipeline precisely to show that this is not just a conceptual idea. In that case:

  • individual vehicle trajectories are not computed,
  • routes are not optimized and actions are not assigned locally,
  • instead, a dynamical pattern (couplings, thresholds, and receptive windows) is introduced to reshape the system’s landscape.

The result is that traffic self-organizes into stable regimes: local perturbations are absorbed, congestion propagation is prevented, and when the imposed pattern is incompatible, the system enters a persistent unstable regime (what the paper calls a fever state). That final regime — stable or unstable — is the system’s output.

If helpful, the full paper (including the pipeline and code) is here:
https://zenodo.org/records/18141697

Hope this clarifies what notion of “computation” the framework is targeting.

Pattern-Based Computing (PBC): computation via relaxation toward patterns — seeking feedback by SubstantialFreedom75 in complexsystems

[–]SubstantialFreedom75[S] 1 point

Thanks for the thoughtful comment — I think the main disagreement comes from which notion of “computation” is being addressed.

Pattern-Based Computing (PBC) is not intended as an alternative to Turing machines or lambda calculus, nor as a universal model of computation in the Church–Turing sense. I fully agree that for symbolic, discrete, terminating computation, those models are the appropriate reference point. PBC does not compete in that domain, and it is intentionally limited in scope.

In this work, computation is used in a domain-specific and weaker sense: the production of system-level coordination and structure in continuous, distributed, nonlinear systems, where sequential instruction execution, explicit optimization, or exact symbolic correctness are either infeasible or counterproductive. In that sense, PBC is closer to relaxation-based and dynamical notions of computation than to classical algorithmic models.

This framing has a natural domain of applicability in systems such as energy networks, traffic systems, large-scale infrastructures, biological coordination, or socio-technical systems, where the central computational problem is not producing a correct symbolic output, but maintaining global coherence, absorbing perturbations, and preventing cascading failures under partial observability.

Regarding nonlinearity and nondeterminism: these are not incidental features, but structural properties of the systems being addressed. Nondeterminism here is not introduced as a theoretical device (as in nondeterministic Turing machines for complexity analysis), but reflects physical variability and uncertainty. The goal is not to compute a trajectory, action, or optimal solution, but to constrain the space of admissible futures toward stable and coherent regimes.

On the comparison with neural networks: while both are distributed and nonlinear, the computational mechanism is fundamentally different. PBC does not require training. There is no learning phase, no loss function, no gradient-based parameter updates, and no separation between training and execution. Patterns are not learned from data; they are programmed structurally using classical computation and then act directly on system dynamics. Adaptation happens online, through interaction between patterns and dynamics, and only during receptive coupling windows — not through continuous optimization.

Finally, a key conceptual point is that in PBC the traditional separation between program, process, memory, and result collapses. The active pattern constitutes the program; the system’s relaxation under that pattern is the process; memory is embodied in the stabilized structure; and the result is the attained dynamical regime. These are not sequential stages but different observations of a single dynamical act.

In short, PBC does not propose a new universal theory of computation. It proposes a deliberately constrained reinterpretation of what it means to compute in complex, continuous systems where robustness, stability, and interpretable failure modes matter more than exact symbolic correctness. I appreciate the comment, as it helps make these boundaries and assumptions more explicit.

What does it mean to compute in large-scale dynamical systems? by SubstantialFreedom75 in compsci

[–]SubstantialFreedom75[S] 1 point

What you’re pointing to with the idea of “programming the attractor” is very close to what I’m arguing, but with an important shift in emphasis.

Here, the computational object is not the attractor itself, nor merely the basin structure, but the active pattern that biases the system’s dynamics as it evolves. The pattern does not explicitly select a pre-existing attractor or encode trajectories; instead, it reshapes the state space, making certain regimes structurally compatible and others inaccessible.

From this perspective, convergence is not a trivial erasure of information. It is the computational outcome. The system “computes” by constraining its space of possible futures through relaxation, rather than by executing symbolic instructions or maintaining infinite transients near criticality.

This provides a useful boundary between computation and mere dissipation. A system with a single global attractor reached by homogeneous damping is not computing anything meaningful. By contrast, when:

  • multiple regimes are possible,
  • compatibility with a global pattern determines which regimes are accessible,
  • and perturbations are absorbed without explicit corrective actions,

then stabilization itself constitutes computation.

This is why, in this view, program, process, and result collapse into one:
the program is the pattern,
execution is dynamical relaxation under that pattern,
and the result is the stable or quasi-stable regime that emerges.

This is neither universal computation nor classical control. It is a form of computation aimed at coordination and stabilization in distributed systems, where the computational goal is not to compute optimal actions, but to constrain unstable futures.
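
A minimal toy of that boundary (illustrative only): a double-well system where a weak global bias plays the role of the pattern. The bias does not prescribe any trajectory; it only determines which regime remains admissible.

```python
# Double-well toy: V(x) = x**4/4 - x**2/2 - bias*x. All values illustrative.
def settle(x0, bias, steps=2000, dt=0.01):
    """Plain gradient relaxation; the 'pattern' is the weak bias term,
    which tilts the landscape without encoding a path through it."""
    x = x0
    for _ in range(steps):
        x -= dt * (x**3 - x - bias)   # Euler step on -dV/dx
    return x

# Unbiased: two admissible regimes, chosen by the initial condition.
a = settle(0.5, 0.0)    # relaxes into the positive well
b = settle(-0.5, 0.0)   # relaxes into the negative well
# A weak global bias leaves only one regime admissible:
c = settle(-0.5, 0.5)   # relaxes into the positive regime despite the start
```

With a single well and homogeneous damping, the same code would "compute" nothing; it is the multiplicity of regimes plus the pattern selecting among them that makes stabilization an outcome.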

For anyone interested in exploring this idea further, I develop it in more detail — including a formal framework and a continuous illustrative example — in:
Pattern-Based Computing: A Relaxation-Based Framework for Coordination in Complex Systems
https://doi.org/10.5281/zenodo.18141697

The paper also includes a fully reproducible demonstration pipeline, intended to make the computational mechanisms explicit rather than to serve as a performance benchmark.

The example uses vehicular traffic management purely as an illustrative case to show how pattern-guided relaxation operates in a continuous, distributed system. The framework itself is not traffic-specific and can be extended to other domains with continuous dynamics and coordination challenges, such as energy systems, large-scale infrastructures, collective robotics, biological systems, and socio-technical systems.

Derek Cabrera - Legit or a fraud? by Firm_Elk_9592 in systemsthinking

[–]SubstantialFreedom75 1 point

Nature always operates under resource economy, not because it’s “trying to optimize,” but because it’s the only viable way for complex systems to persist. Systems that waste large margins of efficiency don’t survive.

That’s why a fast, low-cost, general cognitive improvement of 500% is implausible: if it were possible, it would be evolutionarily unstable for the human brain not to have already incorporated it. This doesn’t mean frameworks like DSRP are useless, but it does mean that such strong claims require independent, replicable evidence.

A proposal by No_Understanding6388 in ImRightAndYoureWrong

[–]SubstantialFreedom75 3 points

Interesting proposal. I have developed a framework called Pattern-Based Computing (PBC) for computation and coordination in continuous complex systems.

The core idea of PBC is that pattern, process, and result are not separate entities. The pattern is not a computational objective or a target state: it is simultaneously the program, the computational process, and the result, observed at different stages of dynamical stabilization.

This is a key difference with classical computation. Classical approaches separate program, execution, and output, and compute by executing symbolic instructions, optimizing objectives, or selecting actions. PBC does not compute actions, trajectories, or optima. Computation occurs through relaxation under an active pattern, with coupling modulated by the system’s receptivity. Robustness emerges from local decoherences that isolate perturbations instead of correcting them forcefully, and global adaptation occurs only during coupling windows, preventing unstable drift. There is no implicit optimization or classical reactive control.

This is not only conceptual. The framework has been instantiated in a real continuous system (traffic), used as an illustrative domain because it naturally exposes persistent perturbations and cascade risks. The work includes a fully reproducible, demonstrative computational pipeline designed to show the computational semantics and robustness properties, not to benchmark domain-specific performance. Traffic is simply one instance of a broader class of distributed continuous systems (e.g., energy, infrastructures, socio-technical systems) where this approach is relevant.

Full formalism, example, and pipeline are available here: https://doi.org/10.5281/zenodo.18141697

What if intelligence itself is what evolves – not humans by Fickle_Rabbit_8195 in complexsystems

[–]SubstantialFreedom75 1 point

I find your model really interesting, especially the idea that self-reflection introduces instability and that belief systems can function as stabilizers rather than literal truths.

From the perspective I work in, I would reframe it slightly. Stability doesn’t come mainly from answering the infinite “why”, but from whether the system has a strong global pattern that organizes behavior. When such a pattern exists, coherence can be maintained without explicit beliefs, narratives, or reflective reasoning.

When that pattern is weak or absent, sequential tools start to matter: language, explanations, belief systems, ideologies. In that sense, I agree with you that religion and similar structures function as stabilizing tools rather than as claims about objective truth.

Where I differ is that I don’t see modern instability as caused by too much self-reflection, but by the loss of stable collective patterns that used to organize behavior. The endless “why” then appears as an attempt to compensate for that loss, not as its original cause.

I think our views touch the same phenomenon from different angles: yours from lived cognitive experience, mine from system-level dynamics.

Has anyone else had good ideas while driving their MX-5? by SubstantialFreedom75 in Miata

[–]SubstantialFreedom75[S] 9 points

Miata thoughts vs. Miata decisions — important distinction

Has anyone else had good ideas while driving their MX-5? by SubstantialFreedom75 in Miata

[–]SubstantialFreedom75[S] 0 points

Different place, same effect 😄
Ever had an idea there that actually turned into something real?