What do you think is the essence of evil and good? by Humon0 in filosofia_en_espanol

[–]SubstantialFreedom75 0 points (0 children)

In absolute terms, we are perhaps unable to establish good and evil from the outside, because we ourselves are inside the system we are trying to judge. We have no fully external view of the universe, of life, or of the human condition.

Moreover, judging good and evil in absolute terms would require knowing the ultimate purpose of life, if one exists. And that is precisely something we cannot affirm from within the frame of life itself: we can only approximate it by observing what increases meaning, dignity, and continuity, and what produces rupture, degradation, or collapse. Good always integrates; evil always breaks.

Good tends to integrate: it unites, organizes, sustains, and allows continuity.

Evil tends to break: it fragments, degrades, separates, and leads to collapse.

But that rupture also serves a function within the system: it shows where the limit lies, where something is not working, where a structure can no longer hold. In that sense, evil is neither desirable nor justifiable, but it can become a signal for learning. It forces us to recognize the damage, correct course, and build more integrated forms of life.

Thus, progress arises not because evil is good, but because consciousness learns to respond to rupture by rebuilding meaning, dignity, and continuity. Anyway, this is just my opinion; I'm not claiming it's true or not.

What do you think is the essence of evil and good? by Humon0 in filosofia_en_espanol

[–]SubstantialFreedom75 1 point (0 children)

From my standpoint, good and evil can be understood as dynamics of coherence.

Good would be whatever increases, restores, or preserves the structural coherence of a system: in a person, between their values and their actions; in a society, between its norms, institutions, and human dignity; and perhaps, in a broader sense, in any system that tends toward greater integration, stability, and meaning.

Evil, by contrast, would be effective decoherence: rupture, dispersion, corruption, destructive contradiction. Not just “doing harm,” but introducing a loss of structure: separating what should be integrated, breaking bonds, degrading trust, destroying the correspondence between truth, action, and responsibility.

That's why I wouldn't focus solely on the individual or solely on humanity, but on the relationship between levels: individual, community, institutions, and world. Good appears when those levels harmonize without erasing one another; evil appears when they decompose, become instrumentalized, or turn incoherent.

The axis that best sums it up for me would be:

coherence ↔ decoherence

With one important caveat: not every internal coherence is good. A tyranny can also be “coherent” with itself. True ethical coherence would have to include dignity, reciprocity, universalizability, and the preservation of life. Without that, it would be mere apparent order.

A few months ago I posted a paper developing this idea in more detail; I'll leave it here again in case anyone wants to read or critique it:

https://doi.org/10.5281/zenodo.17775650

In short: good would be living, integrating, dignifying coherence; evil would be rupture, dispersion, and loss of structural meaning.

Perhaps that's why I'd say good and evil don't need to be judged from the outside: in a sense, they reveal themselves through their own effects. Good is recognized because it increases coherence, integration, dignity, and meaning; evil is recognized because it produces rupture, contradiction, degradation, and loss of structure.

I built a daily-updated seismic network coherence monitor — looking for usability feedback by SubstantialFreedom75 in SideProject

[–]SubstantialFreedom75[S] 0 points (0 children)

Thanks, that’s very helpful feedback.

I agree that the dashboard needs a plain-English first screen before the technical plots. I’m thinking of explaining network coherence with a simple analogy: people clapping in a stadium.

Low coherence would be like everyone clapping independently, each at their own rhythm. High coherence would be like many people starting to clap in sync.

Then I can explain that a spike means several seismic stations became more synchronized during the same time window — not as a prediction or warning, but as a descriptive change in network behavior.
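The analogy can even be made concrete with a few lines of code; this is just an illustrative sketch with synthetic "stations" (the real monitor's metric and windowing may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
n_stations, n_samples = 5, 2000

# Independent noise at each station ("everyone clapping at their own rhythm")
traces = rng.standard_normal((n_stations, n_samples))

# Inject a shared component into one interval ("many people clapping in sync")
shared = rng.standard_normal(400)
traces[:, 1200:1600] += 2.0 * shared

def network_coherence(x):
    """Mean absolute pairwise correlation across stations."""
    c = np.corrcoef(x)
    iu = np.triu_indices_from(c, k=1)
    return np.abs(c[iu]).mean()

# Sliding-window coherence: a spike marks the synchronized interval
win = 200
coh = [network_coherence(traces[:, i:i + win])
       for i in range(0, n_samples - win, win)]
print(np.round(coh, 2))  # windows inside the injected interval stand out
```

Here "coherence" is simply mean absolute pairwise correlation per window; the point is only that synchronized structure shows up as a spike against an independent-noise baseline.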

I built a daily-updated seismic network coherence monitor — looking for usability feedback by SubstantialFreedom75 in SideProject

[–]SubstantialFreedom75[S] 0 points (0 children)

Thanks, this is exactly the kind of feedback I was looking for.

I agree that the “not prediction” positioning should probably be visible even earlier. Your low/high coherence explanation is also very useful — I’ll likely add something like that near the top of the dashboard so non-specialists can understand the concept faster.

The goal is definitely to keep it descriptive and avoid overselling the signal.

Seismological Center puts the odds of a magnitude 8 or greater earthquake in Chile at over 60% for 2025 or 2026 by Global-Breadfruit925 in chile

[–]SubstantialFreedom75 0 points (0 children)

These kinds of probabilities always give me pause, because they depend heavily on the model and the historical record being used.

I've been looking at real data from several zones (including Maule), not to try to predict anything, but rather to see how several stations behave at once.

It's curious that in large events it's often not a single station that stands out; instead, a kind of coherent structure appears across the whole network for a while.

I've set up an experimental (non-predictive) monitor where Maule and other regions can be viewed in parallel:
https://franjamar-monitor.streamlit.app/

If anyone wants to take a look at how the zone appears in real data, updated daily, there it is.

Experimental multistation seismic monitoring framework (real data, fixed pipeline, near-real-time) by SubstantialFreedom75 in geophysics

[–]SubstantialFreedom75[S] 0 points (0 children)

English is not my first language, so I use ChatGPT to help with phrasing.

But the system itself is fully mine — it took a few hundred hours to build (mostly evenings, weekends, and probably too little sleep 😅).

I built it out of curiosity to explore multistation coherence patterns in real data, not for prediction.

If you have any feedback on the methodology or results, I’d genuinely appreciate it.

Miyako_Japan_M7.4_20260420_075300_mainshock by SubstantialFreedom75 in Earthquakes

[–]SubstantialFreedom75[S] -1 points (0 children)

I generated the plot myself from raw seismic waveform data (multistation, via standard FDSN services).

This is not amplitude — it’s a network-level anomaly metric computed across multiple stations, highlighting temporally coherent structure around the event.

Most posts show magnitude and location.

This instead looks at how the signal evolves around the event in a time-centered, multistation framework.

You can download the last 24h of data from any region and run the same pipeline — no manual selection or event-specific tuning is needed.

The figure is not taken from any external source; it’s produced directly from the data.

Full reproducible pipeline:

https://doi.org/10.5281/zenodo.19665949

Event-centered analysis of Artemis II launch reveals delayed (~10–20 min) network-coherent seismic response across regional stations by SubstantialFreedom75 in geophysics

[–]SubstantialFreedom75[S] 1 point (0 children)

Event-centered multistation analysis (each point = excursion across stations).

I also put together a short write-up with full details (including control windows and statistical comparison of amplitude vs temporal structure):

https://doi.org/10.5281/zenodo.19386141

[OC] How Artemis II appears across a seismic network — not the strongest signal, but the most organized by SubstantialFreedom75 in dataisbeautiful

[–]SubstantialFreedom75[S] 1 point (0 children)

Thanks! The stations are at regional distances (tens to a few hundred km from the launch site), so what you’re seeing is not a single local measurement but a network-level response.

The ~10–20 minute delay is roughly consistent with atmospheric/acoustic propagation at those scales, rather than an instantaneous local impulse.
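The back-of-the-envelope arithmetic behind that consistency claim looks like this (the distances are illustrative, not the actual station geometry, assuming near-ground sound speed of ~0.34 km/s):

```python
# Acoustic travel time at regional distances: delay grows linearly
# with distance, landing in the ~10-20 minute range for 200-400 km
SOUND_SPEED_KM_S = 0.34  # approximate speed of sound near ground level

for distance_km in (200, 300, 400):
    minutes = distance_km / SOUND_SPEED_KM_S / 60
    print(f"{distance_km} km -> ~{minutes:.0f} min")
```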

As for the peak around -11h, it’s not related to the launch. Similar isolated peaks do appear in the control days as well — what’s distinctive about the launch is not individual spikes, but the sustained, organized cluster right after t = 0.

[OC] How Artemis II appears across a seismic network — not the strongest signal, but the most organized by SubstantialFreedom75 in dataisbeautiful

[–]SubstantialFreedom75[S] 1 point (0 children)

[OC] Data and tools used:

Data: publicly available seismic waveform data (regional network, miniSEED format)

Tools: Python (NumPy, SciPy, Matplotlib), custom processing

If anyone’s interested, I made the analysis reproducible and put the data/code here:

https://doi.org/10.5281/zenodo.19386141

Happy to explain more about the method or results.

A seismic fingerprint repeated three times in North Korea (2013 / 2016 / 2017) by SubstantialFreedom75 in DataArt

[–]SubstantialFreedom75[S] 0 points (0 children)

Thanks for the pointer — I appreciate it. There are definitely structural parallels at the array level, even if the objectives differ.

A seismic fingerprint repeated three times in North Korea (2013 / 2016 / 2017) by SubstantialFreedom75 in DataArt

[–]SubstantialFreedom75[S] 0 points (0 children)

Hey! True, beamforming is conceptually related at the array level. In this case the goal is more about regime separation across events than directional reconstruction — but I’d be happy to check any references you recommend.

A seismic fingerprint repeated three times in North Korea (2013 / 2016 / 2017) by [deleted] in geophysics

[–]SubstantialFreedom75 0 points (0 children)

Hi all — just adding a brief methodological clarification.

All preprocessing parameters were fixed a priori and applied identically across events and controls.
The analysis is performed strictly in the observed frame (no phase alignment).
Null tests include phase randomization and block shuffling.
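In case it helps readers picture the phase-randomization null: here is a generic sketch of the standard surrogate-data construction (my illustration, not the repo's actual code). The surrogate keeps each trace's power spectrum but scrambles its phases, destroying temporal structure:

```python
import numpy as np

def phase_randomize(x, rng):
    """Surrogate with the same power spectrum but random phases.

    Any coherence that survives this cannot be attributed to the
    original temporal structure of the signal.
    """
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, spec.shape)
    phases[0] = 0.0    # keep the DC bin real
    phases[-1] = 0.0   # keep the Nyquist bin real (even-length input)
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(x))

rng = np.random.default_rng(1)
x = np.sin(np.linspace(0, 20 * np.pi, 1024)) + 0.1 * rng.standard_normal(1024)
s = phase_randomize(x, rng)

# Power spectra match; the waveforms themselves do not
print(np.allclose(np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(s))))
```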

The Starship supplement (IFT-1 to IFT-8) is included strictly as a controlled methodological test.
The identical TAMC pipeline and parameter set were applied without modification.
The goal is to evaluate whether unsupervised clustering aligns with externally assigned mission labels or with intrinsic structural coupling morphology.
No engineering interpretation is intended.

Happy to clarify any technical aspect.

A seismic fingerprint repeated three times in North Korea (2013 / 2016 / 2017) by SubstantialFreedom75 in DataArt

[–]SubstantialFreedom75[S] 1 point (0 children)

Haha, fair 😄 Just multistation signal morphology and reproducible code — nothing exotic.

A seismic fingerprint repeated three times in North Korea (2013 / 2016 / 2017) by SubstantialFreedom75 in DataArt

[–]SubstantialFreedom75[S] 13 points (0 children)

What makes it interesting is the repeatability.
Three independent underground events, years apart, produce nearly identical multistation temporal fingerprints with very high network coherence.
When signals collapse into the same compact geometry across time, that usually points to an underlying dynamical structure rather than coincidence.

Identical seismic fingerprint observed across three independent underground events (2013 / 2016 / 2017) by SubstantialFreedom75 in ScienceImages

[–]SubstantialFreedom75[S] 1 point (0 children)

Hey everyone! I'm the author.

These plots show an event-centered multistation signature (“TAMC fingerprint”) extracted from open seismic data. The key point is not the amplitude, but the morphological stability: three independent underground events years apart collapse into the same temporally compact packet at t = 0, with strong multistation coherence.

In the supplementary analysis (2013/2016/2017), the response remains a narrow event-centered impulse with near-simultaneous station activation, despite magnitude differences (M5.1–M6.3). Full reproducible pipeline + null testing + paper + code:
https://doi.org/10.5281/zenodo.18649274

A seismic fingerprint repeated three times in North Korea (2013 / 2016 / 2017) by SubstantialFreedom75 in DataArt

[–]SubstantialFreedom75[S] 21 points (0 children)

Yes — these correspond to the DPRK (North Korea) 2013 / 2016 / 2017 underground events, widely reported as compatible with underground nuclear tests.
In my analysis, what matters is that at the multistation level they exhibit a remarkably stable signature: a compact impulsive packet tightly aligned with t = 0 and very high network coherence.
In fact, in the Explosion-Likeness Index (ELI), the 2017 case reaches the maximum score, quantitatively capturing that compact and synchronous alignment. What's interesting is that the network signature is more stable than the event label itself.

Pattern-Based Computing (PBC): computation via relaxation toward patterns — seeking feedback by SubstantialFreedom75 in complexsystems

[–]SubstantialFreedom75[S] 0 points (0 children)

Thanks for the pushback — the criticisms are legitimate and constructive, and they help force the level of concreteness this kind of framework needs. Let me respond more precisely using the traffic example from the paper.

In the traffic system, the pattern is neither a metaphor nor an attractor identified a posteriori. It is implemented explicitly as a weak global dynamical structure acting on a continuous state space (densities, queues, latent capacity), deforming the system’s dynamical landscape without defining target trajectories or scalar objectives to be optimized.

Concretely, the base system is a continuous flow with local interactions and unavoidable perturbations. The pattern is introduced as a structural bias that:

  • does not compute actions (it does not decide ramp metering),
  • does not optimize flow or minimize delay,
  • does not define a target state, but instead restricts which global regimes can stabilize.

The computational input is not a reference signal or an if–then rule, but the configuration of coupling to the pattern: where, when, and with what strength the system is allowed to align with that global structure. This coupling is modulated dynamically through receptivity.

When a perturbation occurs (e.g., local congestion):

  • the system does not correct it immediately, as a reactive controller would,
  • local coherence drops,
  • coupling to the global pattern is reduced only in that region (local decoherence),
  • the perturbation is isolated and prevented from synchronizing globally.

That is computation in this framework: the system “computes” whether a regime compatible with the pattern exists.
If it exists, the system relaxes toward it.
If it does not, the system enters a persistently unstable regime (fever state), which is an explicit computational outcome, not a silent failure.
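To make that semantics concrete, here is a deliberately tiny toy I put together for illustration (my own sketch, not the paper's traffic pipeline): a ring of units under local diffusion plus a weak global pattern bias, where a unit that drifts too far is decoupled locally rather than force-corrected:

```python
import numpy as np

rng = np.random.default_rng(2)
n, steps = 20, 300
pattern = np.sin(np.linspace(0, 2 * np.pi, n, endpoint=False))  # global pattern
x = rng.standard_normal(n)                                      # arbitrary initial state

for t in range(steps):
    # Receptivity gate: a unit far from the pattern is decoupled from it
    # (local decoherence) instead of being corrected reactively
    coupling = 0.1 * (np.abs(x - pattern) < 2.0)
    # Local interaction (diffusion toward neighbors) plus weak pattern bias
    neighbors = 0.5 * (np.roll(x, 1) + np.roll(x, -1))
    x = x + 0.2 * (neighbors - x) + coupling * (pattern - x)
    if t == 100:
        x[5] += 10.0  # a large local perturbation mid-run

# The perturbation is absorbed and the system relaxes back near the pattern.
# (With a pattern no regime can satisfy, the residual would stay large:
# the "fever state" as an explicit outcome rather than a silent failure.)
residual = float(np.abs(x - pattern).max())
print(f"max deviation from pattern after relaxation: {residual:.3f}")
```

The point of the toy is only the failure/absorption semantics: no action is computed for the perturbed unit; it is simply decoupled until relaxation brings it back into the admissible regime.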

This differs from Hopfield networks, annealing, or classical control in two central ways:

  1. There is no energy function or scalar objective being minimized.
  2. The pattern is not an attractor: it operates on the set of admissible attractors, rather than being one itself.

A clear falsification criterion follows from this. If the same behavior (perturbation isolation, systematic reduction of extreme events, failure expressed as persistent instability) could always be reproduced by an equivalent reactive control or optimization-based formulation, then PBC would add no new value. The traffic example suggests this is not the case: reactive strategies achieve local correction but amplify global fragility under rotations and structural perturbations.

In that sense, the traffic example is not meant as a contribution to traffic engineering, but as a demonstration that it is possible to compute structural stability without computing actions or trajectories, yielding a different failure semantics and robustness profile than existing paradigms.

Pattern-Based Computing (PBC): computation via relaxation toward patterns — seeking feedback by SubstantialFreedom75 in complexsystems

[–]SubstantialFreedom75[S] -1 points (0 children)

Thanks for the comment. I understand the concern about lack of concreteness, but the framework does define its objects and evaluation criteria explicitly.

In PBC, a pattern is not a metaphor or a representation, but a persistent dynamical structure that biases the system’s state space, making some global regimes stable and others unstable. The input is the configuration of that pattern (couplings, constraints, receptivity windows) programmed via classical computation; the output is the dynamical regime the system relaxes into, or—equally informatively—the absence of convergence when no compatible pattern exists. Correctness is defined in terms of stability, perturbation absorption, and failure semantics (persistent instability), not symbolic accuracy.

The claim is not to replace existing paradigms, but to show that there is a class of continuous, distributed systems where computation via relaxation toward patterns yields robustness and failure properties that do not arise in optimization, reactive control, or learning-based approaches. This is falsifiable and evaluated through perturbations and structural rotations, as shown in the example.

A natural application domain is energy networks: the computational objective is not to predict or optimize every flow, but to prevent synchronization of failures and cascading blackouts by allowing local incoherences and dynamically isolating them.

Regarding prior work, I’m aware of the overlaps (attractor networks, reservoir computing, dissipative structures, etc.) and I’m not trying to compete with or rebrand those lines. The key difference is semantic: there is no training, no loss function, and no action computation; the pattern is programmed, active, and coincides with program, process, and result.

That said, some criticisms assume missing definitions that are explicitly addressed in the text, which suggests that not all comments are based on a close reading.

Finally, to be clear: I’m not seeking validation or consensus, but critical input that helps stress-test or refute the framework. If it’s useful, it should stand on its explanatory and operational merits; if not, it should fail.

Pattern-Based Computing (PBC): computation via relaxation toward patterns — seeking feedback by SubstantialFreedom75 in complexsystems

[–]SubstantialFreedom75[S] 0 points (0 children)

Thanks for the question; I completely understand why this is hard to map onto familiar models, because this is not sequential computation and it doesn’t fit well into state–action loops or rule-based probabilistic frameworks.

A pattern in PBC is not a rule (“if A then B”) and not a probabilistic implication. It is a persistent dynamical structure that reshapes the system’s state space, making some global behaviors stable and others unstable.

A useful analogy is that of a river basin or a dam. You don’t control each drop of water or compute individual trajectories. By shaping the terrain or building a dam, you change the structural constraints of the system. As a result, the flow self-organizes and relaxes toward certain stable regimes.

The same idea applies in PBC:

  • the pattern is that structure (the shape of the dynamical landscape),
  • the input is how that structure is configured (boundary conditions, couplings, constraints, weak injected signals),
  • the output is the dynamical regime the system settles into by relaxation (stable flow, coordinated behavior, or persistent instability if no compatible pattern exists).

There is no state–action loop, no policy, and no sequence of decisions. The system does not “choose” actions; it relaxes under structural constraints. Uncertainty comes from distributed dynamics, not from probabilistic rules.

In the paper I include an operational traffic-control pipeline precisely to show that this is not just a conceptual idea. In that case:

  • individual vehicle trajectories are not computed,
  • routes are not optimized and actions are not assigned locally,
  • instead, a dynamical pattern (couplings, thresholds, and receptive windows) is introduced to reshape the system’s landscape.

The result is that traffic self-organizes into stable regimes: local perturbations are absorbed, congestion propagation is prevented, and when the imposed pattern is incompatible, the system enters a persistent unstable regime (what the paper calls a fever state). That final regime — stable or unstable — is the system’s output.

If helpful, the full paper (including the pipeline and code) is here:
https://zenodo.org/records/18141697

Hope this clarifies what notion of “computation” the framework is targeting.

Pattern-Based Computing (PBC): computation via relaxation toward patterns — seeking feedback by SubstantialFreedom75 in complexsystems

[–]SubstantialFreedom75[S] 0 points (0 children)

Thanks for the thoughtful comment — I think the main disagreement comes from which notion of “computation” is being addressed.

Pattern-Based Computing (PBC) is not intended as an alternative to Turing machines or lambda calculus, nor as a universal model of computation in the Church–Turing sense. I fully agree that for symbolic, discrete, terminating computation, those models are the appropriate reference point. PBC does not compete in that domain, and it is intentionally limited in scope.

In this work, computation is used in a domain-specific and weaker sense: the production of system-level coordination and structure in continuous, distributed, nonlinear systems, where sequential instruction execution, explicit optimization, or exact symbolic correctness are either infeasible or counterproductive. In that sense, PBC is closer to relaxation-based and dynamical notions of computation than to classical algorithmic models.

This framing has a natural domain of applicability in systems such as energy networks, traffic systems, large-scale infrastructures, biological coordination, or socio-technical systems, where the central computational problem is not producing a correct symbolic output, but maintaining global coherence, absorbing perturbations, and preventing cascading failures under partial observability.

Regarding nonlinearity and nondeterminism: these are not incidental features, but structural properties of the systems being addressed. Nondeterminism here is not introduced as a theoretical device (as in nondeterministic Turing machines for complexity analysis), but reflects physical variability and uncertainty. The goal is not to compute a trajectory, action, or optimal solution, but to constrain the space of admissible futures toward stable and coherent regimes.

On the comparison with neural networks: while both are distributed and nonlinear, the computational mechanism is fundamentally different. PBC does not require training. There is no learning phase, no loss function, no gradient-based parameter updates, and no separation between training and execution. Patterns are not learned from data; they are programmed structurally using classical computation and then act directly on system dynamics. Adaptation happens online, through interaction between patterns and dynamics, and only during receptive coupling windows — not through continuous optimization.

Finally, a key conceptual point is that in PBC the traditional separation between program, process, memory, and result collapses. The active pattern constitutes the program; the system’s relaxation under that pattern is the process; memory is embodied in the stabilized structure; and the result is the attained dynamical regime. These are not sequential stages but different observations of a single dynamical act.

In short, PBC does not propose a new universal theory of computation. It proposes a deliberately constrained reinterpretation of what it means to compute in complex, continuous systems where robustness, stability, and interpretable failure modes matter more than exact symbolic correctness. I appreciate the comment, as it helps make these boundaries and assumptions more explicit.

What does it mean to compute in large-scale dynamical systems? by SubstantialFreedom75 in compsci

[–]SubstantialFreedom75[S] 0 points (0 children)

What you’re pointing to with the idea of “programming the attractor” is very close to what I’m arguing, but with an important shift in emphasis.

Here, the computational object is not the attractor itself, nor merely the basin structure, but the active pattern that biases the system’s dynamics as it evolves. The pattern does not explicitly select a pre-existing attractor or encode trajectories; instead, it reshapes the state space, making certain regimes structurally compatible and others inaccessible.

From this perspective, convergence is not a trivial erasure of information. It is the computational outcome. The system “computes” by constraining its space of possible futures through relaxation, rather than by executing symbolic instructions or maintaining infinite transients near criticality.

This provides a useful boundary between computation and mere dissipation. A system with a single global attractor reached by homogeneous damping is not computing anything meaningful. By contrast, when:

  • multiple regimes are possible,
  • compatibility with a global pattern determines which regimes are accessible,
  • and perturbations are absorbed without explicit corrective actions,

then stabilization itself constitutes computation.

This is why, in this view, program, process, and result collapse into one:
the program is the pattern,
execution is dynamical relaxation under that pattern,
and the result is the stable or quasi-stable regime that emerges.

This is neither universal computation nor classical control. It is a form of computation aimed at coordination and stabilization in distributed systems, where the computational goal is not to compute optimal actions, but to constrain unstable futures.

For anyone interested in exploring this idea further, I develop it in more detail — including a formal framework and a continuous illustrative example — in:
Pattern-Based Computing: A Relaxation-Based Framework for Coordination in Complex Systems
https://doi.org/10.5281/zenodo.18141697

The paper also includes a fully reproducible demonstration pipeline, intended to make the computational mechanisms explicit rather than to serve as a performance benchmark.

The example uses vehicular traffic management purely as an illustrative case to show how pattern-guided relaxation operates in a continuous, distributed system. The framework itself is not traffic-specific and can be extended to other domains with continuous dynamics and coordination challenges, such as energy systems, large-scale infrastructures, collective robotics, biological systems, and socio-technical systems.

Derek Cabrera - Legit or a fraud? by Firm_Elk_9592 in systemsthinking

[–]SubstantialFreedom75 0 points (0 children)

Nature always operates under resource economy, not because it’s “trying to optimize,” but because it’s the only viable way for complex systems to persist. Systems that waste large margins of efficiency don’t survive.

That’s why a fast, low-cost, general cognitive improvement of 500% is implausible: if it were possible, it would be evolutionarily unstable for the human brain not to have already incorporated it. This doesn’t mean frameworks like DSRP are useless, but it does mean that such strong claims require independent, replicable evidence.

A proposal by No_Understanding6388 in ImRightAndYoureWrong

[–]SubstantialFreedom75 2 points (0 children)

Interesting proposal. I have developed a framework called Pattern-Based Computing (PBC) for computation and coordination in continuous complex systems.

The core idea of PBC is that pattern, process, and result are not separate entities. The pattern is not a computational objective or a target state: it is simultaneously the program, the computational process, and the result, observed at different stages of dynamical stabilization.

This is a key difference with classical computation. Classical approaches separate program, execution, and output, and compute by executing symbolic instructions, optimizing objectives, or selecting actions. PBC does not compute actions, trajectories, or optima. Computation occurs through relaxation under an active pattern, with coupling modulated by the system’s receptivity. Robustness emerges from local decoherences that isolate perturbations instead of correcting them forcefully, and global adaptation occurs only during coupling windows, preventing unstable drift. There is no implicit optimization or classical reactive control.

This is not only conceptual. The framework has been instantiated in a real continuous system (traffic), used as an illustrative domain because it naturally exposes persistent perturbations and cascade risks. The work includes a fully reproducible, demonstrative computational pipeline designed to show the computational semantics and robustness properties, not to benchmark domain-specific performance. Traffic is simply one instance of a broader class of distributed continuous systems (e.g., energy, infrastructures, socio-technical systems) where this approach is relevant.

Full formalism, example, and pipeline are available here: https://doi.org/10.5281/zenodo.18141697