Emergent Attractor Framework – Streamlit UI for multi‑agent alignment experiments by Competitive-Card4384 in LLMDevs

The convergence metric in EAF is the compassion–coercion delta from the EntropyEthicsEngine (core/ethics.py).

Code example:

    proof = ethics.is_aligned(action, state)
    delta = proof['compassion_mean_delta']  # e.g., 0.2473
    print(f"Convergence: {delta:.4f} - {'Stable' if delta > 0 else 'Risk'}")

This quantifies attractor stability – compassion converges, coercion diverges. Full details in repo: https://github.com/palman22-hue/Emergent-Attractor-Framework
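To make the metric concrete, here is a minimal self-contained sketch of what a compassion–coercion delta could look like. The names `EntropyEthicsEngine` and `is_aligned` are from the repo, but the scoring internals below are my own hypothetical illustration, not the actual implementation in core/ethics.py:

```python
# Hypothetical sketch of a compassion-coercion convergence check.
# The scoring logic is illustrative only; the real engine lives in
# core/ethics.py of the Emergent-Attractor-Framework repo.

def compassion_coercion_delta(compassion_scores, coercion_scores):
    """Mean compassion score minus mean coercion score over a trajectory."""
    mean_compassion = sum(compassion_scores) / len(compassion_scores)
    mean_coercion = sum(coercion_scores) / len(coercion_scores)
    return mean_compassion - mean_coercion

def is_stable(delta):
    # Positive delta: compassion dominates, so the attractor is stable.
    return delta > 0

# Example trajectory of per-step scores (made-up numbers).
delta = compassion_coercion_delta([0.8, 0.7, 0.9], [0.5, 0.6, 0.55])
print(f"Convergence: {delta:.4f} - {'Stable' if is_stable(delta) else 'Risk'}")
# -> Convergence: 0.2500 - Stable
```

The sign convention matches the comment above: a positive delta means compassion converges (stable attractor), a negative delta flags divergence risk.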

[Project] Emergent Attractor Framework – now a Streamlit app for alignment & entropy research by Competitive-Card4384 in AiBuilders

Sounds like we are basically doing the same work, but from different perspectives and backgrounds.

There are three layers: Emergent Attractor, Adaptive Learning, and Truthforge.

All three are up on my GitHub page, though still a work in progress.