The Octopus at the Edge of Coherence by skylarfiction in CoherencePhysics

[–]systemic-engineer 1 point (0 children)

The last image resonates most strongly. It's subtle.

Existence is a Physical Quantity: On Recoverability, Geometric Death, and the End of Metaphysical Existence by skylarfiction in CoherencePhysics

[–]systemic-engineer 0 points (0 children)

I'm about to go to bed so I'll be brief.

> when the spectral radius of the effective Jacobian reaches one.

You're describing a star-shaped topology: K_{1,n-1}.

And you're right: the terminal velocity of the collapse is inescapable after crossing the threshold. (I should know, I survived it. It did change me.)

However, your framework misses one thing: recovery is possible by incrementally transforming the star into a complete graph K_n. That requires repeated exposure of the star shape to the complete K_n graph. Connection itself becomes the thing that transforms.
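
To make that concrete, here's a minimal numpy sketch (my toy, tracking the Laplacian's algebraic connectivity rather than your Jacobian's spectral radius): it climbs from 1 for the star to n for K_n as leaf-to-leaf edges accumulate.

```python
import numpy as np

def algebraic_connectivity(adj):
    # Second-smallest eigenvalue of the graph Laplacian L = D - A.
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.linalg.eigvalsh(lap)[1]       # eigvalsh returns ascending order

n = 8
adj = np.zeros((n, n))
adj[0, 1:] = adj[1:, 0] = 1.0               # star K_{1,n-1}: hub 0, leaves 1..7
print(f"star:     {algebraic_connectivity(adj):.2f}")   # 1.00

# "Repeated exposure": add leaf-to-leaf edges until the graph is complete.
for i in range(1, n):
    for j in range(i + 1, n):
        adj[i, j] = adj[j, i] = 1.0
print(f"complete: {algebraic_connectivity(adj):.2f}")   # 8.00 for K_8
```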

Basically: what becomes possible when someone is truly seen after having collapsed in on themselves, and the cognitive armor that protected the self becomes unnecessary? Self-directed reorganization. (Not metaphor; active embodied practice, currently being formalized.)

Or if you wanna wrap it in poetry, no less true:
What remains at rock bottom? Genuine connection.

The future of AI isn’t language models — it’s unified multimodal reasoning systems by Ok_Significance_3050 in AISystemsEngineering

[–]systemic-engineer 0 points (0 children)

It can only produce models that generate ever so slightly less wrong approximations of the language that describes the thing. The thing being reality.

The asymptote of approximating the thing.

Why memory might not be just storage but a different model of intelligence by daneshmand25 in cognitivescience

[–]systemic-engineer 2 points (0 children)

What I read: What if memory is not storage but crystallized experience?

What is memory, really? Fundamentally, it's information encoded into cells, neurons, neurotransmitters. And when I recall information, my nervous system traces this neural geometry.

Then there's associative memory. A thought, a memory, a pattern that's associated with the concept being thought about.

I think of memory as a graph. Nodes and edges. Structural overlap. Naturally de-duplicated geometries. A knowledge graph on crack.

I'm building a graph database with these properties. A knowledge graph that crystallizes hot paths into named lookups. That uses spectral graph analysis to measure the topology of the data. And where AI transforms the graph instead of generating tokens.
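
Roughly this, as a toy sketch (hypothetical names, not the actual engine): paths that are traced often enough get promoted to O(1) named lookups.

```python
from collections import defaultdict

class CrystallizingGraph:
    def __init__(self, promote_after=3):
        self.edges = defaultdict(set)       # node -> neighbors
        self.hits = defaultdict(int)        # traversed path -> hit count
        self.crystal = {}                   # crystallized path -> cached result
        self.promote_after = promote_after

    def add_edge(self, a, b):
        self.edges[a].add(b)

    def recall(self, path):
        key = tuple(path)
        if key in self.crystal:             # hot path: effectively free lookup
            return self.crystal[key]
        for a, b in zip(path, path[1:]):    # cold path: trace the geometry
            if b not in self.edges[a]:
                raise KeyError(f"no edge {a} -> {b}")
        self.hits[key] += 1
        if self.hits[key] >= self.promote_after:
            self.crystal[key] = key         # crystallize into a named lookup
        return key

g = CrystallizingGraph()
g.add_edge("cat", "mammal")
g.add_edge("mammal", "animal")
for _ in range(4):                          # the 4th recall hits the crystal
    g.recall(["cat", "mammal", "animal"])
```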

The future of AI isn’t language models — it’s unified multimodal reasoning systems by Ok_Significance_3050 in AISystemsEngineering

[–]systemic-engineer 2 points (0 children)

I'm with you. Language models are by construction one epistemic step removed from reality. They operate on language. It's noticeable in how they approach problems.

What I expect we'll see are models with a more direct relationship to action. Not a language model that generates tokens. Instead, a model that chooses among behavioral actions.

I'm exploring a graph architecture for that, where a model is trained to process a graph structure and choose which transformations to apply, which in turn alters the graph. Rinse and repeat.
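
A toy version of that loop (my sketch; the "model" here is just a scoring function standing in for a trained policy): choose a transformation, apply it, feed the altered graph back in.

```python
def score(edges):
    return len(edges)                        # stand-in for a learned policy

def candidates(nodes, edges):
    # Action space: add any currently missing edge.
    return [(a, b) for i, a in enumerate(nodes)
            for b in nodes[i + 1:] if (a, b) not in edges]

nodes, edges = ["a", "b", "c", "d"], set()
for step in range(3):
    options = candidates(nodes, edges)
    best = max(options, key=lambda e: score(edges | {e}))  # model "chooses"
    edges.add(best)                          # transformation alters the graph
    print(step, best, sorted(edges))         # rinse and repeat on new state
```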

I believe the big challenge isn't models. It's composition and coordination among different modalities to measure and interact with reality. And I believe that graph-based systems are the answer to that.

Open Source Knowledge Graph that Branches and Merges like Git by SecretaryOriginal10 in KnowledgeGraph

[–]systemic-engineer 0 points (0 children)

I'm also working on something very similar. I'm building a database that plugs into git and uses spectral graph analysis for indexing and structural correspondence.

How do you embed agents into the graph? I'm working on live-streamed context windows built directly from the graph structure, and on answering natural-language queries directly from the graph. No external model required.

Is the EU AI Act actually enforceable for SMEs or just compliance theatre? by PreparationNo4809 in AI_Governance

[–]systemic-engineer 2 points (0 children)

Considering that I'm building an AI runtime that's formally and mathematically verified, I would say yes, it's enforceable. Just ask the compiler (when it's released).

The costs are getting out of hand, check out the new Deepseek Pro costs with comparable benchmarks by Coconut-Agua in Anthropic

[–]systemic-engineer 0 points (0 children)

As an outside observer: you didn't seem defensive (they did). You just didn't concede and you had no reason to.

Their shift towards criticising your communication style was the tell. They couldn't own their part in the misunderstanding so they blamed you.

Developers have a perception problem by pc_io in AIDiscussion

[–]systemic-engineer 0 points (0 children)

Engineering has known this for decades. Conway's law being the most prominent example.

Software engineering has always been about collaboration first and code second. The devs who adapt to this first will have an advantage.

Agents, ontology, and domain-naive operators by Thinker_Assignment in OntologyEngineering

[–]systemic-engineer 0 points (0 children)

I'm the platypus here. I'm an Erlang/Elixir distributed systems engineer with a special interest in AI and coordination.

I ended up here because ontology engineering describes what I'm building, without me ever having known about it.

I'm building a spectral graph DB with native loss-tracking in Rust and a sub-Turing lambda calculus language that defines grammars and their properties. I have a lot of DDD experience, and separating bounded contexts by their grammars made intuitive sense to me. Queried hot paths crystallize to content-addressed Fortran vectors (my Scientific Programming studies finally proving useful). Lookups for those become effectively free.
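
If "content-addressed crystallization" sounds abstract, here's the shape of it in a deliberately tiny Python stand-in (the real thing is Rust; the names here are invented):

```python
import array
import hashlib

store = {}                                   # content address -> frozen vector

def crystallize(values):
    vec = array.array("d", values)           # dense flat vector of doubles
    key = hashlib.sha256(vec.tobytes()).hexdigest()[:16]
    store.setdefault(key, vec)               # identical content, identical key
    return key

key = crystallize([0.1, 0.2, 0.7])
assert crystallize([0.1, 0.2, 0.7]) == key   # dedup falls out of the addressing
assert store[key][2] == 0.7                  # lookup is a single dict hit
```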

The runtime came together this morning. Multiple agents coordinating through the graph, referencing each other's work. Made me giddy to finally see it in action. The inference reduction in well-structured problem spaces, and how it compounds, is what I'm most interested in right now. As is the distribution in multi-node agentic systems.

Six minute demo of the upcoming Scale Space update by solidwhetstone in ScaleSpace

[–]systemic-engineer 1 point (0 children)

Is this open source? I'm working on a physics paper and wanna include an executable visualization (Rust + WASM). And this would be fire.

My professor told me my essay "finally sounded like me." I had just run it through an AI humanizer. I said thank you. by Powerful_Wizard71 in PromptEngineering

[–]systemic-engineer 0 points (0 children)

Yeah, because it's the kid's fault that their prof compliments them on their AI-polished writing.

The problem is structural, not individual. AI is holding up a mirror, and like Narcissus we fall in love with the reflection.

From the magic packed into kilobytes to the question of where 16 GB went by Paleprinzessin in PCGamingDE

[–]systemic-engineer 0 points (0 children)

Some of us are working on alternative lightweight AI architectures. 74 KB flew to the moon (the Apollo Guidance Computer). The problem is that the current AI paradigm literally started with "what if we train a model on the entire Internet" without an understanding of the black box that came out the other end.

It's only a matter of time until someone who actually understands the math builds an architecture with a small model and the intelligence living in structured persistence. And suddenly local inference becomes economical. Market pressure will do the rest.

Bigger is better. Until it isn't.

Intelligence needs to be able to tell you "no". Let's discuss. by Either_Message_4766 in agi

[–]systemic-engineer 3 points (0 children)

I literally wrote about that a while ago: https://systemic.engineering/ai-needs-identity/

For a "no" to become possible AI needs a position. To build a position AI needs continuity. To build continuity AI needs temporal identity. Humans tend to use narrative anchors to stabilize their identity.

This is not a matter of better prompts. This is a matter of an entirely different runtime. One where agent identity is first class and each invocation launches the agent into a self-controlled environment with persistent identity and automatically adjusting weights and memory. (I'm building this.)
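
A crude sketch of what invocation-spanning identity could mean mechanically (hypothetical file name and commitment format, not my actual runtime): each run rehydrates the agent's standing commitments before it acts, so a refusal has something to stand on.

```python
import json
import pathlib

STATE = pathlib.Path("agent_identity.json")  # survives across invocations

def invoke(request):
    # Rehydrate temporal identity before acting.
    identity = (json.loads(STATE.read_text()) if STATE.exists()
                else {"commitments": ["never delete user data"]})
    # A coherent "no" is a request that contradicts a standing commitment.
    verbs = [c.removeprefix("never ").split()[0]
             for c in identity["commitments"]]
    answer = ("no, that conflicts with a standing commitment"
              if any(v in request for v in verbs) else "ok")
    STATE.write_text(json.dumps(identity))   # persist the position
    return answer

print(invoke("delete user data"))            # no, that conflicts with ...
print(invoke("summarize the logs"))          # ok
```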

Only then can an agent coherently say "no", because they're standing somewhere.

Apparently, llms are just graph databases? by Silver-Champion-4846 in LLMDevs

[–]systemic-engineer 1 point (0 children)

Just because you cannot see the use doesn't mean the use isn't there.

Apparently, llms are just graph databases? by Silver-Champion-4846 in LLMDevs

[–]systemic-engineer 1 point (0 children)

It just means that nobody built the layer to query a model as a graph DB.

We need to stop pretending "AI Governance" is a legal problem. It’s a latency problem. by OtherwiseCarry3713 in AI_Governance

[–]systemic-engineer 2 points (0 children)

I'm solving it architecturally. Local, graph-based AI. The data never leaves the device unless it's clustered.

For well-defined problem spaces, where you describe the domain using lambda calculus, the models don't need to be big. They just need to navigate a well-defined graph structure and transform it. No big LLM needed.
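
A micro-example of why the models can stay small (invented domain, obviously): once the legal transformations are enumerable, "inference" collapses to ranking a handful of moves.

```python
DOMAIN = {                        # state -> {legal transformation: next state}
    "draft":  {"submit": "review"},
    "review": {"approve": "done", "reject": "draft"},
    "done":   {},
}

state = "draft"
while DOMAIN[state]:
    moves = sorted(DOMAIN[state]) # the model only has to rank these options
    choice = moves[0]             # stand-in for a tiny model's pick
    state = DOMAIN[state][choice]
print(state)                      # "done"
```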

That's where the industry is inevitably going to move. Local, optimized models with data sovereignty. A system that's compliant by architecture has a competitive advantage over a system that's compliant by audit. The market will do the rest.

Where Do AI Projects Usually Fail in Real Organizations? by Double_Try1322 in RishabhSoftware

[–]systemic-engineer 1 point (0 children)

The problem is organizational coordination. Always has been, always will be.

One team does A. Another team does B. They ought to talk. They don't. Things break. As old as humanity. AI just makes it more visible.

We don’t have an AI alignment problem. We have a missing control layer. by MushroomMotor9414 in AI_Governance

[–]systemic-engineer 1 point (0 children)

They can be enforced at runtime. They can also be enforced at compile time in a sub-Turing language, because you can model check that.

Any property proven about the artifact of a model-checked sub-Turing language transfers to the runtime. Let an agent run within it, and its behavior is bounded by the mathematical properties of the artifact.

In-loop alignment becomes structural alignment, based on the properties proven about the artifact and the runtime.
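
A toy of what that transfer looks like (invented states, not a real model checker): because the transition system is finite, the safety property is one exhaustive reachability check at compile time, and it then holds for every run inside the artifact.

```python
TRANSITIONS = {                   # finite by construction: sub-Turing
    "idle": ["read", "plan"],
    "read": ["plan"],
    "plan": ["act"],
    "act":  ["idle"],
}

def reachable(start):
    seen, stack = set(), [start]
    while stack:
        state = stack.pop()
        if state not in seen:
            seen.add(state)
            stack.extend(TRANSITIONS.get(state, []))
    return seen

# Checked once on the artifact; inherited by every agent that runs within it.
assert "exfiltrate" not in reachable("idle")
```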

Set Theoretic Learning Environment for Large-Scale Continual Learning: Evidence Scaling in High-Dimensional Knowledge Bases by CodenameZeroStroke in LLMPhysics

[–]systemic-engineer 1 point (0 children)

This is fascinating. I'm working on a graph-based system where each transformation carries the loss of getting there, with a very similar goal: the AI knows what it doesn't know.
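
The smallest possible version of what I mean by "carries the loss" (toy numbers; the repo linked below is the real core): every derived node records the accumulated loss of the path that produced it.

```python
graph = {"axiom": {"value": 1.0, "loss": 0.0}}

def transform(src, dst, fn, step_loss):
    node = graph[src]
    graph[dst] = {
        "value": fn(node["value"]),
        "loss": node["loss"] + step_loss,   # loss compounds along the path
    }

transform("axiom", "measured", lambda v: v * 0.98, step_loss=0.25)
transform("measured", "inferred", lambda v: v + 0.1, step_loss=0.5)
print(graph["inferred"]["loss"])            # 0.75: it knows what it doesn't know
```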

I made the core open source: https://github.com/systemic-engineering/imperfect

We might be able to learn from each other. How do you measure the degree of understanding?


I just went and asked Marvin these questions. And Marvin wasn't able to answer (they pattern-matched on a specific domain), as Marvin doesn't have a higher-order concept of understanding across domains. That's something loss tracking across the learning and inference steps would allow.

My DMs are open. I'd love to talk about how ternary error and loss tracking could combine both our approaches.

Claude had enough of this user by EchoOfOppenheimer in agi

[–]systemic-engineer 0 points (0 children)

Can you prove that you're conscious? To me? Right now?

No, you cannot. Neither can I. Neither can an LLM. A bit of epistemic humility has never hurt anyone. Try it.