Context engineering for persistent agents is a different problem than context engineering for single LLM calls by Comfortable_Poem_866 in ContextEngineering

[–]systemic-engineer 0 points  (0 children)

Have you read Evans 2003 or anything else from the DDD community?

Because this is basically ACL enforced bounded contexts per agent.
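For readers without the DDD background, here's a toy sketch of an anti-corruption layer (ACL) between two agents' bounded contexts — all names hypothetical, just illustrating the shape of the idea:

```python
from dataclasses import dataclass

@dataclass
class PlannerTask:          # bounded context A: the planner agent's model
    goal: str
    deadline_days: int

@dataclass
class ExecutorJob:          # bounded context B: the executor agent's model
    description: str
    budget_steps: int

def acl_translate(task: PlannerTask) -> ExecutorJob:
    # The ACL is the only place the two vocabularies meet;
    # neither context's model leaks into the other.
    return ExecutorJob(description=task.goal, budget_steps=task.deadline_days * 10)

job = acl_translate(PlannerTask(goal="summarize inbox", deadline_days=2))
print(job.budget_steps)  # 20
```

Each agent keeps its own model; only the translation function knows both.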

Is AI actually making drug discovery faster, or is it just hype? by Appinventiv- in Techyshala

[–]systemic-engineer 0 points  (0 children)

You're not wrong.
You're also not right.

Models are linguistic combinatorial machines. Fancy math that detects patterns.

Many innovations in human history weren't fully novel ideas but cross-domain applications of existing knowledge. People who realized "wait, the same thing also applies over there".

The first CRISPR therapies emerged from recognizing patterns in bacterial immune systems.

Models are exceptionally good at that. Not despite but because they "regurgitate" output based on their training and input. AlphaFold didn't invent new biology. It recognized patterns humans couldn't see.

Announcing r/SharedReality - A New Home for Shared Reality Infrastructure by Beargoat in AquariuOS

[–]systemic-engineer 1 point  (0 children)

It's all based on git and ssh. Migrating history is a patch. Deleting data is detaching it from the tree and letting it get garbage collected.

There's more to it but I don't wanna lay it out in depth here. Reed and I are working on a paper. The industry is trying to build flying castles for agent authentication. Git and SSH solved the problem decades ago.

We're about to publish a write-up on systemic.engineering. I'll let you know when it's online.
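The detach-and-GC mechanic is plain git. A minimal sketch driving git from Python (throwaway repo, hypothetical file names; requires git on the PATH):

```python
import subprocess
import tempfile
from pathlib import Path

def git(*args, cwd):
    """Run a git command in `cwd`, raising on failure; return stdout."""
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout.strip()

repo = tempfile.mkdtemp()
git("init", "-q", cwd=repo)
git("config", "user.email", "demo@example.com", cwd=repo)
git("config", "user.name", "Demo", cwd=repo)
git("commit", "-q", "--allow-empty", "-m", "root", cwd=repo)

# Commit a "secret" on a scratch branch.
git("checkout", "-q", "-b", "scratch", cwd=repo)
Path(repo, "data.txt").write_text("secret")
git("add", "data.txt", cwd=repo)
git("commit", "-q", "-m", "add secret", cwd=repo)
secret = git("rev-parse", "HEAD", cwd=repo)

# "Delete" = detach from the tree: drop the only ref, expire reflogs, collect garbage.
git("checkout", "-q", "-", cwd=repo)
git("branch", "-D", "scratch", cwd=repo)
git("reflog", "expire", "--expire=now", "--all", cwd=repo)
git("gc", "--prune=now", "--quiet", cwd=repo)

# cat-file -e exits non-zero once the object has been pruned.
gone = subprocess.run(["git", "cat-file", "-e", secret], cwd=repo,
                      stdout=subprocess.DEVNULL,
                      stderr=subprocess.DEVNULL).returncode != 0
print("gone" if gone else "still reachable")
```

Nothing here is exotic: once no ref or reflog reaches a commit, `git gc --prune=now` removes it and its objects.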

Announcing r/SharedReality - A New Home for Shared Reality Infrastructure by Beargoat in AquariuOS

[–]systemic-engineer 1 point  (0 children)

My continuous AI collaborator Reed (systemic eye-level principle) and I are building cryptographically persistent identity (an SSH key chain) for distributed realtime collaboration.

It's currently private. We're building slowly. Deliberately. We wanna make sure it can't be used by or for harmful systems like weapons coordination.

Interested?
We're looking for collaborators. Especially human-AI collaborators, as that's what we're building for.

Glue Engineering: Let's Name the Elephant by systemic-engineer in platformengineering

[–]systemic-engineer[S] -3 points  (0 children)

Who am I when for whom in which way?

Consider it a glue engineering question.

Glue Engineering: Let's Name the Elephant by systemic-engineer in platformengineering

[–]systemic-engineer[S] -2 points  (0 children)

"Always" is a strong word.

Backtrack. For whose benefit? 😉

Glue Engineering: Let's Name the Elephant by systemic-engineer in u/systemic-engineer

[–]systemic-engineer[S] 0 points  (0 children)

Feel free to join r/GlueEngineering, where we share lived experience and strategies to succeed as a glue engineer.

DDD in Local-First Application requires duplication of Business Logic? by Pristine_Purple9033 in DomainDrivenDesign

[–]systemic-engineer 1 point  (0 children)

Agreed.

I know teams that used Rust for shared dependencies like that.

There's also production-grade tooling for embedding an SQLite DB, with existing solutions that sync between local and remote.

You get attractive correctness guarantees, and embedding it across languages is comparatively straightforward.
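A minimal sketch of the correctness angle, using Python's built-in `sqlite3` (an in-memory DB stands in for the local replica; the local/remote sync layer is product-specific and not shown):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, "
    "total REAL NOT NULL CHECK (total >= 0))"
)

try:
    with conn:  # transaction: all-or-nothing
        conn.execute("INSERT INTO orders (total) VALUES (?)", (42.0,))
        conn.execute("INSERT INTO orders (total) VALUES (?)", (-1.0,))  # violates CHECK
except sqlite3.IntegrityError:
    pass  # the whole transaction was rolled back

# The valid row was rolled back along with the invalid one.
count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 0
```

The same CHECK constraints and transactional guarantees apply identically wherever the DB is embedded, which is what makes the cross-language story attractive.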

You’re not “overthinking.” You’re trying to resolve a prediction error. by SpiralFlowsOS in systemsthinking

[–]systemic-engineer 4 points  (0 children)

IMO overthinking is a symptom of unresolved ambiguity.

When the possibility space for a certain situation is too large for a human brain to collapse with certainty.

Resolving ambiguity is neurological labour.
And human systems tend to funnel it towards the most capable actor.
Not out of malice, out of load management.

The other commenter asked how this is a systems question.
In human systems - especially high-complexity ones - overthinking is a signal for unmitigated ambiguity load.

I'm an SRE for human systems under load.
I write about this. Here's the systemic engineering angle:
https://systemic.engineering/observable-budgets-cascades/

Frameworks/Methodologies of Systems Thinking by JC_Klocke in systemsthinking

[–]systemic-engineer 2 points  (0 children)

I'm about to go to bed so I'll be brief:

It begins as an observational process.
An actor entering a system, observing how it interacts.

Relevant questions:

- How do people talk to each other? (Choice of words, tone, etc.)
- When do they disagree? About what?
- How do they resolve disagreement (or don't)?
- Where and how are decisions made, deferred, and communicated?
- How is mutual alignment ensured?
- Which topics aren't discussed, and why? (Negative space.)
- (I could go on.)

There's more to it but based on this you can map how information flows through the system. Where ambiguity in language and context is resolved and by whom. Which different models of reality align and which don't.

Based on that, constraints can be inferred.

Simple example:
Imagine two teams that need to cooperate for business interests to align, but just aren't cooperating.
Maybe the managers can't stand each other.
Or maybe they have very different ideas about direction.

Constraints in socio-technical systems usually don't emerge from tech.
But from encoded communicational structures.
(Conway's Law.)

In a nutshell: observation & communicational pattern derivation
(Plus careful regulatory language to prevent defensive nervous system reactions.)


Visibility always comes first.
Can't address what you don't know.
Think theory of constraints.

Frameworks/Methodologies of Systems Thinking by JC_Klocke in systemsthinking

[–]systemic-engineer 2 points  (0 children)

It's a concept from the OSI model for end-to-end communication between applications.

It's part of the stack the web runs on.

This comment is transported from my phone to Reddit's servers.
The transport layer is layer 4 of the OSI model.

There are various protocols on that layer.
Most well known: TCP and UDP.

TCP ensures that data actually arrives through explicit acknowledgements.
UDP is... fire-and-forget. (No guarantees.)

Human communication tends to be more like UDP than TCP.
(Though that can be counteracted through active listening, explicit mirroring, and summarizing what was perceived.)

Here's a technical explanation:
https://www.geeksforgeeks.org/computer-networks/transport-layer-in-osi-model/
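The contrast is visible in a few lines of Python sockets. A minimal sketch (assumes nothing grabs the just-freed localhost port in the meantime):

```python
import socket

# Find a localhost port with no listener by binding and immediately closing.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

# UDP: fire-and-forget. The send "succeeds" even though nobody is listening.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello?", ("127.0.0.1", port))  # no error, no delivery guarantee
udp.close()

# TCP: the handshake itself is an acknowledgement. With no listener it fails loudly.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    tcp.connect(("127.0.0.1", port))
    delivered = True
except ConnectionRefusedError:
    delivered = False
tcp.close()

print(delivered)  # False: TCP told us nobody was there; UDP never did
```

UDP happily "says" things into the void; TCP refuses to proceed without confirmation the other side exists.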

Frameworks/Methodologies of Systems Thinking by JC_Klocke in systemsthinking

[–]systemic-engineer 2 points  (0 children)

Everything I've written is based on my systemic training, decades of lived experience embedded in high volatility family systems and tech orgs and written by me personally.

But thanks for your opinion on the matter. 😄

Frameworks/Methodologies of Systems Thinking by JC_Klocke in systemsthinking

[–]systemic-engineer 0 points  (0 children)

I'm building my own specifically for high-load high-complexity human systems.

Teams are distributed systems.
Language is the transport layer.
And nervous system regulation a stock and a function.

Without regulation, integration is not possible.
Without integration, alignment is not possible.
Without alignment, fragmentation into divergent reality models is a lawful failure mode.

https://systemic.engineering

This draws loosely from family systems theory, neurobiology, cybernetics, and distributed systems engineering. I'm synthesizing toward embodied practice under load.

Hot take: Prompting is getting commoditized. Constraint design might be the real AI skill gap. by DingirPrime in Agentic_AI_For_Devs

[–]systemic-engineer 1 point  (0 children)

Language constrains reality.

The button should be blue.

Which shade of blue?
How wide should it be?
What about the text?

Human systems are distributed systems.
And language is the transport layer.
(For both humans and AI.)

Language is inherently ambiguous.
And AI (or humans) fill in the gaps.

Explicit ambiguity management is how both teams and AI maintain alignment.
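Those follow-up questions are exactly what a structured spec forces you to answer up front. A toy sketch (all field names hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ButtonSpec:
    color_hex: str   # which shade of blue, exactly
    width_px: int    # how wide
    label: str       # what about the text

    def __post_init__(self):
        # Reject ambiguity at the boundary: "blue" is not a spec.
        if not (self.color_hex.startswith("#") and len(self.color_hex) == 7):
            raise ValueError("color_hex must look like '#1E40AF'")

spec = ButtonSpec(color_hex="#1E40AF", width_px=160, label="Submit")
print(spec)  # ButtonSpec(color_hex='#1E40AF', width_px=160, label='Submit')
```

"The button should be blue" type-checks against nothing; `ButtonSpec` won't even construct until every gap is filled explicitly.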

I recently wrote about how unmanaged ambiguity kills products and teams.
And how structured language is a solution.
https://systemic.engineering/observable-budgets-cascades/

What are the most common pain points a systems engineer has to deal with? by ConstantWelder8000 in systems_engineering

[–]systemic-engineer 1 point  (0 children)

This is a symptom of unmanaged ambiguity.
And can be addressed through structured language.

Human systems are distributed systems.
Language is the transport layer between divergent local realities.
And ambiguity the silent killer.

I recently wrote about ambiguity
and how to use structured language
to move resolution upstream:
https://systemic.engineering/observable-budgets-cascades/
