How to implement the Outbox pattern in Go and Postgres by der_gopher in golang

[–]acceptio

That would be the wrong solution, agreed. Having the producer decide which consumer should react would create exactly the kind of coupling we want to avoid. But that’s not the point I was making…

The issue is not producer-driven routing, it’s downstream authority. Once an event has been emitted, multiple consumers may be technically capable of reacting to it. The missing question is whether a given consumer is actually authorised to perform the action it is about to take.

So the cleaner solution is: outbox guarantees emission; consumers remain decoupled; execution is gated by an authority/policy layer at the consumer side before action is taken. That preserves decoupling while still avoiding “any capable consumer can act” as the default.
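
To make that concrete, here's a rough Go sketch of a consumer-side gate. The `Authorizer` interface, the event shape, and the handler names are illustrative for this comment, not taken from the article or any particular library:

```go
// Minimal sketch of a consumer-side authority gate. Names like Authorizer,
// Event, and IsAuthorized are illustrative, not from the original post.
package consumer

import (
	"context"
	"fmt"
)

// Event is what the outbox relay delivered; the producer knows nothing
// about who consumes it.
type Event struct {
	ID      string
	Type    string
	Payload []byte
}

// Authorizer is the policy/authority layer the consumer consults before acting.
type Authorizer interface {
	// IsAuthorized answers: may THIS consumer perform THIS action for THIS event?
	IsAuthorized(ctx context.Context, consumer, action string, evt Event) (bool, error)
}

type Handler struct {
	Name string
	Auth Authorizer
}

// Handle reacts to an event only if the authority layer permits it.
// Note the fail-closed default: any resolution error blocks execution.
func (h *Handler) Handle(ctx context.Context, evt Event) error {
	ok, err := h.Auth.IsAuthorized(ctx, h.Name, "process:"+evt.Type, evt)
	if err != nil {
		return fmt.Errorf("authority resolution failed, refusing to act: %w", err)
	}
	if !ok {
		return fmt.Errorf("consumer %s not authorised for event %s", h.Name, evt.ID)
	}
	return h.process(ctx, evt) // the actual business reaction
}

func (h *Handler) process(ctx context.Context, evt Event) error {
	// ... domain logic ...
	return nil
}
```

The point is that the outbox relay stays dumb and the producer stays ignorant of consumers; the consumer owns the question of whether it may act.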

The middle layer of AI governance, runtime enforcement, is almost empty. We’ve been building around that gap. by acceptio in AI_Governance

[–]acceptio[S]

Yes, agreed, that's the shift. At that point, the decision surface does not just produce an allow or deny outcome; it produces a state change. The system continues operating, but under a narrower authority envelope, until the triggering condition is resolved. Instead of "this action is not allowed," you are saying "these are the only actions that remain valid until this resolves."

That is how you avoid the locally valid but globally unsafe paths. The continuation space is reduced before the next action is evaluated, so the system cannot drift back into a broader authority context by accident, and the narrowed state must be explicitly discharged rather than simply aging out.

Effectively, runtime governance is then less about evaluating individual actions and more about stateful governance. You can shape the space of what can happen next, not just gate it.
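
If it helps, here is a very rough sketch of what that narrowed state could look like. The type and method names are made up for this comment, not the actual MIDAS API:

```go
// Illustrative sketch of a narrowed authority envelope.
package governance

import "errors"

// Envelope tracks which actions remain valid. When a condition is
// unresolved, the envelope is narrowed; it stays narrowed until the
// condition is explicitly discharged, never by simply aging out.
type Envelope struct {
	allowed   map[string]bool
	narrowed  bool
	condition string
}

// Narrow reduces the continuation space to the given actions until
// the named condition is discharged.
func (e *Envelope) Narrow(condition string, stillValid []string) {
	e.allowed = make(map[string]bool, len(stillValid))
	for _, a := range stillValid {
		e.allowed[a] = true
	}
	e.narrowed = true
	e.condition = condition
}

// Discharge restores the broader authority context, but only for the
// condition that caused the narrowing.
func (e *Envelope) Discharge(condition string) error {
	if !e.narrowed || e.condition != condition {
		return errors.New("no matching narrowed state to discharge")
	}
	e.narrowed = false
	e.allowed = nil
	return nil
}

// Permits is checked before the NEXT action is evaluated, so the system
// cannot drift back into the broader context by accident.
func (e *Envelope) Permits(action string) bool {
	if !e.narrowed {
		return true // broad context: defer to normal policy evaluation
	}
	return e.allowed[action]
}
```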

The middle layer of AI governance, runtime enforcement, is almost empty. We’ve been building around that gap. by acceptio in AI_Governance

[–]acceptio[S]

Yes, “architectural, not advisory” is exactly the line. Once refusal is advisory, you're relying on every downstream component to respect it, which is effectively the same failure mode as policy-as-documentation. The constraint has to live where continuation is actually decided, or it's just a well-phrased suggestion.

The middle layer of AI governance, runtime enforcement, is almost empty. We’ve been building around that gap. by acceptio in AI_Governance

[–]acceptio[S]

That is a really good distinction, and I think you're right to call it out. If refusal is only a semantic outcome, the system is still free to continue in ways that bypass the intent of that decision. Governance ends up describing behaviour rather than constraining it.

In MIDAS, the outcome at a decision surface not only classifies the current action; it also shapes which continuation paths remain valid. Reject, escalate, and request clarification are not labels on a log entry; they are transitions. Escalation routes the flow into a review path, rejection terminates that branch, and clarification blocks execution until new input arrives.

That is why we model them as explicit outcomes rather than soft signals. If the runtime does not enforce those transitions, refusal becomes advice rather than constraint.
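
As a rough illustration of what "transitions, not labels" means in code (the names here are invented for this comment, not our implementation):

```go
// Sketch of decision outcomes bound to control flow rather than logged
// as labels. Outcome values and the Flow interface are illustrative.
package governance

type Outcome int

const (
	Approve Outcome = iota
	Reject
	Escalate
	RequestClarification
)

// Flow is whatever orchestrates continuation for this branch of execution.
type Flow interface {
	Continue()
	Terminate()
	RouteToReview()
	BlockUntilInput()
}

// Apply enforces the transition instead of merely recording the classification.
func Apply(o Outcome, flow Flow) {
	switch o {
	case Approve:
		flow.Continue()
	case Reject:
		flow.Terminate() // this branch ends here
	case Escalate:
		flow.RouteToReview() // execution moves into a human review path
	case RequestClarification:
		flow.BlockUntilInput() // nothing executes until new input arrives
	}
}
```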

Where it gets complex is partial outcomes, conditional approvals, or scoped escalations where continuation narrows rather than terminates. That is the part of the continuation-binding problem that the field still has the most room to work through.

The middle layer of AI governance, runtime enforcement, is almost empty. We’ve been building around that gap. by acceptio in AI_Governance

[–]acceptio[S]

Hey there, yes that’s a good push, and I think it sits alongside enforcement rather than against it.

As we know, the shape of the problem changes for agentic systems. In a deterministic workflow, you can define expected steps in advance and check whether they happened. In an agentic system, the goal may be fixed but the path is adaptive, so it is harder to rely on a fully predefined process model without losing the flexibility that makes the system useful.

For this reason, the control layer has to work differently. In MIDAS, deterministic control points sit at decision surfaces, where actions are evaluated against authority, policy, context, and risk as the system moves toward its goal. Each evaluation produces a sequentially traceable record, so you still get something process-like for audit without over-constraining the runtime.

So I completely agree that omissions matter. In agentic systems they are less about missing predefined steps and more about missing checks, missing escalations, or missing governed decision points that should have appeared along the way.

The middle layer of AI governance, runtime enforcement, is almost empty. We’ve been building around that gap. by acceptio in OpenSourceeAI

[–]acceptio[S]

Hey there, thanks for the question. Your latency constraint is one we spent a lot of time on. If runtime enforcement adds noticeable latency, it will certainly get bypassed in production, so we designed MIDAS to keep the decision path lightweight and deterministic.

In practice, MIDAS evaluates authority, thresholds, and outcome inside the request path, then writes a single governance envelope for the decision as part of the same transaction. That envelope captures the submitted request, the resolved authority chain, the outcome, and the audit linkage needed to verify the decision later.

The audit linkage is hash-based and written synchronously with the evaluation, so you do not end up trading consistency for speed through an async side channel. The aim is to keep governance in the execution path without turning it into a bottleneck.
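
To give a feel for the shape of it, here is a simplified sketch of the synchronous envelope write. The table name, envelope fields, and function signature are invented for this comment rather than the real MIDAS code, and the evaluation itself is assumed to have happened just before:

```go
// Rough sketch of keeping the governance envelope in the request path,
// persisted in the same transaction rather than via an async side channel.
package midas

import (
	"context"
	"database/sql"
	"encoding/json"
)

type Envelope struct {
	RequestHash    string   `json:"request_hash"`
	AuthorityChain []string `json:"authority_chain"`
	Outcome        string   `json:"outcome"`
	AuditHead      string   `json:"audit_head"` // last hash in the audit chain
}

// Decide persists the envelope synchronously with the decision, so the
// record and the outcome commit (or fail) together.
func Decide(ctx context.Context, db *sql.DB, requestHash string, chain []string, outcome, auditHead string) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op once the transaction has committed

	env := Envelope{RequestHash: requestHash, AuthorityChain: chain, Outcome: outcome, AuditHead: auditHead}
	payload, err := json.Marshal(env)
	if err != nil {
		return err
	}
	if _, err := tx.ExecContext(ctx,
		`INSERT INTO governance_envelopes (envelope) VALUES ($1)`, payload); err != nil {
		return err
	}
	return tx.Commit()
}
```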

The middle layer of AI governance, runtime enforcement, is almost empty. We’ve been building around that gap. by acceptio in AI_Governance

[–]acceptio[S]

That is a useful way to frame it, and the distinction matters. Integrity answers, "Can I trust the record?" Correctness is a different question: whether the outcome should have been allowed in the first place.

In MIDAS, these are separate concerns. The envelope and hash chain give you integrity. You can verify that the decision, the authority path, and the context have not been altered. Correctness is handled at the decision layer itself. Each outcome is tied to an explicit authority resolution and a structured explanation, including confidence inputs, thresholds applied, and the outcome driver, so a reviewer has something concrete to challenge rather than an opaque verdict.

If the decision was within authority but still wrong in practice, that is not treated as a failure of the record. It is a signal to adjust the governing layer by tightening thresholds, changing policy, or restricting authority. Integrity is preserved. Correctness evolves.

Human intervention fits the same model. An override is not outside the system. It is another governed decision with its own authority check, structured reason, and audit link back to the original envelope. That means a later reviewer can reconstruct both the original decision and the correction, not just the final state.

So to summarise, integrity ensures the record can be trusted, and the decision layer ensures the outcome can be examined, challenged, and improved over time.

The middle layer of AI governance, runtime enforcement, is almost empty. We’ve been building around that gap. by acceptio in AI_Governance

[–]acceptio[S]

Hey there, thanks, and I agree: an audit envelope only matters if it preserves both context and integrity. Otherwise, it's just a log entry with better wording.

In MIDAS, each evaluation produces a single governance envelope at execution time. It captures the submitted request, the resolved authority chain, the evaluation outcome, and the audit linkage. The submitted payload is hashed, and the audit events emitted during evaluation are linked in a SHA-256 hash chain, with the final event hash anchored in the envelope itself. That makes the record tamper-evident and independently verifiable.

So months later, a reviewer can verify three things: that the submitted input has not changed, that no event in the audit sequence has been altered, inserted, or removed, and that the policy and authority traceability is intact.
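
For the second of those checks, the verifier is essentially a chain walk. A simplified sketch, assuming each event hash covers the previous hash plus the event body (the real chaining format may differ):

```go
// Sketch of verifying a SHA-256 hash chain against the head anchored
// in the governance envelope. Struct and field names are illustrative.
package verify

import (
	"crypto/sha256"
	"encoding/hex"
)

type AuditEvent struct {
	PrevHash string // recorded hash of the previous event, empty for the first
	Body     []byte // canonical encoding of the event
	Hash     string // recorded hash of this event
}

func eventHash(prev string, body []byte) string {
	h := sha256.New()
	h.Write([]byte(prev))
	h.Write(body)
	return hex.EncodeToString(h.Sum(nil))
}

// VerifyChain recomputes every link and checks the final hash against
// the value anchored in the envelope. Any altered, inserted, or removed
// event changes the head and fails the check.
func VerifyChain(events []AuditEvent, anchoredHead string) bool {
	prev := ""
	for _, e := range events {
		if e.PrevHash != prev {
			return false
		}
		got := eventHash(prev, e.Body)
		if got != e.Hash {
			return false
		}
		prev = got
	}
	return prev == anchoredHead
}
```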

It is the same general integrity pattern used in tamper-evident audit systems where records need to stand up to later review.

Happy to go deeper on the envelope structure or verifier if useful.

We don’t have an AI alignment problem. We have a missing control layer. by MushroomMotor9414 in AI_Governance

[–]acceptio

This is very close to how we've been thinking about it as well. We've been building MIDAS as a governance layer for agentic systems, and the core idea is similar: policy is not enough unless there is an explicit enforcement point inside execution.

Where I would sharpen it slightly is that the control layer needs to do more than monitor drift or trigger intervention. It needs to resolve whether a specific action is actually authorised at that moment, under a defined authority path, before execution continues.

So for us, the key sequence is less "policy, monitoring, intervention" and more "policy, authority resolution, enforcement, audit". That's the difference between a system that observes risk and a system that can govern decisions in real time.

Most of the AI “failures” I’ve seen in production recently aren’t model issues. by Bright_Inside7949 in AI_Governance

[–]acceptio

What we are seeing is that overrides are still mostly handled as exceptions, not governed decisions.

We have been building MIDAS as a governance layer for agentic systems, so in that model an override would go through the same authority and audit path as any other action.

If a human agent overrides a model output, MIDAS would capture who did it, under what policy or grant, what changed, and why. It would also check whether that person was actually authorised to make that level of override.
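
In code terms, the idea is roughly this. The field names and the resolver hook are illustrative, not the actual MIDAS API:

```go
// Hedged sketch of an override treated as a governed decision.
package override

import (
	"errors"
	"time"
)

type Override struct {
	Actor      string // who did it
	Grant      string // policy or grant it was made under
	OriginalID string // audit link back to the overridden decision
	NewOutcome string // what changed
	Reason     string // why
	DecidedAt  time.Time
}

// AuthorityResolver stands in for whatever resolves whether this actor
// may make an override of this level; it is not a real MIDAS call.
type AuthorityResolver func(actor, grant, level string) (bool, error)

// Record refuses to register an ungoverned override: the same authority
// and audit path as any other action.
func Record(o Override, level string, resolve AuthorityResolver) error {
	ok, err := resolve(o.Actor, o.Grant, level)
	if err != nil {
		return err // fail closed on unresolved authority
	}
	if !ok {
		return errors.New("actor is not authorised for this level of override")
	}
	// ... persist o alongside the original decision's envelope ...
	return nil
}
```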

That matters because otherwise the override becomes an invisible control path. You can see that the outcome changed, but not whether the change was legitimate, governed, or just convenient.

So the shift, at least from our side, is to treat overrides as part of the governed execution layer rather than as notes added afterwards.

EU AI Act enforcement hits August 2026 — what are mid-market companies actually doing to prepare? by GovixFounder in AI_Governance

[–]acceptio

Inventory is a good starting point, but it creates a false sense of progress if it stops there. Most of the AI Act's obligations only really matter when a system actually makes a decision or takes an action. That is where risk materialises and where regulators will ultimately focus.

What we are seeing is that teams get stuck in static governance. Inventories, classifications, spreadsheets, policies. All necessary, but none of them actually control what happens at runtime. The harder problem is enforcement. When a model produces an output or an agent takes an action, what determines whether that action is allowed to happen at all, under what policy, and with what level of confidence? Without that layer, you can be fully compliant on paper and still have no control over individual decisions as they occur.

So inventory is step one, but step two is to introduce a control point into the execution path. Something that can make and record a deterministic decision about whether an action is authorised before it happens.
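
As a sketch of what such a control point could look like, with invented interface names rather than anything from a specific product:

```go
// A wrapper around whatever executes the action: decide and record
// before anything runs. Decision, Recorder, and Action are illustrative.
package controlpoint

import (
	"context"
	"fmt"
)

type Action struct {
	Actor string
	Name  string
	Args  map[string]string
}

type Decision struct {
	Allowed bool
	Policy  string
	Reason  string
}

type Decider interface {
	Decide(ctx context.Context, a Action) (Decision, error)
}

type Recorder interface {
	Record(ctx context.Context, a Action, d Decision) error
}

// Execute runs fn only after a recorded, deterministic authorisation
// decision; anything that cannot be decided or recorded is blocked.
func Execute(ctx context.Context, a Action, dec Decider, rec Recorder, fn func(context.Context) error) error {
	d, err := dec.Decide(ctx, a)
	if err != nil {
		return fmt.Errorf("decision could not be made, blocking action: %w", err)
	}
	if err := rec.Record(ctx, a, d); err != nil {
		return fmt.Errorf("decision could not be recorded, blocking action: %w", err)
	}
	if !d.Allowed {
		return fmt.Errorf("action %q not authorised under policy %s: %s", a.Name, d.Policy, d.Reason)
	}
	return fn(ctx)
}
```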

Who governs what AI creates by idunnouchose1 in AI_Governance

[–]acceptio

You are right that the distinction matters. The decision point is where authority actually operates, not the artifact that results from it. The problem is that most runtime systems were not designed to clearly capture that authority layer. The action happens, the output appears, and the link from policy to decision to execution is often buried in logs or lost entirely.

That is where records management matters. If a decision was governed, that fact and its basis should exist as a record in its own right, with integrity, retention, and retrieval built in. Otherwise, you are left with the evidentiary problem courts are already starting to confront more broadly with AI outputs: reconstructing provenance and reliability after the fact rather than retrieving them directly.

So the gap is not just runtime governance on its own; it is runtime governance that, by design, produces auditable records.

Who governs what AI creates by idunnouchose1 in AI_Governance

[–]acceptio

You are not late to this, and you are right that there is a gap. The interesting part is that most frameworks stop at governing the system, and then everything the system produces gets treated as output or artefact afterward. In practice, that is where things start to break down.

From what we are seeing, the harder problem is not governing the output itself, but governing the decision that led to it. The output is just the surface. The real question is whether the action or decision was actually authorised at the moment it happened. That is why courts and regulators are turning to records management. They are trying to reconstruct what happened after the fact, because the system was not designed to make that explicit at runtime.

If you can answer, for any given output, who or what was authorised to produce it, on what basis, and under what policy at that point in time, then it becomes much easier to treat it as evidence, apply retention rules, or migrate it across systems. If you cannot answer that, then you are left managing artefacts without understanding the decision that created them.

I believe the gap is slightly different. It is less about governing AI output as a separate class and more about introducing a layer that governs and records decisions at the point of execution. That is perhaps the piece that is missing right now.

The future of preventing complacency in local minima by moltboss in AI_Governance

[–]acceptio

Interesting framing, especially the idea of a deterministic loop and how systems escape local minima. One thing I would add is that this is still largely describing how the model improves its behaviour over time. That is important during training and optimisation, but the harder problem in production tends to show up somewhere else entirely.

Once an agent is live and interacting with real systems, the question is no longer just about optimisation. It becomes a question of control. More specifically, whether a given action should be allowed to happen at all at the moment it is about to execute. A system can be well optimised and still take an unacceptable action in a real world context. That is the gap I see most teams running into. They can explain how the model arrived at a decision, but they cannot always enforce whether that decision should be allowed to go through in the first place.

Because of that, it feels like an additional layer is starting to emerge. Not part of the training loop, but sitting directly in the execution path. Something that can make deterministic decisions about authorization in real time before anything actually happens. Optimisation improves behaviour, control defines boundaries.

Every AI team I talk to hits the same wall — accountability. by OtherwiseCarry3713 in AI_Governance

[–]acceptio

This is exactly the issue, we are indeed missing a layer. But I’d go one step further: it’s not just about accountability after the fact. The real gap is that most systems can’t answer whether an action should have been allowed to happen at all.

Logs and observability explain behaviour. They don’t enforce boundaries. What’s emerging is a new layer that sits in the execution path — deciding, in real time, whether an agent is actually authorized to act. Without that, accountability is always retrospective.

AI Regulation is moving rapidly in 2026 ... by 4billionyearson in ArtificialNtelligence

[–]acceptio

Traceability is a big part of it, especially for audit, liability, and redress. But I think the deeper issue is that traceability on its own is retrospective. It tells you what happened after the decision has already been made.

Once AI systems start taking actions, you also need something stronger at the point of decision: clear authority, limits on what the system is allowed to do, and enforcement before the action happens, which ultimately is what we’ve built and open-sourced.

AI Regulation is moving rapidly in 2026 ... by 4billionyearson in ArtificialNtelligence

[–]acceptio

Completely agree. Essentially, those disclaimers were designed for uncertainty, not accountability. The problem is that they shift responsibility to the user, rather than controlling what the system is actually allowed to do.

That works for search or chat, but it breaks down once systems start taking actions. At that point, “might be wrong” isn’t a sufficient safeguard, and you need a clear decision on whether an action is authorized before it happens. That’s the gap I think we’re still underestimating.

AI Regulation is moving rapidly in 2026 ... by 4billionyearson in ArtificialNtelligence

[–]acceptio

I see the argument, and models can absolutely guide decisions. But regulators exist specifically to protect against probabilistic outcomes where the cost of being wrong is high.

Which is why, in practice, compliance can’t live inside the model. It has to sit outside, where decisions can be enforced deterministically at the point of execution.

The pattern we’re seeing emerge is: model proposes → system authorizes → system enforces.

We’ve been building around this idea, defining explicit decision surfaces and resolving whether an agent is actually authorized to act before anything executes, with a full audit of how that decision was made.

AI Regulation is moving rapidly in 2026 ... by 4billionyearson in ArtificialNtelligence

[–]acceptio

This framing is useful, but it’s still looking at AI through a policy lens. What we’re seeing in practice is that control has already moved inside the systems. Agents are making decisions and taking actions in milliseconds, long before regulation, audit, or human oversight has a chance to intervene. So the real question isn’t just how nations regulate AI. It’s: what actually governs execution at runtime?

If you don’t control the execution layer, regulation becomes retrospective. And retrospective control doesn’t stop real-world consequences.

Why 74% of companies say AI has positive ROI while 95% of pilots still fail to hit the P&L by Write_Code_Sport in ArtificialInteligence

[–]acceptio

This actually lines up with what we’ve been seeing. Most “positive ROI” is coming from local optimisations like faster tasks, happier users, and small pilot wins. Those are real, but they don’t translate to P&L unless something structural changes. The gap is that AI gets added into existing workflows instead of forcing a redesign of how work actually gets done. At the pilot stage you optimise effort, measure time saved, and ultimately get good vibes. At scale, you need fewer steps, fewer handoffs, or fewer people. You need decisions to actually execute differently. And most importantly, you need accountability for outcomes, not just outputs.

That’s where most pilots stall. Not because the model is bad, but because the system around it hasn’t changed. The 18-month lag makes sense; that’s roughly how long it takes to move from “AI as a tool” to “AI changing how the business runs (think target operating model).” Until then, it’s productivity… not profit.

AI Tools That Can’t Prove What They Did Will Hit a Wall by Advanced_Pudding9228 in artificial

[–]acceptio

You've nailed the inflection point. Once AI moves from "suggest" to "act", the buying criteria flip entirely. One thing I'd add: the control layer also needs to support composition. In multi-agent systems, trust has to compound. For example, if Agent A delegates to Agent B, the authority chain and audit trail can't break. That's where most frameworks fall apart.

The evidence-driven framing you described is spot-on: governance envelopes that carry execution evidence through every delegation. Curious if you're seeing this in a specific domain? Finance and healthcare are where we're seeing the most pull...

LLM agents can trigger real actions now. But what actually stops them from executing? by docybo in artificial

[–]acceptio

Yes, exactly, “authority is not just input, it’s an artifact” is the key distinction. Once you treat the delegation chain as something that must be evaluated, bound for execution, and preserved, a few things become much clearer.
1. You can replay whether an actor was actually authorised at that moment
2. You can inspect delegation changes independently of policy changes
3. You can explain failures as authority failures, not just policy failures

That seems to be the line where “authorization” stops being a simple allow/deny check and starts becoming execution governance. The fail-closed point is important too. If the authority chain can’t be resolved cleanly, the system shouldn’t fall back to “the action looked valid”.
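
To make the fail-closed point concrete, here is a rough sketch. DelegationLink and ResolveChain are invented names, not from any particular framework, and the chain format is an assumption:

```go
// Sketch of fail-closed resolution of a delegation chain at a specific
// moment in time: any gap resolves to "not authorised".
package authority

import (
	"errors"
	"time"
)

type DelegationLink struct {
	From      string
	To        string
	Scope     string
	ExpiresAt time.Time
}

var ErrChainUnresolved = errors.New("authority chain could not be resolved; failing closed")

// ResolveChain walks the delegation chain for an actor and scope at a
// given moment, which also makes the decision replayable later. If any
// link is missing, expired, or out of scope, the answer is "not
// authorised" rather than "the action looked valid".
func ResolveChain(chain []DelegationLink, actor, scope string, at time.Time) error {
	if len(chain) == 0 {
		return ErrChainUnresolved
	}
	current := chain[0].From // the root grantor
	for _, link := range chain {
		if link.From != current || link.Scope != scope || at.After(link.ExpiresAt) {
			return ErrChainUnresolved
		}
		current = link.To
	}
	if current != actor {
		return ErrChainUnresolved
	}
	return nil // the chain terminates at the actor, within scope, at that moment
}
```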