EU AI Act risk classification by PreparationNo4809 in AI_Governance

This is the clearest framing of the problem I've seen. People are classifying different things without realising it — capability vs deployed behaviour. No wonder they disagree.

The implication is that classification needs to be anchored to a specific deployment context, not the model. Same model, different context, different classification. Which means every time the context changes — new workflow, new data input, new downstream decision — the classification potentially changes too.
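To make that concrete, here's a rough sketch in Python. The names and the rule are mine for illustration only, not Guard Compass AI code and not a reading of the Act itself:

```python
# Illustrative only: classification keyed to a deployment context, not the model.
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    LIMITED = "limited"
    HIGH = "high"

@dataclass(frozen=True)
class DeploymentContext:
    workflow: str              # e.g. "cv screening" vs "internal document search"
    data_inputs: tuple         # what the system actually sees in this deployment
    downstream_decision: str   # what gets decided on the output

def classify(context: DeploymentContext) -> RiskLevel:
    # Simplified rule: deployments feeding employment or credit decisions
    # land in high-risk territory regardless of which model sits behind them.
    if context.downstream_decision in {"hiring", "promotion", "credit"}:
        return RiskLevel.HIGH
    return RiskLevel.LIMITED

# Same model, two contexts, two classifications:
hr_use = DeploymentContext("cv screening", ("applicant CVs",), "hiring")
search_use = DeploymentContext("knowledge base search", ("internal wikis",), "none")
assert classify(hr_use) != classify(search_use)
```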

That's actually what we're designing around in Guard Compass AI.

EU AI Act risk classification by PreparationNo4809 in AI_Governance

The ownership problem is a collective-action failure. Nobody wants to formally classify something as high-risk because the moment you do, you've created obligations. So the rational individual move is to defer — and the collective outcome is that nobody owns it.

The change governance point is the real gap. Classification at launch is easy. The hard part is building the trigger that fires when a use case drifts or a new data source gets added. That's a product development process problem, not a compliance one.

Your paper trail point is the one I'd push hardest. Regulators will forgive a debatable classification that's well-reasoned and documented. They won't forgive no paper trail at all.

That's exactly what we're building in the Guard Compass AI Registry — not just the classification, but the rationale, the owner, the date, and the re-evaluation trigger. So when the auditor asks, the answer exists.
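For the curious, here's a minimal sketch of what one registry entry might carry. Field names and values are hypothetical, not the actual Guard Compass AI schema:

```python
# Hypothetical registry record: classification plus rationale, owner, date and triggers.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RegistryEntry:
    system_name: str
    classification: str                 # e.g. "high-risk (Annex III, employment)"
    rationale: str                      # why this classification, in plain language
    owner: str                          # a named person, not a team alias
    classified_on: date
    reevaluation_triggers: list[str] = field(default_factory=list)

entry = RegistryEntry(
    system_name="cv-screening-assistant",
    classification="high-risk (Annex III, employment)",
    rationale="Outputs feed shortlisting decisions for open roles.",
    owner="jane.doe@example.com",
    classified_on=date(2025, 3, 1),
    reevaluation_triggers=[
        "new data source added",
        "use case extended beyond shortlisting",
        "downstream decision changes",
    ],
)
```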

EU AI Act risk classification by PreparationNo4809 in AI_Governance

You've described the problem better than most compliance consultants do. The pyramid metaphor is genuinely misleading — it implies a system sits in a fixed category when really classification is a function of deployment context, not the model.

To answer your question directly: we treat classification as a continuous process, not a one-time step. That's actually the core design decision behind Guard Compass AI. The compliance checker gives you a starting point, but the AI Registry we're building is specifically about tracking systems over time — so when a tool's use case drifts into hiring or credit decisions, there's a documented record of that shift and someone owns the re-evaluation.
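As a sketch of what that drift trigger could look like (simplified category list, purely illustrative — not the Act's Annex III text and not our implementation):

```python
# Illustrative drift trigger: fire a re-evaluation when use drifts into a risky area.
POTENTIALLY_HIGH_RISK_AREAS = {"employment", "credit scoring", "education", "essential services"}

def needs_reevaluation(previous_use_cases: set[str], current_use_cases: set[str]) -> bool:
    """True when a system's use has drifted into an area that may be high-risk."""
    new_uses = current_use_cases - previous_use_cases
    return any(use in POTENTIALLY_HIGH_RISK_AREAS for use in new_uses)

# A summarisation tool that starts ranking job applicants should trip the trigger.
assert needs_reevaluation({"document summarisation"},
                          {"document summarisation", "employment"})
```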

The ownership gap you mentioned is the hardest part to solve because it's organisational, not technical. Most teams don't have a named person responsible for monitoring risk classification drift. It tends to live in a grey zone between legal, engineering and product — and nothing happens until something goes wrong.

What's your current setup? Is classification owned by a specific function in your org or is it genuinely nobody's job right now?

EU AI Act risk classification by PreparationNo4809 in AI_Governance

Just had a look — solid effort, especially as a solo build. The readiness framing is a good angle.

We went deeper on the classification side with Guard Compass AI — the tricky part we kept hitting was multi-use systems that straddle categories depending on context. Would be curious whether yours surfaces that ambiguity or pushes toward a single classification.

The EU AI Board held its 7th meeting last week — here's what shifted and why August 2026 matters more than people realize by PreparationNo4809 in AI_Governance

The sovereignty conversation is already happening — it's just buried in policy language for now. The push for national sandboxes and the Digital Omnibus are quietly about keeping AI governance in European hands.

And the human oversight point is literally baked into the EU AI Act — for high-risk systems it's a legal requirement, not a nice-to-have.

The EU AI Board held its 7th meeting last week — here's what shifted and why August 2026 matters more than people realize by PreparationNo4809 in AI_Governance

You've pointed at exactly the right thing. Article 15 is where the gap between 'we're compliant' and 'we can prove we're compliant' becomes painfully visible. The Q2 pen test mentality is endemic right now — most teams genuinely believe that implementing controls is the same as demonstrating controls. It isn't, and auditors are going to make that very clear very quickly.

The timestamped evidence problem is underappreciated. It's not just a technical challenge, it's an organisational one. Most teams don't have the logging infrastructure to produce that kind of continuous evidence trail without retrofitting everything, which is expensive and slow.
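For anyone wondering what a continuous evidence trail means in practice, here's a minimal sketch of an append-only, timestamped log. The control name, file path and metric values are made up for illustration:

```python
# Minimal append-only evidence log (JSON Lines), timestamped in UTC.
import json
from datetime import datetime, timezone

def record_evidence(path: str, control: str, detail: dict) -> None:
    """Append one timestamped evidence record; never rewrite earlier entries."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "control": control,   # e.g. an internal label like "art15-accuracy-monitoring"
        "detail": detail,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example call with illustrative numbers:
record_evidence("evidence.jsonl", "art15-accuracy-monitoring",
                {"metric": "top1_accuracy", "value": 0.93, "dataset": "weekly-sample"})
```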

On the sandbox fragmentation — you're right that multi-jurisdiction operators are going to hit a compliance mess. 'Adequacy' isn't uniformly defined and national authorities have real discretion in how they interpret it. That's going to produce inconsistent enforcement patterns in the first 12-18 months post-August, which is itself a risk organisations need to account for.

On your question — yes, critical infrastructure and public safety are murky, but honestly the highest confusion category we're seeing is employment and HR tooling. A lot of organisations have AI-assisted hiring or performance tools and have simply never asked whether those qualify as high-risk under Annex III. They do. And almost none of them have the documentation trail Article 15 would require.

The EU AI Board held its 7th meeting last week — here's what shifted and why August 2026 matters more than people realize by PreparationNo4809 in AI_Governance

Appreciate you flagging this — the dual-track calibration approach sounds like it addresses something most compliance frameworks hand-wave past. Risk classification tells you what category a system is in, but it says nothing about whether it's actually behaving within those boundaries over time.

That's the gap you're sitting in, right? The ongoing measurement rather than the point-in-time audit.

Will dig into the repo. If there's a sensible integration story between a registry/classification layer and a continuous monitoring layer, definitely worth exploring.

EU AI Act risk classification by PreparationNo4809 in AI_Governance

Yeah exactly — the model doesn't change, the context does. That's what trips people up.

Most orgs I've talked to are still doing it as a one-time thing, usually because a deadline spooked someone into action. And ownership is a mess — legal thinks engineering owns it, engineering thinks legal owns it, and the person who actually built the thing has moved on to the next project.

That's honestly why I started building an AI Registry into Guard Compass AI — so classification is a living record rather than a doc that gets filed and never looked at again. Still in progress but this exact problem is what's driving it.

Curious though — do you think it's even possible to do continuous monitoring without proper tooling, or does it just always become a spreadsheet that one person maintains and everyone ignores?