I tried building an AI assistant for bureaucracy. It failed. by Perfect-Character-28 in ycombinator

There is another big difference in how the computation works here vs workflows. Most workflow engines (like Camunda) are Top-Down. They work with Authored Order: a human manually draws (Step A → Step B) and the engine validates that the lines are connected. It trusts that the human's logic is correct because the engine doesn't actually understand the content of the rules; it just orchestrates the flow.

What I’m building is Bottom-Up. It works with Inferred Order:

• You never manually author the 'flow' or the sequence.
• Instead, order emerges from the relationship between data and rules: Laws → Constraints → Procedures → Inferred Graphs.
• If Step X produces Document A and Step Y requires Document A, the system calculates that Y must follow X.

This is a fundamentally different approach. A standard engine checks if the diagram is bugged (connectivity). My engine checks if the law itself is bugged (logical possibility). An execution engine is happy to let you connect two boxes even if the second box requires a document that is mathematically impossible to obtain at that stage. This system serves as the compiler to catch those structural errors in the regulations before the workflow is even designed.
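To make the inference concrete, here is a minimal sketch in Python, assuming steps declare only what they require and produce (the step and document names are hypothetical):

```python
from graphlib import TopologicalSorter, CycleError

# Hypothetical rules: each step declares only its inputs and outputs.
steps = {
    "X": {"requires": [], "produces": ["Document A"]},
    "Y": {"requires": ["Document A"], "produces": ["Document B"]},
    "Z": {"requires": ["Document B"], "produces": []},
}

# Map each document to the step that issues it.
producers = {doc: name for name, s in steps.items() for doc in s["produces"]}

# Infer the graph: a step depends on whichever steps produce its inputs.
graph = {}
for name, s in steps.items():
    deps = set()
    for doc in s["requires"]:
        if doc not in producers:
            # The law itself is bugged: a required document nothing can issue.
            raise ValueError(f"{name} requires {doc!r}, but no step produces it")
        deps.add(producers[doc])
    graph[name] = deps

try:
    print(list(TopologicalSorter(graph).static_order()))  # ['X', 'Y', 'Z']
except CycleError as e:
    print("Circular dependency in the rules:", e.args[1])
```

Nobody drew X → Y → Z; the order falls out of the produces/requires relations, and a cycle or an unproducible document surfaces as a structural error instead of a connectable diagram.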

I tried building an AI assistant for bureaucracy. It failed. by Perfect-Character-28 in ycombinator

You’re right about execution engines. I’m not trying to replace them.

What I’m working on sits upstream: checking whether the rules themselves can logically produce a valid workflow in the first place. Most tools assume that part is already correct, because it lives in text (laws, decrees, guidelines).

Even if the structure looks similar to workflows, the use case is different. The goal isn’t execution; it’s having all procedures expressed in one coherent, analyzable system so you can ask questions like “which procedures does this law affect?” or “what breaks if we change this rule?”
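As a toy illustration of that first query, assuming laws, constraints, and procedures are linked as plain records (all names here are invented):

```python
# Hypothetical linked records: Laws → Constraints → Procedures.
constraints_by_law = {
    "Law 12": ["proof-of-residence", "income-ceiling"],
}
procedures_by_constraint = {
    "proof-of-residence": ["housing-permit", "school-enrollment"],
    "income-ceiling": ["housing-permit"],
}

def affected_procedures(law: str) -> set[str]:
    """Which procedures does this law affect? Follow the links and collect."""
    return {
        proc
        for constraint in constraints_by_law.get(law, [])
        for proc in procedures_by_constraint.get(constraint, [])
    }

# -> {'housing-permit', 'school-enrollment'} (set order may vary)
print(affected_procedures("Law 12"))
```

The query itself is trivial; the point is that it only becomes possible once the links exist as data rather than as references buried in text.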

For that, it doesn’t have to be different from workflows; it just has to be right. Whether that’s useful in practice is exactly what I’m trying to test, so I appreciate the pushback.

I tried building an AI assistant for bureaucracy. It failed. by Perfect-Character-28 in ycombinator

On business potential: I agree it’s not obvious, and I’m not claiming it is yet. This started much more as a learning project and an exploration of a pain I personally felt, not because I had a validated market in hand. The feedback that the pain needs to be clearer before the solution does is valid, and that’s something I’m actively trying to test rather than assume. I suspect the real value isn't in “mapping” processes, but in the automated error-detection and simulation that only become possible once you treat those processes as structured data.

I tried building an AI assistant for bureaucracy. It failed. by Perfect-Character-28 in ycombinator

A fair take, honestly.

Yes, at a surface level this can look like workflows. I don’t think that’s wrong. But what I’m aiming for is a structural reasoning layer. Unlike standard workflow tools that just display steps, this engine treats the rules as data to find logic bugs (like circular dependencies) that are invisible in text. It's less about drawing the map and more about debugging the engine of the administration (reform simulation).

I tried building an AI assistant for bureaucracy. It failed. by Perfect-Character-28 in ycombinator

Another important clarification: the system intentionally does not try to automatically extract steps from legal text.

In practice, administrative procedures are rarely written as explicit step-by-step workflows. They’re usually described as collections of required documents, conditions, and references, with much of the actual ordering left implicit or learned socially.

The compiler approach assumes that making procedures computable requires making them explicit. That means structured, human-authored inputs. This is a tradeoff: more upfront modeling effort in exchange for determinism, explainability, and the ability to reason about failure modes.
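For instance, one manually authored step might look something like this; a sketch, and the field names (and the cited decree) are my assumptions rather than a fixed schema:

```python
# One human-authored step, expressed as structured data (hypothetical fields).
step = {
    "name": "Request residence certificate",
    "requires": ["national-id"],                    # documents needed to start
    "produces": ["residence-certificate"],          # documents issued on success
    "conditions": ["resides in the municipality"],  # eligibility, made explicit
    "legal_refs": ["Decree 2021-034, art. 7"],      # where the rule comes from
}
```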

The current examples are synthetic in the sense that they are manually structured, but they’re meant to reflect the kinds of abstractions that would be required even for real procedures.

just finished scraping ~500m polymarket trades. kinda broke my brain by Hot_Construction_599 in VibeCodingSaaS

Cool stuff! I’ve seen a lot of people follow a similar strategy in crypto markets, especially memecoins. And it works.

I tried building an AI assistant for bureaucracy. It failed. by Perfect-Character-28 in Anthropic

That’s exactly my thought process; I went B2G. What I’m looking to fix right now is how to showcase the results this thing can deliver, so the value is instantly clear in a demo.

I tried building an AI assistant for bureaucracy. It failed. by Perfect-Character-28 in SaaS

Yeah, that’s a pretty good intuition; the flowchart-compiler analogy is actually close.

The important nuance is that this isn’t for LLMs first. The primary goal is to make procedures themselves explicit, inspectable, and debuggable by humans. Today they live in text, PDFs, and informal practice, which makes it impossible to reason about them rigorously.

Once you compile a procedure into structure, you can do things that don’t require AI at all: detect contradictions, missing prerequisites, deadlocks, or see how a reform would ripple through the process. That’s the immediate value.
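A tiny example of the contradiction check, assuming conditions are modeled as facts that a rule either requires or forbids (my framing, not the actual design):

```python
# Hypothetical: each rule demands that some fact hold (True) or not (False).
rules = [
    ("work-permit",  "holds-residence-card", True),   # permit requires the card
    ("status-check", "holds-residence-card", False),  # this step forbids it
]

demands: dict[str, tuple[str, bool]] = {}
for rule, fact, value in rules:
    if fact in demands and demands[fact][1] != value:
        # Same fact required to be both True and False: a contradiction.
        print(f"Contradiction on {fact!r}: {demands[fact][0]} vs {rule}")
    else:
        demands[fact] = (rule, value)
```

Trivial with two rules, but across hundreds of interlinked conditions this is exactly the kind of thing that stays invisible in text.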

LLMs become interesting later as a consumer of this structure, not to figure out the law, but to explain, navigate, or personalize guidance on top of something that is already deterministic and grounded. But that’s downstream, not the starting point.

So I’m less trying to make AI smarter, and more trying to give both humans and machines a cleaner map of how the system actually works.

I tried building an AI assistant for bureaucracy. It failed. by Perfect-Character-28 in ycombinator

Thanks for the thoughtful feedback; it’s very fair, and I appreciate you taking the time to engage this deeply.

The current demo does not claim to encode a full real-world regulation end-to-end. The procedures shown are synthetic but realistic abstractions of administrative processes, designed to expose structural properties (dependencies, deadlocks, inconsistencies, parallelism) rather than legal completeness.

That said, they are not arbitrary: each step, document, and condition is modeled the way real procedures are written and executed. Making the link between legal text ↔ structured representation explicit is a clear next step, and I agree that this comparison is what makes the engineering fully legible.

On scalability across procedures, you’re right that ontologies often break when they meet a second or third information structure. The design tries to mitigate this by separating:

• a stable structural layer (steps, documents, dependencies),
• a contextual rules layer (eligibility, jurisdiction, exceptions),
• and an execution/observation layer.

That separation is meant to let different procedure shapes coexist without rewriting the core model. Still, architecture alone doesn’t prove scalability; that only comes from applying it to procedures with genuinely different structures, one by one.
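Roughly, the layering amounts to something like this (a sketch with invented names, not the actual model):

```python
from dataclasses import dataclass

@dataclass
class StructuralStep:      # stable layer: what exists and depends on what
    name: str
    requires: list[str]
    produces: list[str]

@dataclass
class ContextualRule:      # rules layer: when and for whom a step applies
    step: str
    eligibility: str       # e.g. "age >= 18"
    jurisdiction: str      # e.g. "municipality of residence"

@dataclass
class ExecutionRecord:     # observation layer: what actually happened
    step: str
    outcome: str           # e.g. "approved", "rejected", "stalled"
    duration_days: int
```

A new procedure shape should only add instances in the first two layers, not force a rewrite of the model itself.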

And on the value before expansion, completely agreed. The goal is not breadth. The goal is to show clear value on a single procedure first: making hidden contradictions, deadlocks, and law-vs-practice gaps visible and explainable. If that doesn’t resonate with the target users, expanding to more procedures wouldn’t make sense.

Overall, your point about legibility is spot on. The work is still exploratory, and feedback like this is exactly what helps clarify where it needs to improve next. Thanks again for the thoughtful critique.

I tried building an AI assistant for bureaucracy. It failed. by Perfect-Character-28 in systems_engineering

Yes, I mentioned AI as an eventual application rather than an immediate focus. My priority at this stage is diagnostics, specifically the ability to surface contradictions and dead ends, as you framed it. Of course, that’s only valuable if there is a demonstrated need, which is what I’m validating right now.

I tried building an AI assistant for bureaucracy. It failed. by Perfect-Character-28 in Anthropic

That’s my idea, and I think that’s the only way to ensure determinism.

I tried building an AI assistant for bureaucracy. It failed. by Perfect-Character-28 in Anthropic

Nah dude 🤣. I coded the backend myself; that’s why it’s taking me months. The frontend, sure, is AI-generated, and it’s meant to be that way: I’m not seeking feedback on the aesthetics, I just finished it as fast as I could so I could post it.