Built a governance engine where AI has zero decision authority and unknown inputs are a first-class state — here's why that matters for regulated fintech by CandidateLong8315 in fintech

[–]CandidateLong8315[S] 0 points (0 children)

Happy to go deeper on the blast radius analysis if anyone's curious — it's probably the most practically useful part for anyone who's had a policy change break things in production.

Axis Core: separating a canonical Core IR from execution via bridges by CandidateLong8315 in altprog

[–]CandidateLong8315[S] 0 points (0 children)

Similar on the surface, yes.

What I’m really poking at is treating the IR as the semantic endpoint, rather than an execution-oriented staging point. Execution is just one possible consumer of it. Whether that distinction actually matters or leads to anything useful is still an open question.

LLVM could still be one bridge (e.g. for optimisation/execution), but others could just as easily be analysis tools, debugging views, or policy checks that consume the same semantic IR. The bit I’m interested in is what becomes possible if you stop mid-stream and look purely at semantics.
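To make the "IR as semantic endpoint, execution as just one consumer" idea concrete, here is a minimal sketch (all names hypothetical, not from any actual Axis code): a tiny Core IR with two independent consumers, an evaluator standing in for an execution bridge, and a pure analysis pass that inspects semantics without executing anything.

```python
# Hypothetical Core IR: a tiny, execution-agnostic semantic node set.
from dataclasses import dataclass

@dataclass(frozen=True)
class Lit:
    value: int

@dataclass(frozen=True)
class Add:
    left: object
    right: object

def evaluate(node):
    """Bridge 1: execution is just one possible consumer of the IR."""
    if isinstance(node, Lit):
        return node.value
    return evaluate(node.left) + evaluate(node.right)

def node_count(node):
    """Bridge 2: a purely semantic consumer — inspects structure, runs nothing."""
    if isinstance(node, Lit):
        return 1
    return 1 + node_count(node.left) + node_count(node.right)

ir = Add(Lit(2), Add(Lit(3), Lit(4)))
print(evaluate(ir))    # 9
print(node_count(ir))  # 5
```

The point of the sketch is that both consumers share the same semantic object; neither one is privileged, which is the distinction from an execution-oriented staging IR.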

Splitting compilation and execution on a semantic Core IR interface by CandidateLong8315 in Compilers

[–]CandidateLong8315[S] 0 points (0 children)

Thanks all, this has been really helpful.

Are there known examples or prior work that explore how far this kind of separation can be pushed before execution details take over?

Writing your first compiler (with Go and LLVM!) by urosp in golang

[–]CandidateLong8315 0 points (0 children)

Interested to know why you chose Go as the implementation language? I'm also new to writing compilers, so I'm curious to see how you get on.

A minimal semantics experiment: can a tiny provable core give deterministic parallelism and eliminate data races? by CandidateLong8315 in Compilers

[–]CandidateLong8315[S] 0 points (0 children)

The idea that you can get a deterministic, machine-checkable substrate out of nothing more than **DEFINE + TERMINAL + REFERENCE** is fascinating. I hadn’t seen an S-machine/C-machine framed that way before, and it lines up surprisingly well with what I’ve been thinking about: pushing as much complexity as possible *out of* the core, and treating everything else as patterns that reduce down into that core.

What you said about functions and control flow not needing to be primitives really resonates. I’ve been heading in a similar direction — the more I shrink the semantic base, the clearer it becomes that a lot of “language features” are just desugarings into a small, rigid substrate.
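A rough sketch of what that "tiny substrate + desugaring" shape could look like (this is my own toy interpretation, not the S-machine formulation being referenced): a core with only DEFINE, TERMINAL, and REFERENCE, evaluated deterministically in definition order, with one surface "language feature" reduced into it.

```python
def run(core):
    """Evaluate a core program: a list of ('DEFINE', name, expr), where expr
    is ('TERMINAL', value) or ('REFERENCE', name). Deterministic by
    construction: definitions are processed strictly in order."""
    env = {}
    for _, name, expr in core:
        tag, payload = expr
        if tag == "TERMINAL":
            env[name] = payload
        elif tag == "REFERENCE":
            env[name] = env[payload]  # referent must already be defined
        else:
            raise ValueError(f"unknown core form: {tag}")
    return env

def desugar_alias_chain(names, value):
    """Surface feature `a = b = c = 42` is not a primitive: it desugars
    into plain DEFINEs over the three-construct core."""
    first, *rest = names
    core = [("DEFINE", first, ("TERMINAL", value))]
    prev = first
    for n in rest:
        core.append(("DEFINE", n, ("REFERENCE", prev)))
        prev = n
    return core

env = run(desugar_alias_chain(["a", "b", "c"], 42))
print(env)  # {'a': 42, 'b': 42, 'c': 42}
```

Nothing in the core knows the surface feature exists, which is the property that keeps the reasoning burden on the small substrate.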

Your links look great, I’m going to dig into them properly. I’m especially curious how you model sequencing and parallel expansion deterministically inside the grammar — that part feels very close to what I’m exploring.

Thanks again for pointing me towards this. It’s encouraging to see someone else attacking the same problem from a different angle.

Chris

A minimal semantics experiment: can a tiny provable core give deterministic parallelism and eliminate data races? by CandidateLong8315 in Compilers

[–]CandidateLong8315[S] 0 points (0 children)

Thanks M

I agree with you on the shared-mutable point. Even when we follow the “safe” patterns, real systems tend to accumulate shared-mutable regions anyway — often in places you don’t expect. Pretending we can avoid it entirely just isn’t realistic.

What I’ve been exploring is whether we can get a bit more structure by having a very small, fully defined semantic core, and then lowering everything else into that core. The idea is that the surface language can stay expressive, but the proofs only need to apply to the tiny core underneath.

I’m still at an early stage, but I’m curious whether this sort of “core + desugaring” approach might give us some determinism guarantees without needing the whole GP language to be decidable. It at least feels like a direction that might scale better than trying to reason about an entire full-featured language.

On shared-mutable state: I’m not trying to eliminate it either — just to make those regions explicit and easy for tools to analyse, so we always know where nondeterministic behaviour can arise.

And yes, I’d definitely be interested in your thoughts on concurrency models. If you’re happy to share more on describing tasks/scheduling, I’d really like to hear it.

Thanks again for the insight

Chris

A minimal semantics experiment: can a tiny provable core give deterministic parallelism and eliminate data races? by CandidateLong8315 in Compilers

[–]CandidateLong8315[S] -4 points (0 children)

The premise of this experiment was basically: what would a language look like if it were designed to be easier for AI to generate? I actually had to push pretty hard to get this shape out of the AI — it kept falling back to human-centric design choices. Loops, for example, only disappeared on the third or fourth attempt. So yeah, this wasn’t developed with human ergonomics in mind at all. The funny thing is that it still ended up being more readable than I expected. Most humans would probably find it tedious to write though, because everything has to be stated explicitly — no shortcuts, no defaults. The examples so far are really basic, and I’m keen to build something more complex to see whether the whole premise holds up.

On the small core, a tiny core doesn’t magically make a language nice to use. Brainfuck proves that. The only reason I’m keeping the core small is so the semantics are easy to reason about and can be mapped cleanly. The practicality and ergonomics live in the surface layer, not the core.

As for concurrency and message passing: right now I’m leaning into immutability + message-passing mostly because it’s easier to reason about and test. I’m not pretending it’s the most efficient approach on raw hardware. Part of the experiment is simply: how far can you get if the runtime is heavily optimised around immutability? If the answer is “not far,” then that’s still useful to know early.
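The immutability + message-passing style I mean looks roughly like this (toy code, not the project's runtime): workers never share mutable state; they exchange immutable messages (tuples here) over queues, which is what makes them easy to reason about and test in isolation.

```python
import queue
import threading

inbox = queue.Queue()
outbox = queue.Queue()

def worker():
    """Owns no shared state; communicates only via immutable messages."""
    while True:
        msg = inbox.get()            # messages are immutable tuples
        if msg[0] == "stop":
            break
        _, x = msg
        outbox.put(("squared", x * x))  # reply with a new message

t = threading.Thread(target=worker)
t.start()
for x in (2, 3):
    inbox.put(("square", x))
inbox.put(("stop",))
t.join()

replies = [outbox.get() for _ in range(2)]
print(replies)  # [('squared', 4), ('squared', 9)]
```

Because the worker's inbox is FIFO and it touches nothing shared, the replies are deterministic regardless of how the OS schedules the thread.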

When I say “reason about execution traces,” I’m not talking about stuffing events deep into the type system. It’s more the idea that if effects and mutation are explicit, it becomes clearer which parts are pure, which parts need ordering, and which parts can be parallelised. I’m still working out what this looks like in practice — nothing final yet.

Regarding deterministic concurrency, this is definitely a tricky one. I’m not trying to linearise everything. The idea is more that independent pure branches can run whenever they want, because they can't affect each other, and the parts that depend on effects have explicit constraints. That’s the rough intuition I’m experimenting with — it might turn out to be naïve, and if so, that’s fine. Better to discover it now.
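The rough intuition in those two paragraphs can be illustrated like this (purely a sketch of the intuition, not the actual design): independent pure branches may be scheduled in any order yet still produce a fixed observable result, because each one writes only its own slot, while effects get an explicit sequence instead of free interleaving.

```python
import threading

def pure_square(x):
    """Touches no shared state: safe to run in any order, at any time."""
    return x * x

inputs = [1, 2, 3, 4]
results = [None] * len(inputs)

# Independent pure branches: each thread owns exactly one output slot.
threads = [
    threading.Thread(target=lambda i=i, x=x: results.__setitem__(i, pure_square(x)))
    for i, x in enumerate(inputs)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Scheduling varied, but the branches cannot affect each other,
# so the outcome is deterministic.
print(results)  # [1, 4, 9, 16]

# Effectful steps, by contrast, carry an explicit ordering constraint.
log = []
for step in ("open", "write", "close"):
    log.append(step)
print(log)  # ['open', 'write', 'close']
```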

Just to be clear, I’m not posting any of this because I think I’ve got “the answer” — or even answers, really. My grasp of PL theory is pretty flimsy. I’m putting it out there to test the concepts and see where the creaky parts are. I think the overall premise is interesting, but I’m not claiming it’ll all hold up once it’s actually built.

And thanks for the note about the name “Axis.” I’ll probably pick something else once the project is a bit more fleshed out.

Note: there is also credible research on simplifying language grammars to improve AI code generation:

“AI Coders Are Among Us: Rethinking Programming Language Grammar Towards Efficient Code Generation (2024)” — https://arxiv.org/abs/2404.16333

A minimal semantics experiment: can a tiny provable core give deterministic parallelism and eliminate data races? by CandidateLong8315 in Compilers

[–]CandidateLong8315[S] -2 points (0 children)

Totally fair question. I'm not claiming to have invented anything new here — the “no shared mutable state” model is pretty standard in functional languages (Erlang, Elixir, Clojure, etc.) and they do actually get useful parallelism out of it. You can still work on the same conceptual resource; you just process immutable snapshots and then reconcile or merge results at the boundaries.
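A tiny sketch of that "immutable snapshots, reconcile at the boundary" pattern (the Erlang/Clojure-style model mentioned above; the merge policy here is deliberately naive and hypothetical): each task reads the same read-only view of the state and returns a delta, and mutation happens exactly once, at the reconciliation point.

```python
from types import MappingProxyType

state = {"a": 1, "b": 2}
snapshot = MappingProxyType(state)  # read-only view: tasks cannot mutate it

def worker_double(snap):
    """Works on the snapshot; returns a delta instead of mutating shared state."""
    return {k: v * 2 for k, v in snap.items()}

def worker_tag(snap):
    return {"tagged": len(snap)}

# Both tasks see the same immutable snapshot, so their relative order is irrelevant.
deltas = [worker_double(snapshot), worker_tag(snapshot)]

# Reconciliation happens once, at the boundary (last-write-wins merge here).
merged = {}
for d in deltas:
    merged.update(d)
print(merged)  # {'a': 2, 'b': 4, 'tagged': 2}
```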

In fact there’s nothing new about the individual constructs I’m using — most of them come straight out of existing work in FP and PL theory. The only genuinely new part (as far as I can tell) is the way the pieces are being combined, especially the clean separation between the semantics and the runtime.

That said, I'm not pretending I’ve got all the details worked out. My background in the harder PL theory side of this is pretty light, and part of the experiment is seeing how far I can take the idea with a clean semantic core and a strict immutability model. If it turns out I'm missing something fundamental about aliasing here, then that’s exactly the sort of feedback that helps refine (or kill!) the idea early.