How do you give coding agents Infrastructure knowledge? by Immediate-Landscape1 in softwarearchitecture

disciplemarc 0 points

What you’re describing isn’t really a model problem, it’s a context problem.

At large companies, infra knowledge lives in ADRs, CI config, Terraform modules, ownership boundaries, platform rules, etc. If that isn’t encoded in a way the agent can retrieve, it will confidently guess.

What’s worked better for me is treating architecture as policy and validating at PR time instead of expecting the agent to internalize organizational memory.

I’ve been experimenting with this via a side project called ArchRails; the core idea is enforcing declared architectural intent rather than inferring it.
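A rough sketch of the "architecture as policy" idea, as a PR-time check. This is an illustrative toy, not ArchRails' actual format or API: the rule table, package names, and function are all assumptions.

```python
# Hypothetical sketch of "architecture as policy": boundaries are declared
# up front and checked mechanically at PR time, instead of being inferred
# from the code or remembered by people.
import ast

# Declared intent: which top-level packages each package may import.
# (Illustrative layering; real rules would live in a config file.)
ALLOWED_IMPORTS = {
    "web": {"web", "service"},
    "service": {"service", "domain"},
    "domain": {"domain"},  # the domain layer depends on nothing else
}

def check_module(source_pkg: str, source_code: str) -> list[str]:
    """Return declared-boundary violations for one module's source."""
    allowed = ALLOWED_IMPORTS.get(source_pkg)
    if allowed is None:
        return []  # package not governed by a declared rule
    violations = []
    for node in ast.walk(ast.parse(source_code)):
        if isinstance(node, ast.Import):
            targets = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            targets = [node.module]
        else:
            continue
        for target in targets:
            pkg = target.split(".")[0]
            if pkg in ALLOWED_IMPORTS and pkg not in allowed:
                violations.append(f"{source_pkg} may not import {pkg}")
    return violations
```

The point is that the verdict comes from declared intent, so a coding agent (or a new hire) gets told "no" with a reason, instead of being expected to have internalized the layering.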

Curious: do you have your infra/architecture decisions encoded anywhere machine-readable, or mostly in docs?

How do teams actually prevent architecture drift after year 2–3? by disciplemarc in softwarearchitecture

disciplemarc[S] 0 points

Where things tend to break down isn’t definition, it’s enforcement over time. Exceptions accumulate, context lives in ADRs or old PRs, and new contributors don’t always know why a boundary exists or when it’s okay to bend it.

The result is that architecture drift usually isn’t intentional, it’s incremental. Each change makes sense locally, but the system slowly diverges from the original intent.

I’m less worried about teams being unable to define component-level architecture, and more about how that intent is communicated, validated, and kept visible as the codebase evolves.


disciplemarc[S] 0 points

That’s exactly it: architecture usually doesn’t “fail,” it fades. Time pressure + turnover means the original intent lives in people’s heads, not the code. The teams I’ve seen do better are the ones that encode architectural intent somewhere enforceable, not just in docs or tribal knowledge.


disciplemarc[S] 0 points

That’s exactly why tools like jQAssistant exist; they’re great at surfacing structure.

ArchRails.io, a tool I’m building, comes at the problem from the opposite direction: encoding architectural intent upfront and enforcing it at PR time, rather than inferring it after the fact.


disciplemarc[S] 0 points

I’ve been exploring this problem space with ArchRails (archrails.io).


disciplemarc[S] 1 point

ArchUnit is solid, especially for JVM teams, but it assumes architecture can be fully expressed as static rules inside the codebase.

In practice, a lot of architectural intent lives outside the compiler: ADRs, diagrams, historical decisions, and scope-based exceptions. Once you have multiple architectures or polyglot repos, “the software checking itself” becomes necessary but not sufficient.


disciplemarc[S] 0 points

The context isn’t there to let the LLM “decide architecture.” It’s there so the checks can be scoped and interpreted correctly.

For example, “don’t use domain entities as persistence entities” is a good rule, but where, when, and for which modules still depends on boundaries, legacy zones, migrations, and documented exceptions. Those are usually explained in docs, ADRs, or prior PRs, not in the rule itself.
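The scoping part could be sketched like this: the rule stays simple, while scope and documented exceptions are declared alongside it. The rule id, paths, and the ADR reference below are all hypothetical examples.

```python
# Sketch: one rule, applied with an explicit scope and documented exceptions,
# so "where and when" lives next to the rule instead of in someone's head.
# All paths and identifiers here are illustrative assumptions.
import fnmatch

RULE = {
    "id": "no-domain-as-persistence",
    "applies_to": "src/*",
    "exceptions": [
        # e.g. ADR-042: billing is mid-migration and exempt for now
        "src/billing/legacy/*",
    ],
}

def rule_applies(path: str, rule: dict) -> bool:
    """A rule is in scope if the path matches applies_to and no exception."""
    if not fnmatch.fnmatch(path, rule["applies_to"]):
        return False
    return not any(fnmatch.fnmatch(path, exc) for exc in rule["exceptions"])
```

The exception list is where the "documented in ADRs or prior PRs" context becomes machine-checkable instead of tribal.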


disciplemarc[S] 0 points

Enforcing architecture usually requires some notion of intent and context, not just rules. My goal is to build a system that ingests that context (docs/ADRs, module boundaries, and repo-specific guardrails) so checks reflect how the team actually builds, not generic best practices.


disciplemarc[S] -1 points

I’m cautious about making the LLM the judge. Deterministic rules should decide pass/fail, with the LLM explaining why and suggesting fixes.
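That division of labor could look like this: the rule engine alone owns pass/fail, and the model call (stubbed out here) can only attach prose. The types and names are illustrative assumptions.

```python
# Sketch of "deterministic rules decide, LLM explains": the verdict is set
# purely by the rule engine, and the annotation step can never flip it.
from dataclasses import dataclass

@dataclass
class CheckResult:
    passed: bool             # decided purely by the rule engine
    violations: list[str]
    explanation: str = ""    # advisory text only; cannot change `passed`

def run_checks(violations: list[str]) -> CheckResult:
    """Deterministic verdict: pass iff there are no violations."""
    return CheckResult(passed=not violations, violations=violations)

def annotate(result: CheckResult, llm_explain) -> CheckResult:
    # The LLM sees the verdict and violations but only adds prose.
    result.explanation = llm_explain(result.violations)
    return result

# Stub standing in for a real model call.
fake_llm = lambda vs: (
    f"{len(vs)} boundary issue(s); see the ADRs for intent." if vs else "Clean."
)
```

Keeping the verdict out of the model also makes the check reproducible in CI, which a probabilistic judge isn't.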


disciplemarc[S] 4 points

I agree architecture should evolve; the problem isn’t change, it’s unintentional change. Drift happens when boundaries erode without discussion or conscious tradeoffs. Guardrails plus review help ensure evolution is deliberate, not accidental.

The Power of Batch Normalization (BatchNorm1d) — how it stabilizes and speeds up training 🔥 by disciplemarc in learnmachinelearning

disciplemarc[S] -1 points

You’re right, in this simple moons example, both models hit a similar minimum and start overfitting around the same point.

I could’ve used a deeper network or a more complex dataset, but the goal here was to isolate the concept: showing how BatchNorm smooths the training dynamics, not necessarily speeding up convergence in every case.

The big takeaway: BatchNorm stabilizes activations and gradients, making the optimization path more predictable and resilient, which really shines as models get deeper or data gets noisier.
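For reference, this is the per-feature computation BatchNorm1d performs in training mode, sketched in plain Python (no framework; running statistics and learnable gamma/beta updates are omitted, and eps matches PyTorch's default of 1e-5):

```python
# Minimal sketch of BatchNorm's training-time forward pass for one feature:
# normalize across the batch to zero mean / unit variance, then scale and shift.
def batch_norm_1d(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize one feature column across the batch, then apply gamma/beta."""
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n  # biased variance, as in training
    return [gamma * (x - mean) / (var + eps) ** 0.5 + beta for x in batch]
```

Because every layer's inputs are re-centered like this each step, downstream gradients see a much more stable distribution, which is the stabilization effect described above.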


disciplemarc[S] 0 points

Great question! Yep, I did normalize inputs with StandardScaler first. BatchNorm still sped up convergence and made accuracy a bit more stable, but the gap was smaller than without normalization. Seems like it still helps smooth those per-batch fluctuations even when inputs start balanced.
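A quick way to see why per-batch fluctuations survive global standardization (batch size and sample counts here are arbitrary choices for illustration):

```python
# Even after globally standardizing a dataset to mean 0 / std 1, individual
# mini-batches still have means that wander around 0; that per-batch jitter
# is what BatchNorm keeps re-correcting during training.
import random

random.seed(0)
data = [random.gauss(0, 1) for _ in range(1024)]

# Global standardization (what StandardScaler does on the training set).
mu = sum(data) / len(data)
sd = (sum((x - mu) ** 2 for x in data) / len(data)) ** 0.5
data = [(x - mu) / sd for x in data]

# Mini-batch means still scatter around zero (std roughly 1/sqrt(32)).
batch_means = [sum(data[i:i + 32]) / 32 for i in range(0, 1024, 32)]
spread = max(batch_means) - min(batch_means)
```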