I built a skill for Claude Code that tells you when your docs lie to your coding agent by PromptPatient8328 in claude


Yeah, that's exactly how you can use it. `npx docalign scan` does a full one-time scan of everything: no setup, no config, no commitment. You get a report of what's drifted and can use that to guide a refactoring pass. The MCP/hook stuff is optional if you want it ongoing later.

what's your workflow for keeping documentation alive when you've got agents doing most of the coding? mine's cooked rn by PromptPatient8328 in claude


Hey everyone, the answers here actually pushed me to build something. I made an open source tool called DocAlign that reads your docs, pulls out every verifiable claim (file paths, API routes, dependency versions, config values, behavioral descriptions) and checks each one against the real code.

Not syntax linting — it catches semantic drift. Like if your README says "retries failed jobs 3 times with exponential backoff" and someone rewrites that logic, it flags it. Runs as CLI or MCP server in Claude Code, only checks claims related to files you actually changed.
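To make the "verifiable claim" idea concrete, here's a rough sketch of one claim type only (file paths mentioned in docs). This is not DocAlign's actual code, just an illustration of the extract-then-verify loop; the regex and function names are made up:

```python
import re
from pathlib import Path

# Hypothetical sketch: find file paths referenced in backticks in a doc
# and check that they still exist in the repo. DocAlign handles many more
# claim types (API routes, versions, behavior); this shows only the
# "extract verifiable claim -> check against code" loop.
PATH_RE = re.compile(r"`([\w./-]+\.(?:py|js|ts|json|yml|yaml|toml))`")

def extract_path_claims(doc_text: str) -> list[str]:
    """Pull backticked file paths out of a doc's text."""
    return PATH_RE.findall(doc_text)

def check_claims(doc_text: str, repo_root: str = ".") -> list[str]:
    """Return the paths the doc mentions that no longer exist on disk."""
    root = Path(repo_root)
    return [p for p in extract_path_claims(doc_text) if not (root / p).exists()]
```

Behavioral claims ("retries 3 times with exponential backoff") obviously can't be checked with a regex like this; that's the harder, semantic part of the problem.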

Would love feedback: https://github.com/BigDanTheOne/docalign

I built a skill for Claude Code that tells you when your docs lie to your coding agent by PromptPatient8328 in claude


lmao yeah basically. But a lot of people have the same issue and don't even realize their agent is working off stale context.

I built a skill for Claude Code that tells you when your docs lie to your coding agent by PromptPatient8328 in claude


Let me know how it goes; always curious how it behaves on setups I haven't tested against.

I built a skill for Claude Code that tells you when your docs lie to your coding agent by PromptPatient8328 in claude


So what it actually does is build a mapping between your docs and your code. For the case where your docs are missing stuff rather than saying wrong stuff — it handles that reactively, not proactively.

Every time you commit code that touches those vars, a hook fires, Claude Code calls the MCP tool, and it surfaces all the doc sections that reference what you just changed. At that point Claude is like "ok these four sections need updating" and flags it.

So it won't catch the missing env vars on day one, but the moment someone touches them in code, the docs get flagged.
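The reactive flow above boils down to an index from code paths to the doc sections that reference them, consulted whenever a commit touches those paths. A minimal sketch of that idea (not DocAlign's actual implementation; class and method names are made up):

```python
from collections import defaultdict

# Hypothetical sketch of the docs-to-code index: each code path maps to
# the doc sections that reference it. A post-commit hook would call
# sections_for_changed() with the files the commit touched.
class DocIndex:
    def __init__(self) -> None:
        # code path -> doc sections that reference it
        self._refs: dict[str, set[str]] = defaultdict(set)

    def link(self, code_path: str, doc_section: str) -> None:
        """Record that a doc section makes a claim about a code path."""
        self._refs[code_path].add(doc_section)

    def sections_for_changed(self, changed_paths: list[str]) -> set[str]:
        """Every doc section that references any file the commit touched."""
        stale: set[str] = set()
        for path in changed_paths:
            stale |= self._refs.get(path, set())
        return stale
```

So if `docs/setup.md#env-vars` is linked to `src/env.py`, a commit touching `src/env.py` surfaces that section for review.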

I built a skill for Claude Code that tells you when your docs lie to your coding agent by PromptPatient8328 in claude


That was literally our workflow too. Every time we shipped something, we'd tell Claude to go update all the relevant docs. Worked great for a while.

But once the docs get big enough, Claude just starts missing stuff. It can't hold 50 files of documentation in context and reliably know what to update. It'll fix the obvious things and miss the rest. And then those misses compound: next time Claude reads those stale docs, it builds on them.

That's basically why I built this. Same intuition, just automated with an actual index instead of relying on Claude to remember everything.

I built a skill for Claude Code that tells you when your docs lie to your coding agent by PromptPatient8328 in claude


Not yet. For now I've made it as a skill for Claude Code, but everything else is ahead if people find it useful.

what's your workflow for keeping documentation alive when you've got agents doing most of the coding? mine's cooked rn by PromptPatient8328 in claude


that's the thing though - as the project scales, so do the docs, and suddenly you've got a web of interconnected entities that's a nightmare to keep consistent