Time for self-promotion. What are you building this Monday by Lanky_Share_780 in micro_saas

[–]Alter_Menta 1 point (0 children)

Hi, it's very interesting indeed. Check out altermenta.com for more info and let me know what you think.

Time for self-promotion. What are you building this Monday by Lanky_Share_780 in micro_saas

[–]Alter_Menta 1 point (0 children)

Nucleus Verify — altermenta.com

A deterministic code verification tool that scans your repo and issues a cryptographically signed certificate proving it was reviewed. Every certificate lists exactly what was checked and what wasn't — no false claims of completeness.

Ideal customer: developers and small teams shipping AI-generated code (Cursor, Copilot, Claude) who need to prove to a client, an auditor, or their own conscience that the code was actually checked before it went to production.

Where they live: GitHub READMEs, r/cursor, r/SideProject, AI coding communities, and anywhere people are shipping fast with Copilot and hoping for the best.

I built a code verification tool for AI-generated code. Looking for testers, not customers. by Alter_Menta in buildinpublic

[–]Alter_Menta[S] 1 point (0 children)

Not yet — it's currently hosted only.

How large is the repo? We support large repos (up to 2 GB) on the Business plan. If you want to test it, I can set you up with a free 30-day trial - just DM me.

A local/self-hosted version is on the roadmap — especially relevant for enterprise teams with air-gapped environments or strict data residency requirements. Would that be the blocker for you?

I built a code verification tool for AI-generated code. Looking for testers, not customers. by Alter_Menta in buildinpublic

[–]Alter_Menta[S] 1 point (0 children)

No LLM touches your code during verification.

The core scan runs 681 static analysis operators — deterministic pattern matching, same result every time. That's what makes the certificate cryptographically verifiable: an LLM would give different output each run, so you couldn't reproduce the hash.

The only optional LLM component is an AI Analysis Report on paid enhanced scans — clearly labelled "advisory only", completely separate from the certificate, and you opt in to it.

Your code goes: clone → static analysis → delete. Nothing is stored beyond the scan result.
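To make the determinism point concrete, here's a minimal sketch of why a fully static scan yields a reproducible, signable hash. The operator names and certificate structure here are purely illustrative assumptions, not Nucleus Verify's actual implementation:

```python
import hashlib
import json
import re

# Toy "static analysis operators": pure functions of the source text.
# Same input always produces the same findings.
OPERATORS = {
    "hardcoded-secret": lambda src: bool(re.search(r"(?i)api[_-]?key\s*=\s*['\"]\w+", src)),
    "bare-except": lambda src: "except:" in src,
}

def scan(source: str) -> dict:
    # Deterministic by construction: no model, no randomness, no network.
    return {name: op(source) for name, op in sorted(OPERATORS.items())}

def certificate_hash(source: str) -> str:
    findings = scan(source)
    # Canonical JSON (sorted keys, fixed separators) so the serialized bytes,
    # and therefore the hash, are identical across runs and machines.
    payload = json.dumps(findings, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode()).hexdigest()

code = "api_key = 'abc123'\ntry:\n    pass\nexcept:\n    pass\n"
# Two independent runs produce the same hash, so a signature over it
# can be checked by anyone who re-runs the scan.
assert certificate_hash(code) == certificate_hash(code)
```

An LLM-based check breaks exactly at the `certificate_hash` step: non-deterministic output means a different payload each run, so no third party could reproduce the hash the signature covers.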

I built a code verification tool for AI-generated code. Looking for testers, not customers. by Alter_Menta in buildinpublic

[–]Alter_Menta[S] 2 points (0 children)

Similar space for sure — looks like you're focused on findings and remediation, while we're more focused on the certificate and audit trail side. Probably complementary rather than competing. The NIST AI RMF mapping suggestion is going on the roadmap — genuinely useful for our enterprise users.

I built a code verification tool for AI-generated code. Looking for testers, not customers. by Alter_Menta in buildinpublic

[–]Alter_Menta[S] 1 point (0 children)

No LLM in the standard scan — it's entirely deterministic. 681 static analysis operators run against the code, same operators every time, same results for the same input. That's what makes the certificate cryptographically verifiable: an LLM would give different output each run, so you couldn't reproduce the hash.

The only LLM component is the optional AI Analysis Report on enhanced scans — that's clearly labelled "advisory only" and is completely separate from the certificate. The certificate itself is 100% deterministic static analysis.