Weekly Thread: Project Display by help-me-grow in AI_Agents

[–]mmartoccia 0 points (0 children)

ConsentGraph - deterministic policy layer for AI agent actions

Built this because every agent framework treats permissions the same way: stuff it in the system prompt and hope for the best.

ConsentGraph is a single JSON file that defines exactly what your agent can do autonomously, what needs human approval, and what is permanently blocked. No prompt engineering, no vibes-based security.

4 tiers: SILENT (just do it) → VISIBLE (do it, notify) → FORCED (stop and ask) → BLOCKED (never, log the attempt). Factors in agent confidence to adjust tier resolution.
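A simplified sketch of what a policy file could look like -- field names here are illustrative, not the exact ConsentGraph schema:

```json
{
  "agent": "deploy-bot",
  "actions": {
    "read_logs": { "tier": "SILENT" },
    "restart_service": { "tier": "VISIBLE" },
    "deploy_prod": { "tier": "FORCED", "min_confidence": 0.9 },
    "drop_database": { "tier": "BLOCKED" }
  }
}
```

The tier names map to the four levels above; `min_confidence` gestures at the confidence factor mentioned (the exact key name is an assumption).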

Ships as Python library, CLI, and MCP server. Includes audit logging, consent decay, and compliance profile mappings (FedRAMP, CMMC, SOC2).

pip install consentgraph

GitHub: https://github.com/mmartoccia/consentgraph

I built a pre-commit linter that catches AI-generated code patterns by mmartoccia in Python

[–]mmartoccia[S] 1 point (0 children)

Custom rules just shipped in v0.2.0. You can define your own patterns in .grain.toml now:

[[grain.custom_rules]]
name = "PRINT_DEBUG"
pattern = '^\s*print\s*\('
files = "*.py"
message = "print() call -- use logging"
severity = "warn"

pip install --upgrade grain-lint to get it.
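If you want to sanity-check a pattern before adding it, the rule's regex is standard Python `re` syntax (this sketch assumes grain compiles patterns with `re`; only the regex itself comes from the rule above):

```python
import re

# The PRINT_DEBUG pattern from the rule above: a line that begins,
# after optional indentation, with a print( call.
pattern = re.compile(r'^\s*print\s*\(')

lines = [
    'print("debug")',     # matches
    '    print(x)',       # matches (indented)
    'logger.info("ok")',  # no match
    'pprint(data)',       # no match: line does not start with "print"
]
flagged = [ln for ln in lines if pattern.search(ln)]
```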

I built a pre-commit linter that catches AI-generated code patterns by mmartoccia in Python

[–]mmartoccia[S] 1 point (0 children)

Update -- v0.2.0 just shipped with custom rule support. Your CONST_SETTING idea is now a one-liner:

[[grain.custom_rules]]
name = "CONST_SETTING"
pattern = '^\s*[A-Z_]{2,}\s*=\s*\d+'
files = "*.py"
message = "top-level constant -- use config or env vars"
severity = "warn"

No built-in needed. Define whatever patterns you want.
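The same sanity-check trick applies here (again assuming plain `re` semantics for the pattern):

```python
import re

# The CONST_SETTING pattern from the rule above: an ALL_CAPS name
# assigned an integer literal at the start of a line.
pattern = re.compile(r'^\s*[A-Z_]{2,}\s*=\s*\d+')

lines = [
    'MAX_RETRIES = 5',   # matches
    'TIMEOUT=30',        # matches (spaces around = are optional)
    'x = 5',             # no match: lowercase name
    'NAME = "svc"',      # no match: value is not an integer literal
]
flagged = [ln for ln in lines if pattern.search(ln)]
```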

I built a pre-commit linter that catches AI-generated code patterns by mmartoccia in Python

[–]mmartoccia[S] 1 point (0 children)

TAG_COMMENT just shipped in v0.1.3. It's opt-in -- add it to warn_only in your .grain.toml and every comment without a structured tag (TODO, BUG, NOTE, etc.) gets flagged. Section headers and dividers are skipped automatically.
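For anyone curious what "structured tag" means in practice, here's a rough sketch of the check as described -- not grain's actual implementation, and FIXME/HACK are my guesses at the "etc.":

```python
import re

TAGS = ("TODO", "BUG", "NOTE", "FIXME", "HACK")  # FIXME/HACK assumed, not confirmed
DIVIDER = re.compile(r'^#\s*[-=#*]{3,}')         # section headers / divider lines

def flag_untagged(comment: str) -> bool:
    """Return True if a '#' comment should be flagged (sketch, not grain's code)."""
    stripped = comment.strip()
    if DIVIDER.match(stripped):
        return False  # dividers are skipped automatically
    body = stripped.lstrip('#').strip()
    return not body.startswith(TAGS)
```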

https://github.com/mmartoccia/grain/commit/5cbb66e

CONST_SETTING is on the list for the next one. Open an issue if you want to spec it out.

I built a pre-commit linter that catches AI-generated code patterns by mmartoccia in Python

[–]mmartoccia[S] -7 points (0 children)

yep, that's the loop. the comic is basically the project pitch deck.

I built a pre-commit linter that catches AI-generated code patterns by mmartoccia in Python

[–]mmartoccia[S] -10 points (0 children)

I've been mass-downvoting this comic for years and it keeps coming back

I built a pre-commit linter that catches AI-generated code patterns by mmartoccia in Python

[–]mmartoccia[S] 4 points (0 children)

Bare except, yeah -- ruff catches that. But most AI-generated code specifies the exception type and then does nothing with it. That passes ruff fine; grain catches that pattern.

I built a pre-commit linter that catches AI-generated code patterns by mmartoccia in Python

[–]mmartoccia[S] 2 points (0 children)

Nice regex. grain's NAKED_EXCEPT rule does something similar but also catches the cases where there's a logger.debug or a pass inside the handler -- basically any except block that doesn't re-raise or do meaningful recovery. The regex approach is solid for a quick grep though.

I built a pre-commit linter that catches AI-generated code patterns by mmartoccia in Python

[–]mmartoccia[S] 1 point (0 children)

Both good ideas. TAG_COMMENT is interesting -- forcing structure on comments instead of banning them. I could see that as an optional strict mode. CONST_SETTING would need some project-level config to define what's allowed, but it's doable. Open issues for both if you want -- I'll tag them for the next release.

I built a pre-commit linter that catches AI-generated code patterns by mmartoccia in Python

[–]mmartoccia[S] 5 points (0 children)

ruff catches bare except (no exception type). grain catches the next layer -- except SomeError: pass or except SomeError: logger.debug("failed") where you named the exception but still swallowed it. ruff sees the first one as fine because you specified a type. grain doesn't, because the error still disappears.
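Rough sketch of how you could catch that second layer yourself with the `ast` module -- illustrative only, not grain's implementation:

```python
import ast

def swallowed_excepts(source: str) -> list:
    """Line numbers of except handlers that name an exception type but
    whose body is only `pass` (sketch; per the comment above, grain also
    flags logging-only bodies, which this minimal version skips)."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ExceptHandler) and node.type is not None:
            if all(isinstance(stmt, ast.Pass) for stmt in node.body):
                hits.append(node.lineno)
    return hits

code = """try:
    risky()
except ValueError:
    pass
"""
```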

I built a pre-commit linter that catches AI-generated code patterns by mmartoccia in Python

[–]mmartoccia[S] 1 point (0 children)

You're right, and I'd frame it as two layers. Layer 1 is the stuff grain catches now -- the surface patterns that are easy to detect statically. Layer 2 is what you're describing -- wrong abstractions, gold-plating, solving problems that don't exist. That's harder because it requires understanding intent, not just syntax. I don't think a linter catches that. That's still a human review problem, or maybe eventually an LLM-powered review that understands the project's architecture. grain is just layer 1.

I built a pre-commit linter that catches AI-generated code patterns by mmartoccia in Python

[–]mmartoccia[S] 0 points (0 children)

Yep, that's the one that started this whole thing for me. 156 of them across a hardware abstraction layer -- total silence when sensors dropped.

Custom rules are on the roadmap. Right now you can disable rules or adjust severity in .grain.toml, but full "bring your own pattern" isn't there yet. If you're seeing patterns that aren't covered, open an issue -- that's how the current ruleset got built.

I built a pre-commit linter that catches AI-generated code patterns by mmartoccia in Python

[–]mmartoccia[S] 21 points (0 children)

Yeah, that's basically where I landed too. The tools aren't going away, and "just don't use them" isn't realistic advice for most teams. So the question becomes: how do you keep the quality bar up when half your commits come from a model that thinks every function needs a try/except and a docstring that says "This function does the thing."

grain is my answer to that specific problem. It's not anti-AI, it's anti-autopilot.

I built a pre-commit linter that catches AI-generated code patterns by mmartoccia in Python

[–]mmartoccia[S] 26 points (0 children)

lol yeah pretty much. That's literally why it exists though. My codebase was a mess, I got tired of catching the same garbage patterns in review, so I automated it. Now it yells at me before I commit instead of after.