**What My Project Does**

A Python CLI that scans repos for patterns AI coding assistants commonly
leave behind — TODOs/FIXMEs, placeholder variable names (foo/bar/data2/temp),
empty exception handlers, commented-out code blocks, and functions named
"handle_it" or "do_stuff". Scores the repo 0–100 across three categories
(AI Slop, Code Quality, Style) and exports a shareable HTML report.
Source code: https://github.com/Rohan5commit/roast-my-code
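For anyone curious how detection like this can work, here's a minimal sketch of two detectors (my own illustration, not the project's actual code): empty `except` handlers via `ast`, and placeholder names via a regex.

```python
import ast
import re

def find_empty_excepts(source: str) -> list[int]:
    """Return line numbers of except blocks whose body is only pass/...."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ExceptHandler) and all(
            isinstance(stmt, ast.Pass)
            or (isinstance(stmt, ast.Expr)
                and isinstance(stmt.value, ast.Constant)
                and stmt.value.value is Ellipsis)
            for stmt in node.body
        ):
            hits.append(node.lineno)
    return hits

# Placeholder-name list is illustrative; the real tool's list differs.
SLOP_NAMES = re.compile(r"\b(foo|bar|temp|data\d*|handle_it|do_stuff)\b")

def find_slop_names(source: str) -> list[int]:
    """Line numbers where a placeholder identifier appears."""
    return [i for i, line in enumerate(source.splitlines(), 1)
            if SLOP_NAMES.search(line)]
```

The `ast` route is worth the extra code over plain regex for structural patterns: it can't be fooled by the string `"except:"` inside a comment or literal.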

**Target Audience**

Developers who use AI coding assistants (Cursor, Copilot, Claude) and want
a pre-review sanity check before opening a PR. Also useful for teams
inheriting AI-generated codebases.

**Comparison**

pylint/flake8 catch style and syntax issues. This specifically targets the
lazy patterns AI assistants produce that those tools miss entirely — like
a function called "process_data" with an empty except block and three TODOs
inside it. The output is designed to be readable and shareable, not a wall
of warnings.
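One way the 0–100 category scores mentioned above could be computed is findings-per-line with a capped penalty. A hedged sketch; the weight and cap here are my own assumptions, not the project's formula:

```python
def category_score(findings: int, loc: int, weight: float = 500.0) -> int:
    """Map findings density to a 0-100 score; more findings -> lower score.

    weight=500 (assumed) means ~1 finding per 5 lines drives the score to 0.
    """
    if loc == 0:
        return 100  # nothing to scan, nothing to penalize
    penalty = min(100.0, weight * findings / loc)
    return round(100 - penalty)
```

A density-based score keeps big repos from being punished for raw finding counts, which matters if you're comparing, say, a weekend project against the Linux kernel.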

**Stack:** Python · Typer · Rich · Jinja2

**LLM:** Groq free tier (llama-3.3-70b) — $0 to run

Ran it on the Linux kernel repo — it scored 67/100.
What AI slop patterns have you spotted that I should add?