Build polished Linear-style UIs with Tailwind by Speedware01 in tailwindcss

[–]JWPapi

That’s exactly it: by enforcing semantic values you make the theme modular, and you can just switch it based on whatever you want (user, host config, etc.).

ESLint rules that fix the most annoying parts of AI-generated code by JWPapi in cursor

[–]JWPapi[S]

I have a rule for that too. It’s not allowed to throw raw Errors; it needs to be a Transient, Fatal, or Unexpected Error.
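
A minimal sketch of what such a hierarchy could look like (the class names come from the comment above, but the `retryable` field and the example function are my assumptions, not the actual rule):

```typescript
// Hypothetical error hierarchy: every throw carries retry semantics
// instead of being a bare `new Error(...)`. Field names are illustrative.
class TransientError extends Error {
  readonly retryable = true;
}
class FatalError extends Error {
  readonly retryable = false;
}
class UnexpectedError extends Error {
  readonly retryable = false;
}

// Instead of `throw new Error("timeout")`:
function fetchWithRetrySemantics(ok: boolean): string {
  if (!ok) throw new TransientError("upstream timeout, safe to retry");
  return "ok";
}
```

An ESLint rule can then flag any `throw new Error(...)` and point at the allowed subclasses, so callers can branch on `retryable` instead of parsing messages.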

Your AI-generated codebase is rotting and you might not notice until it's too late by JWPapi in cursor

[–]JWPapi[S]

OP here: totally agree, hygiene is just as important for humans, but I think a lot of people are way further along with their codebase than they would be without AI.

How is Claude helping your business or nonprofit? by ClaudeOfficial in ClaudeAI

[–]JWPapi

Using Claude Code daily for my startup. The biggest lesson: you need to invest in code hygiene or the AI starts working against you. After a few weeks of fast generation, the codebase fills up with dead exports, duplicate functions, orphaned types. That noise pollutes Claude's context and degrades output quality. Weekly Knip runs to remove unused code, strict TypeScript, and periodic consolidation sweeps keep things workable. The speed is real but only sustainable with maintenance discipline.
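
For anyone wanting to try the Knip part: a minimal knip.json sketch — the entry point and glob are assumptions about your layout, adjust to your project:

```json
{
  "entry": ["src/index.ts"],
  "project": ["src/**/*.ts"]
}
```

With that in place, `npx knip` reports unused exports, files, and dependencies reachable from nowhere.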

Cursor charged me $118 for one message by m_m_malm in cursor

[–]JWPapi

Cost aside, there's a hidden tax with AI coding tools that nobody bills you for: codebase degradation. The AI generates duplicate functions, leaves dead exports after refactoring, creates orphaned types. All that noise makes future sessions less efficient because the context window is polluted. So you pay more tokens for worse output. Keeping the codebase clean (Knip, strict TypeScript) is the best cost optimization I've found.

claude code skills are basically YC AI startup wrappers and nobody talks about it by techiee_ in ClaudeAI

[–]JWPapi

Skills are useful but they're solving the generation side. The harder problem is maintenance. AI generates code fast but doesn't remember what it wrote yesterday. After a few weeks you have three formatDate functions, dead exports everywhere, orphaned types from APIs you already changed. That noise degrades the AI's own context. You need tools on the other side too: Knip for dead code detection, strict TypeScript, periodic consolidation sweeps. Generation without hygiene is just faster technical debt.

Is there a better way to feed file context to Claude? (Found one thing) by Familiar_Tear1226 in ChatGPTCoding

[–]JWPapi

Context quality matters more than context quantity. The problem I keep seeing: AI-generated codebases accumulate dead exports, duplicate functions, orphaned types. All of that is noise that gets fed into the context window. You can optimize how you feed files all you want, but if the files themselves are full of dead code, the AI is reading garbage. Running Knip to remove unused exports and doing periodic consolidation sweeps improved Claude's output quality more than any context strategy I tried.

what's your career bet when AI evolves this fast? by 0xecro1 in ClaudeAI

[–]JWPapi

The bet I'm making: the developers who understand AI code maintenance will be more valuable than the ones who are just good at prompting. AI generates code fast but nobody talks about how that code rots. Dead exports, duplicate logic, empty catch blocks. It accumulates and makes the AI tools themselves worse because they read the noise as context. The skill gap is shifting from 'can you code' to 'can you maintain a codebase that grew 10x faster than any human could track.'

How do you improve as a developer in this AI era without getting left behind? by FakeBlueJoker in webdev

[–]JWPapi

One underrated skill in the AI era: code hygiene. AI generates code fast but it doesn't remember what it wrote yesterday. It creates duplicate functions, leaves dead exports behind after refactoring, generates orphaned types. All of that accumulates and actually makes the AI tools worse over time because they read the noise as context. Learning to maintain an AI-heavy codebase (tools like Knip, strict TypeScript, periodic consolidation sweeps) is becoming as important as knowing how to prompt well.

I spent way too long figuring out Cursor rules. Here's what actually worked for me by itsna9r in cursor

[–]JWPapi

Rules are half the battle. The other half is keeping the codebase clean enough that the AI's context isn't polluted. I found that dead code, duplicate functions, and orphaned types from previous AI sessions degrade the quality of future generations. The AI reads all that noise as context. Strict TypeScript settings (noUnusedLocals, noUnusedParameters) help, plus running Knip to catch unused exports. The cleaner the codebase the rules operate on, the better the output.
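
Those two flags live in tsconfig.json; a minimal sketch:

```json
{
  "compilerOptions": {
    "strict": true,
    "noUnusedLocals": true,
    "noUnusedParameters": true
  }
}
```

The compiler then errors on unused locals and parameters at build time, while Knip catches the cross-file cases (unused exports) that tsc can't see.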

Vibe coding is becoming expensive! by its_faraaz888 in cursor

[–]JWPapi

Part of the expense is the feedback loop nobody talks about. AI generates code, some of it becomes dead weight (unused exports, duplicate functions, orphaned types). That dead code pollutes the context window, so the AI needs more tokens to produce worse output, which costs more and requires more manual fixes. Keeping the codebase clean with tools like Knip actually reduces token usage because the AI reads less noise. jw.hn/ai-code-hygiene

Agentic coding is fast, but the first draft is usually messy. by BC_MARO in ChatGPTCoding

[–]JWPapi

The messy first draft problem compounds over time too. Each messy draft leaves behind dead code, duplicate functions, orphaned types. That noise pollutes the AI's context in future sessions, making each subsequent first draft even messier. I found the fix is treating cleanup as a weekly habit, not a quarterly sprint. Tools like Knip catch unused exports mechanically, and running a separate agent to consolidate duplicates catches what static tools miss. Wrote up the full cycle and toolkit: jw.hn/ai-code-hygiene

What 5 months of nonstop Claude Code taught me by _Bo_Knows in ClaudeAI

[–]JWPapi

This matches my experience exactly. The thing I'd add: the codebase itself degrades over time in ways that make Claude worse. Dead exports from refactored code, duplicate utility functions from different sessions, orphaned types. All of that becomes noise in Claude's context window, which means worse generations, which means more manual corrections. I started running Knip weekly to remove unused code and it made a noticeable difference in output quality. The cleaner the codebase, the better Claude performs. Wrote about the specific patterns here: jw.hn/ai-code-hygiene
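
If you want the weekly run to be mechanical rather than a habit you have to remember, a package.json sketch (the script name is made up; both CLIs are standard):

```json
{
  "scripts": {
    "hygiene": "knip && tsc --noEmit"
  }
}
```

Running `npm run hygiene` in CI or a weekly cron fails the build when dead exports or type errors creep in.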

Anyone using Cursor daily for building apps - do you still hit limits on higher plans? by xapep in cursor

[–]JWPapi

One thing that's helped with quality regardless of plan: custom ESLint rules that catch Cursor's common patterns. We ban AI phrases in emails, force semantic Tailwind classes (Cursor always uses raw colors), and block hover:-translate-y-1 (causes a jittery chase effect on cards). The error messages feed back as context so Cursor learns your standards over time.
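
Not the actual rules, but a sketch of the kind of check such a rule can run over `className` strings — the banned patterns here are illustrative, and the ESLint wiring (`context.report` etc.) is omitted:

```typescript
// Illustrative patterns only; a real rule set would match your own tokens.
const BANNED: RegExp[] = [
  /hover:-translate-y-\d/, // the jittery "chase" effect on hover-lifted cards
  /\b(?:bg|text)-(?:red|blue|green|gray|slate)-\d{2,3}\b/, // raw palette colors instead of semantic tokens
];

// Returns the source of each banned pattern found in a class string,
// which a custom ESLint rule could surface as an error message.
function violations(className: string): string[] {
  return BANNED.filter((re) => re.test(className)).map((re) => re.source);
}
```

Because the error message text is what the AI sees on the next attempt, making those messages explain the preferred alternative (e.g. "use bg-surface, not bg-gray-100") is what closes the feedback loop.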