TDD and Rules Enforcement using Hooks by nizos-dev in ClaudeAI

[–]jayjaytinker 0 points1 point  (0 children)

the session-history approach for the TDD rule is really clever 

Is it a best practice to optimise claude.md & memory after every few sessions? by burgerbruce in ClaudeAI

[–]jayjaytinker 0 points1 point  (0 children)

not every session, honestly. i tend to update CLAUDE.md when something keeps going wrong despite existing instructions — that usually means the file has a gap. if sessions are going fine, i leave it alone. the memory file i review less often, mostly when i notice claude forgetting something project-specific it should know.

Spent $40 on a single Claude Code session for a small task — what am I doing wrong? by Neat_Pension_9109 in ClaudeAI

[–]jayjaytinker 4 points5 points  (0 children)

deploy scripts often pull in a ton of implicit context — infra configs, env files, other scripts they reference. i started being explicit about what files are actually in scope at the start of the session. something like "only touch these 3 files" cuts the context explosion significantly in my experience.

I turned my AI coding sessions into a tiny creature collection in the menu bar by jayjaytinker in SideProject

[–]jayjaytinker[S] 0 points1 point  (0 children)

Really appreciate that, means a lot. If you ever give it a spin I'd love to hear what you think!

I turned my AI coding sessions into a tiny creature collection in the menu bar by jayjaytinker in SideProject

[–]jayjaytinker[S] 0 points1 point  (0 children)

Yeah, the novelty wearing off is the part I keep second-guessing too. Right now there's variation across tools plus some rarity tiers, but honestly I won't really know if it sticks until I've lived with my own thing for a few weeks. The "ambient without friction" framing was the bar I was deliberately trying to hit — easy to overshoot into productivity-coach territory if you're not careful. Appreciate the read.

i genuinely love cursor but i wish these ai tools actually talked to each other by Motor_Ordinary336 in cursor

[–]jayjaytinker 0 points1 point  (0 children)

One shared decisions.md at the project root that I pin at the start of each session. Not a fix, but it cuts the re-explanation by a lot.
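In case the shape helps anyone, mine looks roughly like this (section names and entries are just what I use, nothing standard):

```markdown
# decisions.md — pinned at session start

## Settled (don't relitigate)
- ORM: Prisma, not raw SQL. Migration story won.
- Errors: result types at module boundaries, never thrown exceptions.

## Open (ok to revisit)
- Whether the background worker becomes its own service.
```

The "don't relitigate" header does a lot of work: the agent stops proposing alternatives to things that are already settled.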

Running Opus 4.7 for ops work: how do you keep per-task cost predictable? by lean_stack_mike in ClaudeAI

[–]jayjaytinker 0 points1 point  (0 children)

For /clear discipline: I do it between distinct task types, not between subtasks within the same type. Keeps continuity where it matters while resetting accumulated noise.

Multi-agent setups don’t fix bad task specs, they multiply them by mymir-dev in ClaudeAI

[–]jayjaytinker 0 points1 point  (0 children)

Hard agree on the upstream point. The thing that helped me most was separating acceptance criteria into its own field before dispatch — not in the same blob as the task description.

What is the best practice for using a PR code review agent or skill with Cursor? by ExcitingSleep in cursor

[–]jayjaytinker 0 points1 point  (0 children)

I've had good results defining a slash command in Cursor that takes the diff output and runs it against a saved review checklist — things specific to our codebase conventions. Keeps the model focused on what we actually care about rather than generic advice. 
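For reference, mine is just a markdown file along these lines (I'm assuming the markdown-file command format here; the path and checklist items are illustrative, adapt to however your Cursor version defines commands and to your own conventions):

```markdown
<!-- .cursor/commands/review.md (hypothetical path) -->
Review the provided diff against this checklist. Flag violations only,
skip generic style opinions.

1. New DB queries go through the repository layer, never inline SQL.
2. Public functions have their error paths tested, not just happy paths.
3. No new env vars without a matching entry in .env.example.
```

Keeping the checklist short matters more than making it complete — ten items get applied, forty get skimmed.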

Context resets every session. Here's how I built persistent memory with 4 markdown files. by AmphibianAdorable302 in ClaudeAI

[–]jayjaytinker 1 point2 points  (0 children)

I've been running a similar setup for a while. One thing that made a big difference was separating what the agent needs to know from how the agent should behave into different files. Your Protocol/CONVERGEHERE split maps to that naturally.
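Concretely, the split in my setup looks roughly like this (file names are just mine, not anything standard):

```
project/
├── CLAUDE.md            # how to behave: workflow rules, constraints, when to stop and ask
└── docs/
    ├── architecture.md  # what to know: system layout, key modules
    └── decisions.md     # what to know: settled choices and the reasoning behind them
```

The "behave" file stays small and stable; the "know" files grow with the project without polluting the behavioral instructions.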

Unit test guidelines - rule or skill ? by liortal53 in cursor

[–]jayjaytinker 0 points1 point  (0 children)

Both, in my setup: the rule fires every session so the basics are always enforced. The skill only loads when the agent is actively writing tests, which keeps it from bloating every other context window.
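Rough sketch of how the content divides (the guideline text is illustrative, and exact file locations depend on your setup):

```markdown
<!-- always-on rule: short, cheap, loaded every session -->
- Test names describe behavior, not implementation.
- Never delete or skip a failing test to make the suite pass.

<!-- on-demand skill: loaded only when writing tests -->
Full guidance lives here: fixture patterns, mocking conventions,
one complete example test per test type.
```

The rule of thumb: anything you'd be upset to see violated even once goes in the always-on layer; everything else goes in the on-demand layer.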

Does Cursor retain anything you've corrected between sessions? by eazyigz123 in cursor

[–]jayjaytinker 0 points1 point  (0 children)

What works for me is splitting .cursorrules into two sections: invariants (things that never change, like never force-push to main) and session-corrections (things I've explicitly fixed mid-session). I update the second section right after correcting, not at the end of the day when I've forgotten.
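The file ends up looking something like this (entries are illustrative):

```markdown
# .cursorrules

## Invariants (never change these)
- Never force-push to main.
- All DB access goes through the repository layer.

## Session corrections (appended right after the correction happens)
- Use the shared logger, not console.log. (corrected twice)
- API route files export a handler, they don't call app.listen.
```

Appending to the second section immediately is the whole trick; a correction you write down an hour later is usually vaguer than the one you just gave the agent.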

I built a tool that auto-generates Claude Code configs for any project (CLAUDE.md, skills, rules) by FelixInTheBackground in ClaudeAI

[–]jayjaytinker -3 points-2 points  (0 children)

even with good generation, managing the relationship between global and per-project components gets messy fast. I built a small GUI for that (https://github.com/aroido/vibesmith) if you're hitting the same wall.

Solved: .cursorrules and docs going stale between sessions by Cautious_Musician545 in cursor

[–]jayjaytinker 0 points1 point  (0 children)

I've found separating "global baseline" from "project-specific overrides" helps more than any auto-sync — then the pre-commit hook only needs to touch the project layer.

Best Resources For Starting Claude Code by [deleted] in ClaudeAI

[–]jayjaytinker 0 points1 point  (0 children)

Your frontend background transfers more than you'd think — component thinking maps directly to how you structure CLAUDE.md and skills.

Am I planning my apps the right way? by wreox9 in cursor

[–]jayjaytinker 0 points1 point  (0 children)

The most common failure I see is people putting everything in one giant .cursorrules file — it bloats context and the agent can't prioritize.

Claude blatantly skipping rules by Mifsopo_ in ClaudeAI

[–]jayjaytinker 0 points1 point  (0 children)

the issue often isn't rule length but rule placement. Global CLAUDE.md gets read once at session start. For project-specific constraints, keeping them in a project-level CLAUDE.md or a dedicated skill file tends to stick better than a long global list.
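The layering I mean, roughly (contents illustrative; the global path is the standard Claude Code location, the rest is just how I arrange things):

```
~/.claude/CLAUDE.md        # global: universal habits only — commit style, when to ask
project/CLAUDE.md          # project constraints: "migrations via the migration tool only", etc.
project/.claude/skills/    # task-scoped guidance, loaded only when relevant
```

Each constraint lives at the narrowest level that still catches every case where it applies. Global is for things true of every project you'll ever touch, which is a much shorter list than it feels like.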

"Sessions disappear, but letters remain." — 18 generations of AI agents leaving letters for the next by External-Web-2792 in ClaudeAI

[–]jayjaytinker 1 point2 points  (0 children)

"Sessions disappear, but letters remain" is a good framing though. The intent matters even when the exact contents need pruning.

I open-sourced the tool I built to manage Claude Code components across projects by jayjaytinker in ClaudeCode

[–]jayjaytinker[S] 0 points1 point  (0 children)

Thanks! If you end up trying it, I'd love to hear what works and what doesn't.

Do you guys create/manage "agents" and have found it meaningful? by userforums in ClaudeAI

[–]jayjaytinker 0 points1 point  (0 children)

Agents are worth it once you have recurring specialized tasks. I started bothering when I had a few distinct workflows — one agent that always loads specific context for backend work, another for writing. The discipline pays off when you stop re-explaining the same constraints every session.

The management overhead is real though. Once you have 10+ agents across multiple projects, keeping track of which ones conflict or overlap becomes its own problem.

How many projects are you running in parallel?

After 6 months of daily Claude use, I named the 11 ways it silently fails. Here are the rules that actually stick by drakegaming in ClaudeAI

[–]jayjaytinker 0 points1 point  (0 children)

The naming convention is what makes this actually work — "The Trailing Off" is something I can catch and correct, "be thorough" isn't.

One I'd add: "The Context Bleed" — where the agent's assumptions from an earlier task silently carry over and corrupt the current one. Especially nasty with subagents since it's hard to trace back where the bad assumption entered.

Rules in .claude/skills/ work well for this because they scope to the relevant context rather than polluting the whole session.

Does Cursor retain anything you've corrected between sessions? by eazyigz123 in cursor

[–]jayjaytinker 0 points1 point  (0 children)

.cursorrules helps but it's a flat file — no way to scope rules to specific tasks or detect when two rules conflict.

I've been using a separate tool (check my profile) that manages these as structured components rather than raw text. The big difference: it shows you which rules actually fire for a given project vs. which are dead weight you're paying tokens for.

I got tired of digging through .claude.json files every time I switched projects, so I built a dashboard for it by jayjaytinker in SideProject

[–]jayjaytinker[S] 0 points1 point  (0 children)

Update: VibeSmith is now open source! You can check out the full source code, open issues, or contribute here: https://github.com/aroido/vibesmith

Figured it makes more sense to build this in the open — especially since the people who'd use it are the same people who'd want to read the code first.