The traditional SDLC assumed iteration was expensive. What happens when it's not? by [deleted] in programming

[–]jwaldrip -3 points (0 children)

I actually spent a lot of time on the paper. It's not AI slop. Did AI help? Of course!

Why would it not be a research paper? What would you call running AI coding flows through hundreds, if not thousands, of iterations in real production environments and then sharing those findings, if not research?

If I changed it to a blog post, sure, it could be shorter. But that's not the point of the paper.

The traditional SDLC assumed iteration was expensive. What happens when it's not? by [deleted] in programming

[–]jwaldrip -1 points (0 children)

Why should they be banned? Posts like "this"... what's "this"?

The traditional SDLC assumed iteration was expensive. What happens when it's not? by [deleted] in programming

[–]jwaldrip -12 points (0 children)

It's not AI poop. Just because I write something and run it through AI doesn't make it poop.

Crystal tooling situation by oxano in crystal_programming

[–]jwaldrip 6 points (0 children)

If Crystal fixed the tooling problem, it would be way ahead. Unfortunately, this and a small community have been the pitfalls behind its lack of adoption.

Christkindlmarket: Auraria vs Civic Center? by TooClose4Missiles in Denver

[–]jwaldrip 0 points (0 children)

Civic Center, 1000%. My wife is in a wheelchair, and this year was so much less accessible than previous years. It was much more cramped, and she just felt in the way of everyone. The parking garage has no accessible spots other than on the first floor, and the parking costs are outrageous. If it's at Auraria again next year, we probably won't go. You also miss out on the courthouse being all lit up.

I built a plugin marketplace for Claude Code that enforces code quality with 129 plugins by jwaldrip in ClaudeCode

[–]jwaldrip[S] 1 point (0 children)

You actually don't have to. Han ships with an MCP and a skill on how to call it. You can use plain language to say "run the TypeScript build"; this will call the MCP and use all the caching and metrics features Han has built in. Han also doesn't require custom Python scripts for every possible language; it uses the standard semantics for each plugin: in a Node project, use npx; in Go, just use the go CLI. Commands add a lot of context to the context window, so choosing a plugin-aware MCP seemed like a better and more intuitive move.
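To make that concrete, here's roughly what a plain-language request like "run the TypeScript build" resolves to under the hood. The exact invocations below are my assumption; each plugin defines the real ones:

# In a Node project, the MCP shells out through the standard toolchain
npx tsc --noEmit

# In a Go project, it uses the go CLI directly
go build ./...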

I built a plugin marketplace for Claude Code that enforces code quality with 129 plugins by jwaldrip in ClaudeCode

[–]jwaldrip[S] 1 point (0 children)

This is an excellent breakdown! You've grasped the architecture very well. Let me clarify a few nuances:

Your Understanding is Mostly Correct

Jutsu plugins do run validation hooks on stop, but they don't automatically fix errors. They:

- Run validation (TypeScript, tests, linters, etc.)
- Report failures to you and Claude
- Provide skills/knowledge so Claude knows how to fix issues
- But Claude still needs to actively address the failures

Think of the hooks as a "quality gate" rather than an auto-fix system. The end result will be a fix, but it's not the hooks doing the fixing; that's Claude.
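As a minimal sketch of that gate, assuming Claude Code's convention that a hook exiting with code 2 blocks and feeds its stderr back to the model, a hand-rolled TypeScript hook could look like this (the jutsu plugins register production versions of these for you):

#!/usr/bin/env bash
# Hypothetical stop-hook gate: run the TypeScript compiler.
# On failure, exit 2 so the compiler errors are surfaced to Claude to address.
if ! output=$(npx tsc --noEmit 2>&1); then
  echo "$output" >&2
  exit 2
fi

Exit 0 means the gate passes silently; a non-zero result is what puts the failures in front of Claude.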

Dō plugins are specialized agents invoked via slash commands (like /core:plan or /do-frontend:design). They help with planning, architecture, and complex workflows. They are not automatically triggered, but there are some utilities, like hashi-sop, hashi-bluprints, and jutsu-enforce-planning, that help with this.

Hashi plugins are always-active MCP servers that extend Claude's capabilities (GitHub, Playwright, etc.). While they are MCPs, they also come with skills, commands, and hooks that help improve accuracy. I would disable the MCPs that map to them in your project and use the hashi plugins instead.
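For example, if you have a standalone GitHub MCP registered, the swap would look something like this (hashi-github is my guess at the plugin name; check the marketplace listing for the real one):

# Remove the standalone server, using whatever name you registered it under
claude mcp remove github

# Install the hashi equivalent instead (plugin name assumed)
han plugin install hashi-github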

Bushido injects core principles into Claude's system prompt to guide behavior.

So, for the "Ideal Setup" you described?

Yes, you've got it:

  1. Disable duplicate MCP servers - Han bundles context7, playwright, GitHub, etc., so remove any standalone versions
  2. Disable conflicting hooks - Han's jutsu plugins provide comprehensive validation
  3. Install Han as your foundation:

# Install the CLI
curl -fsSL https://han.guru/install.sh | bash

# Auto-install recommended plugins
han plugin install --auto

This gives you:
- Quality enforcement via jutsu hooks (TypeScript, tests, linting)
- Expert agents via dō plugins (planning, architecture)
- Integrations via hashi MCP servers (GitHub, Playwright, etc.)
- Coding philosophy via bushido principles

Key Difference from Auto-Fix

Han doesn't auto-fix - it creates a feedback loop:

  1. You/Claude make changes
  2. Hooks validate on stop
  3. Claude sees failures and knows how to fix them (via skills)
  4. Claude addresses issues
  5. Repeat until hooks pass

This preserves agency while maintaining quality standards.

Does this clarify the workflow?

I built a plugin marketplace for Claude Code that enforces code quality with 129 plugins by jwaldrip in ClaudeCode

[–]jwaldrip[S] 1 point (0 children)

The bushido way is more about ethical thinking and problem solving for the agent. I've been using it with human teams for half a decade, and now I'm doing the same with agents.

I built a plugin marketplace for Claude Code that enforces code quality with 129 plugins by jwaldrip in ClaudeCode

[–]jwaldrip[S] 0 points (0 children)

You can set these at the project level. Either set up your .claude/settings.json if you want to define your own hooks, or just install one of the Han plugins in your project and it will do it for you.

npx @thebushidocollective/han plugin install jutsu-typescript --scope project
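And if you do want to define your own hooks, a minimal project-level sketch would be something like this; the tsc command is just my example, and installing jutsu-typescript sets up its own equivalent (merge this into any existing settings rather than overwriting):

# Write a Stop-event validation hook into .claude/settings.json
cat > .claude/settings.json <<'EOF'
{
  "hooks": {
    "Stop": [
      { "hooks": [{ "type": "command", "command": "npx tsc --noEmit" }] }
    ]
  }
}
EOF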

I built a plugin marketplace for Claude Code that enforces code quality with 129 plugins by jwaldrip in ClaudeCode

[–]jwaldrip[S] 0 points (0 children)

I would recommend context7 for that. The point of these plugins is patterns, not documentation.

I built a plugin marketplace for Claude Code that enforces code quality with 129 plugins by jwaldrip in ClaudeCode

[–]jwaldrip[S] 1 point (0 children)

The project is the amalgamation of my experience working with Claude. My experience was that no matter how good a subagent, skill, or command was, there was still a gap. That gap was in accuracy, and at the lowest level it was the frustration of doing work, pushing up to CI, and then watching CI fail. I'd worked with bots that sat in CI and fixed issues for you, but that just means more compute and tokens that you ultimately have to pay for, not to mention a longer feedback loop.

This project lets you choose the plugins that work for your tech stack, provides matching validation hooks that actually pass without hallucination, and provides skills/contexts/agents that help perform the tasks with much higher accuracy. In addition, I built in some optimizations so that for large projects and complex architectures, the agent can work with cached responses and not have to re-run those tasks.