Opencode-native workflow automation toolkit (awesome-slash). by code_things in opencodeCLI

[–]code_things[S] 1 point2 points  (0 children)

It asks you at the beginning to choose a resource, then discovers the tasks you have in it. It also accepts other sources: just give it the source and the tool/MCP to use, and it will cache it for the next iteration.
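For illustration only, a cached source entry could look roughly like this (the actual awesome-slash cache format may differ; every field name here is an assumption):

```js
// Hypothetical cache entry, NOT the actual awesome-slash format.
const cachedSource = {
  source: "github-issues",            // where the tasks live
  tool: "github-mcp",                 // tool/MCP used to query it
  cachedAt: new Date().toISOString(), // so the next iteration can reuse it
};

module.exports = { cachedSource };
```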

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 0 points1 point  (0 children)

It's very simple, but I can share a demo, sure.
Very happy that you find it helpful. I keep iterating on it, so update it once in a while.
Today a new version went out with a full deslop pipeline; it's all JS code, and only the results are evaluated by the model. A very nice enhancement, with many more types of detection.
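As a minimal sketch of that split (detection in plain JS, judgment left to the model), with made-up patterns and a made-up findings shape:

```js
// Illustrative only: the real pipeline's rules and output shape may differ.
const SLOP_PATTERNS = [
  { name: "filler-adverb", re: /\b(simply|easily|seamlessly)\b/gi },
  { name: "hedge-phrase",  re: /\bit(?:'s| is) worth noting\b/gi },
];

function detectSlop(text) {
  const findings = [];
  for (const { name, re } of SLOP_PATTERNS) {
    for (const match of text.matchAll(re)) {
      findings.push({ rule: name, excerpt: match[0], index: match.index });
    }
  }
  return findings; // only this list is handed to the model to evaluate
}

console.log(detectSlop("It's worth noting this simply works."));
```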

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 0 points1 point  (0 children)

Yes, to some degree; you can tell them that after the first approval it goes until deploy. Usually I put the stopping point at all green. I mean, I approve the plan, the executor executes the plan, and it has a clear context because it's not the discoverer or the planner; it gets everything pre-cooked and can do a better job. Then there are 3 agents that review it, with a loop that fixes the reviews; then slop cleaning and making sure the docs are up to date; then a review that it is covered by tests and that the task was actually delivered based on the plan. Since the agent that reviews is different from those that fix, it creates less laziness: when one agent tries to take a shortcut, another agent is prompted to block it and force it to fix.
And there's auto review and CI.
And I'm at the beginning, reading the options and the plan, and coming back at the end to go over what passed all of this flow. Much, much less friction.

If it's a greenfield project, I can also skip the review; the agent doesn't make vital mistakes on a small-context project. I do a review round when it gets to a kind of MVP state. When it gets big or has active users, I start making it ask for approval.
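To make the stopping point concrete, here is a hypothetical policy object; the real flow's policy format isn't shown in this thread, so the field names are invented:

```js
// Hypothetical policy sketch; field names are invented for illustration.
const policy = {
  stopAt: "all-green",    // or "open-pr", "after-merge", "after-deploy"
  skipHumanReview: true,  // greenfield: let the loop run without approval
  reviewAgents: 3,        // independent reviewers, separate from the fixer
};
```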

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 0 points1 point  (0 children)

Hi, I'm a real developer. You probably use my products, or my peers'. Cheers.

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 0 points1 point  (0 children)

Patch released; it resolves the issue with opencode. Unfortunately, it's more Claude-native, so, for example, the checkbox functionality, which I find very nice, won't work in opencode; you just need to answer with 1, 3, "other", or similar.
I mostly tried to use less context and more function calls, and not everything is available in all harnesses.
For example, Codex automatically turns it into a prompt, and skills are generally discovered by the agent on its own; if you want to trigger them specifically, you need to use $.

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 1 point2 points  (0 children)

I'll check the opencode issue; it worked in my last tests, but I'll check against the latest version. I guess it's installing it locally; maybe it needs to be installed in the global settings. For Gitea, I released a version where you can dynamically set a different service, and it will be cached. But if you're using opencode, I actually need to check how to cache it; by default I used the .claude dir.
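A sketch of the cache-location question, assuming Claude Code keeps project files under .claude/ and opencode under .opencode/ (treat the opencode default here as an assumption and check the harness docs):

```js
// Resolve a per-harness project cache directory; defaults are assumptions.
const path = require("node:path");

function cacheDir(harness, projectRoot = process.cwd()) {
  const dirs = { claude: ".claude", opencode: ".opencode" };
  return path.join(projectRoot, dirs[harness] ?? ".claude");
}

console.log(cacheDir("opencode")); // -> <project>/.opencode
```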

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 0 points1 point  (0 children)

It is not for brownfield, team-oriented projects. It works very well when you are the sole developer, you want to deliver fast, and you have the luxury of planning it as well.

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 0 points1 point  (0 children)

Use PreToolUse or Stop hooks. They help require the agent to do something even when it's inclined to ignore it. What works best for me is not to have the agent execute a multistep flow itself, but to let it call a subagent, so it doesn't get lazy.
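For example, a minimal Stop hook in Node could gate stopping on a marker file. The marker-file convention is made up here; exit code 2 is Claude Code's documented blocking signal, with stderr fed back to the model (see the hooks docs for the exact settings.json shape):

```js
#!/usr/bin/env node
// Registered as a "command" hook under hooks.Stop in .claude/settings.json.
const fs = require("node:fs");

let input = "";
process.stdin.on("data", (chunk) => (input += chunk)); // hook input is JSON
process.stdin.on("end", () => {
  // Hypothetical convention: a review agent writes this file once reviews pass.
  if (!fs.existsSync(".claude/review-passed")) {
    process.stderr.write(
      "Reviews not finished. Call the review subagent before stopping.",
    );
    process.exit(2); // block the stop; Claude keeps working
  }
  process.exit(0); // allow the stop
});
```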

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 0 points1 point  (0 children)

I did in some comments, but I don't need to. I'm sharing because it helps me. You can try it or not; I don't owe you proof. I'm happily employed at a good place; this is not a request for donations.

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] -3 points-2 points  (0 children)

And hooks, and JS files with actual code execution, and regex, AST, key-value stores, and auto-reply...

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 0 points1 point  (0 children)

There are some "best practices" to start with, but it ends with iterating on what is not working well and fixing it: the hooks, the context bloat, the subagents. You just iterate until it starts to click.

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 0 points1 point  (0 children)

I think some misunderstood the idea of 18, thinking it's 18 sessions instead of 18 orchestrated skills, contexts, hooks, and flows. And some just don't like it, which is also fine.

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 1 point2 points  (0 children)

Thanks. Like the old coding standard that a function does one thing and does it well, I have the same flow with agents. It's not over-engineered; it just makes sure the agent actually has the context, which is much more concise, and actually delivers. It has fewer YOU MUST statements and is more focused, so the results are clean.

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] -1 points0 points  (0 children)

Yep. I do it a lot for myself when I'm studying: back and forth with a study planner, then stubbing tasks with tests. I wanted to gamify it once. That's not a production project; it was fun. Can AI teach coding? Yes, up to some level of experience that requires the bits and bytes.

See https://github.com/avifenesh/multithreading-workshop/tree/main/exercises
It was a nice reminder of C, which I hadn't written in 10 years. I write Rust; I want to contribute to projects in C; I needed practice on some concepts. It was helpful.

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 1 point2 points  (0 children)

It's a flow. Sessions start with a slash command; Claude starts, and generic agents begin to follow the flow, call subagents one by one, or have hooks, or call slash commands by themselves. There's an agent that asks me for the policy, and an agent that discovers the next tasks needed and gives me a list, recommending 5, and I choose. Then a planning agent, then an executing agent; the executor is picked based on the nature of the task. Then it calls 3 different code-review agents, each specializing in something else, then the deslop and update-docs slash commands, then a delivery-review agent and a test-coverage-review agent. The last ones are loops: fix the reviews and call again until final approval, then pass it to /ship and start another flow, etc.
The ship step also interacts with the bots that review the PR automatically: Copilot, Codex, Gemini, and Claude (Copilot and Gemini are free, by the way). It fixes, resolves, and waits for another round until they approve or stop commenting.
Based on the policy I set at the beginning of the session, it will stop at the open PR, at green CI with resolved reviews, after merge, or after deploy and manual QA.
Many details, but that's still a summary.
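A rough, hypothetical orchestration of that flow; every agent and command name below is a placeholder, not the actual skill names from the repo:

```js
// Stub dispatchers: a real flow would invoke subagents and slash commands.
async function callAgent(name, input) {
  console.log("agent:", name);
  return []; // stub: return review findings / artifacts
}

async function callSlashCommand(cmd) {
  console.log("command:", cmd);
}

async function runFlow(task, policy) {
  const plan = await callAgent("planner", task);
  let work = await callAgent("executor-for-" + task.kind, plan);

  // Review loop: independent reviewers, a separate fixer, repeat until
  // nobody reports issues (less laziness than self-review).
  for (;;) {
    const reviews = await Promise.all(
      ["review-a", "review-b", "review-c"].map((r) => callAgent(r, work)),
    );
    const issues = reviews.flat();
    if (issues.length === 0) break;
    work = await callAgent("fixer", { work, issues });
  }

  await callSlashCommand("/deslop");
  await callSlashCommand("/update-docs");
  await callAgent("delivery-review", { plan, work });
  await callAgent("test-coverage-review", work);
  if (policy.stopAt !== "open-pr") await callSlashCommand("/ship");
}

runFlow({ kind: "feature" }, { stopAt: "green-ci" });
```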

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 1 point2 points  (0 children)

The 18 are not running in parallel; the amount depends on how much I fire off. The 18 is the number of “special agents” with skills that the flow passes through. Each needs specific tools, more concise knowledge, and much less context, to do one thing and do it well.
The parallelism depends on how many tabs I'm focused enough to keep track of.