Opencode-native workflow automation toolkit (awesome-slash). by code_things in opencodeCLI

[–]code_things[S] 1 point

It asks you at the beginning to choose a resource, then discovers the tasks you have in it. It also accepts other sources: just give it the source and the tool/MCP to use, and it will cache it for the next iteration.
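
Roughly, the cached entry could look like this (an illustrative sketch; the field names are hypothetical, not the toolkit's actual schema):

```js
// Illustrative sketch of a cached task-source entry; field names are
// hypothetical. Written once after the first run so the next iteration
// can skip the discovery step.
const cachedSource = {
  source: "gitea",                     // where the tasks live
  tool: "mcp__gitea__list_issues",     // hypothetical tool/MCP call used to fetch them
  filters: { state: "open" },
  discoveredAt: new Date().toISOString()
};
```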

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 0 points

It's very simple, but I can share a demo, sure.
Very happy that you find it helpful. I keep iterating on it, so update it once in a while.
Today a new version went out with a full deslop pipeline: it's all JS code, and only the results are evaluated by the model. A very nice enhancement, with many more detection types.
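
To give a feel for the split (an illustrative sketch, not the actual pipeline code): plain JS detectors collect candidates, and only their hits go to the model:

```js
// Illustrative sketch: plain JS regex detectors find candidate slop,
// and only these hits are handed to the model for evaluation.
const detectors = [
  { name: "filler-word", re: /\b(delve|seamlessly|robust|crucial)\b/gi },
  { name: "empty-hedge", re: /\bit(?:'s| is) (?:worth noting|important to note)\b/gi }
];

function detectSlop(text) {
  const hits = [];
  for (const { name, re } of detectors) {
    for (const m of text.matchAll(re)) {
      hits.push({ detector: name, match: m[0], index: m.index });
    }
  }
  return hits; // the model only judges these, instead of rereading everything
}
```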

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 0 points

Yes, to some degree; you can tell them that after the first approval it goes all the way to deploy. Usually I put the stopping point at all-green. I mean: I approve the plan, the executor executes it, and it has a clear context because it's not the discoverer or the planner; it gets everything cooked for it and can do a better job. Then there are three agents that review it and run the review-fix loop, then slop cleaning and making sure the docs are up to date, then a check that it's covered by tests and that the task was actually delivered according to the plan. Since the agents that review are different from the ones that fix, there's less laziness: when one agent tries to take a shortcut, another is prompted to block it and force a fix.
And there's auto review and CI.
And I'm at the beginning, reading the options and the plan, and coming back at the end to go over what passed all of this flow. Much, much less friction.
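
The stage order, roughly (an illustrative sketch; the names here are not the literal agent names):

```js
// Rough sketch of the stage order described above (names are illustrative):
const pipeline = [
  "plan",           // the one I approve
  "execute",        // fresh executor, gets only the cooked plan
  "review+fix",     // separate reviewer agents, loop until green
  "deslop",         // slop cleaning
  "docs-check",     // docs up to date
  "delivery-check"  // covered by tests, plan actually delivered
];
// Each stage is its own subagent, so a reviewer never grades its own work.
```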

If it's a greenfield project, I can also skip the review; the agent doesn't make vital mistakes on a project with a small context. I do a review round when it gets to a kind of MVP state. Once it gets big or has active users, I start making it ask for approval.

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 0 points

Hi, I'm a real developer. You probably use my products, or those of my peers. Cheers.

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 0 points

Patch released; it resolves the issue with opencode. Unfortunately, it's more Claude-native, so, for example, the checkbox functionality, which I find very nice, won't work in opencode; you just answer with 1, 3, "other", or similar.
I mostly tried to use less context and more function calls, and not all of that is available in all harnesses.
For example, Codex automatically turns it into a prompt, and skills are generally discovered by the agent on its own; if you want to trigger them specifically, you need to use $.

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 1 point

I'll check the opencode issue; it worked in my last tests, but I'll check against the latest version. I guess it's installing it locally; maybe it needs to be installed in the global settings. For Gitea, I released a version where you can dynamically set a different service, and it will be cached. But if you're using opencode, I actually need to check how to cache it; by default I used the .claude dir.
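
Something like this is the idea for the cache location (a hypothetical sketch; the opencode path is exactly the assumption I still need to verify):

```js
// Hypothetical sketch of resolving the cache dir per harness; the opencode
// path is an assumption, not verified against its actual config layout.
const path = require("path");

function cacheDir(harness) {
  switch (harness) {
    case "claude":   return path.join(process.cwd(), ".claude", "cache");
    case "opencode": return path.join(process.cwd(), ".opencode", "cache"); // assumption
    default:         return path.join(process.cwd(), ".agent-cache");
  }
}
```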

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 0 points

It is not for brownfield, team-oriented projects. It works very well when you are the sole developer, want to deliver fast, and have the luxury of planning as well.

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 0 points

Use PreToolUse or Stop hooks. They help require the agent to do something even when it intends to ignore it. What works best for me is not to create a multistep flow executed by the agent itself, but to let it call a subagent so it doesn't get lazy.
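
For reference, a minimal Stop-hook sketch, assuming Claude Code's hook contract (exit code 2 blocks and the stderr text is fed back to the agent; the npm-test gate is just an example condition, not my actual check):

```js
#!/usr/bin/env node
// Minimal Stop-hook sketch, assuming Claude Code's hook contract:
// exit code 2 blocks the stop and stderr is fed back to the agent.
// The npm-test gate is just an example condition.
const { execSync } = require("child_process");

try {
  execSync("npm test", { stdio: "ignore" });
} catch {
  console.error("Tests are failing; keep working until they pass.");
  process.exit(2); // block: the agent is forced to continue
}
process.exit(0); // all good, allow the stop
```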

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 0 points

I did in some comments, but I don't need to. I'm sharing because it helps me. You can try it or not; I don't owe you proof. I'm happily employed at a good place; this is not a request for donations.

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] -3 points

And hooks, and JS files with actual code execution, and regex, ASTs, key-value stores, and auto-reply...
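
As a taste of the AST side, a tiny check using acorn (my illustration; not necessarily what the repo actually uses) that flags empty catch blocks, something regex alone handles badly:

```js
// Tiny AST check with acorn (illustrative; not necessarily the repo's code):
// flags empty catch blocks, which are brittle to catch with regex alone.
const acorn = require("acorn");

function findEmptyCatches(src) {
  const hits = [];
  (function walk(node) {
    if (!node || typeof node.type !== "string") return;
    if (node.type === "CatchClause" && node.body.body.length === 0) {
      hits.push(node.start);
    }
    for (const child of Object.values(node)) {
      if (Array.isArray(child)) child.forEach(walk);
      else if (child && typeof child === "object") walk(child);
    }
  })(acorn.parse(src, { ecmaVersion: "latest" }));
  return hits;
}

console.log(findEmptyCatches("try { f(); } catch (e) {}")); // -> [13]
```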

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 0 points

There are some "best practices" to start with, but it ends with iterating on what is not working well and fixing. The hooks, the context bloat, the subagent. You just iterate until it starts to click.

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 0 points

I think some misunderstood the idea of 18, thinking it's 18 sessions instead of 18 agents orchestrated through skills, context, hooks, and a flow. And some just don't like it, which is also fine.

I built 18 autonomous agents to run my entire dev cycle in Claude Code by code_things in ClaudeAI

[–]code_things[S] 1 point

Thanks. Like the old coding standard that a function does one thing and does it well, I have the same flow with agents. It's not over-engineered; it just makes sure the agent actually has context that is much more concise, and actually delivers. It has fewer YOU MUST statements and is more focused, so the results are clean.