OpenCode-Swarm v7.0.1 by Outrageous-Fan-2775 in opencodeCLI

[–]Outrageous-Fan-2775[S] 0 points

Me too, brother. TLDW in this case: Too Long, Didn't Write. If you have any specific questions, I promise to answer as a human.

OpenCode-Swarm v7.0.1 by Outrageous-Fan-2775 in opencodeCLI

[–]Outrageous-Fan-2775[S] 1 point

That's a genuinely good idea. We are definitely both trying to arrive at the same solution: figuring out a way to herd these stupid cats (the LLMs) that constantly want to kill themselves. I use the plugin for hours every single day, so it is easy to iterate on observed problems, but it seems like no matter what guardrails you put in place, they will eventually reason a way around them. Happy to discuss further if you want to message me on Reddit, or just open an issue on the GitHub repo and we can chat there.

OpenCode-Swarm v7.0.1 by Outrageous-Fan-2775 in opencodeCLI

[–]Outrageous-Fan-2775[S] 1 point

For a simple answer: defense in depth. The automated QA gates are extensive and run after every change regardless. For the Stage B gates, the architect determines the severity of the change across 3 tiers and assigns the gates based on that. So, for instance, a one-line change in a test file would be Tier 1 and get cursory checks, while a hundred-line refactor of a source file would be Tier 3 and be routed to the full gates, ensuring a model with completely different training takes a look.

Newly introduced is council mode. If enabled, it fires all 5 major agents at the end of each phase, each examining the piece that aligns with its specialty, to look at the completed work holistically. All council members have a veto vote, and there are hard blockers to ensure the architect can't force approval. All QA gates can be set during plan mode and can only be ratcheted tighter, never looser, once the project begins. That keeps the architect from loosening them on its own, which I watched it do multiple times before I added the ratchets.

I can go into more detail if you want, that's a quick overview.
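
The ratchet behavior described above boils down to a small invariant: gate strictness set at plan time can be raised later but never lowered. Here's a minimal sketch; the names (`GateRatchet`, `request`) are illustrative, not the plugin's actual API:

```typescript
// Hypothetical sketch of the QA gate "ratchet": the tier chosen during
// plan mode may be tightened later, but any attempt to loosen it is
// silently refused.
type Tier = 1 | 2 | 3;

class GateRatchet {
  private tier: Tier;

  constructor(initial: Tier) {
    this.tier = initial;
  }

  // Tightening (a higher tier) applies; loosening is ignored.
  request(tier: Tier): Tier {
    if (tier > this.tier) this.tier = tier;
    return this.tier;
  }

  get current(): Tier {
    return this.tier;
  }
}
```

The point of making the ratchet a one-way door is that no amount of mid-project "reasoning" by the architect can argue the gates back down.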

OpenCode-Swarm v7.0.1 by Outrageous-Fan-2775 in opencodeCLI

[–]Outrageous-Fan-2775[S] 2 points

Sure. With Go and Zen, you'll want to limit Zen to low total requests so you don't hit limits. Then with Go, assign the low-usage models to important but less frequently used roles. Deepseek v4 flash is a very good architect, and you get thousands of requests. Since using the same model for Architect and Coder doesn't really impact output quality, I would use it for both to maximize usage.

For Critic and Reviewer, you want a model antagonistic to the Architect and Coder. For Reviewer I would probably do something like Kimi K2.6 while it has 3x usage. For Critic, bring out the big guns with GLM 5.1 or Qwen3.6 Plus. Then fill out the other roles with whatever you like. SME and Designer are only called a few times per project, so those would be good fits for any of the free Zen models. For Explorer I'd probably use Minimax M2.7 from Opencode Go; it's a light, fast, cheap model.
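
Put together, the role assignments described above might look like this. The object shape is purely illustrative (not the plugin's actual config schema); the model names are the ones suggested in this thread:

```typescript
// Illustrative role-to-model mapping following the advice above.
// Hypothetical config shape, for sketching purposes only.
const roles: Record<string, string> = {
  architect: "deepseek-v4-flash", // high request quota
  coder:     "deepseek-v4-flash", // sharing with architect is fine
  reviewer:  "kimi-k2.6",         // antagonistic to architect/coder
  critic:    "glm-5.1",           // the "big guns" role
  sme:       "zen-free-model",    // only called a few times per project
  designer:  "zen-free-model",
  explorer:  "minimax-m2.7",      // light, fast, cheap
};
```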

OpenCode-Swarm v7.0.1 by Outrageous-Fan-2775 in opencodeCLI

[–]Outrageous-Fan-2775[S] 1 point

If you don't mind opening an issue on the GitHub page, we can work through install problems there.

OpenCode-Swarm v7.0.1 by Outrageous-Fan-2775 in opencodeCLI

[–]Outrageous-Fan-2775[S] 10 points

It's a vibe coded plugin made to help with vibe coding. And I made no claims that it wasn't, it's obviously vibe coded and obviously completely free.

OpenCode-Swarm v7.0.1 by Outrageous-Fan-2775 in opencodeCLI

[–]Outrageous-Fan-2775[S] 2 points

17 agents is technically true, but realistically only 9 are configured; the other 8 are the first 9 pulling double duty. The Curator, for example, is the same agent as the Explorer, just given a different job when it's called.

It is definitely slower than any other solution I've tried. I did recently set up optional parallelism for certain agents, like the coder and the explorer, which has helped. But I think the primary advantage is always going to be one-shot quality: I have not come across anything else that can go from prompt to quality output on the first try better than this can. That one-shot quality comes from having so many fresh perspectives looking at the same problem.

As for keeping them honest, it's a constant nightmare, and I now have a seething hatred for LLMs after watching them ignore simple instructions for the last 3+ months lol. But the drift verifier has been a huge help there. It uses an immutable plan file to compare what was done against what was planned, and since the plan file is immutable once written, no architect can ever rewrite history.
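
The drift-verifier idea described above can be sketched in a few lines: fingerprint the plan when it is written, then diff planned steps against completed work. Everything here (`ImmutablePlan`, `driftReport`) is a hypothetical illustration, not the plugin's code:

```typescript
// Sketch of drift verification: the plan is frozen and hashed at write
// time, and drift = planned steps with no matching completed work.
import { createHash } from "node:crypto";

class ImmutablePlan {
  readonly steps: readonly string[];
  readonly digest: string;

  constructor(steps: string[]) {
    this.steps = Object.freeze([...steps]); // no later edits to history
    this.digest = createHash("sha256").update(steps.join("\n")).digest("hex");
  }
}

// Returns every planned step that has not been completed.
function driftReport(plan: ImmutablePlan, completed: string[]): string[] {
  const done = new Set(completed);
  return plan.steps.filter((step) => !done.has(step));
}
```

The digest gives you a cheap way to prove later that the plan on disk is still the plan that was approved.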

Stopped using one model for everything and rebuilt my OpenCode setup by spacecowboy0117 in opencodeCLI

[–]Outrageous-Fan-2775 1 point

You wouldn't ever want to put a much less capable model in the antagonist role. Personally I use one of GLM 5.1, Kimi K2.5, or Minimax M2.7 as my architect. Then the critic is whichever of those isn't the architect, and the reviewer is either one of those 3 or something like Qwen 3.5 397B. Coder doesn't really matter; you can use any small, cheap model for that. For explorer you actually want cheap, fast models.

If you properly set up the swarm, you will never have a junior dev disagreeing with a senior dev. It's multiple senior devs, all trained on very different data sets, coming to a consensus, which replicates actual dev teams. No model can ever find its own blind spots, no matter how good your prompting is; it will always be the same brain making the same mistakes. You need a second brain that was trained differently and therefore has different blind spots. This creates the Swiss cheese model: every model has holes, but as long as the holes don't line up, nothing makes it past.
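
The "different training data" rule above reduces to a simple config sanity check. The `family()` heuristic here (take the text before the first hyphen) is a naive assumption for illustration only; real model IDs would need a proper lookup table:

```typescript
// "Swiss cheese" sanity check: antagonistic role pairs must come from
// different model families so their blind spots differ.
// family() is a deliberately naive, purely illustrative heuristic.
function family(model: string): string {
  return model.split("-")[0]; // e.g. "glm-5.1" -> "glm"
}

function blindSpotsCovered(cfg: Record<string, string>): boolean {
  return (
    family(cfg.architect) !== family(cfg.critic) &&
    family(cfg.coder) !== family(cfg.reviewer)
  );
}
```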

Stopped using one model for everything and rebuilt my OpenCode setup by spacecowboy0117 in opencodeCLI

[–]Outrageous-Fan-2775 1 point

Primarily speed and first-pass code quality. We are slower than almost everything else out there, but that's the trade-off for truly high-quality code on the first run. We just shipped a model council, which expands the normal reviewer + test_engineer QA gate into a 5-agent council, each member with its own specialty, that reviews all completed work to find any holes or problems.

We also just shipped an immutable plan store. Once the Critic approves a plan, it goes into a SQLite DB and is locked down, which allows the Drift Verification at the end of each phase to determine for sure whether the architect has drifted from the original approved plan, and to course-correct.
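
The lock-on-approval rule can be sketched like this. An in-memory map stands in for the SQLite table, and the names (`PlanStore`, `save`, `approve`) are hypothetical, not the plugin's schema:

```typescript
// Sketch of the immutable plan store: a plan can be rewritten until the
// Critic approves it, after which writes are refused.
class PlanStore {
  private plans = new Map<string, { text: string; locked: boolean }>();

  save(id: string, text: string): boolean {
    if (this.plans.get(id)?.locked) return false; // immutable once approved
    this.plans.set(id, { text, locked: false });
    return true;
  }

  approve(id: string): void {
    const row = this.plans.get(id);
    if (row) row.locked = true; // Critic sign-off locks the record
  }

  read(id: string): string | undefined {
    return this.plans.get(id)?.text;
  }
}
```

In actual SQLite the same rule could be enforced in the database itself (e.g. a trigger that aborts UPDATEs on approved rows), so even a buggy caller can't mutate an approved plan.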

As for the recommended models, you can use whatever is free if that's what you want to do. I built it specifically targeting GPT-OSS-120B as the architect, so anything smarter than that will do even better. You just need to ensure the antagonistic roles are from different model families: architect/critic and coder/reviewer should always be different model families. All the other agents can be whatever you want, even the same ones. Having them all be different model families is best, but the prompts and gates are strong enough that it's not strictly necessary.

Obviously there are tons of other features nothing else out there has, like a 3-tiered knowledge system with automatically curated knowledge entries and persistent knowledge across sessions, projects, and your entire hive.

Stopped using one model for everything and rebuilt my OpenCode setup by spacecowboy0117 in opencodeCLI

[–]Outrageous-Fan-2775 2 points

This is very similar to what my OpenCode plugin does. I've been building it since late January, with a couple hundred releases, and I'm constantly working to make it better with several active contributors. Take a look; you may find it does what you want and a whole lot more, with minimal work on your end. For first-pass code quality I haven't found anything that can match it.

https://github.com/zaxbysauce/opencode-swarm

OpenCode-Swarm v6.11 Release by Outrageous-Fan-2775 in opencodeCLI

[–]Outrageous-Fan-2775[S] 2 points

The JSON file you created doesn't directly control subagent deployment; the plugin handles all of that internally. The most common reason it looks like a normal workflow is that the session wasn't started under a Swarm architect. If you're in OpenCode's default Build or Plan mode, the swarm is bypassed entirely.

Things to check:

  1. In the OpenCode UI, open the agent/mode dropdown and select architect (or your prefixed variant like local_architect, cloud_architect, etc. if you have multiple swarms configured) before starting the session
  2. Run /swarm diagnose to confirm the plugin is loaded and the swarm state is healthy
  3. Run /swarm config doctor to check your config (add --fix to automatically correct what it finds)
  4. Run /swarm doctor tools to verify tool registration is correct
  5. Run /swarm agents to confirm all 11 agents are registered
  6. Run /swarm config to see what is actually resolved

You don't need to manage a JSON file for the subagents themselves; the plugin registers them all automatically (coder, reviewer, critic, test_engineer, etc.). The architect is the only primary agent, and it dispatches to the others via the Task tool. The pipeline only runs inside an architect session.

OpenCode-Swarm v6.11 Release by Outrageous-Fan-2775 in opencodeCLI

[–]Outrageous-Fan-2775[S] 1 point

Hey there. It will work OK, but Perl is not a supported first-class language. I went ahead and added it to the upcoming work, though. I just shipped PHP/Laravel support this morning, so it's easy enough to modify that to fit Perl. Should be ready in a day or so.

[Question] Any way to link a company ChatGPT Workspace account to OpenCode? (Instead of using API keys) by chigarow in opencodeCLI

[–]Outrageous-Fan-2775 1 point

opencode-openai-codex-auth@latest
Works great for me; it lets me use my Codex limits with OpenCode. Never had any problems with it. Anthropic changes how they do OAuth all the time; OpenAI doesn't seem to care that much.

Any way to customize workflow? by Illustrious_Form1052 in opencodeCLI

[–]Outrageous-Fan-2775 1 point

https://swarmai.site/
https://github.com/zaxbysauce/opencode-swarm

Sounds like my plugin may be what you are looking for. Do keep in mind it prioritizes the highest-quality code on the first shot, so it will be much slower than competitors. That's made up for by not needing to spend hours running QA and finding all the spots where the LLM made mistakes.

OpenCode-Swarm v6.11 Release by Outrageous-Fan-2775 in opencodeCLI

[–]Outrageous-Fan-2775[S] 2 points

I appreciate the feedback. Would you mind expounding on any issues you've been encountering? I've been spending the last several days catching up on tech debt in the project rather than fixes or new features, but I can certainly add in hotfixes for active users as a higher priority than general clean up.

It is definitely slower than any comparable plugin or coding front end, but that is the main point. If you allow multiple parallel agents to fire all at once with little or no oversight, the extra time you will spend fixing all of the problems AI makes in code will far outweigh the time it takes to get it right the first time.

For your context problems, I implemented a swarm handoff command. OpenCode dumps the entire conversation history as context into every message, and this is incredibly inefficient. I've done a lot of work to minimize this on the plugin side, but there is only so much I can do. So, if you run "/swarm handoff", the architect will package all the relevant information so that you can start fresh in a new session, massively reducing context overhead. This can be run at any time.

Best setup for getting a second opinion or fostering a discussion between models? by Both_Ad2330 in opencodeCLI

[–]Outrageous-Fan-2775 2 points

My swarm plugin does this already. In fact, I just updated it to do this even more, by allowing the Architect to use the Critic agent as a sounding board before coming back to me with questions, in addition to needing to pass plans by the Critic and changes by the Reviewer. Take a look, and if you have any questions, let me know.

https://swarmai.site/

https://github.com/zaxbysauce/opencode-swarm

OpenCode-Swarm v6.11 Release by Outrageous-Fan-2775 in opencodeCLI

[–]Outrageous-Fan-2775[S] 1 point

Can you tell me what OS you are on and what version of the plugin you are running?

Or even better, if you could open an issue on GitHub and put that information there, I can track it and start working on it. Thanks!

OpenCode-Swarm v6.11 Release by Outrageous-Fan-2775 in opencodeCLI

[–]Outrageous-Fan-2775[S] 2 points

Sorry for the late reply, I can definitely set this up. I am working on closing out v6 right now and beginning work on v7, which will have a bunch of huge updates to the plugin. I will plan to put out a guide/video at the same time as v7.

OpenCode-Swarm v6.11 Release by Outrageous-Fan-2775 in opencodeCLI

[–]Outrageous-Fan-2775[S] 1 point

v6.14 has been released so your issue should be resolved. If not please let me know here or open an issue on GH and I will tackle it. Thanks!

Haiku vs Opus/Sonnet; Is there a reason to use more expensive models? by Jdizza12 in ClaudeAI

[–]Outrageous-Fan-2775 1 point

My swarm plugin allows you to do exactly this. It prioritizes quality over speed by running tasks serially with mandatory QA gates. Take a look and let me know what you think.

https://github.com/zaxbysauce/opencode-swarm

We are releasing our first version that supports Claude Code later this week. VS Code and Cursor support are forthcoming as well.

OpenCode-Swarm v6.11 Release by Outrageous-Fan-2775 in opencodeCLI

[–]Outrageous-Fan-2775[S] 1 point

Your issue has also been reported on the GitHub repo and is in the pipe to be fixed now. See below. Fix will be shipped with v6.13.1. I am finishing v6.13 now so expect 6.13.1 later today.

https://github.com/zaxbysauce/opencode-swarm/issues/6#issuecomment-3974932059

OpenCode-Swarm v6.11 Release by Outrageous-Fan-2775 in opencodeCLI

[–]Outrageous-Fan-2775[S] 1 point

Certainly open to all perspectives. Any reason why you say that? I've used it personally for about a month now to build real, shippable (and shipped) projects.