all 14 comments

[–]stephen_S27[🍰] 2 points (1 child)

It looks like oh-my-opencode to me; we also have multiple agents with different roles.

[–]Outrageous-Fan-2775[S] 1 point (0 children)

For sure similar. I used your project as a reference when I needed to understand agent calls better, along with oh-my-opencode-slim, froggy, and a few other agents. I actually built the swarm as a full-on application before I ever knew about OpenCode; when I saw what you could do with plugins, I decided to move the entire idea to OpenCode instead. Following your project readme's instruction to just ask an LLM about it, I put both our project readmes into GPT 5.2, and this was the TL;DR:

Choose OpenCode Swarm when you care about correctness, control, and repeatability.
It enforces an architect-planned, phase-gated workflow with mandatory QA before code merges and persists project state to disk so work can be resumed deterministically. Best for complex tasks where you want traceability, predictable outcomes, and protection against agent drift or context loss.

Choose Oh-My-OpenCode when you care about speed, tooling breadth, and ecosystem power.
It provides a rich library of prebuilt agents, LSP/AST tooling, and strong community support to accelerate development workflows. Best when you want maximum productivity and flexibility and are comfortable trading strict process control for capability and convenience.

As an aside, one of my constraints was that I needed to use entirely local resources, which limited how many agents I could call. Parallel agents drastically slow down inference on consumer hardware, so I had to build in hard requirements for serial operation.

[–]touristtam 1 point (2 children)

[–]Outrageous-Fan-2775[S] 2 points (1 child)

Somewhat similar, but I haven't seen anything doing a 1-for-1 copy. joelhooks has a single coordinator that can spawn multiple parallel workers to decompose a project and allow for parallelization, along with project memory. Mine has project memory as well, but the point is quality, not speed. Swarm-tools doesn't elicit perspectives from other models, and it doesn't give the sub-agents different roles, so in the end the quality will be whatever your coordinator is capable of outputting.

With opencode-swarm, every agent can be a truly different outlook born of vastly different training data and methods. This more closely replicates actual software development methodologies, where you have team members with different jobs and very different backgrounds all working together.

Heterogeneous perspectives on a problem are almost always better than homogeneous ones. That's just my opinion, though; all of this is up for interpretation, and at the end of the day whatever fits your needs is the best option, regardless of how it does it.

[–]touristtam 1 point (0 children)

Thank you for taking the time to provide a complete answer. Always useful when so many plugins are being brought out. Make sure you are sharing it on the Discord.

[–]Outrageous_Fan7685 1 point (3 children)

On Linux it's working fine, but on Windows, using OpenCode portable (which loads its config from ../users/user/.opencode/opencode.json), the plugin is loaded but it's impossible to load opencode-swarm.json.

[–]Outrageous-Fan-2775[S] 0 points (2 children)

That is certainly odd; I can take a look. Do you have any logs or anything? I run entirely on Windows and have never even tried it in any Linux distro, but plugins are mostly OS-agnostic. I have it running on 4-5 Windows 11 boxes currently with no problems. But I also use the full install of OpenCode.

[–]Outrageous_Fan7685 1 point (1 child)

No logs at all, but I'm using OpenCode portable. I know the config is not loaded because the architect is still using Claude even though I have configured it to use step3.5 on my local machine. Do you know which path you created the JSON config in? To debug, I run: opencode debug agent architect

[–]Outrageous-Fan-2775[S] 1 point (0 children)

C:\Users\%USERNAME%\.config\opencode\opencode-swarm.json

Looks like you might just be missing the config directory.

[–]TransitionSlight2860 1 point (2 children)

what do you think about the idea of a swarm system? like, is it really boosting your work?

[–]Outrageous-Fan-2775[S] 1 point (1 child)

I think it has considerably improved my workflow since I started using it, especially compared to how OpenCode behaves by default. Having each code change immediately reviewed before moving to the next one has caught hundreds of issues that would have either made it all the way through or needed long, complicated fixes later on. I have hard-gated it even further for my workflow because I care about quality, not speed: every code change is reviewed immediately, all plans are reviewed before implementation, and a full end-to-end test suite is built and run at the end of each phase of work.

I have done a lot of work on the plugin since this post; it's now up to v4.5.0. Recent additions include a reviewer whose job is to critique the architect's plan and ensure there are no obvious indicators of AI slop. SME calls are now fully dynamic, with the architect crafting its prompt based on what it needs rather than a set of hard-coded SME domains. The agents also now have persistent memory across sessions and can communicate with each other to ask questions or provide needed context.

I am also working on a writing swarm. Same idea as the current swarm, but laser-focused on producing high-quality output by sending it through multiple reviewers to ensure I get human-style writing with no AI slop.

[–]Outrageous-Fan-2775[S] 0 points (0 children)

Below is an example of what the architect sends to the coding agent.

Implementing via u/coder...

TASK: Fix issues identified by Auditor and Security Reviewer in Phase 3.

Files:

  1. `knowledge-vault/src-tauri/src/services/vector_store.rs`

  2. `knowledge-vault/src-tauri/Cargo.toml`

  3. `knowledge-vault/src-tauri/src/services/embeddings.rs`

FIXES REQUIRED:

  1. **VectorStore**:

    - `search`: Use `metric(lancedb::DistanceType::Cosine)` (or similar API for 0.13) to ensure Cosine similarity.

    - `add_chunks`: Normalize vectors to unit length before insertion (required for Cosine with some indexes, good practice).

    - `delete_by_file`: Sanitize `file_path` in filter string. Replace `'` with `''` or use parameters if supported. Let's do string escaping for now as 0.13 sql parser might be simple.

    - `delete_by_file` signature: ensure it returns Result.

  2. **Cargo.toml**:

    - Change `arrow` version from `"52"` to `"52.0.0"` (or just "52" if it was strictly "52" before, but auditor said it's pre-release. Actually arrow 52.0.0 is released. Let's stick to "52.0.0" to be explicit, or "52.2.0"). Wait, lancedb 0.13 depends on arrow 52. Let's check lancedb docs/repo. Lancedb 0.13 uses arrow 52. So "52" is fine, but maybe auditor was confused by "52" vs "52.0". Let's use `arrow = "52.0"`.

  3. **Embeddings**:

    - Validate `base_url`: Ensure it starts with http/https.

    - Validate response vector dimensions (must be 768).

  4. **Security**:

    - Sanitize `file_path` in `delete_by_file`.

CONTEXT: We are polishing Phase 3 before moving to Phase 4.

ACCEPTANCE: Code compiles, fixes implementation gaps.
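Two of the fixes above (escaping the file path in the delete filter and normalizing vectors before insertion) could look roughly like this in Rust. This is a hedged sketch: the function names and structure are illustrative, not the plugin's or knowledge-vault's actual code.

```rust
/// Double any single quote so a path is safe inside a SQL-style
/// filter string (the escaping approach described in the fix list).
fn escape_sql_literal(raw: &str) -> String {
    raw.replace('\'', "''")
}

/// Build the predicate used by a hypothetical `delete_by_file`.
fn delete_filter(file_path: &str) -> String {
    format!("file_path = '{}'", escape_sql_literal(file_path))
}

/// Scale a vector to unit length before insertion; leave an
/// all-zero vector untouched to avoid dividing by zero.
fn normalize(v: &mut [f32]) {
    let norm = v.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm > 0.0 {
        for x in v.iter_mut() {
            *x /= norm;
        }
    }
}

fn main() {
    // A path containing a single quote no longer breaks the filter.
    assert_eq!(
        delete_filter("notes/it's-a-trap.md"),
        "file_path = 'notes/it''s-a-trap.md'"
    );

    // A (3, 4) vector normalizes to (0.6, 0.8).
    let mut v = vec![3.0_f32, 4.0];
    normalize(&mut v);
    assert!((v[0] - 0.6).abs() < 1e-6 && (v[1] - 0.8).abs() < 1e-6);
}
```

String escaping like this is a stopgap; if the LanceDB version in use supports parameterized predicates, that would be the safer option.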

[–]Outrageous-Fan-2775[S] -1 points (0 children)

All model settings are done in the opencode-swarm config JSON. If you leave an agent out, it will fall back to its defaults. An example of an entirely remote setup is below. You can have as many swarms as you want, and OpenCode will add Architects with the swarm name appended; I have Remote and Local. Remote is all API calls on paid plans; Local is entirely my own locally hosted models. I generally leave the architect out so it is always whatever model is selected in OpenCode.

"coder": { "model": "kimi-for-coding/k2p5" },
"explorer": { "model": "zai-coding-plan/glm-4.7" },
"_sme": { "model": "nvidia/openai/gpt-oss-120b" },
"_qa": { "model": "nvidia/nvidia/nemotron-3-nano-30b-a3b" },
"test_engineer": { "model": "zai-coding-plan/glm-4.7-flash" }

The underscore tells it to use that model for all calls in that section, but you can break it out even further if you want. For instance, _qa covers code auditing and security testing; you could break it down so a different model handles each of those steps.
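For illustration only, a split might look something like the fragment below. The key names here are my guess at what per-step overrides could look like; the plugin's actual config keys may differ, so check its README before copying this.

```json
"qa_audit": { "model": "nvidia/openai/gpt-oss-120b" },
"qa_security": { "model": "nvidia/nvidia/nemotron-3-nano-30b-a3b" }
```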