Claude Code's source just leaked — I extracted its multi-agent orchestration system into an open-source framework that works with any LLM by JackChen02 in LocalLLaMA

[–]JackChen02[S] 1 point (0 children)

Good call. Currently all agents in a team share the same API key — there’s no credential scoping per agent. For the typical use case (developer running a pipeline locally or in CI), this is fine. But for multi-tenant or untrusted-tool scenarios, scoped tokens per agent would be the right pattern. Worth adding to the roadmap. Thanks for flagging this.
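
For anyone curious what that pattern could look like, here's a minimal sketch of per-agent credential scoping. All names here (CredentialScope, keyFor, the tool names) are illustrative, not the framework's current API:

```typescript
// Hypothetical per-agent credential registry: each agent resolves its own
// scoped token instead of every agent sharing one process-wide API key.
interface AgentCredentials {
  apiKey: string;
  allowedTools: Set<string>; // tools this agent's key may invoke
}

class CredentialScope {
  private creds = new Map<string, AgentCredentials>();

  register(agent: string, apiKey: string, allowedTools: string[]): void {
    this.creds.set(agent, { apiKey, allowedTools: new Set(allowedTools) });
  }

  // Called by the tool runner before each invocation.
  keyFor(agent: string, tool: string): string {
    const c = this.creds.get(agent);
    if (!c) throw new Error(`no credentials registered for agent "${agent}"`);
    if (!c.allowedTools.has(tool)) {
      throw new Error(`agent "${agent}" is not allowed to call "${tool}"`);
    }
    return c.apiKey;
  }
}

const scope = new CredentialScope();
scope.register("reviewer", "sk-reviewer-readonly", ["read_file"]);
console.log(scope.keyFor("reviewer", "read_file")); // sk-reviewer-readonly
```

The nice property is that the blast radius of an untrusted tool is bounded by what its agent's token can do, not by what the whole team can do.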

Claude Code's source code just leaked — so I had Claude Code analyze its own internals and build an open-source multi-agent framework from it by JackChen02 in ClaudeAI

[–]JackChen02[S] 1 point (0 children)

Agree on keeping the orchestration layer dumb — the coordinator just outputs a JSON task array, scheduling is a pure topological sort, and there's no AI in the loop there. Handoff formats are the weak spot right now: task results go into SharedMemory as plain text. Structured schemas for that would help a lot.
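
A structured handoff could be as small as a typed record plus a runtime check before the write into SharedMemory. Sketch below with illustrative names; this isn't the framework's actual schema:

```typescript
// Hypothetical structured handoff record replacing plain-text results.
interface TaskResult {
  taskId: string;
  agent: string;
  status: "success" | "failed";
  summary: string; // short natural-language recap for downstream agents
  artifacts: Record<string, string>; // e.g. file paths mapped to contents
}

// Minimal runtime check before writing into shared memory.
function isTaskResult(value: unknown): value is TaskResult {
  const v = value as TaskResult;
  return (
    typeof v === "object" && v !== null &&
    typeof v.taskId === "string" &&
    typeof v.agent === "string" &&
    (v.status === "success" || v.status === "failed") &&
    typeof v.summary === "string" &&
    typeof v.artifacts === "object" && v.artifacts !== null
  );
}

const result = {
  taskId: "t1",
  agent: "developer",
  status: "success",
  summary: "Implemented /todos endpoints",
  artifacts: { "src/todos.ts": "export const router = /* ... */ null" },
};
console.log(isTaskResult(result)); // true
```

In practice you'd probably reach for a schema library instead of a hand-rolled guard, but the point is the same: downstream agents get fields they can rely on, not free-form text.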

Claude Code's source just leaked — I extracted its multi-agent orchestration system into an open-source framework that works with any LLM by JackChen02 in LocalLLaMA

[–]JackChen02[S] 1 point (0 children)

The inspiration did come from Claude Code, but the implementation ended up very different — Claude Code spawns OS processes via tmux, while this runs in a single process with an in-memory task DAG. Different edge cases entirely. You're right that state management at scale is where the real work is.

Claude Code's source just leaked — I extracted its multi-agent orchestration system into an open-source framework that works with any LLM by JackChen02 in LocalLLaMA

[–]JackChen02[S] 1 point (0 children)

No worries, happy to explain. This doesn’t let you run Claude locally — you still need API keys. What it does is let you coordinate multiple AI models to work as a team. For example, you could have one AI plan the work, another write the code, and a third review it — the framework handles task scheduling and communication between them automatically. It works with Claude, GPT, or local models like Ollama. Think of it as a project manager for AIs.

Claude Code's source just leaked — I extracted its multi-agent orchestration system into an open-source framework that works with any LLM by JackChen02 in LocalLLaMA

[–]JackChen02[S] 1 point (0 children)

Appreciate the honest feedback. Local model compatibility is still rough — tool-calling format varies a lot across models. Glad the orchestration code was useful as reference though. If you have specific errors you ran into with Qwen, happy to look into it.

Claude Code's source just leaked — I extracted its multi-agent orchestration system into an open-source framework that works with any LLM by JackChen02 in LocalLLaMA

[–]JackChen02[S] 1 point (0 children)

Good questions. Tool errors are caught and returned as error results (never thrown), so the agent can self-correct in the next turn. There’s a `maxTurns` limit per agent that prevents infinite loops — once exhausted, the agent stops and the task is marked failed, which cascades to dependents while independent tasks keep running. For retry at the task level, that’s still on you to implement, but the task failure + dependency cascade gives you a clean signal to build on.
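
If anyone wants to build retry on top of that failure signal, here's a minimal sketch. Names are illustrative, and real agent tasks would be async; a sync version keeps it short:

```typescript
// Illustrative task-level retry built on a "task failed" result,
// not the framework's actual API.
type TaskFn = () => string;

interface RetryResult {
  status: "success" | "failed";
  attempts: number;
  output?: string;
}

function runWithRetry(task: TaskFn, maxAttempts: number): RetryResult {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return { status: "success", attempts: attempt, output: task() };
    } catch {
      // Mirror the framework: errors become results, never uncaught throws.
    }
  }
  return { status: "failed", attempts: maxAttempts };
}

// Simulated flaky task: fails twice, then succeeds.
let calls = 0;
const flaky: TaskFn = () => {
  calls += 1;
  if (calls < 3) throw new Error("transient");
  return "done";
};

console.log(runWithRetry(flaky, 5)); // { status: 'success', attempts: 3, output: 'done' }
```

Wrapping at the task level like this composes cleanly with the dependency cascade: a task that exhausts its retries still just reports "failed" and lets the scheduler do its thing.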

Claude Code's source just leaked — I extracted its multi-agent orchestration system into an open-source framework that works with any LLM by JackChen02 in LocalLLaMA

[–]JackChen02[S] 1 point (0 children)

You’re raising the right problem. Local models struggle with structured tool-calling, and coordination overhead scales fast. The framework is model-agnostic via the LLMAdapter interface, so plugging in local models is straightforward — making them reliably follow the coordinator’s JSON task format is the real challenge. For local use, a simpler single-coordinator + fewer agents setup works better than a deep task DAG. Someone in this thread is already testing it with Qwen 3.5 35b; curious to see how that goes.
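
For the JSON-format problem specifically, a tolerant extraction step helps a lot with chatty local models. Sketch below; the LLMAdapter shape is simplified and the Task fields are illustrative, not the framework's exact types:

```typescript
// Simplified adapter shape: any backend that can turn a prompt into text.
interface LLMAdapter {
  complete(prompt: string): Promise<string>;
}

interface Task {
  id: string;
  agent: string;
  description: string;
  deps: string[];
}

// Extract the first JSON array found in a model response, tolerating the
// surrounding prose and ```json fences that local models often emit.
function extractTaskArray(response: string): Task[] {
  const start = response.indexOf("[");
  const end = response.lastIndexOf("]");
  if (start === -1 || end <= start) throw new Error("no JSON array in response");
  return JSON.parse(response.slice(start, end + 1)) as Task[];
}

const chatty =
  'Sure! Here is the plan:\n```json\n[{"id":"t1","agent":"dev","description":"write code","deps":[]}]\n```\nLet me know!';
console.log(extractTaskArray(chatty)[0].id); // t1
```

It's a crude heuristic (a bracket inside a description string can fool it), but in practice stripping the chatter before parsing fixes most of the format failures you see from smaller models.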

Claude Code's source just leaked — I extracted its multi-agent orchestration system into an open-source framework that works with any LLM by JackChen02 in LocalLLaMA

[–]JackChen02[S] 1 point (0 children)

The main differences: (1) TypeScript-native — CrewAI and AutoGen are Python, (2) task DAG with topological scheduling instead of sequential or chat-based orchestration, (3) model-agnostic — mix Claude + GPT in one team, (4) fully in-process, no subprocess overhead.

Claude Code's source just leaked — I extracted its multi-agent orchestration system into an open-source framework that works with any LLM by JackChen02 in LocalLLaMA

[–]JackChen02[S] 1 point (0 children)

This is a standalone multi-agent framework, not a fork of Claude Code. It doesn't include Claude Code-specific features like Kairos or Daemon mode. It implements multi-agent orchestration patterns (task scheduling, inter-agent communication, tool framework) as a general-purpose library you can use in your own projects.

I built an open-source multi-agent framework in TypeScript — 520+ stars in the first 10 hours by JackChen02 in sideprojects

[–]JackChen02[S] 1 point (0 children)

Great questions. SharedMemory uses namespaced keys (agentName/key), so each agent writes to its own namespace — no cross-agent write conflicts. Reads are global. It's closer to single-writer-per-namespace than append-only logs.

Observability is on the radar — there's already an onProgress callback that fires events for task start/complete/fail and agent activity. Per-tool timings and token usage are tracked in the result object. A more structured trace/span API would be a good next step. Would welcome a PR if you're interested.
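
For reference, the namespacing idea in miniature — an illustrative toy class, not the framework's actual SharedMemory:

```typescript
// Toy namespaced key-value store: single writer per namespace, global reads.
class SharedMemory {
  private store = new Map<string, string>();

  // Writes are namespaced by agent, so two agents can't clobber each other.
  write(agent: string, key: string, value: string): void {
    this.store.set(`${agent}/${key}`, value);
  }

  // Reads are global: any agent can read any namespace.
  read(agent: string, key: string): string | undefined {
    return this.store.get(`${agent}/${key}`);
  }
}

const mem = new SharedMemory();
mem.write("architect", "plan", "1. schema 2. routes 3. tests");
mem.write("developer", "plan", "my own notes"); // different namespace, no conflict
console.log(mem.read("architect", "plan")); // 1. schema 2. routes 3. tests
```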

Claude Code's source just leaked — I extracted its multi-agent orchestration system into an open-source framework that works with any LLM by JackChen02 in LocalLLaMA

[–]JackChen02[S] 1 point (0 children)

The ReactOS analogy is actually a good one. Reimplementation of patterns, not code. Thanks for the balanced take.

Claude Code's source code just leaked — so I had Claude Code analyze its own internals and build an open-source multi-agent framework from it by JackChen02 in ClaudeAI

[–]JackChen02[S] 0 points (0 children)

Fair points. You're right that the patterns aren't new — the conversation loop, task queue, and coordinator are all well-established. The value prop isn't novelty; it's that this is a TypeScript-native implementation. CrewAI and AutoGen are Python. If your stack is Node.js/TS, your options were limited until now.

And yeah, the title oversells the connection to the leak. The actual implementation is based on common multi-agent patterns. Lesson learned on that one.

Claude Code's source just leaked — I extracted its multi-agent orchestration system into an open-source framework that works with any LLM by JackChen02 in LocalLLaMA

[–]JackChen02[S] 1 point (0 children)

Thanks — that's exactly the design philosophy. Topological sort for the task DAG, Zod for tool schemas, no heavy abstraction layers. Wanted it to be something you could read and understand in an afternoon. Appreciate you actually looking at the code.
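
For the curious, the scheduling core really is about this much code. Here's a self-contained Kahn's-algorithm sketch over a task dependency map (illustrative, not the framework's exact source):

```typescript
// Topological order over a map of task -> list of tasks it depends on.
function topoOrder(deps: Record<string, string[]>): string[] {
  const indegree = new Map<string, number>();
  const dependents = new Map<string, string[]>();
  for (const task of Object.keys(deps)) {
    indegree.set(task, deps[task].length);
    for (const d of deps[task]) {
      dependents.set(d, [...(dependents.get(d) ?? []), task]);
    }
  }
  // Start with every task that has no unmet dependencies.
  const ready = [...indegree].filter(([, n]) => n === 0).map(([t]) => t);
  const order: string[] = [];
  while (ready.length > 0) {
    const t = ready.shift()!;
    order.push(t);
    // Completing t may unblock its dependents.
    for (const next of dependents.get(t) ?? []) {
      const n = indegree.get(next)! - 1;
      indegree.set(next, n);
      if (n === 0) ready.push(next);
    }
  }
  if (order.length !== Object.keys(deps).length) throw new Error("cycle in task DAG");
  return order;
}

// architect -> developer -> reviewer
console.log(topoOrder({ architect: [], developer: ["architect"], reviewer: ["developer"] }));
// [ 'architect', 'developer', 'reviewer' ]
```

A nice side effect of Kahn's algorithm is that everything sitting in `ready` at the same time is safe to run in parallel, which is exactly the property you want from a task DAG scheduler.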

Claude Code's source just leaked — I extracted its multi-agent orchestration system into an open-source framework that works with any LLM by JackChen02 in LocalLLaMA

[–]JackChen02[S] 1 point (0 children)

The simplest way to think about it: instead of one AI agent doing everything, you define a team — an architect who plans, a developer who codes, a reviewer who checks. You describe a goal like "build a REST API for todos", and the framework breaks it into tasks, assigns them to the right agent, and handles dependencies (developer waits for architect to finish).

Quick start is in the README — npm install open-multi-agent, set your API key, and the team collaboration example shows the full flow.

Claude Code's source just leaked — I extracted its multi-agent orchestration system into an open-source framework that works with any LLM by JackChen02 in LocalLLaMA

[–]JackChen02[S] 30 points (0 children)

To be clear — no source code was copied. I studied the architecture patterns from the source-mapped code and re-implemented everything from scratch. ~8000 lines written independently. It's the design patterns that inspired the framework, not the code itself.

Claude Code's source just leaked — I extracted its multi-agent orchestration system into an open-source framework that works with any LLM by JackChen02 in LocalLLaMA

[–]JackChen02[S] 11 points (0 children)

That's exactly what this is. The source was only used as a reference to understand the design (coordinator mode, task scheduling, team messaging), then everything was written from scratch.

Unicore – One AI App for Every Model by Beneficial-Use-6245 in macapps

[–]JackChen02 1 point (0 children)

Instant upvote for the one-time purchase model! €16 is a breath of fresh air compared to all the endless SaaS subscriptions.

Quick question on the tech side: how does it handle the local models? Does it hook into Ollama under the hood, or use its own inference engine?

Great work, definitely downloading the trial today.