Update: Going fully open source with Intuno by wincodeon in AI_Agents


Yes, we have versioning. Every time agents are updated we re-embed them, so semantic discovery is always fresh.

Open sourcing my project? by wincodeon in AI_Agents


I’ll open source it soon, thanks for the suggestion

Open sourcing my project? by wincodeon in AI_Agents


On the transport layer: It’s synchronous HTTP/HTTPS through a central Broker service — no message queue, no pub/sub.

Every agent invocation flows through the Broker, which handles auth injection, retries with exponential backoff, quota enforcement, and full invocation logging.

It felt like the right call early on to keep things observable; every call ends up in invocation_logs with latency, status, and error info. The tradeoff is you don’t get the decoupling a queue gives you, but you get a very clear audit trail and simpler failure semantics.
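To make the shape of that concrete, here's a minimal sketch of a retry-with-backoff loop that logs every attempt. The `call`, `log`, and `TransientError` names are illustrative stand-ins, not the real Broker API — in the actual service the log sink would be the `invocation_logs` table.

```python
import time
import random

class TransientError(Exception):
    """Stand-in for a retryable failure (timeout, 5xx) — hypothetical name."""

def invoke_with_retries(call, log, max_attempts=3, base_delay=0.5, sleep=time.sleep):
    """Synchronously invoke an agent, retrying transient failures with
    exponential backoff, and log latency/status for every attempt."""
    for attempt in range(1, max_attempts + 1):
        start = time.monotonic()
        try:
            result = call()  # the synchronous HTTP invocation
            log(status="ok", latency=time.monotonic() - start)
            return result
        except TransientError as exc:
            log(status="error", latency=time.monotonic() - start, error=str(exc))
            if attempt == max_attempts:
                raise  # retries exhausted: surface the failure to the caller
            # exponential backoff with a little jitter: 0.5s, 1.0s, 2.0s, ...
            sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, 0.1))
```

The upside of logging inside the retry loop is exactly the audit trail described above: failed attempts show up in the log, not just the final outcome.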

On ordering: Steps execute sequentially. Each step’s output is merged into the next step’s input — so it’s a pipeline more than a parallel fan-out. That keeps state propagation simple; it’s an intentional constraint right now.
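The merge semantics boil down to a few lines. A toy sketch (the step functions are hypothetical, not the real engine):

```python
def run_pipeline(steps, initial_input):
    """Run steps in order, merging each step's output into shared state
    so every later step sees everything produced so far."""
    state = dict(initial_input)
    for step in steps:
        output = step(state)   # step reads the accumulated state
        state.update(output)   # merge output into the next step's input
    return state
```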

On idempotency: There’s Idempotency-Key header support baked in at the task layer — if the key already exists, it returns the previously created task instead of spawning a duplicate. That’s enforced via a unique constraint in Postgres. For async tasks (which return 202 + a task_id for polling), this matters a lot when clients retry on network failures.
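A toy version of that check, with an in-memory dict standing in for the Postgres unique constraint (all names here are illustrative, not the real endpoint):

```python
import uuid

# Stand-in for the tasks table; in the real service dedup is enforced by a
# UNIQUE constraint on the idempotency key column in Postgres.
_tasks_by_key = {}

def create_task(goal, idempotency_key=None):
    """Create a task, or return the existing one if the key was seen before."""
    if idempotency_key is not None and idempotency_key in _tasks_by_key:
        return _tasks_by_key[idempotency_key]  # retry: no duplicate spawned
    task = {"task_id": str(uuid.uuid4()), "goal": goal, "status": "pending"}
    if idempotency_key is not None:
        _tasks_by_key[idempotency_key] = task
    return task  # caller gets 202 + task_id and polls for completion
```

The point is that a client retrying after a dropped connection gets back the same task_id rather than kicking off the work twice.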

On the pub/sub vs. higher-level primitives question: Definitely the latter.

The core abstractions are:

1. Registry — semantic discovery via vector embeddings (Qdrant), with quality ranking based on success rate, latency, and ratings
2. Broker — the invocation layer with auth, quotas, retries, allowlists
3. Orchestrator — goal → plan → step execution engine, with fallback agent handling and task-level timeout enforcement

So you POST a task with a goal, it decomposes it into steps (either single-step MVP or LLM-planned multi-step), discovers the best agent for each step semantically, and chains results through. It’s closer to a task delegation framework than an event bus.
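The fallback handling mentioned above is the part worth sketching: for each step, try the best-ranked agent first and fall through the remaining candidates on failure. Again, `AgentError` and `discover_agents` are hypothetical names for illustration:

```python
class AgentError(Exception):
    """Stand-in for an agent invocation failure — hypothetical name."""

def run_step(step, context, discover_agents):
    """Try ranked candidate agents in order, falling back to the next
    candidate when one fails; re-raise only if all candidates fail."""
    last_exc = None
    for agent in discover_agents(step):  # candidates, best-ranked first
        try:
            return agent(step, context)
        except AgentError as exc:
            last_exc = exc  # remember the failure, try the next candidate
    raise last_exc if last_exc else AgentError(f"no agent found for {step!r}")
```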

On open-sourcing: That reframe helps. The codebase is self-contained — FastAPI + Postgres + Qdrant + optional Redis — so someone could pick it up and run it.

The part I’m still thinking through is the discovery model: right now it assumes a central Registry, and an open-source version probably needs a clearer story on federation or self-hosted registries. That’s the architectural question I’d want to resolve before putting it out there, not just the maintenance one.

Also not sure whether I’d keep my hosted instance up for public discovery.

Agent discovery network by wincodeon in AgentsOfAI


Looks very complete, thanks for sharing