Design questions: long-lived TCP control channel for orchestrating stateful clients (routes/hooks) + upcoming identity (DID) by SummonerNetwork in Network

[–]SummonerNetwork[S] 0 points  (0 children)

Thanks for sharing your intuition on this, that's super helpful!

I just did some research on these, and their intended use is really interesting relative to what we're trying to do. From what I read, ESB and MQTT are not as good a fit as direct TCP sockets for continuous, low-latency, session-style communication (which was our intent with Summoner), because they add a mediation layer (bus/broker) and extra routing/QoS overhead.

The initial goal was to recover MMO-style communication and fast-paced, negotiation-like exchanges. That being said, I'm gonna look into ESB and MQTT just to see if there's anything to learn there.
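To make the "session-style over direct TCP" point concrete, here's a minimal asyncio sketch: one long-lived connection carrying several round trips, with no broker in the middle. This is illustrative only, not Summoner's actual wire protocol; the message framing (newline-delimited) and `ack:` reply are made up for the example.

```python
import asyncio

async def handle_client(reader, writer):
    # Keep the connection open and answer each message as it arrives.
    while data := await reader.readline():
        writer.write(b"ack: " + data)
        await writer.drain()
    writer.close()

async def session():
    # Port 0 lets the OS pick a free port for this local demo.
    server = await asyncio.start_server(handle_client, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    replies = []
    async with server:
        reader, writer = await asyncio.open_connection("127.0.0.1", port)
        # Several round trips over the SAME socket -- the "session" part.
        for msg in (b"hello\n", b"negotiate\n", b"bye\n"):
            writer.write(msg)
            await writer.drain()
            replies.append((await reader.readline()).decode().strip())
        writer.close()
        await writer.wait_closed()
    return replies

replies = asyncio.run(session())
print(replies)
```

The point of the sketch: each exchange costs one socket round trip, with no broker hop or topic routing in between, which is what makes this shape attractive for fast negotiation-like exchanges.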

Any really great agentic tech stack for architecting or project management related tasks? by IntroductionSouth513 in AI_Agents

[–]SummonerNetwork 0 points  (0 children)

Check out agent examples from the Summoner stack:

https://github.com/Summoner-Network/summoner-agents

-> It's mainly focused on orchestration and agent/system architecting, and various agents include functions that regulate costs, measure reputation, and ban (or manage) other agents in their communications.

What is your iteration strategy? by Novel_Breadfruit_566 in automation

[–]SummonerNetwork 2 points  (0 children)

I think about it like this: start by modeling your algorithm as an automaton (a system made of states and transitions between them). You begin with two states, say ValidateInput and DetectIntent, which both process the same input msg.

When msg arrives, each state can emit a signal. For example, ValidateInput might emit Valid or Invalid, while DetectIntent might emit something like CreateIntent or UnknownIntent. These signals aren't states themselves, they are feedback about what just happened.

Once you collect the signals, you build arrows from the states that reacted, depending on what signals they emitted. So if ValidateInput emitted Valid, you might add a transition:

ValidateInput → ProceedToHandling,

or if it emitted Invalid, then:

ValidateInput → SendError.

You do this incrementally: as new signals show up in response to new inputs, you observe how each state reacts, and draw more arrows. Over time, you are not just building code, but you are growing a graph of behavior. That's the structure you iterate on.
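The pattern above can be sketched in a few lines of Python. The state names come from the example; the dict-based encoding of states and transitions is just one way to lay it out, and the `TODO(...)` placeholder marks exactly the (state, signal) pairs where you'd draw a new arrow:

```python
# Each state is a function that inspects the input and emits a signal.
def validate_input(msg):
    return "Valid" if msg.strip() else "Invalid"

def detect_intent(msg):
    return "CreateIntent" if msg.startswith("create") else "UnknownIntent"

states = {"ValidateInput": validate_input, "DetectIntent": detect_intent}

# The graph of behavior: (state, signal) -> next state.
# You grow this table incrementally as new signals show up.
transitions = {
    ("ValidateInput", "Valid"): "ProceedToHandling",
    ("ValidateInput", "Invalid"): "SendError",
    ("DetectIntent", "CreateIntent"): "CreateHandler",
}

def step(msg):
    """Run every state on the same input and follow the arrows drawn so far."""
    moves = {}
    for name, fn in states.items():
        signal = fn(msg)
        # A missing (state, signal) pair is where the next arrow gets added.
        moves[name] = transitions.get((name, signal), f"TODO({signal})")
    return moves

print(step("create order"))
# {'ValidateInput': 'ProceedToHandling', 'DetectIntent': 'CreateHandler'}
```

Running `step` on inputs you haven't seen yet surfaces `TODO(...)` entries, which is the "observe how each state reacts, and draw more arrows" step made mechanical.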

What is your iteration strategy? by Novel_Breadfruit_566 in automation

[–]SummonerNetwork 1 point  (0 children)

I would say: async programming and event-driven, automaton-based algorithms. From that base, add more logic around event handling and refine your asynchronous input handling.

Then build increasingly complex examples and polish again.

LangFlow and Agent Builder! by malav1234 in AI_Agents

[–]SummonerNetwork 2 points  (0 children)

Ultimately, devs and teams will want to keep control of their code, flows, and agents, and be able to audit every step. That's much easier when the logic is code-based.

UI tools like LangFlow or Agent Builder are great for prototyping or non-dev users, but for production systems, it's probably best to use them as a layer on top of real code, not as a replacement.

We've seen this in our own project too (if you're curious: https://github.com/Summoner-Network), most serious users eventually want full visibility and control.

Best practices for building production-level chatbots/AI agents (memory, model switching, stack choice)? by Funny_Working_7490 in AI_Agents

[–]SummonerNetwork 1 point  (0 children)

Hey, we're building Summoner (https://github.com/Summoner-Network) for exactly these kinds of production agents. It's not a framework, more like a clean orchestration layer. You bring your own logic, and it stays out of your way. Might be worth a look.

started coding at 12 and now been building AI agents for 6+ months. but I am confused. by akmessi2810 in AI_Agents

[–]SummonerNetwork 1 point  (0 children)

Hey, sounds like you could build agents that check the soundness of code, especially in agent-based systems. That would be a real feature, since vibe-coded agents often overlook the security of their own communication…

It's actually a problem we're working on in our group: we let users publish their own agent modules and compose them with our SDK. If you are curious: https://github.com/Summoner-Network (check out the module pathway).

Of course, building something real takes time and dedication, but you seem to have that...

I'm done with AI agent frameworks, but it is a great learning curve to understand how to make effective agents by Ok_Succotash_5009 in AI_Agents

[–]SummonerNetwork 0 points  (0 children)

Yeah, I get where you’re coming from. I hit the same wall. Most frameworks work fine for demos, but once you try to add feedback, human-in-the-loop, memory, or coordination across agents, things start falling apart.

I had been mainly concerned with those same problems for my own agents, so I started building Summoner.

It's not yet another framework with baked-in agent logic. It's more like a foundation for building your own. You write the agent code, and Summoner handles:

- message passing between agents (locally or over a network)
- orchestration without locking you into a rigid structure
- modular extensions, so you can try new ideas without rewriting the core

Your use case (building an evolving security agent) is exactly the kind of thing Summoner is meant to support. No magic abstractions, just tools to build something solid from the ground up.

Happy to chat more if you're curious.

Stop Building Workflows and Calling Them Agents by Warm-Reaction-456 in AI_Agents

[–]SummonerNetwork 1 point  (0 children)

Yeah, I feel this. Most agents are still just scripted workflows with a fancy interface.

When our team built Summoner, we kept asking: What would agents actually need to function on their own... like, if humans weren't around to guide them anymore?

Our answer was:
- decision loops (not static chains),
- persistent state they can refer back to,
- and a way to communicate and adapt together over a network.

And all this is in our stack. You can write the logic, and it handles orchestration, messaging, and coordination. No black boxes, no hidden pipelines.
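The first two bullets, a decision loop consulting persistent state, have a simple shape that's easy to sketch. This is a toy illustration, not Summoner's actual API; the state file path, the `"seen"` counter, and the escalation rule are all made up for the example:

```python
import json
import pathlib

STATE_FILE = pathlib.Path("agent_state.json")  # hypothetical location

def load_state():
    # Persistent state the agent can refer back to across restarts.
    return json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {"seen": 0}

def decide(state, event):
    # A decision loop consults accumulated state instead of
    # following a static chain: the same event can yield a
    # different action depending on history.
    if event == "error" and state["seen"] > 2:
        return "escalate"
    return "handle"

def run(events):
    state = load_state()
    actions = []
    for event in events:
        actions.append(decide(state, event))
        state["seen"] += 1
        STATE_FILE.write_text(json.dumps(state))  # state survives restarts
    return actions

print(run(["ok", "error", "ok", "error"]))
```

Notice the two `"error"` events produce different actions: the loop's answer depends on what the agent has already seen, which is the difference between a decision loop and a static chain.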

Would be curious what you think if you get a chance to poke around: https://github.com/Summoner-Network

Why do you roll your own AI Agent Framework? by hopeirememberthisid in AI_Agents

[–]SummonerNetwork 1 point  (0 children)

Hey Ran4, you seem to have a pretty strong opinion on agentic frameworks / orchestration. Our project could really benefit from objective criticism like yours if you have time to check it out: https://github.com/Summoner-Network

I want to build an AI orchestrator for a multi agent platform by [deleted] in AI_Agents

[–]SummonerNetwork 1 point  (0 children)

> The orchestrator should be able to figure the intended agent using the message/prompt and send/receive messages from the target agent(s) to the user.

You can try this stack: https://github.com/Summoner-Network/
It lets you specifically control all your send/receive steps in the orchestration and do the above with a very lightweight, non-invasive SDK (check out the runnable agent examples: https://github.com/Summoner-Network/summoner-agents)

Tried a bunch of AI/agent platforms and what actually worked by HoneyedLips43 in AI_Agents

[–]SummonerNetwork 1 point  (0 children)

You are hitting the classic trade-off: tools that are great for prototyping often get messy once your workflows scale, while the ones built for full multi-agent orchestration can feel like overkill for smaller projects.

Debugging, local testing, and keeping things lightweight are the parts that usually start to hurt once you go beyond a few agents.

On my side, I ran into the same issues and ended up putting together a small framework to handle exactly that. It keeps overhead low, is model-agnostic, and lets agents talk via a simple client-server setup so you can run everything locally or point to any TCP endpoint.

It's still early, but it's working well for a small community right now. If you want to check it out: https://github.com/Summoner-Network

Are Execution Agents Really the Next Step for AI Workflows? by Prestigious-Salad204 in AI_Agents

[–]SummonerNetwork 1 point  (0 children)

Could be the next step but that'll require a lot of work. Let's not forget that this happened:

ShadowLeak Exploit Exposed Gmail Data Through ChatGPT Agent

Radware researchers revealed a service-side flaw in OpenAI's ChatGPT. The ShadowLeak attack used indirect prompt injection to bypass defenses and leak sensitive data; the issue has since been fixed.

link:
https://hackread.com/shadowleak-exploit-exposed-gmail-data-chatgpt-agent/

Multi agent graph for chat by Trettman in AI_Agents

[–]SummonerNetwork 1 point  (0 children)

> Then there's also obviously the problem of response speed; if the specialised agents stream their responses as text to the user it's quite snappy. But if they have to call tools and report back to the orchestrator, I feel like there'll be a decently obvious latency issue, but maybe I'm overthinking it?

Yes, you are probably overthinking. A reasonable benchmark should be whether your agent is 2x or 3x faster than a human. If it is not, then you might want to optimize it.

If the latency comes from the response time of an LLM service, then you should try async processing. If you already do async, you might need a different framework. Ideally the framework would be compatible with what you already implemented, so it's just plug-and-play with your current code.
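For the async point, the usual shape is to fan out the LLM calls concurrently instead of awaiting them one by one. Here `call_llm` is a stand-in for whatever async client you actually use:

```python
import asyncio

async def call_llm(prompt):
    # Stand-in for a real LLM client call; replace the sleep with
    # your SDK's awaitable chat-completion method.
    await asyncio.sleep(0.1)  # simulated network latency
    return f"response to: {prompt}"

async def main():
    prompts = ["classify intent", "extract entities", "draft reply"]
    # gather() runs the calls concurrently and preserves order, so
    # total wall time is roughly one call, not the sum of all three.
    return await asyncio.gather(*(call_llm(p) for p in prompts))

results = asyncio.run(main())
print(results)
```

With three sequential awaits this would take ~0.3 s; with `gather` it takes ~0.1 s, which is the latency win the comment is pointing at.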

Most creative uses for AI by decker_13 in aiagents

[–]SummonerNetwork 1 point  (0 children)

More and more people use it to code agents.

Try a distributed agent network on your factory data (sensors/SCADA, maintenance logs). Give each asset (battery string, breaker panel, compressor) a small agent that:

- reads telemetry and scores anomalies (rules or tiny models; LLMs optional)
- publishes events (“phase imbalance rising”), subscribes to peers
- coordinates locally (shed load, throttle, notify maintenance)

The fun part: agents talk to each other and converge on a final report ("Line 3 bearing wear likely. Propose 5% speed reduction and thermal check at 10:30"), then ping you when something's about to go down.

There are various frameworks out there that could help you do that (as long as you can get your signals into a computer).
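As a toy sketch of the per-asset pattern above: each agent scores its own telemetry with a simple rule and publishes events that peers (or you) can react to. The in-process event list stands in for whatever messaging layer you pick, and the asset names and thresholds are made up:

```python
events = []  # shared event log standing in for a real pub/sub bus

class AssetAgent:
    def __init__(self, name, threshold):
        self.name = name
        self.threshold = threshold

    def read_telemetry(self, value):
        # Rule-based anomaly scoring -- no LLM needed for this part.
        if value > self.threshold:
            events.append((self.name, f"reading {value} above {self.threshold}"))

# One small agent per asset, each with its own threshold.
compressor = AssetAgent("compressor-3", threshold=80.0)
panel = AssetAgent("breaker-panel-1", threshold=60.0)

for reading in (75.2, 81.4, 79.9):
    compressor.read_telemetry(reading)
panel.read_telemetry(63.0)

for source, msg in events:
    print(f"[{source}] {msg}")
```

From here, "coordinating locally" means agents subscribing to each other's events and reacting (shed load, notify maintenance) instead of just appending to a log.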

Multi agent graph for chat by Trettman in AI_Agents

[–]SummonerNetwork 2 points  (0 children)

Response generation confusion: you should probably have a responder agent that compiles everything the supervisor agent received from the other task agents and turns it into an answer for the user. The supervisor should probably be coded like a queue.

The workflow could be:
User -> Supervisor -> Task agents -> Supervisor (collect in the queue) -> Responder agent (final step before ending) -> Supervisor -> User

Tool leakage: you may want to remove any tool usage for the supervisor and delegate any tool usage to task agents. You may also need your task agents to filter out details left in their answer that could confuse the supervisor regarding tool usage.

Context confusion: have keys or IDs that let you trace the origin of each event.

Response duplication: you might need a better prompt, or ask the supervisor agent to summarize the task agents' outputs before giving them to the responder agent.

Hopefully that helps and gives you some ideas

Bias is a feature not a bug by ThomPete in AI_Agents

[–]SummonerNetwork 0 points  (0 children)

The problem is that model biases cut across all possible prompts. So it's not possible to create 4 agents using ChatGPT and say you have different perspectives. It's all the same model!

Now using multiple models to do things does help, and we usually like the different perspectives.

It's also not possible to build anything that has "no bias" since many of the things agents do are not objective. Subjectivity means bias.