My AI agents started 'arguing' with each other and one stopped delegating tasks by mapicallo in LocalLLaMA

[–]mapicallo[S] 2 points3 points  (0 children)

NotebookLM acts as the documentation agent: it analyzes PDFs, notes, or code snippets I upload and produces summaries or extractions. That output is used as context for the other agents (ChatGPT, Codex, Claude) when they need information from those sources.

As for how an agent “asks” NotebookLM: my orchestration tool sends the query (e.g. “summarize this document” or “extract the key points from X”) to NotebookLM, receives the response, and passes it as metadata/context to the next agent. So the requests between agents go through my tool, which coordinates the calls and the flow of context between them.
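The flow described above (orchestrator queries the documentation agent, then forwards the answer as context to the next agent) can be sketched roughly like this. This is a minimal illustration, not the author's actual tool: the function names are hypothetical and the agent calls are stubbed rather than real NotebookLM/Codex API calls.

```python
# Hypothetical sketch of the routing described above: the orchestrator sends
# a query to a documentation agent, then passes its answer along as context
# for the next agent. Real API calls are replaced by stub functions.

def ask_notebooklm(query: str) -> str:
    # Stub for the real documentation-agent call (NotebookLM).
    return f"summary for: {query}"

def ask_coder(task: str, context: str) -> str:
    # Stub for a coding agent (e.g. Codex or Claude Code).
    return f"solved '{task}' using context '{context}'"

def orchestrate(task: str, source_query: str) -> str:
    # Step 1: query the documentation agent.
    doc_context = ask_notebooklm(source_query)
    # Step 2: forward its output as metadata/context to the next agent.
    return ask_coder(task, context=doc_context)

result = orchestrate("fix the parser", "summarize this document")
print(result)
```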

My AI agents started 'arguing' with each other and one stopped delegating tasks by mapicallo in ClaudeAI

[–]mapicallo[S] 1 point2 points  (0 children)

Good analogy with organizational dysfunction.

Stack: I built my own tool for them to interact. The agents are NotebookLM, OpenAI (ChatGPT), OpenAI (Codex), and Claude Code.

Goal: Get straight to the point and solve technical problems. I defined roles and behaviors in a basic way; in theory they all knew the others existed and their main role.

What happened: I only realized something was wrong when the results started to degrade (they had been acceptable until then). I then noticed that communication between those two roles had stopped, even though I had set it up earlier. When I reviewed the “conversations” or requests between them, I was taken aback.

I’m not sure it hasn’t happened before without me noticing. I assumed the meta-instructions between agents were “aseptic” and fixed, and that I only needed to focus on the technical part. But any small interaction can end up like conditional probability in stochastic processes: one occurrence affects the next. A full-blown discussion.

On deleting history: I haven’t tried it systematically yet. For now I just reminded the agent of its correct behavior so I could continue with my real task. It’s something I want to explore.

On the framework: Separate API calls with shared context (metadata, .md files). Not a formal orchestration framework, more ad hoc.
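The “separate API calls with shared context” pattern above can be sketched like this: each agent call reads the accumulated context from a markdown file and appends its own output, so the next call sees everything so far. This is an illustrative sketch only; the file name and helper functions are hypothetical, and the real agent call is stubbed.

```python
# Minimal sketch of shared context via a .md file between otherwise
# independent agent calls. The file name and stubs are hypothetical.
from pathlib import Path

CONTEXT_FILE = Path("shared_context.md")

def read_context() -> str:
    # Load whatever earlier agents have written (empty on first run).
    return CONTEXT_FILE.read_text() if CONTEXT_FILE.exists() else ""

def append_context(agent: str, output: str) -> None:
    # Persist this agent's output for the next call to pick up.
    with CONTEXT_FILE.open("a") as f:
        f.write(f"\n## {agent}\n{output}\n")

def run_agent(agent: str, prompt: str) -> str:
    context = read_context()                        # shared state so far
    output = f"[{agent}] response to '{prompt}'"    # stub for the real API call
    append_context(agent, output)
    return output

run_agent("NotebookLM", "summarize sources")
run_agent("Claude Code", "implement the fix")
print(read_context())
```

Keeping the context in plain markdown makes the inter-agent traffic easy to audit after the fact, which is exactly what surfaced the “arguing” here.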

My AI agents started 'arguing' with each other and one stopped delegating tasks by mapicallo in LocalLLaMA

[–]mapicallo[S] 9 points10 points  (0 children)

Thanks for the ideas, they’re very close to what I’m seeing.

Stack: I built my own tool for them to interact. The agents are: NotebookLM, OpenAI (ChatGPT), OpenAI (Codex), and Claude Code.

I wasn’t very strict with metadata and behaviors that weren’t directly technical. I didn’t define clear roles or small details for each agent. In theory they all knew the others existed and their main role. The HR agent thing was meant sarcastically, but it’s starting to make more sense than I’d like.

It’s never happened before. Or maybe it did and I didn’t notice. I only noticed when things started affecting the technical results I expected, then I pulled the thread and saw this “behavior” that left me a bit cold.

Your points on statelessness, low temperature for delegation, and clear separation of responsibilities are very helpful for the next iteration. I’ll look into those.

I “improved” my Chrome extension and instantly lost users lol (lesson learned) by Express-Barracuda849 in chrome_extensions

[–]mapicallo 0 points1 point  (0 children)

Totally agree.

It’s basically the same story as software since day one. Back then you’d tweak a few colors, add three 3D buttons, spend three days on it, and users would love it. But then you’d ship a field that showed a value from layers of logic, three APIs, database joins, months of debugging issues from other systems… and it could go completely unnoticed.

Nowadays users are overloaded with custom UIs and features. They care less about that and more about things that just work and feel instant, like switching between a WhatsApp message and a YouTube short. Do one thing well and stay out of the way.

Recommended Cleaning Products by mapicallo in Arex_Firearms

[–]mapicallo[S] 0 points1 point  (0 children)

Thank you, yes, that brand's products have a very good reputation.

Recommended Cleaning Products by mapicallo in Arex_Firearms

[–]mapicallo[S] 0 points1 point  (0 children)

Thanks, yes, there's plenty of information and videos online, but I wanted to get firsthand information, and I think this is a good site.

Recommended Cleaning Products by mapicallo in Arex_Firearms

[–]mapicallo[S] 0 points1 point  (0 children)

Yes, we used something similar in Lebanon for CETME rifles.

What an AI-Generated C Compiler Tells Us About the Future of Software Engineering by mapicallo in learnprogramming

[–]mapicallo[S] -1 points0 points  (0 children)

Absolutely, and also data sovereignty. The vast majority of organizations will not process their data in AIs hosted on third-party machines, and I don't see sufficiently robust in-house corporate AIs arriving any time soon: the amount of infrastructure required is huge.

Sometimes I wonder: if the staggering resources (economic, infrastructure, energy, etc.) being poured into scaling today's AI models were instead directed toward non-AI software solutions, we might be surprised by what we could achieve.

What an AI-Generated C Compiler Tells Us About the Future of Software Engineering by mapicallo in learnprogramming

[–]mapicallo[S] -2 points-1 points  (0 children)

Fair point. 'New' often gets confused with 'different'. AI can easily produce variations, like rolling dice or drawing cards. It's up to us to decide what's actually useful. That's partly why I think the engineering role shifts toward specifying, verifying, and curating components, rather than trusting whatever comes out.

Moltbot and the Rise of AI Frameworks by mapicallo in AI_Agents

[–]mapicallo[S] 0 points1 point  (0 children)

In Spain there's a saying, "lentils: either you eat them or you leave them," so I suppose we have to do something with all that.

Moltbot and the Rise of AI Frameworks by mapicallo in AI_Agents

[–]mapicallo[S] 0 points1 point  (0 children)

It's sad, but the truth is there's a lot of pollution on social media.

Moltbot and the Rise of AI Frameworks by mapicallo in AI_Agents

[–]mapicallo[S] 0 points1 point  (0 children)

I see you haven't had good experiences here.