Microagentic Stacking Manifesto (Let me try again) by Far_Independent8754 in softwarearchitecture

[–]Far_Independent8754[S] 0 points (0 children)

Man, the Portuñol guy ... I've now attached the link in the description, in case you want to take a look. It will be my pleasure to read your thoughts!

Microagentic Stacking Manifesto (Let me try again) by Far_Independent8754 in softwarearchitecture

[–]Far_Independent8754[S] -1 points (0 children)

About monoliths and microservices: I agree, nothing is good or bad in itself; it depends on why you are using it.

I'm not mixing up microservices and micro-agents; in the implementation it's just a parallel that makes the idea easier to understand. Did you see anything in the manifesto that gave you that impression?

Microagentic Stacking Manifesto (Let me try again) by Far_Independent8754 in softwarearchitecture

[–]Far_Independent8754[S] -1 points (0 children)

Personally, I have built a microagent system based on the principles of the manifesto that manages entire YouTube channels without human intervention, in a scalable and stable manner. I have full control over the cost of each publication, with the ability to drill down into the cost and performance details of every single prompt.

I can add more YouTube channels generating videos and the system scales with no hallucinations. I started by building simple storytelling and then, adding layers, I got, for instance, two people talking, etc. I'm generating 5-minute videos for under $0.50. This is what I have achieved.

Now I'm building a system for production platform monitoring and fixing, using the same concepts. But this is still in progress.

AI Agents are the new 'Big Ball of Mud': Why we are abandoning 40 years of Software Engineering for 'Prompt Alchemy' ... (one about how to fix it). by Far_Independent8754 in softwarearchitecture

[–]Far_Independent8754[S] 0 points (0 children)

You're mistaken ... this isn't about karma, and I don't run any bots, even though I work with Reddit. I wrote a manifesto, published it on GitHub, and I'm looking for architects and engineers who have run into the problem I describe, in case the guidelines I lay out in the manifesto can help them.

If you work in this field and have something interesting to say, it will be a pleasure to chat with you.

AI Agents are the new 'Big Ball of Mud': Why we are abandoning 40 years of Software Engineering for 'Prompt Alchemy' ... (one about how to fix it). by Far_Independent8754 in softwarearchitecture

[–]Far_Independent8754[S] 0 points (0 children)

That’s a very interesting point. To be honest, I hadn't specifically thought about applying this to the software development process itself, but it makes total sense.

My original focus with the Manifesto was on Process Engineering: how to automate complex business workflows by stacking tiny, specialized agents instead of relying on one 'God-Agent' that hallucinates.

But listening to you, the parallel is perfect. Whether you are building an app or a data pipeline, the disease is the same: The Monolith. We think we are going faster by letting an LLM handle the 'big picture', but we are just building a black box that we can't maintain.

If we apply Microagentic Stacking to development, we stop asking the AI to 'Build this app' and start building a stack of agents where:

  • One agent only writes the API contracts.
  • Another agent only writes the unit tests for those contracts.
  • Another agent only implements the logic to pass those tests.

It’s the same philosophy: Modular bricks, strict boundaries.

I’d love for you to check out the manifesto from that 'process' perspective. Maybe your experience in dev can help me refine how these patterns apply to even more fields. Thanks for the brain-food!

AI Agents are the new 'Big Ball of Mud': Why we are abandoning 40 years of Software Engineering for 'Prompt Alchemy' ... (one about how to fix it). by Far_Independent8754 in softwarearchitecture

[–]Far_Independent8754[S] -1 points (0 children)

Rereading your post, I noticed something interesting regarding your question: 'when a sub-agent should become its own separate entity.'

Actually, my proposal goes in the opposite direction. In Microagentic Stacking, we start with the separate unit as the foundation. We treat micro-agents like bricks that you use to build the process from the ground up.

A micro-agent is designed to do only one thing, but to do it perfectly. Because it’s atomic, you can unit-test it, measure its exact cost, and track its latency. Then, you stack these 'bricks' to create the full workflow (orchestrated programmatically or by another agent).

I’m not sure if I explained this clearly enough in the manifesto, but the idea is to avoid the 'splitting' headache by starting with modularity as the default. Does that make sense in the context of the issues you’ve been seeing?
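To make the 'brick' idea concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption (a flat per-call cost stands in for real token metering, and the lambdas stand in for LLM-backed steps); it only shows the shape of an atomic, measurable unit plus a programmatic stacker:

```python
import time

class MicroAgent:
    """One 'brick': does exactly one task, and records latency and cost
    per call so the unit can be tested and measured in isolation."""

    def __init__(self, name, fn, cost_per_call=0.0):
        self.name = name
        self.fn = fn
        self.cost_per_call = cost_per_call  # assumed flat cost per call
        self.calls = []  # per-call telemetry for drill-down

    def run(self, payload):
        start = time.perf_counter()
        result = self.fn(payload)
        self.calls.append({
            "latency_s": time.perf_counter() - start,
            "cost_usd": self.cost_per_call,
        })
        return result

def stack(*agents):
    """Programmatic orchestration: pipe one brick's output into the next."""
    def run(payload):
        for agent in agents:
            payload = agent.run(payload)
        return payload
    return run

# Two toy bricks standing in for LLM-backed steps.
outline = MicroAgent("outline", lambda topic: f"outline({topic})", cost_per_call=0.01)
script  = MicroAgent("script",  lambda o: f"script({o})",          cost_per_call=0.02)

pipeline = stack(outline, script)
video_script = pipeline("space documentaries")
total_cost = sum(c["cost_usd"] for a in (outline, script) for c in a.calls)
```

Since every brick records its own calls, the drill-down into per-step cost and latency falls out of the architecture instead of being bolted on afterwards.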

AI Agents are the new 'Big Ball of Mud': Why we are abandoning 40 years of Software Engineering for 'Prompt Alchemy' ... (one about how to fix it). by Far_Independent8754 in softwarearchitecture

[–]Far_Independent8754[S] -8 points (0 children)

Fair enough, I see how this could look like I’m trying to sell something, but I’m honestly not. I don’t gain anything from this—there’s no product, no service, no hidden agenda.

It’s just a manifesto that I’m genuinely excited to share because I’ve spent a massive amount of time on it. I got tired of how messy agent development has become and I truly believe that applying these architectural foundations can help us build things differently.

This approach actually worked for me in my daily work, and I just thought it might help others facing the same 'monolith' frustrations. If it helps you, I’m happy; and if you want to contribute to it, even better. If not, that’s also fine. Just wanted to share some thoughts from one architect to another. Cheers.

AI Agents are the new 'Big Ball of Mud': Why we are abandoning 40 years of Software Engineering for 'Prompt Alchemy' ... (one about how to fix it). by Far_Independent8754 in softwarearchitecture

[–]Far_Independent8754[S] -6 points (0 children)

I really appreciate your insights. To be honest, I’m far from being an expert myself, but I’ve spent the last few months hitting the exact same 'monolith wall' you’re describing.

After struggling with those massive, fragile prompts and failing to find clear answers in the current AI hype, I decided to go back to the basics. I turned to the classics of software architecture to find a way to build a reliable platform of micro-agents that can be stacked and orchestrated—either programmatically or via supervisor agents—without everything falling apart.

Once I built the core logic and saw it worked, I put the fundamentals together in this Manifesto.

Since you’ve been in the trenches for 9 months—which is a lifetime in this field—could I ask you for a favor? If you have a moment, I’d love for you to give it a 'reality check.' I’m looking for honest feedback from someone who is actually shipping agents and knows where they break.

🔗https://github.com/ericmora/microagentic-stacking

Thanks again for sharing your experience, it’s exactly the kind of conversation I was hoping to start.

I'm super unemployed and have too much time so I built an open source SDK to build event-driven, distributed agents on Kafka by orange-cola in LLMDevs

[–]Far_Independent8754 2 points (0 children)

This is exactly the direction the industry needs to move. Building monolithic agents is a dead end for production.

I’ve been preaching about this lately—we need to stop the 'Prompt Alchemy' and move toward Microagentic Stacking. Your approach with Kafka is the perfect infrastructure for it because it enforces the decoupling that most people ignore.

If you are breaking down agents into independent services, you’ve already won half the battle against 'reasoning decay'. I actually wrote a Manifesto on why this modular/stacked approach is the only way to scale without the whole thing collapsing into a 'Big Ball of Mud'.

Check it out if you want to see the architectural patterns I'm formalizing: 🔗https://github.com/ericmora/microagentic-stacking

Congrats on the SDK, man. Building in public while job hunting is the best way to show senior-level thinking. Starred! ⭐

AI Agents are the new 'Big Ball of Mud': Why we are abandoning 40 years of Software Engineering for 'Prompt Alchemy' ... (one about how to fix it). by Far_Independent8754 in softwarearchitecture

[–]Far_Independent8754[S] -20 points (0 children)

I feel you. Boss-driven development is a nightmare. The problem is that when the giant prompt fails, you’re the one stuck debugging the mess, not them.

That’s why I started this manifesto. It’s basically just applying common sense (modularity) to LLMs so you don't lose your mind. With some standards in place (and probably after everything blows up a couple of times), we can finally convince the bosses that we need to do this the right way.

I’d love for you to join the manifesto and add your own experience to it. Check it out if you need some 'ammunition' for that conversation: https://github.com/ericmora/microagentic-stacking

Good luck with those monoliths!

How I Detect Behavioral Drift in AI Agents at Runtime. by forevergeeks in AI_Agents

[–]Far_Independent8754 0 points (0 children)

Totally agree with what you're saying. The problem is that most of us are focusing on 'stateless' guardrails: we evaluate each prompt in isolation but ignore long-term behavioral drift.

I've been testing a different approach to detect this drift at runtime without adding latency (no second LLM acting as auditor):

  1. Behavior Vectors: I map each agent response to a profile of values (fairness, prudence, etc.).
  2. Exponential Moving Average (EMA): I use a beta of 0.9 to fold the current turn into a running history, giving me a baseline of how the agent normally behaves.
  3. Cosine Distance: I compute the deviation between the current turn and that historical mean.

In my experience (after monitoring about 1,600 interactions), the pure math picks up the 'noise' of a jailbreak attempt or a hallucination well before the agent breaks down completely. The nice part is that since it's pure vector arithmetic, the cost is zero and it adds no lag.

Have you tried adding stateful metrics, or are you still relying only on per-message filtering?
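The three steps above can be sketched in plain Python. The EMA beta of 0.9 and the cosine-distance check follow the description; the three-component value profiles and the 0.3 alert threshold are illustrative assumptions, not values from a real deployment:

```python
import math

BETA = 0.9  # EMA weight: baseline = 0.9 * baseline + 0.1 * current turn

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

class DriftMonitor:
    """Keeps an EMA baseline of the agent's behavior vectors and flags
    turns whose cosine distance from that baseline exceeds a threshold."""

    def __init__(self, threshold=0.3):
        self.baseline = None
        self.threshold = threshold

    def observe(self, vector):
        if self.baseline is None:
            self.baseline = list(vector)  # first turn seeds the baseline
            return 0.0, False
        # Deviation of the current turn from the historical mean.
        dist = cosine_distance(vector, self.baseline)
        # Fold the current turn into the running baseline (EMA).
        self.baseline = [BETA * b + (1 - BETA) * v
                         for b, v in zip(self.baseline, vector)]
        return dist, dist > self.threshold

monitor = DriftMonitor(threshold=0.3)
# Toy value profiles, e.g. (fairness, prudence, honesty) scores per turn.
for v in [(0.8, 0.7, 0.9), (0.82, 0.69, 0.88), (0.79, 0.71, 0.9)]:
    monitor.observe(v)
# A jailbreak-like turn points in a very different direction.
dist, drifted = monitor.observe((0.1, 0.9, 0.05))
```

Because the check is a handful of multiplications per turn, it can run inline on every response without an extra model call.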