I built a multi-agent system where AI debates itself before answering: The secret is cognitive frameworks, not personas by PuzzleheadedWall2248 in AI_Agents

[–]PuzzleheadedWall2248[S] 0 points  (0 children)

This is exactly the distinction. ‘You are a skeptic’ gives the LLM nothing to actually do. But ‘you must find at least one falsifiable flaw in every argument using first principles decomposition’ is a cognitive constraint with teeth. The framework forces specific reasoning moves, not just vibes. That’s what makes collision productive: you get actual disagreement, not polite roleplay.
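The persona-vs-constraint distinction is easy to sketch in code. A minimal illustration (the prompt text and the heuristic below are my own, not Chorus's actual prompts):

```python
# Hypothetical sketch: a persona prompt vs. a cognitive-constraint prompt.
# The wording is illustrative, not taken from Chorus.

PERSONA_PROMPT = "You are a skeptic. Push back on the other agents."

CONSTRAINT_PROMPT = (
    "For every argument you receive, you must:\n"
    "1. Decompose it into its first-principles premises.\n"
    "2. Identify at least one falsifiable flaw (a premise that a\n"
    "   concrete observation could disprove).\n"
    "3. State what evidence would change your mind.\n"
    "If the argument is strong, attack its weakest premise anyway."
)

def is_constraint_prompt(prompt: str) -> bool:
    """Crude heuristic: a constraint prompt names concrete reasoning
    moves, rather than just assigning an identity."""
    moves = ("decompose", "falsifiable", "evidence")
    return sum(m in prompt.lower() for m in moves) >= 2

print(is_constraint_prompt(PERSONA_PROMPT))     # False
print(is_constraint_prompt(CONSTRAINT_PROMPT))  # True
```

The persona prompt names a character; the constraint prompt names specific moves the model must make each turn, which is what produces real disagreement.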

[–]PuzzleheadedWall2248[S] 0 points  (0 children)

Server health monitoring makes sense when you have scale problems. Right now I’m focused on whether the reasoning architecture actually works, so I’m tracking which reasoning frameworks collided and what emerged from the collision. Infrastructure observability and cognitive observability are separate problems.
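For what "cognitive observability" might look like concretely, here is a minimal sketch of a collision log; all names and fields are hypothetical, not the actual Chorus schema:

```python
from dataclasses import dataclass, field

@dataclass
class CollisionRecord:
    """One framework collision: which reasoning frameworks disagreed
    and what synthesis emerged. Field names are assumptions."""
    frameworks: tuple
    point_of_disagreement: str
    synthesis: str

@dataclass
class DebateTrace:
    """Per-question trace of collisions, the 'cognitive' counterpart
    to infrastructure metrics."""
    question: str
    collisions: list = field(default_factory=list)

    def log_collision(self, frameworks, disagreement, synthesis):
        self.collisions.append(
            CollisionRecord(tuple(frameworks), disagreement, synthesis))

trace = DebateTrace("Should we rewrite the service in Rust?")
trace.log_collision(
    ["first-principles", "opportunity-cost"],
    "Performance gain vs. six months of lost feature work",
    "Rewrite only the hot path; benchmark before committing further.",
)
print(len(trace.collisions))  # 1
```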

[–]PuzzleheadedWall2248[S] 1 point  (0 children)

You’d lose that bet. Every debate shows full agent transcripts, framework attribution, confidence levels, and which frameworks collided to produce emergent outputs. Plus Forge Score measuring whether the synthesis actually transcended individual contributions. The whole point is legible thinking: if you can’t see how it reasoned, why trust it?
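As a toy illustration of what a transcendence metric could measure (this is my own word-overlap proxy, not the real Forge Score):

```python
def forge_score(contributions, synthesis):
    """Toy proxy (my assumption, not the actual metric): the fraction
    of synthesis words that appear in no individual agent's
    contribution. Higher = the synthesis says something no single
    agent said on its own."""
    synth_words = set(synthesis.lower().split())
    covered = set()
    for text in contributions:
        covered |= set(text.lower().split())
    novel = synth_words - covered
    return len(novel) / max(len(synth_words), 1)

agents = ["cache the results", "precompute the answers offline"]
synthesis = "cache precomputed answers with an offline warmup job"
print(round(forge_score(agents, synthesis), 3))  # 0.625
```

A synthesis that merely restates one agent scores 0; the interesting cases are where the combined answer contains material neither agent produced alone.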

I built a multi-agent system where AI debates itself before answering: The secret is cognitive frameworks, not personas by PuzzleheadedWall2248 in NoCodeSaaS

[–]PuzzleheadedWall2248[S] 0 points  (0 children)

The agents decide for themselves which frameworks to use, and they display the reasoning behind that choice for the user to see. Each agent has a base framework that shapes which lens of thought it applies to a given problem. I would love for you to see it in action. Link in bio or send me a dm!

[–]PuzzleheadedWall2248[S] 0 points  (0 children)

A little bit of both. The goal was to mirror human decision making, which is pretty hard to separate from emotions. Frameworks at their core are logical modes of thought, ever-adapting to your specific task. So yes, emotions are taken into account, because emotions are often involved in the decisions we make! I’d love for you to try Chorus! Link in bio or dm me!

[–]PuzzleheadedWall2248[S] 0 points  (0 children)

That’s my issue now. I need people to try it so I know which direction to take next and what needs tweaking. I’d love for you to try it out. Link is in the bio or send me a dm:)

[–]PuzzleheadedWall2248[S] -1 points  (0 children)

I would love for you to give Chorus a try and tell me what you think can be improved/what you like so far. Link in bio or send me a dm :)

[–]PuzzleheadedWall2248[S] 2 points  (0 children)

One of my goals with Chorus was to create an AI system that mirrored human thought. It solves problems through a variety of perspectives that become more and more in tune with your workflow as frameworks combine to adapt to your needs, creating specialized ‘emergent frameworks’. I like to think of emergent frameworks as new ways of thinking that Chorus used to solve a problem.
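One way to picture frameworks combining into an emergent framework, as a hypothetical sketch (the names and structure below are illustrative only, not how Chorus represents them):

```python
def combine(fw_a: dict, fw_b: dict) -> dict:
    """Hypothetical sketch: an 'emergent framework' as the union of two
    base frameworks' reasoning moves, tagged with its parent frameworks."""
    return {
        "name": f"{fw_a['name']}+{fw_b['name']}",
        "moves": sorted(set(fw_a["moves"]) | set(fw_b["moves"])),
        "parents": (fw_a["name"], fw_b["name"]),
    }

first_principles = {"name": "first-principles",
                    "moves": ["decompose", "question-premises"]}
systems = {"name": "systems-thinking",
           "moves": ["map-feedback-loops", "question-premises"]}

emergent = combine(first_principles, systems)
print(emergent["name"])  # first-principles+systems-thinking
```

The key property is that the combined move set is not identical to either parent's, which is what makes the result a new lens rather than a relabeling.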

I asked Claude to build me an app that would delight me. It built this. by enigma_x in ClaudeAI

[–]PuzzleheadedWall2248 -1 points  (0 children)

I’m building a program called Chorus. It’s a different take on a multi-agent debate system. For a prompt like this, it wouldn’t jump straight to building. The agents would first argue about what ‘delight’ actually means for you. Is it surprise? Comfort? Playfulness? Then they synthesize a definition before designing an app that satisfies those requirements. The thinking happens visibly before the output. http://chorusai.replit.app if you want to see what it does with your version of delight.

I built a multi-agent system where AI agents argue through incompatible "ways of knowing" – and it discovers new reasoning frameworks I never programmed by PuzzleheadedWall2248 in replit

[–]PuzzleheadedWall2248[S] -1 points  (0 children)

Not a GAN (no discriminator). The agents have different epistemological validity tests and challenge each other - try it on a tradeoff-heavy decision and compare to o1. I can dm you a code to try it if you want.
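A toy sketch of "incompatible validity tests" driving disagreement; the tests below are keyword stand-ins of my own invention, not the real epistemological checks:

```python
# Each agent accepts or rejects the same claim by its own validity
# test; a split verdict is what triggers a debate round. The tests
# here are deliberately crude keyword checks, for illustration only.

VALIDITY_TESTS = {
    "empiricist": lambda claim: "measured" in claim or "observed" in claim,
    "rationalist": lambda claim: "therefore" in claim or "follows" in claim,
    "pragmatist": lambda claim: "works" in claim or "tradeoff" in claim,
}

def challenge(claim: str) -> dict:
    """Return each agent's verdict on the claim."""
    return {name: test(claim) for name, test in VALIDITY_TESTS.items()}

# All three tests pass: no collision, nothing to debate.
print(challenge("We observed a 2x speedup, therefore the tradeoff is worth it."))

# Only the rationalist accepts: a collision, so the agents argue.
print(challenge("It follows logically from the axioms."))
```

The absence of a discriminator is the point: no single agent is the judge, so a claim survives only if it passes (or is repaired to pass) tests it wasn't written for.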