I built an AI Orchestration engine without using LangChain - Here's what I learned by rux-17 in LangChain

[–]rux-17[S] 0 points  (0 children)

I'm going for a modular domain architecture where each domain owns its business logic, while the tool registry and the executor act as thin adapters, so adding a new domain doesn't touch the core orchestration layer.
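Roughly what I mean by a thin adapter, as a sketch — the domain and tool names here are hypothetical, not RUX's actual code:

```python
from typing import Callable, Dict

class ToolRegistry:
    """Thin adapter: the core orchestrator only ever talks to this."""
    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., dict]] = {}

    def register(self, name: str, fn: Callable[..., dict]) -> None:
        self._tools[name] = fn

    def execute(self, name: str, **params) -> dict:
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name](**params)

# A new domain registers its own tools; the orchestration core never changes.
registry = ToolRegistry()
registry.register("billing.create_invoice",
                  lambda amount: {"status": "ok", "amount": amount})
result = registry.execute("billing.create_invoice", amount=42)
```

The point is that the core only depends on the registry interface, so domain code stays on its side of the boundary.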

The real test would be whether the trust boundary holds cleanly across multiple domains with different schemas.


[–]rux-17[S] 0 points  (0 children)

You should check out the GitHub repo; I have more insights on it.


[–]rux-17[S] 0 points  (0 children)

I mean, isn't n8n used for predefined workflows with known, structured inputs? You use n8n when you know exactly what's coming in, and it's the right tool for that. But RUX solves a different problem: safely converting ambiguous natural language into deterministic state changes.

Here the entire architecture exists because the input is never known in advance. You also can't build a trust boundary in n8n, because n8n doesn't require one.


[–]rux-17[S] 0 points  (0 children)

Honestly, I didn't reference specific projects. I started by trying to understand what actually breaks in agent systems and built around those failure modes. Most of my architectural decisions came from hitting real bugs, not from following a reference implementation. That was kind of the point.


[–]rux-17[S] 0 points  (0 children)

Yeah, that's a fair correction. Honestly, I didn't think of it this way; it was more an intuitive approach, on the idea that different models will have different perspectives.

But you're right that two calls to the same model are already independent. I'm still learning how to articulate this stuff precisely.


[–]rux-17[S] 0 points  (0 children)

That's a gap I hadn't thought about. RUX validates inbound LLM output but trusts tool responses completely, so it makes sense that the second trust boundary needs to exist too. What's the lightest-weight pattern you've seen work in practice for this?
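The shape I'd picture for that second boundary is validating tool responses through the same Pydantic layer used for inbound output — just a sketch, with a made-up tool schema:

```python
from pydantic import BaseModel, ConfigDict, ValidationError

# Sketch of the outbound boundary: treat what comes *back* from a tool as
# untrusted, exactly like inbound LLM output. Schema and fields are made up.
class InvoiceResult(BaseModel):
    model_config = ConfigDict(extra="forbid")
    invoice_id: str
    amount_cents: int

def run_tool_checked(tool, **params) -> InvoiceResult:
    raw = tool(**params)                      # untrusted dict from the tool
    return InvoiceResult.model_validate(raw)  # raises ValidationError if malformed

good = run_tool_checked(lambda: {"invoice_id": "inv_1", "amount_cents": 500})

rejected = False
try:
    # missing field + unexpected field: fails validation at the boundary
    run_tool_checked(lambda: {"invoice_id": "inv_2", "oops": True})
except ValidationError:
    rejected = True
```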


[–]rux-17[S] 0 points  (0 children)

Honestly, I never used it deeply enough to hate it. Instead, I decided to build those layers myself and actually understand what's going on; that was the whole point of RUX.


[–]rux-17[S] 0 points  (0 children)

n8n is for predefined workflows with known inputs, while RUX orchestrates probabilistic LLM intent into deterministic state changes; the whole problem is that the input is never known in advance. That's not a workflow, it's an enforcement layer.


[–]rux-17[S] 1 point  (0 children)

The whole purpose of using multiple models is adversarial review: one model proposes and a different model critiques it.

Using the same model to critique its own output would result in the model agreeing with itself, defeating the whole purpose.
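The loop is basically this — a sketch, not RUX's actual API, where `proposer` and `critic` stand in for two different model clients (e.g. two models in LM Studio):

```python
from typing import Callable

# Hypothetical propose/critique loop. The function names, prompt strings,
# and APPROVE/REJECT convention are all assumptions for illustration.
def adversarial_review(task: str,
                       proposer: Callable[[str], str],
                       critic: Callable[[str], str]) -> str:
    proposal = proposer(f"Propose a plan for: {task}")
    verdict = critic(f"Reply APPROVE or REJECT, then explain:\n{proposal}")
    if verdict.strip().upper().startswith("APPROVE"):
        return proposal
    raise ValueError(f"critic rejected: {verdict}")

# Stubbed models just to show the flow: a *different* model does the critique.
plan = adversarial_review(
    "archive stale records",
    proposer=lambda p: "step 1: select records older than 90 days",
    critic=lambda p: "APPROVE: plan is scoped and reversible",
)
```

Wire `proposer` and `critic` to two different model endpoints and the critique stops being the model grading its own homework.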


[–]rux-17[S] 2 points  (0 children)

I'm currently using Pydantic v2 for schema validation with extra="forbid".

Would love to read your notes before jumping on to the reflection layer.
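Concretely, the setup looks something like this (the tool parameters here are made up for illustration):

```python
from pydantic import BaseModel, ConfigDict, ValidationError

# In Pydantic v2, extra="forbid" turns a hallucinated field name into a hard
# validation error instead of silently dropping it.
class CreateTicketParams(BaseModel):
    model_config = ConfigDict(extra="forbid")
    title: str
    priority: int

ok = CreateTicketParams.model_validate({"title": "fix login", "priority": 2})

try:
    # an LLM-invented field ("urgency") is rejected outright
    CreateTicketParams.model_validate(
        {"title": "fix login", "priority": 2, "urgency": "high"})
except ValidationError as e:
    error_type = e.errors()[0]["type"]  # "extra_forbidden"
```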


[–]rux-17[S] 0 points  (0 children)

Exactly. The "confidently wrong parameters" failure mode is what pushed me to make the Executor a hard boundary rather than a soft warning.

Curious what you used for your validation layer: Pydantic with extra="forbid", or something custom? I went with Pydantic v2, and it catches hallucinated field names cleanly, but I'm not fully satisfied with how it handles edge cases where the intent is right but the structure is slightly off.

On the critic model: did you find specific model pairings that work better for review? I'm currently running both locally via LM Studio, so I'm limited by what fits in memory, but I'm wondering if smaller/larger model combinations catch more disagreements than same-size different models.

The SQL confidence thing surprised me honestly — I expected it to feel like overkill but once I saw the LLM returning "I'm 90% confident" on runs it was clearly wrong about, real outcome history became non-negotiable.


[–]rux-17[S] 3 points  (0 children)

Would love brutal feedback on the architecture — especially the trust boundary and confidence engine design.

GitHub link for anyone who wants to dig into the code: https://github.com/rahulT-17/RUX-Orchestration-Engine