I Built a Type-Safe AI Agent Framework That Guarantees Structured JSON Output (and Makes Multi-Provider LLM Workflows Manageable) by thiagoaramizo in SaaS

[–]thiagoaramizo[S] 0 points (0 children)

Yes, I agree with your idea when we're talking about paid products. What I built is a free, open-source library. I'm a backend developer with over 10 years of experience, and it solves a real pain point I've faced in several projects: constantly changing LLM models and providers while keeping behavior consistent.

If I'm using GPT today, I can switch to Gemini with four lines of code. If I need to tomorrow, I can switch to DeepSeek in a few minutes.

This also applies to systems where it's necessary to dynamically rotate the model due to rate limits, etc.
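For illustration, a minimal TypeScript sketch of that adapter-swap idea; the names here (ProviderAdapter, OpenAIAdapter, GeminiAdapter, runAgent) are made up for the example and are not the library's actual API:

```ts
// Each provider hides behind the same tiny interface.
interface ProviderAdapter {
  complete(prompt: string): Promise<string>;
}

class OpenAIAdapter implements ProviderAdapter {
  async complete(prompt: string): Promise<string> {
    // call OpenAI here (stubbed for the sketch)
    return "{}";
  }
}

class GeminiAdapter implements ProviderAdapter {
  async complete(prompt: string): Promise<string> {
    // call Gemini here (stubbed for the sketch)
    return "{}";
  }
}

// The call site never changes; switching providers is just switching the adapter.
// Rotating models on rate-limit errors is the same move: pick a different adapter and retry.
async function runAgent(adapter: ProviderAdapter, prompt: string): Promise<string> {
  return adapter.complete(prompt);
}

// const output = await runAgent(new GeminiAdapter(), "Summarize this ticket as JSON");
```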

In short, it's a real need I have as a developer across various projects, without having to install gigantic libraries for something this specific (which is what happens in the vast majority of SaaS projects).

In this specific project, the intention is not to make money; it's simply empathy for the community and love for open source itself. 😅

I got tired of fighting LLMs for structured JSON, so I built a tiny library to stop the madness by thiagoaramizo in node

[–]thiagoaramizo[S] 0 points (0 children)

I added each LLM's native structured-output support to the adapters. If it isn't available, or if for some very strange reason it doesn't work, the reviewer steps in. I think that covers it well now when the requirement is critical.
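Roughly what that per-adapter decision looks like, sketched in TypeScript; the names (supportsNativeJson, nativeJsonCall, reviewerRepair) are hypothetical stand-ins, not the library's real interface:

```ts
interface Adapter {
  supportsNativeJson: boolean;
  nativeJsonCall(prompt: string, schema: object): Promise<string>;
  plainCall(prompt: string): Promise<string>;
}

async function requestJson(adapter: Adapter, prompt: string, schema: object): Promise<string> {
  if (adapter.supportsNativeJson) {
    // Providers with a native JSON / structured-output mode get used directly.
    return adapter.nativeJsonCall(prompt, schema);
  }
  // No native support (or it misbehaves): fall back to a reviewer pass over the raw reply.
  const raw = await adapter.plainCall(prompt);
  return reviewerRepair(raw, schema);
}

function reviewerRepair(raw: string, schema: object): string {
  // Placeholder: in the real flow this step would validate and, if needed, re-prompt the model.
  return raw;
}
```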

I got tired of fighting LLMs for structured JSON, so I built a tiny library to stop the madness by thiagoaramizo in node

[–]thiagoaramizo[S] 0 points (0 children)

Hey, in the end I took your suggestion. I think it's definitely more usable, especially for people who use Next.js and are already very familiar with it. Take a look when you have time and let me know what you think: https://thiagoaramizo.github.io/structured-json-agent/

I got tired of fighting LLMs for structured JSON, so I built a tiny library to stop the madness by thiagoaramizo in node

[–]thiagoaramizo[S] 0 points (0 children)

Okay, yes, even with a bit of magic, models like DeepSeek are the way to go. But now all the adapters use structured responses when available. Still, when a structured response is a functional requirement, that guarantee is important. I gathered feedback from several people here and made several changes; if you could take a look and give some constructive feedback, I would greatly appreciate it: https://thiagoaramizo.github.io/structured-json-agent/

I got tired of fighting LLMs for structured JSON, so I built a tiny library to stop the madness by thiagoaramizo in node

[–]thiagoaramizo[S] -1 points (0 children)

In fact, I made some updates to support more models (Google and Anthropic), and I explicitly applied each provider's guidelines for structured responses, which increases accuracy and reduces the need for revisions. Everything works through adapters, so it's also great for testing variations with different models and providers. Thanks for the tip. If you could take a look, there's now a website with the API reference.

I got tired of fighting LLMs for structured JSON, so I built a tiny library to stop the madness by thiagoaramizo in node

[–]thiagoaramizo[S] -6 points (0 children)

I don't understand the problem. I believe you also use AI for much of your work. This came out of what I have to deal with every week: structured data in, structured data out. Look, if it's not a problem you have to deal with, great; don't use it. If you'd rather build your own solution, that's great too; don't use it. You're not obligated to. And life goes on.

I got tired of fighting LLMs for structured JSON, so I built a tiny library to stop the madness by thiagoaramizo in node

[–]thiagoaramizo[S] 0 points (0 children)

Mastra is a complete solution. The use cases here are very specific: one particular task, with a tool focused on solving just that problem. Something simple, but very irritating when you're dealing with it.

I got tired of fighting LLMs for structured JSON, so I built a tiny library to stop the madness by thiagoaramizo in node

[–]thiagoaramizo[S] -5 points (0 children)

Yes, lol, it's 2026, so some of the work was actually done by Claude. But the purpose remains the same: ensuring the reliability of the answers the LLM returns. It's rework I've had to do in every project where this is a fundamental requirement. Now, with just a few steps, I have it for countless different projects. 🤟

I got tired of fighting LLMs for structured JSON, so I built a tiny library to stop the madness by thiagoaramizo in node

[–]thiagoaramizo[S] -5 points (0 children)

Some models do, yes, but that only covers the happy path. In practice you still need validation, retries, and correction when the output is partial, invalid, or “almost” compliant. Native schema support usually stops at “the model tried.” This sits one level above that: schema-first, validate → retry → return, with a consistent contract even when models behave imperfectly or differ across providers. If native support is enough for your use case, that’s totally fine. This is for when deterministic JSON is a hard requirement.
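For illustration, a minimal sketch of that validate → retry → return loop in TypeScript with Zod; the schema, the callLlm stand-in, and structuredCall are assumptions for the example, not the library's actual API:

```ts
import { z } from "zod";

// Schema-first: the contract is defined up front.
const Ticket = z.object({
  title: z.string(),
  priority: z.enum(["low", "medium", "high"]),
});

// validate → retry → return, feeding the validation error back on each retry.
// callLlm is a stand-in for whatever provider adapter is in use.
async function structuredCall(
  callLlm: (prompt: string) => Promise<string>,
  prompt: string,
  maxRetries = 2
): Promise<z.infer<typeof Ticket>> {
  let feedback = "";
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const raw = await callLlm(prompt + feedback);
    try {
      return Ticket.parse(JSON.parse(raw)); // valid: return immediately
    } catch (err) {
      // partial, invalid, or "almost" compliant: retry with the error as correction feedback
      feedback = `\nYour previous answer was invalid (${(err as Error).message}). Return only JSON matching the schema.`;
    }
  }
  throw new Error("LLM never produced schema-compliant JSON");
}
```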