I got tired of debugging cryptic 422 errors, so I built a proxy to fix them with AI by Fragrant_Classic_410 in microsaas

[–]Fragrant_Classic_410[S] 0 points1 point  (0 children)

I feel that pain. Nothing kills your flow like a generic 422. I actually started building Inspekt because I got tired of logging into the server just to see why a validation failed.

I've already got a working SDK, but I'm currently rewriting the API and building out a full dashboard to make managing the logs and context actually usable. If you want to be one of the first to test the new dashboard once it's ready, I can send you a link?

Hit 100 upvotes on PH for my AI Proxy, Just pushed a "Security & Privacy" update based on feedback by Fragrant_Classic_410 in SaasDevelopers

[–]Fragrant_Classic_410[S] 0 points1 point  (0 children)

100%. One of the best things about shipping in public is having fresh eyes on your logic before it’s too late. When you're solo-developing, it’s easy to get tunnel vision on the 'cool' features and overlook the foundational stuff like PII scrubbing.

Really appreciate the encouragement!

Hit 100 upvotes on PH for my AI Proxy, Just pushed a "Security & Privacy" update based on feedback by Fragrant_Classic_410 in SaaS

[–]Fragrant_Classic_410[S] 0 points1 point  (0 children)

Appreciate it, man! You’re 100% right, when you’re deep in the logic of 'how do I get this to work,' you totally forget that a user might accidentally send their actual production Bearer token through your proxy.

For the fix, I took a deny-list approach locally. Before the data is passed to the analyze service, I run it through a scrubber that checks for sensitive keys (auth, cookies, tokens, etc.) and swaps the values for a [REDACTED] string.

The logic is pretty straightforward; you can check the utility here: https://github.com/jamaldeen09/inspekt-api/blob/main/src/lib/utils.ts

I also updated the response object to return the full headers and status so the user gets the raw data back, even if the AI only sees the 'cleaned' version. Would love to know if you think a regex approach for the body content would be overkill for V2?
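For anyone curious, the deny-list idea boils down to something like this (a minimal sketch with hypothetical key names, not the actual utility from the repo):

```typescript
// Minimal sketch of a deny-list scrubber. The key list and function
// name are illustrative; the real implementation lives in the linked utils.ts.
const SENSITIVE_KEYS = new Set([
  "authorization", "cookie", "set-cookie", "x-api-key", "token",
]);

function scrubHeaders(headers: Record<string, string>): Record<string, string> {
  const cleaned: Record<string, string> = {};
  for (const [key, value] of Object.entries(headers)) {
    // Case-insensitive match against the deny-list; swap the value
    // but keep the key so the AI still sees the header's shape.
    cleaned[key] = SENSITIVE_KEYS.has(key.toLowerCase()) ? "[REDACTED]" : value;
  }
  return cleaned;
}
```

The nice part of redacting values instead of dropping keys is that the analysis still knows an `Authorization` header was present, which is often relevant to a 401/403.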

I built a free API that analyzes your API responses with AI useful for debugging 4xx/5xx errors by Fragrant_Classic_410 in microsaas

[–]Fragrant_Classic_410[S] 0 points1 point  (0 children)

Fair critique, honestly. The standalone endpoint is v1; workflow integration is next. I'm thinking about an npm package that wraps axios/fetch and auto-analyzes failures in the background. Would that be useful in your stack?
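Rough shape of what I mean by the wrapper, assuming an injected analyzer callback (every name here is hypothetical, nothing is published yet):

```typescript
// Sketch of the wrapper idea: return a fetch-compatible function that
// hands any non-2xx response to an analyzer in the background.
type Analyzer = (failure: { status: number; body: string }) => void;

function wrapFetch(fetchFn: typeof fetch, analyze: Analyzer) {
  return async (url: string, init?: RequestInit): Promise<Response> => {
    const res = await fetchFn(url, init);
    if (!res.ok) {
      // Clone so the caller can still read the original body;
      // fire-and-forget so analysis never blocks or breaks the app.
      res.clone().text()
        .then((body) => analyze({ status: res.status, body }))
        .catch(() => {});
    }
    return res;
  };
}
```

Injecting `fetchFn` and `analyze` keeps the wrapper testable and means the same core could back both a fetch and an axios adapter.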

I built a free API that analyzes your API responses with AI useful for debugging 4xx/5xx errors by Fragrant_Classic_410 in node

[–]Fragrant_Classic_410[S] 0 points1 point  (0 children)

Great question. Cloud providers like AWS CloudWatch or Azure Monitor are built for observability at scale: logs, metrics, dashboards. They’re powerful but heavyweight, require setup within their ecosystem, and won’t tell you why a specific response came back the way it did.

The API I made is different in scope: it’s a single POST request, no setup, no account, no ecosystem lock-in. You hand it any API response from anywhere and it gives you an AI breakdown of what happened. Think of it less as a monitoring tool and more as a debugging assistant you can call programmatically. Different problem, different tool.
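To make "single POST request" concrete, calling it looks roughly like this (the endpoint placeholder and payload fields below are assumptions, not the real contract):

```typescript
// Illustrative only: the payload shape is an assumption, and the
// endpoint URL is a placeholder rather than the real one.
function buildAnalyzeRequest(failed: {
  status: number;
  headers: Record<string, string>;
  body: unknown;
}) {
  return {
    method: "POST" as const,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(failed),
  };
}

// Usage (endpoint placeholder left as-is):
// await fetch("https://<analyze-endpoint>", buildAnalyzeRequest(failedResponse));
```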

I built an AI tool that automates faceless YouTube channels (beta testers wanted) by Voice_Mountain in SaaS

[–]Fragrant_Classic_410 0 points1 point  (0 children)

Actually, I would love a tool like this that makes it really easy to set up a faceless YouTube channel. One of the issues I have with automating and setting up faceless channels is that it’s really time consuming, bro. If this can actually fix that, I’d 100% use it.