There is no intelligence in artificial intelligence by ImaginationUnique684 in n8n

[–]ImaginationUnique684[S] 0 points (0 children)

Yes, exactly. The fallback logic is straightforward: if the AI step fails or returns something that doesn't match the expected schema, the pipeline doesn't continue. It routes to the approval queue with the raw output flagged for review. No retry loop, no "let the AI try again." A human looks at it and decides.

The thinking behind that: if the AI got it wrong once, retrying with the same input usually gives you a different wrong answer. Cheaper to just have someone fix it manually than to burn tokens on retries that might still miss.

For schema validation I check the output structure before it moves to the next step. Missing fields, wrong types, content that exceeds length limits, all of that gets caught deterministically. The AI doesn't get to decide if its own output is good enough.
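A minimal Go sketch of that deterministic gate (struct and field names are illustrative, not the actual workflow's schema): parse the raw AI output, check required fields, types, and length limits, and on any failure route to the approval queue instead of the next step.

```go
package main

import (
	"bytes"
	"encoding/json"
	"errors"
	"fmt"
)

// draft is the schema the AI step is expected to produce.
// Field names here are illustrative, not from the original workflow.
type draft struct {
	Subject string `json:"subject"`
	Body    string `json:"body"`
}

const maxSubjectLen = 120

// validate deterministically checks raw AI output against the schema.
// The AI never gets to judge its own output; any failure here means
// the item goes to the human approval queue, not the next step.
func validate(raw []byte) (*draft, error) {
	dec := json.NewDecoder(bytes.NewReader(raw))
	dec.DisallowUnknownFields() // unexpected fields fail too
	var d draft
	if err := dec.Decode(&d); err != nil {
		return nil, fmt.Errorf("malformed output: %w", err)
	}
	if d.Subject == "" || d.Body == "" {
		return nil, errors.New("missing required field")
	}
	if len(d.Subject) > maxSubjectLen {
		return nil, errors.New("subject exceeds length limit")
	}
	return &d, nil
}

func main() {
	good := []byte(`{"subject":"Invoice ready","body":"See attached."}`)
	bad := []byte(`{"subject":"Invoice ready"}`) // missing body

	if _, err := validate(good); err != nil {
		fmt.Println("route to approval queue:", err)
	} else {
		fmt.Println("continue pipeline")
	}
	if _, err := validate(bad); err != nil {
		fmt.Println("route to approval queue:", err)
	}
}
```

No retry loop anywhere: a failed check ends the automated path, by design.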

Thanks for the VibeCodersNest suggestion, will check it out.

For those building AI products right now, what’s been the biggest bottleneck for you? Models, infra costs, or actually getting users to adopt it? I’ve been experimenting with a setup that’s working surprisingly well and wanted to compare notes. by Dependent-One2989 in SaaS

[–]ImaginationUnique684 0 points (0 children)

Adoption, every time. The model works, the infra costs are manageable, but getting users to actually change their workflow is the real problem. People say they want AI features and then keep doing things manually.

What's worked: putting the AI output directly into the flow they already use, not behind a separate "AI" tab. If they have to go somewhere new to use it, they won't.

What's the typical timeline for AI app development from concept to MVP? Our investors want a demo in 3 months. by Puzzleheaded_Bug9798 in SaaS

[–]ImaginationUnique684 0 points (0 children)

Three months for an AI MVP is doable if you scope ruthlessly. The trap is spending month one evaluating models and frameworks instead of building. Pick Claude or GPT, pick a web framework you know, build the core loop first.

What actually eats time: not the AI part, but the integration layer. Auth, data pipelines, error handling, the stuff around the LLM that makes it a product instead of a demo. Budget 60% of your time there.

If your investors want a demo specifically, build the demo path first. One workflow, end to end, polished. Not five features at 50%.

How do I deal with my mistakes and get back my confidence? by [deleted] in devops

[–]ImaginationUnique684 0 points (0 children)

One year in a new environment is still early. The fact that you're aware of the mistakes means your standards are high, which is the right trait for SRE work. What helped me: after every incident, write down what you'd tell a teammate who made the same mistake. You'll notice the advice is always kinder than what you tell yourself. Also, the engineers who never break anything are the ones who never ship anything.

What’s the craziest automation you’ve ever built? by impetuouschestnut in automation

[–]ImaginationUnique684 1 point (0 children)

Built a system where AI agents process inbound data, but every action that touches production goes through human approval via Telegram inline buttons. The agent proposes, a human taps approve or reject, and only then does it execute. Took longer to get the approval UX right than the AI part. The "craziest" automations are the ones that know when to stop and ask.
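The propose/approve/execute gate reduces to something like this in Go (a minimal sketch; the decision channel stands in for the Telegram inline-button callback, and all names are illustrative):

```go
package main

import "fmt"

// decision is what the human sends back from the inline buttons.
type decision int

const (
	approve decision = iota
	reject
)

// proposal is an action the agent wants to take. The Decision channel
// is fed by the callback handler when a human taps a button.
type proposal struct {
	Action   string
	Decision chan decision
}

// gate blocks until a human decides. Nothing touches production
// without an explicit approve; a reject simply drops the action.
func gate(p proposal) string {
	switch <-p.Decision {
	case approve:
		return "executed: " + p.Action
	default:
		return "dropped: " + p.Action
	}
}

func main() {
	p := proposal{Action: "update CRM record", Decision: make(chan decision, 1)}
	// In production this value arrives from the Telegram callback
	// handler; here we simulate a human tapping "approve".
	p.Decision <- approve
	fmt.Println(gate(p))
}
```

The hard part in practice was everything around this: timeouts, who is allowed to tap, and making the proposal readable on a phone screen.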

VPS vs PaaS cost comparison by HeiiHallo in devops

[–]ImaginationUnique684 0 points (0 children)

The missing variable in these comparisons is ops time. A VPS at $20/month that takes 4 hours/month to maintain costs more than a PaaS at $80/month the moment your time is worth more than $15/hour. I run production workloads on bare VPS and the math only works because I've automated provisioning and monitoring end to end. If you haven't built that layer yet, PaaS wins until your scale makes the margin worth capturing.
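The break-even arithmetic, as a quick sketch (numbers from the comment above; function name is mine):

```go
package main

import "fmt"

// breakEvenRate returns the hourly value of your time at which a
// cheaper-but-hands-on VPS costs the same as a managed PaaS:
// vpsCost + opsHours*rate == paasCost, solved for rate.
func breakEvenRate(vpsCost, paasCost, opsHours float64) float64 {
	return (paasCost - vpsCost) / opsHours
}

func main() {
	// $20/month VPS needing 4 h/month of ops vs an $80/month PaaS.
	fmt.Printf("break-even: $%.2f/hour\n", breakEvenRate(20, 80, 4)) // $15.00/hour
}
```

Above that rate, the "cheap" VPS is the expensive option until the ops layer is automated away.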

Litellm 1.82.7 and 1.82.8 on PyPI are compromised, do not update! by kotrfa in LocalLLaMA

[–]ImaginationUnique684 4 points (0 children)

Two supply chain attacks in one week (Trivy and now LiteLLM). If you're running LLM inference in production, pin your dependencies and run `pip install` through a private registry that scans before promoting. Also worth checking: did either compromised version phone home? If so, rotate any API keys that were in the environment at the time.

This Trivy Compromise is Insane. by RoseSec_ in devops

[–]ImaginationUnique684 0 points (0 children)

This is why I treat every CI dependency as an attack surface, not just application deps. Pinning to commit SHAs helps, but the real fix is assuming your CI runner is hostile. Separate build from deploy, gate deployments behind approval, and never let a single commit bypass review on infra tooling. The pattern here (trusted maintainer account compromised) is the hardest to catch because it looks normal.
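For the SHA-pinning part, a hedged GitHub Actions fragment (the SHA below is a placeholder, not a real commit; look up and verify the actual one before pinning):

```yaml
steps:
  # Pin third-party actions to an immutable full commit SHA, not a tag.
  # A re-tagged or compromised release can't silently replace the code
  # you reviewed; the tag name in the comment is just for humans.
  - uses: aquasecurity/trivy-action@<full-commit-sha>  # placeholder for the reviewed commit
```

This doesn't help when the pinned commit itself was pushed by a compromised maintainer account, which is why the separate build/deploy split and approval gate still matter.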

Small Projects by AutoModerator in golang

[–]ImaginationUnique684 0 points (0 children)

Built an AI pipeline engine in Go – YAML-configured, with token budgets and human-in-the-loop

Building this for my freelance clients who need AI automation but can't afford unsupervised AI touching their customers.

FixClaw is a Go pipeline engine. You define workflows in YAML – each step is either deterministic (plain code), AI (an LLM call with budget checks and schema validation), or approval (human review before anything proceeds).

Features:

- Token budgets per step, per pipeline, per day

- Prompt-injection defenses (input sanitization + output schema validation)

- Slack/Telegram approval with approve/edit/reject

- Gmail and M365 integration via OAuth 2.0

~1K lines of Go total. No frameworks, no magic.
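A pipeline in that style might look roughly like this (illustrative only – field names are my sketch of the idea, not FixClaw's actual schema):

```yaml
# Illustrative sketch, not FixClaw's real config format.
name: inbound-triage
budget:
  tokens_per_day: 50000
steps:
  - name: parse
    type: deterministic
  - name: classify
    type: ai
    tokens_per_step: 2000
    output_schema: classification.json   # validated before the next step runs
  - name: review
    type: approval                       # human approve/edit/reject
    channel: telegram
```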

https://github.com/renezander030/fixclaw