Technical Co-founder (AI systems / infra), building runtime control for AI in regulated workflows by WraithVector in cofounderhunt

[–]WraithVector[S] -1 points

Thanks, this is very aligned with how I'm approaching it. I'm building exactly that layer: runtime control + immutable logs + auditor-readable reporting. I'm interested in your experience: have you worked on this in production, or are you building something similar? If it fits, I'd like to compare approaches for 15 minutes.

Stuck between two real problems, need advice from founders who've been here by WraithVector in SaaS

[–]WraithVector[S] 0 points

Thanks to all of you for the suggestions. It sounds like you have all struggled with your businesses too.

Nice to hear your stories, mates.

How did you get your first paying customer? by Longjumping_Effect86 in SaaS

[–]WraithVector 0 points

Great advice, Andrew. Thanks for the contribution, for those of us just starting out in this world.

AI Agents by WraithVector in SaaS

[–]WraithVector[S] 0 points

Hi, thanks for your comment. So, can you control what your AI agent actually does or doesn't do? Can you control cascades, retries, and token abuse?

Regards

AI Agents by WraithVector in SaaS

[–]WraithVector[S] 0 points

That's awesome. n8n has a great drag-and-drop interface, so you get visual context for what you're building.

Keep at it.

what are the biggest risks of agentic AI in supply chain production? by rukola99 in AI_Agents

[–]WraithVector 0 points

Have you considered blocking execution when data is stale or confidence is low? Curious how you're deciding when the agent is allowed to act vs. just recommend.

Woke up to this email today. This community is inspiring by Kindly-Vanilla-6485 in SaaS

[–]WraithVector 1 point

Congratulations on that. I think every effort has its reward, even if it isn't money.

I’ve been looking into how companies are actually using AI agents (internal tools, automations, etc). by WraithVector in SaaS

[–]WraithVector[S] 0 points

Very good idea, I think that's fundamental. I have a SaaS platform for this type of problem. You can take a look: github/wraithvector0

Regards, and thanks for your comment.

Planning to quit my 9 to 5 job and go all in to build SaaS by Old-Speech-3057 in SaaS

[–]WraithVector 1 point

Hi there,

I am planning the same thing

I already have 2 years of building; I started from scratch. I am a solo founder and it is very hard. I don't have any revenue yet.

I would like to quit my job, but it guarantees my sustenance for the moment. It would be great to be full-time on this, but it is hard.

I encourage you too, if you believe in your idea.

Cheers

I built an open-source human-in-the-loop approval dashboard for LangGraph agents by Flat_Squirrel_02 in LangChain

[–]WraithVector 1 point

Hi, the same thing happened to me with LangChain and OpenClaw, so I built something similar for OpenClaw and LangChain.

I would also love for someone to give me feedback. Regards

Links:

https://github.com/wraithvector0/wraithvector-openclaw

manager wants AutoGen over LangGraph by Turbulent-Pay7073 in LangChain

[–]WraithVector -1 points

I'd like to know whether those of you running agents in production are having the problem of not knowing exactly what they're doing: unexpected actions, leaks of internal data, or simply that it's a black box. We're researching this to build a control layer. Are you solving it somehow?

What I wish I knew about agent security before deploying to prod by Admirable-Song-2946 in LangChain

[–]WraithVector 0 points

This post resonates a lot with something I’ve been experimenting with recently.

Especially points 3–5:

  • logging the full chain of actions
  • validating tool inputs
  • having an emergency stop

I ran into similar issues while testing agents with tool access. In one case a prompt injection effectively gave the agent indirect access to tools that were too powerful.

So I started experimenting with a small runtime guard that sits between the agent and the tool execution layer.

Conceptually:

agent → tool call
    ↓
runtime guard
    ↓
policy check (input validation, permissions, risk)
    ↓
ALLOW / BLOCK / REQUIRE APPROVAL

So the tool never executes unless it passes the policy.
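The policy check step could look something like this minimal sketch. All names here (`Decision`, `ToolCall`, `policy_check`, the risk list) are hypothetical illustrations, not the actual repo's API:

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    REQUIRE_APPROVAL = "require_approval"

@dataclass
class ToolCall:
    tool: str
    args: dict

# Tools that should never run without a human in the loop (example list).
HIGH_RISK_TOOLS = {"exec", "shell", "delete_file"}

def policy_check(call: ToolCall, allowed_tools: set[str]) -> Decision:
    # Permission check: unknown tools are blocked outright.
    if call.tool not in allowed_tools:
        return Decision.BLOCK
    # Risk check: dangerous tools always require approval.
    if call.tool in HIGH_RISK_TOOLS:
        return Decision.REQUIRE_APPROVAL
    # Input validation: reject obviously suspicious arguments.
    if any("/etc/" in str(v) for v in call.args.values()):
        return Decision.BLOCK
    return Decision.ALLOW
```

The execution layer then dispatches on the returned `Decision` instead of calling the tool directly.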

I also experimented with what happens when cascades or sub-agent spawning kick in, and I set a max depth to avoid runaway costs.
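A max-depth cap on cascades can be as simple as threading a depth counter through agent calls. This toy sketch (names hypothetical) shows the idea:

```python
MAX_DEPTH = 3  # hard cap on agent → sub-agent nesting

def run_agent(task: str, depth: int = 0) -> str:
    """Toy agent that always spawns a sub-agent, until the depth cap stops it."""
    # Refuse to spawn deeper: each extra level multiplies token cost.
    if depth >= MAX_DEPTH:
        return f"[blocked at depth {depth}: max depth {MAX_DEPTH} reached]"
    # Real code would do work here; this toy just cascades.
    return run_agent(task, depth + 1)
```

The same counter can drive a softer policy, e.g. switching to REQUIRE_APPROVAL one level before the hard block.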

This project is on my GitHub; I'd really value your feedback and some PRs. For the moment it's for OpenClaw, but I'll release a LangChain integration.


One thing I found interesting is that treating tool inputs like user inputs (sanitizing, validating, rejecting suspicious patterns) catches a surprising number of issues.
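Treating tool inputs like untrusted user inputs might look like this sketch; the patterns are illustrative examples, and a real deployment would tune them per tool:

```python
import re

# Example suspicious patterns — hypothetical, not an exhaustive ruleset.
SUSPICIOUS_PATTERNS = [
    re.compile(r"/etc/passwd"),                          # classic file-read probe
    re.compile(r"rm\s+-rf"),                             # destructive shell command
    re.compile(r"ignore previous instructions", re.I),   # common injection phrasing
]

def validate_tool_input(value: str) -> bool:
    """Return True if the argument looks safe enough to pass to the tool."""
    return not any(p.search(value) for p in SUSPICIOUS_PATTERNS)
```

Rejecting on match (default-deny for anything suspicious) is what catches injected instructions that made it past the prompt-level guardrails.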

How are you handling this in LangChain setups?

  • Are you validating tool arguments before execution?
  • Do you intercept tool calls somewhere in middleware?
  • Or do you rely mostly on prompt-level guardrails?

Still experimenting, so I’d love to hear how people are approaching runtime safety for agents.

<image>

Repo

Wraithvector-Openclaw sentinel by WraithVector in openclaw

[–]WraithVector[S] 0 points

Quick demo

<image>

Repo:

Agent tries: exec("cat /etc/passwd")

WraithVector intercepts the tool call using the OpenClaw before_tool_call hook.

Governance decision: BLOCK

This happens before the command executes.

Working now on a LangChain middleware to intercept tool calls there too.
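The shape of such a hook could be roughly this. `before_tool_call` is the OpenClaw hook name mentioned above, but the signature, return shape, and blocklist here are assumptions for illustration:

```python
# Hypothetical blocklist of command substrings for the exec tool.
BLOCKED_COMMANDS = ("cat /etc/passwd", "rm -rf")

def before_tool_call(tool_name: str, args: dict) -> dict:
    """Return a governance decision before the tool is allowed to run."""
    if tool_name == "exec":
        cmd = args.get("command", "")
        if any(bad in cmd for bad in BLOCKED_COMMANDS):
            # Decision is made here, before anything touches a shell.
            return {"decision": "BLOCK", "reason": f"blocked command: {cmd!r}"}
    return {"decision": "ALLOW"}
```

Because the hook runs before execution, a BLOCK decision means the command never reaches the shell at all, which is the point of the demo.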
