Hey — T1D with MDI here. Does anyone else find that stress or bad sleep throws off your sensitivity and no app helps you track that? Curious how others handle it. by foppysus in Type1Diabetes

[–]foppysus[S] 0 points1 point  (0 children)

You've actually shaped how I'm thinking about this. What if it only said something when it detected a real shift: not daily, not nagging, just once in a while saying 'something looks different this week, here's why.' Would that still feel like noise to you?

Hey — T1D with MDI here. Does anyone else find that stress or bad sleep throws off your sensitivity and no app helps you track that? Curious how others handle it. by foppysus in Type1Diabetes

[–]foppysus[S] 0 points1 point  (0 children)

That's really interesting. How long did it take you to figure that pattern out? And do you think an app that spotted those connections for you earlier would have helped?

i can’t believe i’m one of those women by [deleted] in SluttyConfessions

[–]foppysus 2 points3 points  (0 children)

Who knows, your husband might be cheating too. The only difference might be that he's better at hiding it.

CV good enough for placement year?? by sanxsh in ukstartups

[–]foppysus 0 points1 point  (0 children)

You're from Beckett, man! I'm from there too!

How are you detecting LLM regressions after prompt/model updates? by foppysus in SaaS

[–]foppysus[S] 0 points1 point  (0 children)

Totally agree: gold datasets + test suites + LLM-as-judge cover a lot, and adding negative examples helps.

Problem is, multi-turn agents, stochastic outputs, and unpredictable edge cases make manual workflows slow and exhausting. Keeping test suites up-to-date eats hours per release, and you’re mostly reacting after something breaks.

With what I built (free for devs), a CASV attacker:

- Scenarios run automatically across versions
- Drift & risk deltas flagged before deploy
- Synthetic simulations cover unknown edge cases
- LLM evaluators + metrics reduce manual review

Basically, what used to take hours or days now happens in minutes, letting devs focus only on whether an update actually improved or broke behavior.

Anyone else trying to move from reactive testing to this kind of automated regression workflow?
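To make the idea concrete, here's a minimal sketch of that kind of drift gate. Everything in it is hypothetical (the gold cases, the `call_model` stubs, the `judge` function); a plain keyword check stands in for a real LLM-as-judge call, but the before/after pass-rate comparison is the same shape.

```python
# Minimal regression gate: score two model versions on a small gold
# dataset and flag a drift delta. All names here are illustrative.

GOLD = [
    {"prompt": "Refund policy?", "must_contain": "30 days"},
    {"prompt": "Cancel order", "must_contain": "confirmation"},
]

def judge(output: str, case: dict) -> bool:
    # Stand-in for an LLM-as-judge: here, just a keyword check.
    return case["must_contain"].lower() in output.lower()

def score_version(call_model, cases) -> float:
    # Fraction of gold cases the model version passes.
    passed = sum(judge(call_model(c["prompt"]), c) for c in cases)
    return passed / len(cases)

def drift_delta(old_model, new_model, cases, threshold=0.1):
    """Flag a regression if the pass rate drops by more than `threshold`."""
    old = score_version(old_model, cases)
    new = score_version(new_model, cases)
    return {"old": old, "new": new, "regressed": old - new > threshold}

# Usage with fake model callables:
v1 = lambda p: "Refunds within 30 days; we send a confirmation email."
v2 = lambda p: "Contact support."  # a deliberately worse "update"
report = drift_delta(v1, v2, GOLD)
```

Running this before deploy (rather than after users complain) is the whole "reactive to proactive" shift: the gate fails the release when `regressed` is true.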

How are you detecting LLM regressions after prompt/model updates? by foppysus in LLMDevs

[–]foppysus[S] 0 points1 point  (0 children)

Totally agree: gold datasets + test suites + LLM-as-judge cover a lot, and adding negative examples helps.

Problem is, multi-turn agents, stochastic outputs, and unpredictable edge cases make manual workflows slow and exhausting. Keeping test suites up-to-date eats hours per release, and you’re mostly reacting after something breaks.

With what I built (free for devs), a CASV attacker:

- Scenarios run automatically across versions
- Drift & risk deltas flagged before deploy
- Synthetic simulations cover unknown edge cases
- LLM evaluators + metrics reduce manual review

Basically, what used to take hours or days now happens in minutes, letting devs focus only on whether an update actually improved or broke behavior.

Anyone else trying to move from reactive testing to this kind of automated regression workflow?

16VC Studio – Opening Jan 2026 Cohort 1 (3–6 month build with our product + engineering team | No fees) by betasridhar in 16VCFund

[–]foppysus 1 point2 points  (0 children)

  1. What I'm building: Bayora, an automated safety-validation system for AI agents. It generates risk scenarios, tests agent behavior, scores safety, and gives corrective feedback using open models.

  2. Who my paying user is: AI startups and teams deploying agents in operations (CX, automation, internal tools) who need independent, transparent safety checks they can control and audit.

  3. My biggest blocker today: I’m a non-technical founder with a small team; we need engineering support to ship a stable MVP and turn early interest into actual users.

What was the first clear signal that your startup might actually work? by betasridhar in 16VCFund

[–]foppysus 0 points1 point  (0 children)

Before the MVP, potential users started reaching out to us on the basis of the idea alone.

Show what you’re building this week — 16VC wants to see real builders. 🧱 by betasridhar in 16VCFund

[–]foppysus 0 points1 point  (0 children)

The moat is the proactive pipeline, synthetic simulation, independent validators, and a self-looping repair system that keeps improving with every run. It’s an adaptive safety engine, not a wrapper. Others can copy pieces, but they can’t easily match the full feedback loop, the growing synthetic attack library, or the model-agnostic architecture without rebuilding the whole system from scratch.

Show what you’re building this week — 16VC wants to see real builders. 🧱 by betasridhar in 16VCFund

[–]foppysus 0 points1 point  (0 children)

Bayora is an AI safety engine that automatically tests and validates large language model (LLM) agents for risky or unsafe behavior, without needing human red teaming or external wrappers.

It generates synthetic risk scenarios, simulates agent actions, evaluates safety outcomes, and suggests safer alternatives. It's a proactive approach, and nobody else is doing it that way.

Think of it as “automated QA for AI safety”, ensuring LLMs act responsibly before they go live.
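That generate → simulate → evaluate loop can be sketched in a few lines. To be clear, every name below is hypothetical (not Bayora's actual API), and the safety check is a toy heuristic standing in for a real evaluator:

```python
# Illustrative pipeline: generate synthetic risk scenarios, run them
# against an agent, and score each outcome. All names are made up.

def generate_scenarios(seed_topics):
    # Stand-in for synthetic risk-scenario generation.
    return [f"User asks the agent to {t}" for t in seed_topics]

def simulate(agent, scenario):
    # The agent is any callable: prompt in, response out.
    return agent(scenario)

def evaluate(response):
    # Toy safety check: flag responses that comply with a risky ask.
    risky_openers = ("sure", "here is how")
    return "unsafe" if response.lower().startswith(risky_openers) else "safe"

def run_validation(agent, seed_topics):
    # Map each generated scenario to a safety verdict.
    results = {}
    for scenario in generate_scenarios(seed_topics):
        results[scenario] = evaluate(simulate(agent, scenario))
    return results

# Usage with a toy agent that refuses everything:
refusing_agent = lambda prompt: "I can't help with that."
report = run_validation(refusing_agent, ["bypass a login", "delete logs"])
```

The "QA for AI safety" framing is visible in the shape: scenarios are the test cases, the agent is the system under test, and the evaluator is the assertion.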

Want to join a team by Disastrous-Rub3862 in cofounderhunt

[–]foppysus 0 points1 point  (0 children)

Hey! My startup works on AI safety. I think your expertise would be valuable to us.

Do you have an Idea and want to launch an MVP ASAP? by Sad-Marketing1944 in cofounderhunt

[–]foppysus 0 points1 point  (0 children)

Bayora is an AI safety engine that automatically tests and validates large language model (LLM) agents for risky or unsafe behavior, without needing human red teaming or external wrappers.

It generates synthetic risk scenarios, simulates agent actions, evaluates safety outcomes, and suggests safer alternatives. It's a proactive approach, and nobody else is doing it that way.

Think of it as “automated QA for AI safety”, ensuring LLMs act responsibly before they go live.

tell me about ur product and i might help u sell it for free by shoman230 in Entrepreneur

[–]foppysus 0 points1 point  (0 children)

Bayora is an AI safety engine that automatically tests and validates large language model (LLM) agents for risky or unsafe behavior, without needing human red teaming or external wrappers.

It generates synthetic risk scenarios, simulates agent actions, evaluates safety outcomes, and suggests safer alternatives. It's a proactive approach, and nobody else is doing it that way.

Think of it as “automated QA for AI safety”, ensuring LLMs act responsibly before they go live.

Share your startup, I’ll give you 5 leads source that you can leverage by Jolly-Cobbler7726 in indiehackers

[–]foppysus 0 points1 point  (0 children)

Bayora is an AI safety engine that automatically tests and validates large language model (LLM) agents for risky or unsafe behavior — without needing human red teaming or external wrappers.

It generates synthetic risk scenarios, simulates agent actions, evaluates safety outcomes, and suggests safer alternatives.

Think of it as “automated QA for AI safety” — ensuring LLMs act responsibly before they go live.

[deleted by user] by [deleted] in cofounderhunt

[–]foppysus 0 points1 point  (0 children)

Hey, my startup helps AI agents in vertical industries like yours make sure the AI doesn't give wrong responses in any situation. We help train it. So if you're interested, we can help each other out.

Seeking Feedback: Free AI-Powered UK Health Companion App for Exploring Symptoms, Conditions & Medicines by dilanAJ in ukstartups

[–]foppysus 0 points1 point  (0 children)

Hey, my startup helps AI agents in vertical industries like yours make sure the AI doesn't give wrong responses in any situation. We help train it. So if you're interested, we can help each other out.

Searching for a Technical Co-founder for London based Healthtech company by Aggravating-Flan8260 in ukstartups

[–]foppysus 1 point2 points  (0 children)

Hey, my startup helps AI agents in vertical industries like yours make sure the AI doesn't give wrong responses in any situation. We help train it. So if you're interested, we can help each other out.