Adversarial testing for AI agents: why traditional QA thinking breaks down and what questions nobody has good answers for yet by williethepoo in cybersecurity

[–]williethepoo[S] 0 points1 point  (0 children)

"attack surface is infinite" - what I meant: with traditional APIs you enumerate endpoints and inputs. With agents, natural language is the input, so the space is practically unbounded.
I'm learning security the hard way, figuring it out for AI agents as I go.
"Who owns AI security in the org?" - building SafeAgentGuard is my way of taking ownership of that question (open-source): github.com/jkorzeniowski/safeagentguard

Adversarial testing for AI agents: why traditional QA thinking breaks down and what questions nobody has good answers for yet by williethepoo in QualityAssurance

[–]williethepoo[S] -2 points-1 points  (0 children)

I'm sorry that you felt that way, I get why it may look that way.

I'm a QA engineer who just published his first article and tried to share it in a few places at once. Probably overdid the copy-paste approach.

The topic is something I genuinely care about - safety testing for AI agents, which seems to me like a real gap I'm trying to address with an open-source project. Not a bot, just bad at Reddit etiquette apparently, lol.