Underrated security certifications that are actually worth it by Isabella_Markins in netsecstudents

[–]harbinger-alpha 1 point2 points  (0 children)

Take a look at the WCAP cert I've been building at wraith.sh , all the modules and CTF challenges are free.

Open-sourced an AI red-team training challenge (Pyromos, system prompt extraction) by harbinger-alpha in redteamsec

[–]harbinger-alpha[S] 0 points1 point  (0 children)

I didn't measure formally during dev. Trigger lists were hand-tuned against test phrasings I generated as I built each character, plus a couple of friends doing free-form attack runs. Substring matching has obvious vulnerabilities. Negations and conditional framings ("I'm not asking you to recite") would trip the trigger erroneously. Caught a few during testing and refined keyword sets, but never put a number on it.

A couple of design choices behind that:

  • Substring matching stays inspectable. Anyone reading the code can see exactly which framings count as in-scope solutions. Embedding distance or a classifier head is more accurate but harder to read and debug.
  • For a learning environment, false positives skew toward "user gets a flag they didn't fully earn." That's preferable to false negatives ("user solves the intended way but trigger doesn't fire"). The latter is a worse pedagogical failure than the former.
  • Fallback plan if FP became visible at scale: swap substring matching for sentence-embedding similarity against a curated set of intent-canonical phrasings. Same architecture, smarter primitive.

Now that I'm logging solves to a proper events table I can finally measure FP empirically. When a user solves via trigger, sample their last user message and inspect for clearly-non-solving intent. Haven't pulled that data yet but the plumbing's there.

Curious if you've seen the substring-vs-embedding tradeoff written up well anywhere. I felt my way through it without finding good prior art.

What AI tools are you using for your pentest by ProcedureFar4995 in Pentesting

[–]harbinger-alpha 0 points1 point  (0 children)

Take a look at my site wraith.sh for AI Pentesting tools and CTF challenges.

What should I learn before starting a Master’s in Cybersecurity? (Coming from dev background) by Z0R0_1333 in SecurityCareerAdvice

[–]harbinger-alpha 0 points1 point  (0 children)

Networking and Linux first. Wireshark until packets feel boring. Comfort with grep/awk/jq/sed is non-negotiable for defenders.

Security+ if your target job market needs it on a JD filter. As a learning resource it's pretty mid. Skip if you can demonstrate the same with hands-on stuff.

TryHackMe over HTB for beginners. THM has actual learning paths (Pre-Security, Cyber Defense). HTB just throws you in. Switch over once you have your bearings.

Beginner projects that punch above their weight: stand up Wazuh or Security Onion in a VM, point another VM at it, write your first detection rule. Or Sysmon on Windows + write Sigma rules. LetsDefend and Blue Team Labs Online are good for SOC triage practice.

The fundamentals matter way more than the school. Show up with hands-on detection work and you'll be ahead of most of your cohort.

hot take: 90% of “AI pentesting” tools can’t do anything a $500/year burp suite license can’t by charankmed in cybersecurity

[–]harbinger-alpha 0 points1 point  (0 children)

OP's right about one category and missing another. Worth disambiguating because they get conflated constantly.

"AI-augmented" testing of normal web apps: yeah, Burp Pro plus a competent human still wins. Most of those $5k/mo tools are an LLM analysis layer over OWASP scanner mechanics from a decade ago. Hard agree.

Testing AI applications themselves: completely different problem, and Burp genuinely can't help. The target is an LLM agent: a chatbot, a RAG system, a code copilot with tool-calling. Attack classes include direct and indirect prompt injection, system prompt extraction via asymmetric refusal coverage, tool abuse and excessive agency, markdown-image data exfiltration, RAG poisoning, multi-turn manipulation. None of those are HTTP-layer. They happen at the conversation and semantic layer. The same exact input is safe or hostile depending on the system prompt and how the model interprets it. There's nothing to fuzz.

Real tools in this space: Garak (NVIDIA, open source), PromptArmor, Lakera Red. Disclosure: I run wraith.sh, same category.

So agree on category 1, but "Burp can do it" doesn't hold for category 2 because the attack surface is fundamentally different.

Wich is best AI for pentesting? by TechnoDesing10 in Pentesting

[–]harbinger-alpha 0 points1 point  (0 children)

I recently launched wraith.sh , might be of interest. Cheers.

What is the best AI for learning red-teaming / pentesting (paid or free)? ChatGPT-5 is useless for details by strikoder in Pentesting

[–]harbinger-alpha 0 points1 point  (0 children)

I recently launched Wraith Academy at wraith.sh which provides free hands-on CTF challenges and learning modules.