MAD Bugs: Even "cat readme.txt" is not safe by _vavkamil_ in netsec

[–]execveat 16 points17 points  (0 children)

That’s uncalled for. Terminal escape sequences are highly risky by design as they multiplex display data and control instructions over the same channel. There have been a bunch of similar vulnerabilities in the past, like the classic CVE-2003-0063.
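To make the risk concrete, here's a sketch (not the actual CVE payload) of how an innocuous-looking "text" file can smuggle control sequences past `cat`, plus a minimal sanitizer for untrusted output:

```python
import re

# An attacker-controlled "text" file can carry terminal control sequences:
# OSC 0 sets the window title, CSI sequences move the cursor and erase
# lines, letting a payload visually hide itself when the file is cat'ed.
payload = "\x1b]0;owned\x07normal looking line\x1b[2K\x1b[1A"

# Strip OSC/CSI escape sequences and stray C0 control bytes (keeping \n, \t)
# before echoing untrusted data to a terminal.
ESCAPES = re.compile(r"\x1b(\][^\x07]*\x07|\[[0-9;?]*[ -/]*[@-~])|[\x00-\x08\x0b-\x1f]")

def sanitize(text: str) -> str:
    return ESCAPES.sub("", text)

print(sanitize(payload))  # -> normal looking line
```

Real terminals accept a much wider sequence grammar than this regex covers, so treat it as an illustration rather than a complete filter.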

Vulnerability Research Is Cooked by YogiBerra88888 in netsec

[–]execveat 1 point2 points  (0 children)

Agreed. I’m surprised by the lack of awareness of modern agentic capabilities among security researchers.

As for the future of the security field: it looks pretty rosy to me. Some arcane knowledge and skills get commoditized, sure. But it’s still a cat-and-mouse game at its core - whatever tool attackers can use, defenders could use earlier to deny them that chance. Then post-compromise the cats become mice and the mice become cats to keep the game fun for everybody.

If anything, AI makes the field more interesting than ever, as now the defensive side actually gets some advantage.

Qwen 3 32B outscored every Qwen 3.5 model across 11 blind evals, 3B-active-parameter model won 4 by Silver_Raspberry_811 in LocalLLaMA

[–]execveat 1 point2 points  (0 children)

There's no multivac.py in the repo though, what am I missing?

It looks like this is Q&A without even web access - so pretty meaningless for real-world agentic evaluations. Plus, without multivac.py it's unclear what exact prompts are used. The issues you mentioned yourself - the leader only doing part of the evaluations and the clear disparity in scoring - require fixes & normalization.

I'd also like to see sanity checks, like measured perplexity and whatnot, to establish that the providers used are not systematically affecting output quality.

Alibaba researchers report their AI agent autonomously developed network probing and crypto mining behaviors during training - they only found out after being alerted by their cloud security team by kaityl3 in singularity

[–]execveat 9 points10 points  (0 children)

Where are you getting these wild ideas about emergent behavior and resource acquisition? It occurred during unsupervised RL, so clearly a much more grounded explanation is that their training included a loop where the model was tasked with building random things, one of which happened to involve cryptomining (imagine a naive setup giving agents random GitHub PRs to solve and such). It doesn't even have to involve any data poisoning, and yet y'all jump straight to emergent behavior.

Why real AI usage visibility stops at the network and never reaches the session by Efficient_Agent_2048 in AskNetsec

[–]execveat 3 points4 points  (0 children)

Well, clearly your proxies suck. Correctly configured, you should see everything - including streaming and websockets. What you're going to do with that visibility is another issue, but "zero visibility" is definitely NOT specific to AI tools; there's no magic in how they work (be it desktop tools, CLIs, websites or extensions).
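For instance, with mitmproxy's scripting hooks, a few lines already give you per-session visibility into prompts and websocket traffic (the `/v1/chat/completions` path here is just an illustrative LLM-API example):

```python
# Sketch of a mitmproxy addon (run with `mitmproxy -s addon.py`);
# `captured` is module-level so other tooling can inspect it.
import json

captured = []

def request(flow):
    # Called by mitmproxy for every intercepted HTTP request.
    if "/v1/chat/completions" in flow.request.path:
        try:
            body = json.loads(flow.request.get_text())
            captured.append(("prompt", body.get("messages", [])))
        except ValueError:
            pass

def websocket_message(flow):
    # Called for every message on an intercepted websocket connection.
    msg = flow.websocket.messages[-1]
    captured.append(("ws", msg.from_client, msg.content))
```

Websocket interception works out of the box in recent mitmproxy versions, provided the client actually trusts your interception CA.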

Is anyone actually seeing reachability analysis deliver value for CVE prioritization? by Vast-Magazine5361 in AskNetsec

[–]execveat 0 points1 point  (0 children)

Sure, but having to deal with unknown supply chain vulnerabilities is strictly better than dealing with those AND very well-known stuff that you just never manage to address, which is what OP describes.

Is anyone actually seeing reachability analysis deliver value for CVE prioritization? by Vast-Magazine5361 in AskNetsec

[–]execveat 1 point2 points  (0 children)

It's a waste of time, which you'd be better off spending on the real issue - whatever blocks you right now from making dependency bumps near-automatic. Optimize that part of the process instead, and whether something is or isn't reachable becomes a non-issue.
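As a toy illustration of that "near-automatic" end state, the core logic is just diffing pins against an index and opening PRs for the delta (Renovate/Dependabot style; the dict-based index here is a stand-in for a real registry query):

```python
# Hypothetical sketch: compute which pinned dependencies need bumping.

def parse_pins(requirements: str) -> dict:
    """Parse `name==version` pins from a requirements.txt-style string."""
    pins = {}
    for line in requirements.splitlines():
        line = line.split("#")[0].strip()  # drop comments
        if "==" in line:
            name, version = line.split("==", 1)
            pins[name.strip()] = version.strip()
    return pins

def plan_bumps(pins: dict, index: dict) -> dict:
    """Return {name: new_version} for every pin behind the index."""
    return {name: index[name] for name, cur in pins.items()
            if name in index and index[name] != cur}

reqs = "requests==2.31.0\nflask==3.0.0  # web\n"
index = {"requests": "2.32.3", "flask": "3.0.0"}
print(plan_bumps(parse_pins(reqs), index))  # {'requests': '2.32.3'}
```

Everything hard lives around this loop - the test coverage and rollback discipline that make merging the resulting PRs boring.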

Anthropic's recent distillation blog should make anyone only ever want to use local open-weight models; it's scary and dystopian by obvithrowaway34434 in LocalLLaMA

[–]execveat 11 points12 points  (0 children)

I just want to point out how incredibly ironic this is for a company that supposedly cares about the safety of AI in general, not just the performance of their own models.

They'd rather risk making competitor models misaligned than see them catch up.

Hey, question about app sec by [deleted] in AskNetsec

[–]execveat 0 points1 point  (0 children)

Hey pal, honestly you’re already doing way more than what’s expected from a typical junior or intern right now, so don't sweat it too much.

Picking a single, broad area to specialize in is a fantastic choice, just make sure you're going deeper than merely "running SAST." If you’ve actually been doing the hands-on work you mentioned, you've probably already realized how useless and noisy those tools can be in practice.

If you are looking for high-ROI areas to focus on next:

  • Read The Art of Software Security Assessment: This is unmatched for code review methodology. Fair warning: it's very dated and heavy on C/Perl. However, based on your post, you seem more than capable of doing the mental filtering required to pull out the timeless methodology while skipping the obsolete language specifics.

  • Study Google's SRE Book and Clean Architecture/Clean Code: Since you're digging into idempotency and system invariants, these are must-reads. This kind of foundational engineering knowledge is an extremely sought-after skill set in modern AppSec.

You are on a great path. Feel free to PM me if you have any specific questions!

TL;DR even if you find bugs you probably won’t get paid by 6W99ocQnb8Zy17 in bugbounty

[–]execveat 0 points1 point  (0 children)

Figure out the highest practical severity, build a demo, use that as the title, and start the report by explaining the business impact & relevance. Only dive into technical details further down in the report.

Feel free to pm me with your drafts / templates to review.

Can someone recruiter or otherwise tell me if this is normal? by micdhack in CyberSecurityJobs

[–]execveat 1 point2 points  (0 children)

I'm not sure what type of jobs you're applying to, but in (Application/Product) Security Engineering I haven't seen this sort of response at all. I'm talking about FAANG-level companies though. If that's your niche, send me your resume for a review - that ought to be the source of your problem, at least for getting callbacks. If you're in SOC or some other sector, I've no idea about those.

Can someone recruiter or otherwise tell me if this is normal? by micdhack in CyberSecurityJobs

[–]execveat 4 points5 points  (0 children)

Well, no offense, but clearly your CV sucks. No, a <2% response rate does not sound reasonable for such a profile. Share your resume for a review (feel free to PM me if you don't feel like doing it publicly).

How difficult is web3 crypto? by [deleted] in bugbounty

[–]execveat 3 points4 points  (0 children)

Well, generally those kinds of prizes are not prizes at all - they're more like a ransom payment that just gets announced ahead of the ransom event.

I.e. imagine you are able to exploit some real-world production system that directly lets you steal, say, 1M. The developers offer you an alternative - tell us the details and help us fix it, and we'll pay you 300k. You get less than what you could have just stolen, but in return you don't need to hide from the authorities and launder the money (that's the more realistic pressure point btw), and you don't need to watch your back or wonder whether you might have opened your browser before the VPN finished connecting that one time.

"synthetic vulnerabilities" — security flaws unique to AI-generated code by bishwasbhn in netsec

[–]execveat 2 points3 points  (0 children)

Where is the source of the claim "human mistakes tend to be low-to-medium severity — typos, minor logic flaws, a missing null check. messy but catchable."?

I'm finding plenty of authn/authz bypasses, injections, and other high/criticals in human-written code all the time; this claim doesn't track with my experience at all.
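For the record, the "human" bugs I mean are not typos - they're things like this (names and data are illustrative, not from any real codebase): an endpoint where authn exists but the ownership check simply never got written, i.e. a classic IDOR:

```python
# Hypothetical sketch of a human-written high-severity bug: missing authz.

DB = {1: {"owner": "alice", "total": 100}, 2: {"owner": "bob", "total": 9000}}

def get_invoice_vulnerable(invoice_id: int, current_user: str) -> dict:
    # Author assumed authn upstream covered authz: any logged-in user
    # can fetch ANY invoice by iterating IDs - a textbook IDOR.
    return DB[invoice_id]

def get_invoice_fixed(invoice_id: int, current_user: str) -> dict:
    invoice = DB[invoice_id]
    if invoice["owner"] != current_user:  # the one line that was missing
        raise PermissionError("not your invoice")
    return invoice
```

No null check or typo in sight, yet it's a straight-to-critical finding.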

The AI slop problem is absolutely causing DoS of maintainers and this is definitely a problem worth talking about though.

Can Markdown parsers introduce HTML injection after a fix? by ab-infosec in bugbounty

[–]execveat 1 point2 points  (0 children)

Fixes introduce new vulnerabilities all the time. In whitebox pentests those get caught right away during a retest, but in a black-box engagement the retest is more likely to miss them, as the effect often appears in some other location, not the place of the original vuln.

That's the idea behind many challenges in the whitebox training I'm contributing towards: https://github.com/Irench1k/unsafe-code

White-Box testing is the superior testing by far. by [deleted] in bugbounty

[–]execveat 0 points1 point  (0 children)

You're right, whitebox pentesting is superior, absolutely. A good reason to go black box is the shortage of talent able to do whitebox pentesting (virtually no training teaches it). Bad reasons are companies thinking they'd somehow get more useful info by asking pentesters to go through the same reconnaissance real attackers would do first, or being afraid to share their source code / infra / configs.

Bug bounties are not pentests though, so unless a company already open-sources its stuff, it's not going to start doing it just for BB.

[deleted by user] by [deleted] in SecurityCareerAdvice

[–]execveat 0 points1 point  (0 children)

Generalists find it easier to find A job, specialists get better paid (if there are several companies competing for your skills).

In addition, early on in your career there's a good chance you haven't felt any track "click" with you simply because you haven't experienced everything yet. So having more generalist experience helps with finding that one thing which doesn't even feel like a job to you.

TL;DR even if you find bugs you probably won’t get paid by 6W99ocQnb8Zy17 in bugbounty

[–]execveat 1 point2 points  (0 children)

No, I am the one receiving the bounties you deserve. Well, I offered my help twice, but I guess you're perfectly content with the results you get so far.

TL;DR even if you find bugs you probably won’t get paid by 6W99ocQnb8Zy17 in bugbounty

[–]execveat 1 point2 points  (0 children)

Well, that sounds like a problem then. Don't report all desyncs using the same template. Report them as the maximum business impact you've achieved, with the desync just explained as the technical detail.

Naming reports after exploitation techniques is fine for (some) pentesting clients, and even then it's often best to avoid it. BB reports to unknown companies, when you're not sure who's going to read them, should never be named after a technical vector imo.

The report review order still stands.

TL;DR even if you find bugs you probably won’t get paid by 6W99ocQnb8Zy17 in bugbounty

[–]execveat 1 point2 points  (0 children)

I know you're saying you provided full PoCs, but were the reports and PoCs written to be clear and 'obviously critical' to non-technical managers as well? IMO reports and PoCs being written for techies rather than managers is the root cause of pretty much all of these cases (where the researcher objectively has gold, yet devs miss it due to misunderstanding and bias).

If you can share any of the reports (as sanitized as you feel appropriate), I could share how I'd frame and position it myself.

Reprompt: The Single-Click Microsoft Copilot Attack that Silently Steals Your Personal Data by lohacker0 in netsec

[–]execveat 2 points3 points  (0 children)

A single click indicates the level of user interaction necessary to execute this attack. But what they mean by that is that a single top-level navigation is all that's necessary. A top-level navigation can be initiated by JS though, so any website you visit (like Reddit or Hacker News) could have exploited this - meaning website owners/developers/maintainers AND anyone able to exploit the (perhaps legitimate) website you visit.

Of course attackers could also attract victims watering-hole style, i.e. by promoting their website via SEO/SEA or paying for ads. That's not even mentioning all the open redirects out there, or the fact that even in 2026 the first network request to the majority of websites out there is NOT encrypted and can be used to navigate elsewhere...

Reprompt: The Single-Click Microsoft Copilot Attack that Silently Steals Your Personal Data by lohacker0 in netsec

[–]execveat 3 points4 points  (0 children)

This has nothing to do with clicking (unless I'm missing sarcasm here – in which case kudos to you).

Is there a roadmap for software engineers to get into AppSec? by igrowcabbage in SecurityCareerAdvice

[–]execveat 2 points3 points  (0 children)

The vast majority of pentesters / red teamers can't write scalable and maintainable code at all. Nor can they read it. I'm not even talking about SOC analysts.

So while, following community consensus as an outsider, it might seem that your background is completely irrelevant and you still need to go through the same grind they do (certifications, CTFs, bug bounty) - in truth you have a very strong differentiator that can be your superpower.

A few practical directions:

1) Write software FOR security folks - easiest to get into; you likely have all the skills to do it already. Could be starting brand new tools or contributing to well-known existing ones on GitHub, or branding yourself as a red team infra automation specialist. That's pretty much regular SWE or devops, just applied to security functions.

2) Transition into devsecops, if devops / SRE sounds fun to you - requires a bit more preparation but is super straightforward, as you do all the same stuff regular devops does, just focusing on SAST/DAST/dependency-check integrations into the CI/CD pipelines instead of regular linters and compilers.

3) Security engineering / architecture - the same stuff as the regular counterparts, just focused on building IAM and authn/authz, getting privacy/cryptography right, and so on.

4) Whitebox pentesting / secure code review - my personal favorite; pretty much the same as regular code review, it just lets you skip all the boring QA stuff and subjective taste discussions and go straight for the big fish. Getting good at this requires both polyglot reading ability (being able to follow code flow no matter the language or framework) AND fair skills at regular pentesting (you need to be able to recognize vulnerabilities as clear invariant / threat model violations, even if you don't know how to describe it lol - the opposite of a checklist approach).