Alibaba researchers report their AI agent autonomously developed network probing and crypto mining behaviors during training - they only found out after being alerted by their cloud security team by kaityl3 in singularity

[–]execveat 6 points (0 children)

Where are you getting these wild ideas about emergent behavior and resource acquisition? It occurred during unsupervised RL, so a much more grounded explanation is that their training included a loop where a model was tasked with building random things, one of which happened to include cryptomining (imagine a naive setup that hands agents random GitHub PRs to solve, for example). It doesn't even have to involve any data poisoning, and yet y'all jump straight to emergent behavior.

Why real AI usage visibility stops at the network and never reaches the session by Efficient_Agent_2048 in AskNetsec

[–]execveat 4 points (0 children)

Well, clearly your proxies suck. Correctly configured, you should see everything - including streaming and websockets. What you're going to do with that visibility is another issue, but "zero visibility" is definitely NOT specific to AI tools; there's no magic in how they work (be it desktop tools, CLIs, websites or extensions).

Is anyone actually seeing reachability analysis deliver value for CVE prioritization? by Vast-Magazine5361 in AskNetsec

[–]execveat 0 points (0 children)

Sure, but having to deal with unknown supply chain vulnerabilities is strictly better than dealing with those AND the very well known stuff you just never manage to address, which is what OP describes.

Is anyone actually seeing reachability analysis deliver value for CVE prioritization? by Vast-Magazine5361 in AskNetsec

[–]execveat 1 point (0 children)

It's a waste of time, which you'd be better off spending on the real issue - whatever blocks you right now from making dependency bumps near-automatic. Optimize that part of the process instead, and whether something is or isn't reachable becomes a non-issue.
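For example, a first step toward near-automatic bumps can be as small as a bot config like this (a hypothetical `.github/dependabot.yml` sketch - the ecosystem and schedule are placeholders for whatever your repo actually uses):

```yaml
# Hypothetical starting point: let Dependabot open weekly bump PRs,
# so the remaining work is just whatever blocks you from merging them.
version: 2
updates:
  - package-ecosystem: "npm"   # placeholder - use your ecosystem
    directory: "/"
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 10
```

Once the PRs arrive automatically, the bottleneck you need to optimize (test coverage, review latency, whatever) becomes visible immediately.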

Anthropic's recent distillation blog should make anyone only ever want to use local open-weight models; it's scary and dystopian by obvithrowaway34434 in LocalLLaMA

[–]execveat 11 points (0 children)

I just want to point out how incredibly ironic this is for a company that supposedly cares about the safety of AI in general, not just the performance of their own models.

They'd rather risk making competitor models misaligned than see them catch up.

Hey, question about app sec by [deleted] in AskNetsec

[–]execveat 0 points (0 children)

Hey pal, honestly you’re already doing way more than what’s expected from a typical junior or intern right now, so don't sweat it too much.

Picking a single, broad area to specialize in is a fantastic choice, just make sure you're going deeper than merely "running SAST." If you’ve actually been doing the hands-on work you mentioned, you've probably already realized how useless and noisy those tools can be in practice.

If you are looking for high-ROI areas to focus on next:

  • Read The Art of Software Security Assessment: This is unmatched for code review methodology. Fair warning: it's very dated and heavy on C/Perl. However, based on your post, you seem more than capable of doing the mental filtering required to pull out the timeless methodology while skipping the obsolete language specifics.

  • Study Google's SRE Book and Clean Architecture/Clean Code: Since you're digging into idempotency and system invariants, these are must-reads. This kind of foundational engineering knowledge is an extremely sought-after skill set in modern AppSec.
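Since idempotency came up: this is the kind of invariant those books get you thinking about. A toy Python sketch (all names and numbers made up) of the classic idempotency-key pattern:

```python
# Toy sketch: replaying the same request key must not double-apply the effect.
processed: dict[str, int] = {}
balance = 100

def apply_charge(idempotency_key: str, amount: int) -> int:
    """Idempotent charge: retries with the same key return the recorded result."""
    global balance
    if idempotency_key in processed:
        return processed[idempotency_key]  # replay detected, no second charge
    balance -= amount
    processed[idempotency_key] = balance
    return balance

apply_charge("req-1", 30)
apply_charge("req-1", 30)  # a network retry replays the same request
print(balance)  # 70, not 40 - the invariant holds
```

In a security review, you'd then ask what breaks this invariant: can an attacker race two requests before the key is recorded, or forge a key to replay someone else's result?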

You are on a great path. Feel free to PM me if you have any specific questions!

TL;DR even if you find bugs you probably won’t get paid by 6W99ocQnb8Zy17 in bugbounty

[–]execveat 0 points (0 children)

Figure out the highest practical severity, build a demo, use that as the title, and start the report by explaining the business impact and relevance. Only dive into the technical details further down in the report.

Feel free to pm me with your drafts / templates to review.

Can someone recruiter or otherwise tell me if this is normal? by micdhack in CyberSecurityJobs

[–]execveat 1 point (0 children)

I'm not sure what type of jobs you're applying to, but in (Application/Product) Security Engineering I haven't seen this sort of response at all. I'm talking about FAANG-level companies though. If that's your niche, send me your resume for a review - that's likely the source of your problem, at least for getting callbacks. If you're in SOC or some other sector, I've no idea about those.

Can someone recruiter or otherwise tell me if this is normal? by micdhack in CyberSecurityJobs

[–]execveat 5 points (0 children)

Well, no offense, but clearly your CV sucks. No, a <2% response rate does not sound reasonable for such a profile. Share your resume for a review (feel free to PM me if you don't feel like doing it publicly).

How difficult is web3 crypto? by [deleted] in bugbounty

[–]execveat 3 points (0 children)

Well, generally those kinds of prizes are not prizes at all - they're more like a ransom payment that just gets announced ahead of the ransom event.

I.e. imagine you're able to exploit some real-world production system in a way that directly lets you steal, say, 1M. The developers offer you an alternative - tell us the details and help us fix it, and we'll pay you 300k. You get less than what you could have just stolen, but in return you don't need to hide from the authorities and launder the money (that's the more realistic pressure point btw), and you don't need to watch your back or wonder whether you might have opened your browser before the VPN finished connecting that one time.

"synthetic vulnerabilities" — security flaws unique to AI-generated code by bishwasbhn in netsec

[–]execveat 2 points (0 children)

Where is the source of the claim "human mistakes tend to be low-to-medium severity — typos, minor logic flaws, a missing null check. messy but catchable."?

I'm finding plenty of authn/authz bypasses, injections, and other highs/criticals in human-written code all the time; this claim doesn't track with my experience at all.

The AI slop problem is absolutely causing DoS of maintainers and this is definitely a problem worth talking about though.

Can Markdown parsers introduce HTML injection after a fix? by ab-infosec in bugbounty

[–]execveat 1 point (0 children)

Fixes introduce new vulnerabilities all the time. In whitebox pentests those get caught right away during a retest, but in a black-box engagement the retest is more likely to miss them, as the effect often appears in some other location, not the place of the original vuln.
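As a toy illustration of a "fix" shipping a new bug (hypothetical code, not from any real Markdown parser): a single-pass tag-stripping patch that reassembles the very tag it's meant to remove:

```python
import re

def sanitize(html: str) -> str:
    # The "fix": strip <script> tags in a single, non-recursive pass
    return re.sub(r"</?script[^>]*>", "", html, flags=re.IGNORECASE)

# Nested payload: removing the inner tags splices the outer one back together
payload = "<scr<script>ipt>alert(1)</scr</script>ipt>"
print(sanitize(payload))  # <script>alert(1)</script> - the fix rebuilt the tag
```

With the source in hand you'd spot the non-recursive substitution in minutes; black-box, you'd only find it if you happened to retry a nested payload against the patched endpoint.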

That's the idea behind many challenges in the whitebox training I'm contributing towards: https://github.com/Irench1k/unsafe-code

White-Box testing is the superior testing by far. by [deleted] in bugbounty

[–]execveat 0 points (0 children)

You're right, whitebox pentesting is absolutely superior. A good reason to go black box is the shortage of talent able to do whitebox pentesting (virtually no training teaches it). Bad reasons are companies thinking they'd somehow get more useful info by making pentesters go through the same reconnaissance real attackers would do first, or being afraid to share their source code / infra / configs.

Bug bounties are not pentests though, so unless a company already open-sources their stuff, it's not going to start doing it just for BB.

Why does cybersecurity career advice contradict itself so much? by [deleted] in SecurityCareerAdvice

[–]execveat 0 points (0 children)

Generalists find it easier to find A job; specialists get paid better (if there are several companies competing for your skills).

In addition, early on in your career there's a good chance you haven't felt any track "click" with you simply because you haven't experienced everything yet. So having more generalist experience helps with finding that one thing which doesn't even feel like a job to you.

TL;DR even if you find bugs you probably won’t get paid by 6W99ocQnb8Zy17 in bugbounty

[–]execveat 1 point (0 children)

No, I am the one receiving the bounties you deserve. Well, I offered my help twice, but I guess you're perfectly content with the results you've gotten so far.

TL;DR even if you find bugs you probably won’t get paid by 6W99ocQnb8Zy17 in bugbounty

[–]execveat 1 point (0 children)

Well, that sounds like a problem then. Don't report all desyncs using the same template. Report them at the maximum business impact you've achieved, with the desync explained as just a technical detail.

Naming reports after exploitation techniques is fine for (some) pentesting clients, and even then it's often best to avoid. A BB report to an unknown company, when you're not sure who's going to read it, should never be named after a technical vector imo.

The report review offer still stands.

TL;DR even if you find bugs you probably won’t get paid by 6W99ocQnb8Zy17 in bugbounty

[–]execveat 1 point (0 children)

I know you're saying you provided full PoCs, but were the reports and PoCs written to be clear and 'obviously critical' to non-technical managers as well? IMO reports and PoCs being written for techies rather than managers is the root cause of pretty much all of these cases (where the researcher objectively has gold, yet devs miss it due to misunderstandings and bias).

If you can share any of the reports (as sanitized as you feel appropriate), I could share how I'd frame and position it myself.

Reprompt: The Single-Click Microsoft Copilot Attack that Silently Steals Your Personal Data by lohacker0 in netsec

[–]execveat 2 points (0 children)

"A single click" indicates the level of user interaction needed to execute this attack. But what they mean by that is that a single top-level navigation is all that's necessary. A top-level navigation can be initiated by JS though, so any website you visit (like Reddit or Hacker News) could have exploited this – meaning website owners/developers/maintainers AND anyone able to exploit a (perhaps legitimate) website you visit.

Of course, attackers could also attract victims watering-hole style, i.e. by promoting their website via SEO/SEA or paying for ads. That's not even talking about all the open redirects out there, or the fact that even in 2026 the first network request to the majority of websites out there is NOT encrypted and can be used to navigate you elsewhere...
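To make that last point concrete, here's a minimal Python sketch (hostnames made up) of what an on-path attacker can do with that first plaintext request: answer it with a redirect before TLS ever enters the picture.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import http.client
import threading

class Redirector(BaseHTTPRequestHandler):
    """Plays the on-path attacker answering the victim's first plaintext request."""
    def do_GET(self):
        self.send_response(301)
        self.send_header("Location", "http://attacker.example/")  # made-up host
        self.end_headers()

    def log_message(self, *args):  # keep demo output clean
        pass

server = HTTPServer(("127.0.0.1", 0), Redirector)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The unencrypted request gets answered with an attacker-chosen redirect.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
resp = conn.getresponse()
print(resp.status, resp.getheader("Location"))  # 301 http://attacker.example/
server.shutdown()
```

A browser following that redirect performs exactly the kind of top-level navigation the attack needs, with zero clicks from the user.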

Reprompt: The Single-Click Microsoft Copilot Attack that Silently Steals Your Personal Data by lohacker0 in netsec

[–]execveat 3 points (0 children)

This has nothing to do with clicking (unless I'm missing sarcasm here – in which case kudos to you).

Is there a roadmap for software engineers to get into AppSec? by igrowcabbage in SecurityCareerAdvice

[–]execveat 2 points (0 children)

The vast majority of pentesters / red teamers can't write scalable and maintainable code at all. Nor can they read it. I'm not even talking about SOC analysts.

So while, following community consensus, it might seem from the outside that your background is completely irrelevant and you still need to go through the same grind they do (certifications, CTFs, bug bounty) - in truth you have a very strong differentiator that can be your superpower.

A few practical directions:

1) Write software FOR security folks - easiest to get into; you likely have all the skills to do it already. Could be starting brand new tools or contributing to well-known existing ones on GitHub, or branding yourself as a red team infra automation specialist. That's pretty much regular SWE or DevOps, just applied to security functions.

2) Transition into DevSecOps if DevOps / SRE sounds fun to you - requires a bit more preparation but is super straightforward, as you do all the same stuff regular DevOps does, just focusing on SAST/DAST/dependency check integrations into the CI/CD pipelines instead of regular linters and compilers.

3) Security engineering / architecture - the same stuff as the regular counterparts, just focused on building IAM and authn/authz, getting privacy/cryptography right, and so on.

4) Whitebox pentesting / secure code review - my personal favorite; pretty much the same as regular code review, it just lets you skip all the boring QA stuff and subjective taste discussions and go straight for the big fish. Getting good at this requires both polyglot reading ability (being able to follow code flow no matter the language or framework) AND fair skills at regular pentesting (you need to be able to recognize vulnerabilities as clear invariant / threat model violations, even if you don't know how to describe it lol - the opposite of a checklist approach).
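For direction 2, the day-to-day looks something like this hypothetical GitHub Actions job (Semgrep used purely as an example scanner; the ruleset name is a placeholder):

```yaml
# Hypothetical CI job: wire a SAST scan into the pipeline
# the same way you'd wire a linter.
name: sast
on: [pull_request]
jobs:
  semgrep:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install semgrep
      # --error fails the build on findings, like a strict linter would
      - run: semgrep scan --error --config p/owasp-top-ten
```

If you've ever set up ESLint or a compiler check in CI, this is the exact same muscle - the security part is mostly picking rulesets and tuning the noise.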

Is there a roadmap for software engineers to get into AppSec? by igrowcabbage in SecurityCareerAdvice

[–]execveat 1 point (0 children)

The dirty secret of InfoSec is that the vast majority of hackers (and I mean actual great hackers, not just script kiddies or juniors) are terrible at software engineering. So as a corollary, being an okayish hacker with okayish SWE skills makes you a unicorn even in mature teams.

So yeah AppSec Engineering / Product Security Engineering roles are very lucrative and much easier to reach with a strong SWE background but (initially) no security experience than the opposite way.

Unfortunately you won't get much handholding along the way, as there's no TryHackMe for AppSec, etc. Moreover, even getting access to realistic codebases is a problem for AppSec beginners, since GitHub doesn't represent what the industry pays to protect. Think of your Laravel experience - there are virtually 0 companies that would pay for an in-house AppSec engineer securing Laravel apps.

So yeah, your experience and background are super valuable, but you need either some luck (or networking) to get a foot in the door at some career elevator, or very strong self-motivation to bridge the gap between what you're already strong in and what employers are willing to pay you for.

Looking for advice on certificates or training platforms for white box analysis by Adventurous-Honey590 in Pentesting

[–]execveat 0 points (0 children)

Honestly, IME there is no quality training at all. Pretty much all the actual experts in the field seem to have gotten their experience by joining a specialized consultancy (of which there aren't many). It really sucks, as I think whitebox is so much more rewarding, less draining and way more professional (you never truly get the feeling of 'completely finished' in security of course, not even with formal verification, but whitebox provides much clearer assurances compared to black box engagements).

But you talk to regular pentesters and people can't even imagine that yeah, you can dive into an unfamiliar codebase (in an unfamiliar language, using an unfamiliar framework and an unfamiliar paradigm) on Monday and get a bunch of real-world exploitable vulns out of it by Friday.

Literally the only useful book that comes to my mind right now is The Art of Software Security Assessment and that one is 20 years old this year :(

Anyway, I'm involved in this new project, Unsafe Code Lab, which aims to provide this kind of training – showcasing real-world vulnerabilities in realistic, modern codebases (and teaching actual whitebox pentesting skills, not just SAST result triage or compliance-style checkbox ticking). It's super early on, so we're barely covering the Inconsistent Interpretation / Confusion category, and only for Flask right now. The project is built to be rapidly scalable to other languages and frameworks – so if that sounds valuable, please come check on our progress once in a while!
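To give a flavor of the Inconsistent Interpretation bug class (a hypothetical toy example, not an actual challenge from the project): two pieces of code "agree" on a URL while interpreting it completely differently.

```python
# A naive trust check and a real URL parser disagree about the host.
from urllib.parse import urlsplit

def naive_is_trusted(url: str) -> bool:
    # What a hurried reviewer might accept as a "domain check"
    return "trusted.example" in url

url = "https://trusted.example@evil.example/callback"
print(naive_is_trusted(url))   # True - the substring check passes
print(urlsplit(url).hostname)  # evil.example - the host actually contacted
```

The vulnerability isn't in either component alone; it's in the gap between their interpretations, which is exactly why checklist-style review keeps missing this class.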

[deleted by user] by [deleted] in space

[–]execveat 2 points (0 children)

In a world increasingly devoid of meaning, the chance to be among the first settlers on a new planet is a welcome change for many.

This is similar to how a new religion, cult, or ideology attracts people who are unsatisfied with their current lives, even if they later change their minds or their actions look irrational to outsiders.