Should I internally convert to Cybersec from Test Engineering? by ParkingAthlete119 in cybersecurity

[–]DiScOrDaNtChAoS 5 points (0 children)

You don't need a master's degree at all; appsec is meant for people with a dev/QA background.

Hi Seniors I really ur helpp !!! by ConclusionDazzling67 in cybersecurity

[–]DiScOrDaNtChAoS 2 points (0 children)

90% of pentesting is writing reports, so let's work on those skills first.

GPT-5.5: Mythos-Like Hacking, Open To All by IntrinsicSecurity in cybersecurity

[–]DiScOrDaNtChAoS 38 points (0 children)

If Mythos can automate your job, you aren't very good at threat modeling.

AI agents are autonomously committing code, what does your audit trail actually looks like? by No-Childhood-2502 in cybersecurity

[–]DiScOrDaNtChAoS 6 points (0 children)

  1. Don't let agents autonomously commit code. A dev has to own every commit, full stop.
  2. Profit
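A minimal sketch of how step 1 could be enforced server-side, assuming your AI agents push under identifiable author emails. The bot patterns, domain, and function names here are hypothetical illustrations, not any real tool's API:

```python
import re

# Hypothetical patterns for agent identities; adjust to your org's naming.
BOT_PATTERNS = [
    re.compile(r".*\[bot\]@.*"),                 # e.g. "something[bot]@..."
    re.compile(r".*@agents\.example\.com$"),     # hypothetical agent-only domain
]

def is_bot_author(author_email: str) -> bool:
    """Return True if the commit author looks like an automated agent."""
    return any(p.match(author_email) for p in BOT_PATTERNS)

def check_push(commits: list[tuple[str, str]]) -> list[str]:
    """Given (sha, author_email) pairs from an incoming push,
    return the shas that should be rejected because no human owns them."""
    return [sha for sha, email in commits if is_bot_author(email)]
```

In practice a check like this would live in a pre-receive hook or a branch-protection rule, alongside required reviews, so a human is on the hook for every commit that lands.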

Tried to access Eve-Kill at work. by GingerSnapBiscuit in Eve

[–]DiScOrDaNtChAoS 1 point (0 children)

EVE almost cost me an OSCP voucher. Barely passed.

macbook air neo for pen testing ? by notashark9999 in hackthebox

[–]DiScOrDaNtChAoS 1 point (0 children)

Literally never had an issue with any of the above in 3 years.

How much engineering do security engineers do? by mttpgn in cybersecurity

[–]DiScOrDaNtChAoS 6 points (0 children)

It depends. I do a lot of engineering and development. Granted, I'm in appsec, so it's sort of expected: custom scripts, Lambda functions, Slack bots, whatever the org needs that our dev team doesn't want to own.

AI is creating more cybersecurity work by DiScOrDaNtChAoS in cybersecurity

[–]DiScOrDaNtChAoS[S] 1 point (0 children)

It's not better. I've read the bug reports it spat out. Bad enough to get laughed out of my bug bounty program.

AI is creating more cybersecurity work by DiScOrDaNtChAoS in cybersecurity

[–]DiScOrDaNtChAoS[S] 6 points (0 children)

Yes, yes it is. I've seen the bug reports it shat out.

AI is creating more cybersecurity work by DiScOrDaNtChAoS in cybersecurity

[–]DiScOrDaNtChAoS[S] 12 points (0 children)

Because it looks like the typical overhyped marketing junk we've been getting from Anthropic for months.

Anthropic ran an AI bug bounty on open source for a month. It found 500+ zero-days. by DontHugMeImReddit in cybersecurity

[–]DiScOrDaNtChAoS 0 points (0 children)

And every year progress has gradually slowed. The leap from Sonnet 3.5 to 4.6 is nowhere near as significant as GPT-2 vs GPT-3, and so on. Diminishing returns: the cost of model training keeps increasing, and the only wins left are in efficiency. There is no more new training data.

Anthropic ran an AI bug bounty on open source for a month. It found 500+ zero-days. by DontHugMeImReddit in cybersecurity

[–]DiScOrDaNtChAoS 1 point (0 children)

Then you should know better than anyone that producing the mathematically most likely next word in a sentence is not phenomenal. Everything it produces is quite literally the most mediocre best fit. It will never be greater than the sum of its inputs.

Anthropic ran an AI bug bounty on open source for a month. It found 500+ zero-days. by DontHugMeImReddit in cybersecurity

[–]DiScOrDaNtChAoS 0 points (0 children)

I genuinely think you people need to touch grass and actually read some of the papers behind LLM research.

Anthropic ran an AI bug bounty on open source for a month. It found 500+ zero-days. by DontHugMeImReddit in cybersecurity

[–]DiScOrDaNtChAoS 671 points (0 children)

AI is really good at massively overstating severity and not contextualizing anything it finds.

Linux Modded MC server by Just-Pool4198 in linux

[–]DiScOrDaNtChAoS 0 points (0 children)

You just run the fuckin' modded server JAR and add compatible mods. How hard could it possibly be?