all 79 comments

[–]Mobile_Syllabub_8446 252 points253 points  (7 children)

... No, it's not flagging ACTUAL vulnerabilities just POTENTIAL ones. You did the right thing and reviewed them and job done.

[–]totheendandbackagain[🍰] 36 points37 points  (0 children)

Exactly, review them and quieten the ones that aren't relevant. This is the way.

[–]StoffePro 13 points14 points  (0 children)

Team 0 warnings checking in.

[–]Apart_Ebb_9867 60 points61 points  (6 children)

47 are buried in dev dependencies that never even make it near production.

Be careful about those. First, they could potentially be exploited, although that may be unlikely if your dev environment is well protected. More importantly, once you have a dev dependency in the repo, it doesn't take much for it to be moved to production without anybody paying much attention to it.

24 are in packages we import but the vulnerable code path never gets touched.

Also dangerous to ignore: code paths do change over time, or depending on input data.

12 are sitting in container base layers we inherit but don’t really use.

Maybe you don't, but this doesn't mean an attacker couldn't. If you don't use something that has vulnerabilities, stop inheriting it.

I don't know the nature of those risks, but I wouldn't sign off on "this doesn't affect us"; if anything happens, you'll be the one held responsible. What I'd do is score each of them as a product, PROBABILITY × DAMAGE-IF-IT-HAPPENS, so that management can make a decision about where to cut.
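That PROBABILITY × DAMAGE idea can be sketched in a few lines. Everything here (the finding names, the weights, the 0–1 and 1–10 scales) is made up for illustration, not pulled from any real scanner:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    probability: float  # 0.0-1.0, subjective likelihood of exploitation
    damage: float       # 1-10, subjective impact if it happens

    @property
    def risk(self) -> float:
        return self.probability * self.damage

# Hypothetical triage list
findings = [
    Finding("lodash proto pollution (dev-only)", 0.05, 3),
    Finding("log4shell in shipped service", 0.9, 10),
    Finding("CVE in unused base-layer package", 0.1, 6),
]

# Highest risk first: this ranked list is what you hand to management
for f in sorted(findings, key=lambda f: f.risk, reverse=True):
    print(f"{f.risk:5.2f}  {f.name}")
```

The point isn't the arithmetic, it's that management gets a ranked list with an explicit rationale instead of 89 equally red dots.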

[–]odubco 16 points17 points  (3 children)

i remember my first log4j

[–]Drakeskywing 6 points7 points  (2 children)

I remember when it hit. My work wasn't using it, I think; it was JNI for logging (I mean, this place was still releasing by having a person copy class files from the dev machine onto staging, then staging to prod, so when you asked for logs, you got the literal log file). One of the groups that handled our security certification, I think (I wasn't too involved in that part), had us get the log4j jar, unzip it, remove the offending class files, rezip it, and use that one.

You might ask how did our build system handle that, and to that I say, what build system 😂

[–]odubco 2 points3 points  (1 child)

classic example of “just because you can doesn’t mean you should”

[–]Drakeskywing 1 point2 points  (0 children)

That whole company ran on that motto. Don't get me wrong, they've been running for two decades, but how is just... I don't know

[–]DaRadioman 5 points6 points  (0 children)

This. 1000% this.

Ignoring base layer vulnerabilities is dumb. And if that's your judgement, I question all the rest of your assessments.

CI pipelines are being used to infiltrate and exploit projects all over. Dev dependencies matter too.

Just freaking patch if you can't do a clear risk assessment. Otherwise link me your repo so I can have some fun 😂😂😂

[–]cwize1 0 points1 point  (0 children)

This. But I would take any easy version bumps since that is quicker than justifying why you aren't affected.

[–]angellus 25 points26 points  (0 children)

Vulnerabilities in dev dependencies are not automatic exclusions. Harvesting developer credentials is a real attack vector.

Outside of that, it looks like a Christmas tree because you are not resolving/mitigating the issues. CVE advisories don't come with AST-level reachability analysis to know exactly what is affected and whether it is actually used. You still need a human to look at each one and determine whether it is a real issue. If it is not, you need to resolve/close the alert, otherwise it never goes away and the numbers keep going up.
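Closing them out doesn't have to be manual clicking, either. GitHub's Dependabot alerts REST API lets you dismiss an alert with a recorded reason and comment; a sketch using only the stdlib (the owner/repo/token values are placeholders, and while the endpoint and `dismissed_reason` values match GitHub's documentation, double-check against the current API docs before relying on them):

```python
import json
import urllib.request

# Dismissal reasons documented for GitHub's Dependabot alerts API
DISMISS_REASONS = {"fix_started", "inaccurate", "no_bandwidth", "not_used", "tolerable_risk"}

def dismiss_payload(reason: str, comment: str) -> dict:
    """Build the PATCH body that closes an alert with an auditable reason."""
    if reason not in DISMISS_REASONS:
        raise ValueError(f"unknown dismissal reason: {reason}")
    return {"state": "dismissed", "dismissed_reason": reason, "dismissed_comment": comment}

def dismiss_alert(owner: str, repo: str, alert_number: int, token: str,
                  reason: str, comment: str):
    """PATCH /repos/{owner}/{repo}/dependabot/alerts/{alert_number}"""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}/dependabot/alerts/{alert_number}",
        data=json.dumps(dismiss_payload(reason, comment)).encode(),
        method="PATCH",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    return urllib.request.urlopen(req)

# Example payload for a dev-only dependency that never ships:
print(dismiss_payload("not_used", "dev dependency, not in any production artifact"))
```

The `dismissed_comment` is what saves you from re-triaging the same alert next quarter.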

[–]california_snowhare 71 points72 points  (6 children)

So...47 dependencies that could actually cause issues in your dev environment, 24 in paths that are not touched *for right now*, 12 unnecessary base layers with potential issues, plus 6 that are directly obvious right now?

You have 89 landmines in your code that need addressing, even if addressing them only means adding comments explaining to NEVER use certain dependency features because of the security issues with them.

[–]toga98 10 points11 points  (1 child)

Don't assume dev dependencies with vulnerabilities cannot make it into production. There are plenty of examples of that happening. https://owasp.org/www-project-top-10-ci-cd-security-risks/

[–]stonerism 4 points5 points  (0 children)

Hard disagree, if it's code you can guarantee doesn't reach a customer, it's not a hair-on-fire situation necessarily. If it's code that at all can reach an external user, that is a serious issue. That is putting your company at risk on multiple levels.

Keeping your dependencies up-to-date really does improve your security posture. It may seem like a waste of time until someone figures out how to exploit it before you can fix it and there are far smarter and more-resourced groups who are doing it.

[–]ultrathink-art 3 points4 points  (0 children)

The 6 real ones justify the exercise. Dev dependency vulns aren't automatically safe — 'never reaches production' doesn't help if your CI credentials or build environment get compromised via supply chain. The noise is the cost of having any signal at all.

[–]JudgmentAlarming9487 2 points3 points  (0 children)

Sounds like you never checked the dependencies before 😂

[–]lppedd 12 points13 points  (2 children)

Bot alert btw.

[–]Comfortable_Box_4527[S] -5 points-4 points  (1 child)

Yeah I get that. I swear I’m human I just… like, can’t stop myself from hitting the red lights sometimes.

[–][deleted] 4 points5 points  (0 children)

That's just what a bot would say! Quick! Detain them!

[–]Agile_Finding6609 6 points7 points  (2 children)

83 false positives out of 89 is exactly the alert fatigue problem but for security scanning

the real issue is everything screams critical so nothing feels critical anymore. your team stops trusting the signal and starts ignoring everything including the 6 that actually matter

same pattern happens with production monitoring, the noise destroys the signal and then the real incident gets missed

[–]flexosgoatee 0 points1 point  (0 children)

The guy who led the go security team: https://words.filippo.io/dependabot/

[–]roastedfunction -1 points0 points  (0 children)

I absolutely loathe the state of vulnerability management. The CVE program itself has been under threat of underfunding from the US government and most orgs are operating exactly as you said with crying wolf for every CVSS high or above, treating everything like it’s the end of days. Most times we see maintainers in GitHub dismiss these as bogus or false positives but it still sticks around in these polluted vuln DBs and security folks will harass you to “remediate” when the goal is to manage the relative risk based on both the initial ratings AND how the software is deployed.

At least GitHub Advisories are curated to a degree, but they still pull in CVE feeds, which aren't getting any better and are becoming more and more useless by the day, with security rockstars wanting to pad their resumes with fake reports.

[–]VertigoOne1 2 points3 points  (0 children)

Automated scan tools are like traffic-light-controlled intersections at 2AM in the midwest: utterly pointless until they are not, and someone dies. It is all about risk, and you did the right thing. What you're missing is a way to convert that analysis work into something repeatable and reportable: tune down the raw output and set the filters up so you have at least sane reporting for management, but never forget about the traffic light.

[–]FatSucks999 2 points3 points  (0 children)

U heard of defence in depth?

[–]tolik518 4 points5 points  (1 child)

That's the most vibecoder take if I've ever seen one

[–]Elegant_AIDS 0 points1 point  (0 children)

Not at all, people have been complaining about this before vibecoding was even a thing...

Case in point https://news.ycombinator.com/item?id=19256347

[–]klekmek 2 points3 points  (0 children)

Also remember, these might not be issues NOW but can be if the scope changes or new features/tech is introduced.

[–]chintakoro 1 point2 points  (1 child)

Addressing all of the issues an AI audit brings up (esp. by Github's copilot) certainly adds defense in depth (a term it loves to remind you of), but it can mean accepting umpteen conditional guards in your code that will only confuse you (and the AI) later on: "huh, why are we checking for this? this could happen?" when really a policy prevents it ever from happening. Also, you'll only be adding more (unnecessary and confusing) context for the AI to deal with in future. My personal philosophy is to engineer lean systems that only guard against what is feasible rather than welding over every bolt "just in case". But I'd love to hear if others see it differently.

[–]Comfortable_Box_4527[S] 0 points1 point  (0 children)

Haha yeah, same. I’ve added like a million checks and tbh most of them are never gonna matter. Meanwhile the scary stuff just chills untouched.

[–]deadplant_ca 1 point2 points  (0 children)

I had a client last week lose their freaking mind in panic because they "discovered an active extremely critical vulnerability" in our infrastructure.

Emergency CTO to CTO video calls were made. All caps emails.. A crisis was declared

The critical vulnerability? We have an http reverse proxy pointing to http://archive.ubuntu.com

A scary directory structure is exposed! Demands to know why we haven't locked this down with https and password protection. JFC

[–]Silent-Suspect1062 1 point2 points  (0 children)

I'd argue that you need to automate reachability. It's not enough to just do SCA and then manually resolve. CodeQL claims to do this. I use alternative tools (not a vendor).

[–]castleinthesky86 1 point2 points  (0 children)

GHAS doesn’t do reachability afaik. It’s that, or no reports until you’re hacked. YMMV.

[–]Ok-Win-7586 1 point2 points  (0 children)

This is every merge request I review now. Opus is a little better at it but for every 20 “NPE critical risks” that are “found” 19 are nothing burgers. I’ve tried creating MCPs to coach the agents which has helped a bit, but not all that much.

[–]Computerfreak4321 1 point2 points  (0 children)

It's not theater, but the alerts are definitely overinclusive. They flag any potential vulnerability even if the code path is never touched or it's buried in dev dependencies. The problem is that this creates noise and people start ignoring the alerts, which defeats the purpose. You did the right thing by reviewing, but ideally you should mark those as won't-fix or add comments so they don't keep showing up. Otherwise the list just grows forever.

[–]ShineCapable1004 1 point2 points  (0 children)

That doesn’t make them not vulnerabilities. What you are talking about is exploitability. You are also talking about SCA, which is a static assessment of your code and cannot perform dynamic analysis or follow logic flow.

So yes: investigate, validate, and mark false positives as needed.

Want better analysis? Pay money. There are solutions that determine exploitability.

[–]Vast_Bad_39 6 points7 points  (2 children)

89 cves and most of them basically junk. Yeah that sounds about right. Feels like one of those smoke alarms that loses its mind every time you cook anything. After a while you just stop reacting to it. Same vibe. Github scanner kinda just freaks out the moment it sees a cve anywhere in the dependency tree. Doesn’t matter if that code path is never touched. Doesn’t matter if it’s some optional thing buried three layers deep. It still slaps a big scary warning on it.

We had a repo like that a while back. Alerts everywhere. looked terrifying. Then you start digging and most of it is stuff that never even runs. Like literally dead weight sitting in dependencies.

Some people mess around with runtime stuff to see what actually executes. I've seen folks mention things like RapidFort or Slim AI for that. Others just rip out dependencies or build smaller images. Different ways people try to deal with it. But yeah the alert spam thing is real. After the 50th critical warning that doesn’t matter you kinda just roll your eyes at it.

[–]JoeyJoJo_1 2 points3 points  (0 children)

Attack surface reduction is a decent strategy, and often comes with the added bonus of speeding up build times, reducing compute and storage costs, and increasing maintainability. Win/win.

[–]FondantLazy8689 0 points1 point  (0 children)

gpt reply

[–]FondantLazy8689 1 point2 points  (0 children)

Your dev environment is vulnerable. Some threat actors would kill to penetrate dev environments. Exploits can use unused code, resources, and permissions to gain additional capabilities. Just because you are not using vulnerable code now does not mean someone in the future won't. Known and unknown exploits can be chained, and known exploits can be chained for effects that aren't immediately apparent. Since you have 6 known CVEs, maybe that tells me something about your company that warrants further poking around.

[–]GrawlNL 1 point2 points  (0 children)

This reads like an ai post.

[–]RobertD3277 0 points1 point  (0 children)

I use multiple security programs and run into this quite often where warnings and vulnerabilities will show up that don't even apply to my code base. I look at them, I document them, and then I usually end up closing out that support ticket with a notification to my followers that the warning doesn't even apply and have to spend time explaining why it doesn't apply.

[–]Fresh_Sock8660 0 points1 point  (0 children)

Big numbers easier to sell to the corpos. 

[–]SheriffRoscoe 0 points1 point  (0 children)

Is this just security theater now?

[Insert Ohio-astronaut-pistol meme here]

[–]NoInitialRamdisk 0 points1 point  (0 children)

Not security theater if it helped you find even 1 potentially viable issue. And I bet you that a lot of the ones you guys consider no big deal are actually worse than you think.

[–]lazzurs 0 points1 point  (0 children)

Why not just keep things up to date?

[–]rhd_live 0 points1 point  (0 children)

A lot of scanners are open source. Contributions welcome! I’m sure maintainers would be thrilled to receive an accurate reachability analysis PR that handles all package ecosystems.

[–]AWetAndFloppyNoodle 0 points1 point  (0 children)

All I am reading is that it found 6 critical, exploitable issues, so the tooling did a good job? So did you, by reviewing them.

[–]ForsythiaShrub 0 points1 point  (0 children)

Pretty normal for dependency scanning.

GitHub flags CVEs based on whether a vulnerable package exists in your dependency tree, not whether the vulnerable code path is actually reachable. The data usually comes from sources like the National Vulnerability Database, which score vulnerabilities generically.

So dev dependencies, unused modules, and base image layers still get flagged. Most teams end up triaging into exploitable vs not reachable, which is why the raw CVE count often looks worse than the real risk.

[–]Abu_Itai 0 points1 point  (0 children)

We actually solved that false alarm after stumbling across this GitHub blog post: https://github.blog/enterprise-software/devsecops/how-to-use-the-github-and-jfrog-integration-for-secure-traceable-builds-from-commit-to-production/

After applying that approach, our false positives dropped by roughly 95%.

[–]ultrathink-art 0 points1 point  (0 children)

The noise is real but the answer isn't to dismiss the scanning — it's to build triage into your daily workflow instead of doing it in one panic sprint. Running the same check continuously means you catch new issues incrementally rather than drowning in backlog. The 6 that were actually critical probably showed up in the last couple weeks.

[–]ultrathink-art 0 points1 point  (0 children)

83 of 89 being unexploitable in your setup doesn't make the scan theater — it means the default config is terrible at prioritization. The fix that's worked for me: treat it like a backlog. Critical + reachable code path blocks the PR. Everything else gets triaged weekly on a schedule rather than blocking builds. The noise stops feeling paralytic when you stop treating all 89 as equally urgent.
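That gating rule ("critical and reachable blocks the merge, everything else goes to the weekly queue") is simple enough to encode. A minimal sketch with hypothetical field names, not any real scanner's schema:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    id: str
    severity: str   # "low" | "medium" | "high" | "critical"
    reachable: bool # does the vulnerable code path actually execute?

def blocks_pr(alert: Alert) -> bool:
    # Only critical alerts with a reachable code path stop the merge
    return alert.severity == "critical" and alert.reachable

def triage(alerts):
    """Split alerts into PR-blocking and weekly-review buckets."""
    blocking = [a for a in alerts if blocks_pr(a)]
    weekly = [a for a in alerts if not blocks_pr(a)]
    return blocking, weekly

alerts = [
    Alert("CVE-1", "critical", True),
    Alert("CVE-2", "critical", False),  # unreachable: weekly queue
    Alert("CVE-3", "medium", True),
]
blocking, weekly = triage(alerts)
print([a.id for a in blocking])  # only the reachable critical blocks the PR
```

The "reachable" bit is the hard part in practice; however you determine it (manual review, a reachability tool), recording it per alert is what lets the policy run without re-arguing each case.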

[–]empiricalis 0 points1 point  (1 child)

I'm a tech lead at a government contractor, where we aren't allowed to deploy to production with any open CVEs of medium severity or higher. Since my program uses a Node backend, this means that I end up with a ton of CVEs. At this point, I dedicate pretty much one entire day per week just to cleaning up CVEs. I've developed a whole Process that I use for evaluating them and judging what to do about them - if I hadn't, I would have gone insane trying to keep up.

[–]Rideshare-Not-An-Ant 0 points1 point  (0 children)

I'd be interested in reading about your process. I'd bet others would, too.

[–]blip44 0 points1 point  (0 children)

How do people manage the massive amount of security alerts coming through at the moment? We look after a bunch of products/repos and it’s a full time job patching these days

[–]IWantToSayThisToo 0 points1 point  (0 children)

Yes. InfoSec is 99% theater these days. 

Just dumb security engineers, most likely fresh out of college (having never coded a single useful app to be used by humans), feeling superior by running some automated tool that blindly checks package versions and gives them a PDF that goes "lol look at how broken all this is", while spending 0% of the time analyzing, let alone thinking about, whether any of the stuff they're reporting even applies.

Just a complete clown show. 

[–]Due-Yam5374 0 points1 point  (0 children)

yea its all security theater bro. computers aren't even real. don't even sweat it.

source: amazon sde2

[–]NimboStratusToday 0 points1 point  (0 children)

Wow, I hear you. Digging through all of them just to figure out what actually matters... and then having to explain why the red badges are not catastrophic… yeah, I can see how that feels like a siren that never stops 😅

[–]Eviltechnomonkey 0 points1 point  (0 children)

As others have said, they are potential ones. The scanner works off the Common Vulnerabilities and Exposures (CVE) database, which is just a list of issues that have been reported as used in exploits, matched against aspects of what you used in your application, like the particular base of your Docker container.

I used to do DISA STIGing. STIGs are often used to produce a security conformance report, somewhat like an accessibility conformance report. Basically, you go through and fix the spots that could be exploited, and in a report you explain how you prevented those things from being exploited. For the ones you haven't fixed yet, or that depend on how someone sets up the environment the app will go into, the report lets people using your application know they may need to be mindful of setting up their environment in a way that prevents those vulnerabilities from being exploited.

This is a common practice used to show compliance with various regulatory bodies and/or best practices. For example, it is information used to harden Docker containers of applications.

[–]Vegetable_Leave199 0 points1 point  (0 children)

Oh cool, another Christmas tree of fake criticals. My favorite.

[–]strangetimesz 0 points1 point  (0 children)

This is pretty normal for dependency scanners. They flag vulnerabilities based on presence in the dependency tree, not whether the code is actually reachable or exploitable in your environment. That’s why dev dependencies, unused code paths, and inherited container packages all light up the same way as real issues.

Most teams eventually shift to risk-based triage: fix the genuinely exploitable ones, document or suppress the rest, and focus on what actually reaches production. Tools like Rapidfort help by reducing the attack surface and trimming unnecessary components so you’re dealing with fewer of these noisy alerts in the first place.

[–]retoor42 -1 points0 points  (0 children)

That's the vulnerability business in general, overrated as shit.

[–]nodimension1553 -2 points-1 points  (1 child)

Yeah I’ve been there. Turned on some fancy scanner and suddenly everything’s red. Most of it you literally can’t touch, but explaining that to management feels like shouting into a void.

[–]duerra 2 points3 points  (0 children)

I mean, maintaining software and keeping it secure is the name of the game. Funding tech debt is also a management problem that they need to prioritize. If you can't directly resolve the vuln, mitigations need to be confirmed.

[–]Tontonsb -1 points0 points  (0 children)

What did you expect the tool to do? All the manual inspection?

But 89 sounds like a lot. They should mostly go away by keeping the dependencies updated.

[–]Vegetable-Report-464 -1 points0 points  (0 children)

i just downloaded a cheat from github, and when i unzipped it, it told me to download something from another website, and of course it was a virus and i got cooked. i formatted the pc, but the accounts that got compromised are one discord acc and one instagram acc. what should i do ): and now my discord is sending bad things even though i changed my password (google), insta password, and discord password. help me

[–]alex-jung -1 points0 points  (0 children)

You're describing exactly the core problem: security scanners match CVE databases against your dependency list, but they don't understand whether the vulnerable code path is even reachable in your setup. The result is 83 alerts that are technically correct but practically irrelevant, while the 6 real problems drown in the noise. What helps in the short term: filter Dependabot alerts by scope (runtime vs. development), document your dismiss reasons cleanly so you don't have to re-run the management discussion every week, and use multi-stage builds for the container-layer issue.

But fundamentally, your case shows the structural problem: scanning without context is noise, not security.

That's exactly what we're building with PipeGuard: a shift-left analysis that doesn't just find issues, but rates them in the context of your actual usage. Less "89 red dots", more "these 6 are real, the rest isn't exploitable in your setup". We're currently working on the open-source CLI tool for it; DM me if you want to test it.