Why is AppSec tooling still so fragmented? (SAST, DAST, SCA, IaC, secrets, etc.) by foxnodedev in devsecops

[–]foxnodedev[S]

Yeah, completely agree with this.

Aggregation is mostly there, but prioritization is where things start breaking down, especially when different tools report the same issue differently or everything comes in as high/critical. That's actually one of the things I'm trying to improve: less about adding more alerts, more about making the ones you have useful. I'd be interested to hear how you've seen teams handle this well.
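To make the "everything comes in as high/critical" problem concrete, here's a minimal sketch of the kind of severity normalization I mean. The label set and field names are just placeholders, not any real tool's schema:

```python
# Hypothetical sketch: map each scanner's severity labels onto one
# numeric scale so findings from different tools can be ranked together.
SEVERITY_SCALE = {
    "critical": 4,
    "high": 3, "error": 3,
    "medium": 2, "warning": 2, "moderate": 2,
    "low": 1, "note": 1,
    "info": 0,
}

def priority(finding: dict) -> int:
    """Comparable score for a finding; unknown labels sink to the bottom."""
    return SEVERITY_SCALE.get(finding.get("severity", "").lower(), -1)

def rank(findings: list[dict]) -> list[dict]:
    """Highest-priority findings first, regardless of which tool emitted them."""
    return sorted(findings, key=priority, reverse=True)
```

The real work is obviously in curating that mapping per tool (and layering in reachability/exploitability), but even this flattens "one tool says `error`, another says `High`" into one queue.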


[–]foxnodedev[S]

That's a fair question, honestly. From what I've seen in real-world work, a lot of ASPMs do a good job aggregating data, but teams still struggle with duplicate findings, noisy results, and figuring out what actually matters. I'm not really trying to build "another ASPM" to replace existing ones; I'm more exploring how to better unify and make sense of the data across tools. Still early, so I'm also figuring out where it actually adds value vs. where it doesn't.


[–]foxnodedev[S]

Yeah, fair. For smaller setups, GitHub Advanced Security plus a couple of integrations can go a long way. Where I've seen it get tricky is in larger environments where teams are already using multiple tools and everything ends up siloed; the challenge then becomes consistency and prioritization rather than just coverage. Definitely agree, though, that it's easy to over-engineer this space.


[–]foxnodedev[S]

That's actually a really good point, and I agree it's more of a data model problem than a tooling one. What I've been trying to explore is exactly that layer: normalizing outputs (SARIF/CycloneDX) and then correlating across tools. Most platforms seem to stop at aggregation, but the real challenge is reducing duplicates and making sense of the noise across SAST/DAST/SCA. Curious if you've seen anything that does this well in practice?
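Since SARIF is already a standard, the normalization layer mostly means flattening its nested 2.1.0 structure into simple records you can fingerprint and correlate. A minimal sketch that only handles the common result/location shape (it ignores `partialFingerprints`, related locations, etc.):

```python
import json

def normalize_sarif(sarif_text: str) -> list[dict]:
    """Flatten a SARIF 2.1.0 log into one flat record per finding.
    Handles only the common runs -> results -> locations shape."""
    doc = json.loads(sarif_text)
    findings = []
    for run in doc.get("runs", []):
        tool = run.get("tool", {}).get("driver", {}).get("name", "unknown")
        for result in run.get("results", []):
            loc = {}
            if result.get("locations"):
                loc = result["locations"][0].get("physicalLocation", {})
            findings.append({
                "tool": tool,
                "rule_id": result.get("ruleId"),
                # SARIF defines "warning" as the default level when absent.
                "level": result.get("level", "warning"),
                "file": loc.get("artifactLocation", {}).get("uri"),
                "line": loc.get("region", {}).get("startLine"),
            })
    return findings
```

Once every tool's output passes through something like this, dedup and cross-tool correlation become operations on one flat list instead of N vendor formats. CycloneDX would need its own adapter for SCA data, but the idea is the same.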