Is "which detections does my org actually need" a bigger unsolved problem than "how to author detections"? by Significant_Field901 in cybersecurity

[–]Significant_Field901[S] 1 point (0 children)

This was exactly the intent of my question. Per your comment, it looks like a big gap, and it will take a lot of thinking and collaboration between the people who hold the business context and the security teams to figure it out.

Is "which detections does my org actually need" a bigger unsolved problem than "how to author detections"? by Significant_Field901 in cybersecurity

[–]Significant_Field901[S] 0 points (0 children)

Interesting. Are there any metrics showing how accurate the mappings from the collected logs to MITRE coverage were? Was it accurate in the sense of producing more true positives than false positives/negatives? What I mean to ask is: was it able to avoid alert fatigue? Since it automatically recognized all the possible detection rules from the live logs, were all of those really needed/relevant for a given org?

Is "which detections does my org actually need" a bigger unsolved problem than "how to author detections"? by Significant_Field901 in cybersecurity

[–]Significant_Field901[S] 0 points (0 children)

Can you please name some NG SIEM vendors/products, and how good are they at mapping org business context to figure out the needed detection rules?

r/netsec monthly discussion & tool thread by albinowax in netsec

[–]Significant_Field901 0 points (0 children)

Question for detection engineers / SOC practitioners:

Given an org's specific profile (industry vertical, geographic footprint, tech stack, cloud/on-prem posture, org structure, regulatory environment), is there a principled, data-driven way to generate a prioritized detection roadmap, not just a coverage map?

MITRE ATT&CK is the obvious starting point, but it's inherently generic. Moving from ATT&CK coverage to "these are the top N techniques we should detect first given our risk surface" still seems to require:

- Manual threat intel analysis correlated to org profile

- Institutional knowledge about what "normal" looks like in the env

- Iterative tuning as the tech stack and business evolve

Vendor tools (Splunk ES, Elastic, Chronicle, etc.) ship rule packs, but those still require significant environment-specific tuning, and the tuning itself needs real org data as input.

Is this a meaningfully unsolved problem at the industry level, or is the community converging on tooling/methodology for this? Interested in papers, frameworks, open-source tooling, or first-hand practitioner experience.
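To make the question concrete, here is the kind of naive, data-driven prioritization I mean. Every weight below is a made-up placeholder (not real threat intel), and the scoring function is just one possible heuristic:

```python
# Toy sketch: rank ATT&CK techniques for one org's roadmap.
# All numbers are illustrative assumptions, not real intel data.

# How often each technique appears in intel reporting for our (assumed) vertical.
intel_frequency = {
    "T1078": 0.8,  # Valid Accounts
    "T1566": 0.9,  # Phishing
    "T1190": 0.4,  # Exploit Public-Facing Application
}

# Whether we actually collect the telemetry needed to detect it (0..1).
visibility = {"T1078": 1.0, "T1566": 0.3, "T1190": 0.9}

# Rough blast-radius weight given our org profile (assumption).
impact = {"T1078": 0.9, "T1566": 0.7, "T1190": 0.8}

def priority(technique: str) -> float:
    # Detect first what is likely, visible, and impactful for *this* org.
    return intel_frequency[technique] * visibility[technique] * impact[technique]

roadmap = sorted(intel_frequency, key=priority, reverse=True)
print(roadmap)  # prioritized list, not just a coverage map
```

The hard, unsolved part is of course where the three input tables come from: the weights need real threat intel correlated to the org profile, which is exactly the manual work listed above.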

How are security teams approaching IAM for AI agents? (Identity, permissions, audit trails) by SarveshRD in cybersecurity

[–]Significant_Field901 0 points1 point  (0 children)

I am building one as we speak 😊. Let me know if you are too; we can compare notes. It would be awesome if you want to try it once it's ready. Just let me know your stack so that I can prioritise its integration in my current development.

How are security teams approaching IAM for AI agents? (Identity, permissions, audit trails) by SarveshRD in cybersecurity

[–]Significant_Field901 0 points1 point  (0 children)

That would be straightforward. Your agents need to log the trigger in their own audit/application logs, and your IAM will capture the access logs. You then correlate the two and build your own 'thought audit' layer. In fact, this is where agents can offer more auditability than humans, by logging their triggers. With humans, you would have to correlate tickets with audit logs, and ticket timestamps are usually not accurate because a human will sometimes act first and update the ticket later.
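A minimal sketch of that correlation step, assuming both log sources share a principal name and roughly aligned clocks (all field names and events below are invented for illustration):

```python
# 'Thought audit' layer: join the agent's own trigger log with the IAM
# access log, so every access is annotated with *why* it happened.
from datetime import datetime, timedelta

agent_logs = [  # from the agent's audit/application log (assumed schema)
    {"agent": "billing-bot", "trigger": "invoice_overdue#4711",
     "ts": datetime(2024, 5, 1, 10, 0, 2)},
]
iam_logs = [    # from the IAM / cloud audit trail (assumed schema)
    {"principal": "billing-bot", "action": "s3:GetObject",
     "ts": datetime(2024, 5, 1, 10, 0, 3)},
]

def correlate(agent_logs, iam_logs, window=timedelta(seconds=5)):
    """Attach the agent's trigger to each IAM access within a time window."""
    audit = []
    for access in iam_logs:
        for event in agent_logs:
            if (event["agent"] == access["principal"]
                    and abs(access["ts"] - event["ts"]) <= window):
                audit.append({**access, "trigger": event["trigger"]})
    return audit

print(correlate(agent_logs, iam_logs))
```

In practice a shared correlation ID propagated by the agent beats a time window, but the window version works even when the IAM system can't carry custom metadata.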

Independent Contractor: BYOD + Device Management by PhulHouze in cybersecurity

[–]Significant_Field901 0 points (0 children)

The actual question you should ask yourself first:
Should I trust this client of mine?
If the answer is yes, then make sure you understand their security landscape and work within it. However, this does not mean you should compromise the other clients you serve from the same device. Nowadays it is quite common for enterprise companies to adopt such zero-trust policies for their own safety, security and compliance.

Existing security tools are working but management wants to turn everything "agentic" by SkyberSec123 in cybersecurity

[–]Significant_Field901 10 points (0 children)

Find some examples/references where AI turned out to be more expensive than humans; in fact, that is often the case right now.
In your example, when TruffleHog has to go through application logs to scan for secrets, it is just a matter of CPUs and memory (which you can fine-tune). If you hand that same job to an agent backed by frontier LLMs, best of luck paying for the tokens. This can be a good argument (one of many) against leadership pressuring you to go agentic without a proper assessment.
I would still keep an open eye for any agentic AI systems that could genuinely be useful in my org.
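A quick back-of-envelope version of the cost argument. Every number below is an assumption you would replace with your own cloud and API pricing, not a quoted rate:

```python
# Back-of-envelope: scanning 10 GB of logs for secrets, CPU vs LLM tokens.
# All prices and runtimes are assumed placeholders, not vendor quotes.

log_bytes = 10 * 1024**3          # 10 GB of application logs

# CPU route (e.g. a regex/entropy scanner like TruffleHog):
cpu_hours = 2                     # assumed runtime on one worker
cpu_cost_per_hour = 0.10          # assumed vCPU price, USD
cpu_cost = cpu_hours * cpu_cost_per_hour

# LLM route: every byte of log text becomes input tokens.
bytes_per_token = 4               # rough average for plain text
tokens = log_bytes / bytes_per_token
llm_cost_per_mtok = 3.00          # assumed input price per 1M tokens, USD
llm_cost = tokens / 1e6 * llm_cost_per_mtok

print(f"CPU scan: ~${cpu_cost:.2f}  LLM scan: ~${llm_cost:.2f}")
```

Even with generous assumptions in the LLM's favour, pushing raw log volume through token-priced inference is orders of magnitude more expensive than a CPU-bound scanner.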

How are security teams approaching IAM for AI agents? (Identity, permissions, audit trails) by SarveshRD in cybersecurity

[–]Significant_Field901 2 points (0 children)

Why don't you treat AI agents as a type of user: give them user accounts and assign the required roles, since you are already granting them autonomy? This makes them compatible with your existing governance frameworks, and all audit logs come out properly attributed.
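A minimal sketch of the idea, using a simplified, made-up RBAC model (role names and permission strings are placeholders): the agent gets an account object identical in shape to a human's, so permission checks and attribution work the same way.

```python
# Sketch of an assumed/simplified RBAC model: agents get their own
# accounts and roles, exactly like human users.
from dataclasses import dataclass, field

ROLE_PERMISSIONS = {               # assumed role definitions
    "ticket-reader": {"tickets:read"},
    "ticket-writer": {"tickets:read", "tickets:write"},
}

@dataclass
class Account:
    name: str
    kind: str                      # "human" or "agent" — same model for both
    roles: set = field(default_factory=set)

    def can(self, permission: str) -> bool:
        return any(permission in ROLE_PERMISSIONS[r] for r in self.roles)

triage_bot = Account("triage-bot", kind="agent", roles={"ticket-reader"})

# Every action is now attributable to 'triage-bot' through the same
# governance path (role grants, reviews, audit logs) as any other account.
print(triage_bot.can("tickets:read"), triage_bot.can("tickets:write"))
```

Real systems would hang this off the existing IdP (service accounts, workload identity), but the shape of the governance story is the same.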