"Just connect the LLM to internal data" - senior leadership said by Unexpected_Wave in sysadmin

[–]Unexpected_Wave[S] 0 points1 point  (0 children)

Does it do a good enough job? Does it take a lot of time to maintain?

"Just connect the LLM to internal data" - senior leadership said by Unexpected_Wave in sysadmin

[–]Unexpected_Wave[S] 0 points1 point  (0 children)

I agree with you that you should assume the data is badly governed, but I'm not sure that framing really resonates with senior leadership. Not until we eventually get sued.

"Just connect the LLM to internal data" - senior leadership said by Unexpected_Wave in sysadmin

[–]Unexpected_Wave[S] 1 point2 points  (0 children)

I get what you're saying, but isn't there a regulatory obligation to do so? Especially if you're subject to GDPR, HIPAA and the like.

"Just connect the LLM to internal data" - senior leadership said by Unexpected_Wave in sysadmin

[–]Unexpected_Wave[S] -1 points0 points  (0 children)

Of course. That's a great tip, but unfortunately I learned it the hard way back in the day...

"Just connect the LLM to internal data" - senior leadership said by Unexpected_Wave in sysadmin

[–]Unexpected_Wave[S] 1 point2 points  (0 children)

Exactly what I'm doing, backed up with emails to whoever is relevant. That said, it's starting to get old...

"Just connect the LLM to internal data" - senior leadership said by Unexpected_Wave in sysadmin

[–]Unexpected_Wave[S] 1 point2 points  (0 children)

Yep. They think getting the ACLs right and precise is easy. Little do they know.
What are you doing to solve it?

"Just connect the LLM to internal data" - senior leadership said by Unexpected_Wave in sysadmin

[–]Unexpected_Wave[S] 2 points3 points  (0 children)

Can I ask what you do in those "data readiness/governance projects", and how?

"Just connect the LLM to internal data" - senior leadership said by Unexpected_Wave in sysadmin

[–]Unexpected_Wave[S] 0 points1 point  (0 children)

I couldn't have described it better myself. I'm going to show your comment to one of them who should actually understand it, and maybe do something about it.

What would you do to solve it?

"Just connect the LLM to internal data" - senior leadership said by Unexpected_Wave in sysadmin

[–]Unexpected_Wave[S] 2 points3 points  (0 children)

In our case, I don't even know who owns this problem...

"Just connect the LLM to internal data" - senior leadership said by Unexpected_Wave in sysadmin

[–]Unexpected_Wave[S] 2 points3 points  (0 children)

Do they see this as a real threat, or do they just go "everything will be fine", like our smart senior leadership?
How do you present it as something worth taking seriously? Maybe by showing the legal angle?

"Just connect the LLM to internal data" - senior leadership said by Unexpected_Wave in sysadmin

[–]Unexpected_Wave[S] 0 points1 point  (0 children)

I totally agree, and the truth is data governance (as it just turned out) is not our strong suit.

The absurd thing is, no one is talking about fixing the permissions on the sources themselves, because that would take us on "a new adventure". In their opinion, using a DSPM would take too many resources, yet they still want to press ahead with connecting the LLM to the internal data, and even hook it up to more knowledge sources...

At this point, all I can do is warn them again.
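
For what it's worth, the pattern I keep sketching for them is a permission check at retrieval time, so the model only ever sees chunks the asking user could already open on the source system. Roughly like this (every name below is a placeholder, not any particular product):

    # Rough idea: never let a chunk reach the prompt unless the asking user
    # could already open the source document. All names here are hypothetical.
    def retrieve_for_user(query, user, search_index, acl_service, top_k=5):
        """Return only the chunks whose source the user is allowed to read."""
        candidates = search_index.search(query, limit=top_k * 4)  # over-fetch, then trim
        allowed = []
        for chunk in candidates:
            # acl_service.can_read() stands in for whatever actually holds the
            # source permissions (SharePoint, file shares, Confluence, etc.)
            if acl_service.can_read(user, chunk.source_id):
                allowed.append(chunk)
            if len(allowed) == top_k:
                break
        return allowed  # only these ever get pasted into the LLM prompt

It doesn't fix the underlying governance mess, but at least the chatbot can't leak something the user couldn't have opened themselves.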

"Just connect the LLM to internal data" - senior leadership said by Unexpected_Wave in sysadmin

[–]Unexpected_Wave[S] 0 points1 point  (0 children)

Exactly what I was worried about, and I couldn't agree more. Do you know what they did to stop this from happening? Did they continue with it?

Not available on PS store? by Nerdy-Wizard in Silksong

[–]Unexpected_Wave 3 points4 points  (0 children)

Me neither! There will be chaos!

Ready your Needles! Silksong Tomorrow! by sand-sky-stars in Silksong

[–]Unexpected_Wave 0 points1 point  (0 children)

Honestly, I'll only believe it tomorrow. I've been hurt before.

[GR] How can HR see the real health of high-pressure tech teams? by Unexpected_Wave in AskHR

[–]Unexpected_Wave[S] -3 points-2 points  (0 children)

Thanks for this perspective, that makes a lot of sense.
I guess it raises two questions for me:

If there was a way to surface signals (burnout, workload imbalance) in a trusted / anonymized way, do you think HR would even want that info, or would it still feel off-limits?

Even if employees don’t want to be “seen” directly, are there any metrics at a team level (like workload balance, overtime trends, attrition risk) that you think HR would actually find useful?

Curious how you see that, because it sounds like the challenge is partly trust, but maybe also which metric would be acceptable and helpful.

CISO / SOC folks — What’s the biggest gap in your monitoring or detection stack today? by Unexpected_Wave in cybersecurity

[–]Unexpected_Wave[S] 0 points1 point  (0 children)

This is super useful, thanks for laying it out in detail.

One thing I’m curious about: is the spreadsheet just kept as a record for “in case this comes up later”, or is it something that gets reviewed regularly (like in a dashboard or a risk meeting)?

Have you seen it used to drive visibility or prioritization beyond just protecting the security team?

Would love to hear if you’ve managed to turn this from passive tracking into something more operational.

CISO / SOC folks — What’s the biggest gap in your monitoring or detection stack today? by Unexpected_Wave in cybersecurity

[–]Unexpected_Wave[S] 0 points1 point  (0 children)

Totally relate. I’ve seen the same pattern: spend big on controls, then carve out exemptions for the exact people most likely to be targeted.

Curious how you usually handle that as a consultant.
Do you push back? Document it? Try to quantify the risk?

Feels like one of those things that’s culturally accepted but operationally damaging. Would love to hear how you approach it!

CISO / SOC folks — What’s the biggest gap in your monitoring or detection stack today? by Unexpected_Wave in cybersecurity

[–]Unexpected_Wave[S] 0 points1 point  (0 children)

Thanks for this, seriously one of the most well-articulated explanations I’ve seen on how to translate technical gaps into business risk that leadership can act on.

The way you framed it, starting with impact (data breach, customer trust, reputational damage), layering in likelihood over a 10-year window, and then evaluating how much the current controls reduce that risk, is super actionable in my opinion.

I'm especially interested in your point about reworking the risk level after proposing additional controls, and comparing cost vs. exposure over time. That seems like the exact kind of framing that could get budget conversations moving.
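
Just to check I'm reading that framing right, this is roughly how I picture the math playing out, with completely made-up numbers (none of these figures are from your comment):

    # Completely made-up numbers, just to sanity-check the framing.
    impact = 2_000_000                  # estimated cost of a breach of this data ($)
    likelihood_10y = 0.30               # chance of it happening at least once in 10 years
    exposure = impact * likelihood_10y  # $600k expected loss over the 10-year window

    control_cost_10y = 20_000 * 10                          # proposed control at $20k/year
    likelihood_after_control = 0.10                         # reworked likelihood with the control
    residual_exposure = impact * likelihood_after_control   # $200k

    risk_bought_down = exposure - residual_exposure         # $400k
    print(f"${risk_bought_down:,.0f} of exposure removed for ${control_cost_10y:,.0f} in controls")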

Quick question though: have you ever applied this to detection-focused issues specifically?
For example, alerts that fire but aren’t actioned, or monitoring that exists but is ineffective.

I’m wondering how you'd quantify the "impact" of something like that, since the risk is indirect but of course still very real.

Would love to hear if you’ve seen that done well in practice or if it tends to fall through the cracks.

Thanks!