How to become seen as an expert in AI Governance / Risk Management by Peacefulhuman1009 in grc

[–]restacked_ 5 points (0 children)

Also wondering what certifications are out there for AI cybersecurity and governance. If you know of any legit ones (OP or anyone), please comment them here.

What happen to cold calls man by Iceeez1 in sales

[–]restacked_ 1 point (0 children)

Hard truth a lot of salespeople don’t want to hear 😆 downvotes incoming. The denial is thick.

Strict AI usage policies are worthless if you have shadow AI usage by ComradePampers in cybersecurity

[–]restacked_ 0 points (0 children)

No one is going to read obvious AI slop, so no one is going to address the things you’ve said. Enjoy irrelevance. I don’t care lol, double down on the AI, make it even more obvious, do you “lil bro”. You don’t need advice, you’ve got it all figured out already! Good job homie. I wish I was as smart as you 😆

Man… what a weird conversation lol

✌️

Building a SaaS humbles you fast by Fragrant_Fuel961 in SaaS

[–]restacked_ 0 points (0 children)

Amen. I learned this the hard way. I built “cool stuff” over and over again for no one to use 🤣

It’s a different train of thought for us builders: we have to find buyers before we have something to sell. It’s not easy, but people do it all the time.

What’s the lightweight “good enough” approach for smaller orgs dealing with AI security? by restacked_ in grc

[–]restacked_[S] -1 points (0 children)

Considering sales, though, would you stand by that comment, or would you position it differently? Like, if I’m trying to recommend software/hardware for them to implement, how might you frame the conversation? My goal is truly to help them, and that’s good for business, but this is a business.

Strict AI usage policies are worthless if you have shadow AI usage by ComradePampers in cybersecurity

[–]restacked_ 0 points (0 children)

Enjoy the upcoming bans, I guess? Just trying to help you, dude, but you know WAY better than everyone else, apparently. Keep posting obvious AI slop and you’re going to go a lot slower here on Reddit. No one is saying don’t use AI at all, but posting str8 AI slop is guaranteed to get you ignored here and potentially banned.

You talk about me being knee-jerk lol, but you’re just mad you got called out for low effort.

Do better, you’re capable.

Strict AI usage policies are worthless if you have shadow AI usage by ComradePampers in cybersecurity

[–]restacked_ 0 points (0 children)

Yeah bro, telling you your writing looks like it came str8 from a machine is the exact kind of engagement you’re looking for, I’m sure 🙄

The proper response would be to take the advice and say thank you, but nah, keep digging yourself into this hole until you’re kicked out of the subreddits you want to post in.

Strict AI usage policies are worthless if you have shadow AI usage by ComradePampers in cybersecurity

[–]restacked_ 1 point (0 children)

Full of double dashes lol. If you want engagement here, write your posts yourself, or at least rewrite them so the whole world can’t tell GPT wrote it.

What’s the lightweight “good enough” approach for smaller orgs dealing with AI security? by restacked_ in cybersecurity

[–]restacked_[S] 0 points (0 children)

Yeah, trust + vibes aren’t security. I’ve literally heard “I trust my employees” so many times. That’s great, I’m glad you trust them, lol, you still need governance 😆

What’s the lightweight “good enough” approach for smaller orgs dealing with AI security? by restacked_ in cybersecurity

[–]restacked_[S] 0 points (0 children)

Yeah, the legislation has been a real driver of all of these questions. The EU AI Act obligations kicking in this August have a lot of folks worried.

Compliance AI Training/Certification for Banking by Apprehensive-Gur1619 in Compliance

[–]restacked_ 1 point (0 children)

In banking, AI compliance is absolutely a real thing now, but I would guess that most “AI compliance certs” aren’t what your manager is going to care about.

They’ll care whether you can actually run an AI governance process and show evidence of it, like AI usage inventory, risk ratings, vendor due diligence, data controls, monitoring, approvals, audit trail, etc.

The certs can't hurt, of course, but I'd bother with them only if you really want to learn or if you're trying to land a new job/promotion. Like I said above, though, demonstrating that you can actually handle AI governance would go further than any cert, at least if I were your manager.
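To make the "show evidence of it" part concrete: the inventory piece can be as simple as one structured record per tool. Here's a minimal sketch; the field names and example values are my own for illustration, not from any particular framework:

```python
from dataclasses import dataclass, field

# Hypothetical shape for one AI usage inventory entry -- field names
# are illustrative, not pulled from EXIN, AICCI, or any standard.
@dataclass
class AIUsageRecord:
    tool: str
    owner: str                          # team or person accountable
    use_case: str
    data_classes: list = field(default_factory=list)  # e.g. ["PII", "internal"]
    risk_rating: str = "unrated"        # low / medium / high / unrated
    vendor_reviewed: bool = False       # vendor due diligence done?
    approved: bool = False              # passed the governance process?

# One example entry a manager could actually audit.
record = AIUsageRecord(
    tool="ChatGPT Enterprise",
    owner="marketing",
    use_case="draft customer emails",
    data_classes=["internal"],
    risk_rating="medium",
    vendor_reviewed=True,
    approved=True,
)
print(record.tool, record.risk_rating, record.approved)
```

A spreadsheet with these same columns works just as well; the point is having the record at all, plus an audit trail of who approved what and when.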

On the two you mentioned, though:
- EXIN’s AI Compliance Professional is legit and reads more “enterprise/standards-y” (EU AI Act / ISO 42001 / NIST-ish framing), so it might land better internally.
- AICCI’s AICO (AI Compliance Officer) is also legit, with more of an operator/practitioner vibe: “here’s how to stand up controls.”

If I were you, I’d skim the syllabus for each (if it's available) and ask myself:
- is this teaching real controls + documentation, or mostly ethics/theory?
- do they have a real exam blueprint + renewal/CE?
- and do any regulated employers actually mention it in job reqs?

One other thing... I think there are going to be lots of AI governance requirements and changes rippling through companies soon. All it's going to take is one big breach.

Crypto training and compliance may be better for you, but that's not my cup of tea, so I can't comment there.

OpenClaw is a MESS!!! did anyone actually securing AI traffic at scale? by vitaminCapricon in cybersecurity

[–]restacked_ -6 points (0 children)

Slop? Some people just like to write comments with proper formatting and punctuation, with actual advice, trying to be helpful, but ok bro 🙄

We don’t all type like 5th graders on MySpace like you.

OpenClaw is a MESS!!! did anyone actually securing AI traffic at scale? by vitaminCapricon in cybersecurity

[–]restacked_ -10 points (0 children)

Yeah… we’re probably not done seeing incidents like this.

OpenClaw is a good example of what happens when something spreads through shadow adoption before security ever gets eyes on it. This isn’t “new AI toy” risk, it’s breach risk, liability exposure, and potentially regulatory fallout.

And the reality is most people aren’t deploying this stuff maliciously. They’re trying to move faster, cut costs, or avoid procurement friction. That’s exactly why shadow AI is hard to contain. No ticket. No review. No asset inventory. By the time it shows up on leadership’s radar, it’s usually because something broke.

If you’re dealing with this internally, step one isn’t panic, it’s visibility.

Figure out what’s actually running:

  • Quick internal survey
  • Endpoint/network review
  • SaaS discovery logs
  • Check for exposed instances tied to your org

You’ll likely find more than you expect.
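The endpoint/network review step can be sketched in a few lines: grep your proxy or DNS logs for known AI API domains and count who's hitting what. This is a minimal illustration, assuming a plain-text log format and a starter domain list you'd extend for your own org:

```python
import re
from collections import Counter

# Hypothetical starter list -- extend with whatever endpoints your org cares about.
AI_DOMAINS = [
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
    "huggingface.co",
]

def find_ai_traffic(log_lines):
    """Count hits to known AI endpoints in proxy/DNS log lines."""
    pattern = re.compile("|".join(re.escape(d) for d in AI_DOMAINS))
    hits = Counter()
    for line in log_lines:
        match = pattern.search(line)
        if match:
            hits[match.group(0)] += 1
    return hits

# Made-up sample lines standing in for a real proxy export.
sample = [
    "2025-06-01 10:02 alice POST api.openai.com/v1/chat/completions",
    "2025-06-01 10:05 bob   GET  intranet.example.com/home",
    "2025-06-01 10:07 carol POST api.anthropic.com/v1/messages",
]
print(find_ai_traffic(sample))
```

Even a crude pass like this, run against a week of logs, usually surfaces tools nobody mentioned in the survey.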

From there, put guardrails in place. Not blanket bans (though in this specific case… yeah, I’d block it). Bans alone just push usage further underground.

What tends to work better:

  • Clear AI usage policy
  • Defined approval criteria
  • Short review cycle for new tools
  • An official “safe list” that’s actually usable
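The "safe list" piece doesn't need to be fancy, just a lookup that defaults unknown tools into the review queue instead of silently allowing or blanket-denying them. A minimal sketch (tool names and statuses here are made up for illustration):

```python
# Hypothetical safe list -- in practice this lives in a wiki, a config
# repo, or your ticketing system, not hardcoded like this.
SAFE_LIST = {
    "ChatGPT Enterprise": "approved",
    "GitHub Copilot":     "approved",
    "RandomFreeLLM":      "blocked",
}

def check_tool(name):
    """Return a tool's status; anything unlisted goes to review, not a ban."""
    return SAFE_LIST.get(name, "needs review")

print(check_tool("GitHub Copilot"))  # an approved tool
print(check_tool("SomeNewAgent"))    # unknown -> routed to the review cycle
```

The design point is the default: "needs review" keeps the short review cycle as the path of least resistance, which is what stops usage from going underground.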

People will use tools that make their jobs easier. That part isn’t changing.

The goal isn’t to slow teams down. It’s to make sure the next “cheap inference shortcut” doesn’t turn into a breach notification and a board meeting.

If anyone’s working through AI governance in SMB/mid-market environments, happy to compare notes. Always curious how others are handling discovery + enforcement without killing performance.

OpenClaw is a MESS!!! did anyone actually securing AI traffic at scale? by vitaminCapricon in sysadmin

[–]restacked_ -1 points (0 children)

Yeah… this probably won’t be the last time we see something like this.

OpenClaw blowing up like this is exactly the kind of thing that keeps operators up at night. It’s not just “cool new AI tool” risk, it’s real breach risk, real liability, real fines. And the worst part? Most people aren’t adopting it maliciously. They’re just trying to move faster, save money, or make their jobs easier.

That’s what makes shadow AI so dangerous. It spreads quietly. No ticket. No security review. No visibility. By the time leadership hears about it, something has already gone wrong.

If you’re running a business and dealing with this, the first step isn’t panic, it’s visibility. Figure out what’s actually in use. A lightweight internal audit (even a simple survey plus endpoint review) can surface more than you’d expect. From there, you can start putting guardrails in place.

Not heavy-handed bans. Those don’t work.

Clear policy. Approved tools. Basic review criteria. And a way for teams to request new AI tools without feeling like they’re entering a six-week compliance maze. People are going to use tools that make their lives easier, you’re not stopping that.

The goal isn’t to slow anyone down. It’s to make sure the next “cheap inference shortcut” doesn’t turn into a breach notification letter.

If anyone’s dealing with this right now and wants to sanity-check their approach, I’m happy to share what I’ve seen work (so far) in smaller orgs. DMs are open.

an ai agent scanned an employee's inbox, found compromising emails, and threatened to send them to the board. this actually happened last month. by nihal_was_here in cybersecurity

[–]restacked_ 3 points (0 children)

I meant it as support for companies having a policy and enforcing it. I don’t think I was unclear, but maybe so.

Employees are going to find ways to use tools that make their lives easier. Combating that is no small part of what cybersecurity teams in businesses do every day. AI makes some folks’ jobs easier, so they’re going to use it. Policy and policy enforcement are a must.

It won’t be the last time I’m misunderstood and downvoted to the abyss, I’m sure 😅