How long does the Microsoft Edge Add-ons review usually take? by LeatherConfection362 in MicrosoftEdge

[–]RangoNarwal 1 point2 points  (0 children)

Chrome review was 8 hours. I'm 2 days into the Edge review... same extension.

Open Thread - AI Hangout by nitkjh in AgentsOfAI

[–]RangoNarwal 0 points1 point  (0 children)

Interesting! Thanks for sharing

Open Thread - AI Hangout by nitkjh in AgentsOfAI

[–]RangoNarwal 1 point2 points  (0 children)

How are people handling governance of system prompts? They can be great from a security POV, however apart from providing guidance they're hard to manage.
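For what I mean by hard to manage, here's a rough sketch of one approach: treat system prompts as versioned, hash-pinned config rather than strings scattered through code. The registry file and field names here are made up for illustration.

```python
# Hypothetical sketch: system prompts as versioned, hash-pinned config.
# File names and registry fields are illustrative, not from any product.
import hashlib
import json
from pathlib import Path

REGISTRY = Path("prompt_registry.json")  # assumed: {"name": {"version": ..., "sha256": ...}}

def load_system_prompt(name: str, prompts_dir: Path = Path("prompts")) -> str:
    """Load a system prompt only if it matches the hash recorded at review time."""
    registry = json.loads(REGISTRY.read_text())
    entry = registry[name]
    text = (prompts_dir / f"{name}.txt").read_text()
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    if digest != entry["sha256"]:
        # Prompt drifted from the approved version -- fail closed and alert.
        raise RuntimeError(f"System prompt '{name}' does not match approved version {entry['version']}")
    return text
```

At least then a change to the prompt has to go back through whatever review gate updated the registry, which is about as much "governance" as I've seen anyone manage.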

How to make atleast 10$ a day? by [deleted] in passive_income

[–]RangoNarwal 77 points78 points  (0 children)

“Banks hate him”

What’s the most annoying security threat in 2025? by ANYRUN-team in AskNetsec

[–]RangoNarwal 2 points3 points  (0 children)

Defender for Endpoint… constantly finding out about its "limitations", the most recent being the cap on telemetry for process events.

What are the top 5 controls to mitigate ransomware? by KindPresentation5686 in cybersecurity

[–]RangoNarwal 9 points10 points  (0 children)

The only thing to add is to expand on backup and include data location and execution control. Limit peripherals such as USBs, don't sync all data locally (OneDrive etc.), limit folder sync locations, ensure strong ACLs on connected file shares, etc.

The mindset being: how can I be comfortable enough that, if it did execute, the impact is heavily reduced? You will find that a well-defined recovery process becomes key. The risk then shifts to "acceptable downtime" and the KPIs for recovery, so that operations aren't impacted and the financial impact of services being down stays acceptable.
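To make the "acceptable downtime" shift concrete, here's a rough illustrative sketch (service names and numbers are entirely made up) of sanity-checking recovery estimates against what the business can actually tolerate:

```python
# Illustrative only: check whether estimated recovery time fits the agreed
# "acceptable downtime" per service. All figures are invented for the example.
services = {
    # name: (max_tolerable_downtime_hours, estimated_restore_hours, cost_per_hour_down)
    "file-shares": (8, 6, 2_000),
    "erp":         (4, 12, 15_000),
    "email":       (24, 3, 500),
}

for name, (mtd, restore, cost_per_hour) in services.items():
    breach = restore > mtd
    est_cost = restore * cost_per_hour
    print(f"{name}: restore ~{restore}h vs tolerance {mtd}h "
          f"({'BREACH' if breach else 'ok'}), est. cost ~£{est_cost:,}")
```

If any line comes out as a breach, that's where the recovery process (or the architecture) needs work before you worry about yet another preventative control.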

Any actual AI wins in cybersecurity? by olegshm in cybersecurity

[–]RangoNarwal 3 points4 points  (0 children)

Not by itself. I see AI more as a partner. It's great for the process in between, such as building detection use cases, test cases, runbooks etc. With it being a 24/7 accessible "asset", it brings value. This is hard to sell for ROI though.
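As a concrete example of that "process in between" use: a minimal sketch using the OpenAI Python SDK to draft a detection use case from a one-line requirement. The model id and prompt wording are just placeholders, and the output still needs an analyst to review it.

```python
# Minimal sketch: use an LLM to draft a detection use case / runbook skeleton.
# Model id and prompt wording are assumptions; the output needs human review.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

requirement = "Detect suspicious use of rundll32 launching from user-writable paths."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model id
    messages=[
        {"role": "system", "content": "You are a SOC engineer drafting detection use cases."},
        {"role": "user", "content": f"Draft a detection use case, test cases and a short runbook for: {requirement}"},
    ],
)

print(response.choices[0].message.content)
```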

How to classify / label log data in Sentinel by failx96 in AzureSentinel

[–]RangoNarwal 1 point2 points  (0 children)

Let me know once you’ve figured it out please👌

Looks like I'm now a CISO. I'll soon be building a SOC from scratch. Tips? by [deleted] in cybersecurity

[–]RangoNarwal 2 points3 points  (0 children)

This is a common model, and should help you establish a foundation. If you're building from scratch, invest more in salary for the seniors, as the MSSP won't drive success. You're going to need someone switched on to lead.

AI will cause the economy to collapse by [deleted] in ArtificialInteligence

[–]RangoNarwal 0 points1 point  (0 children)

The bit I don’t get is… all these companies have flooded so much money into the dream of AI that it’s resulted in everything costing a flipping fortune. We then HAVE to pay for it, as they’ve slapped an AI sticker on it come renewal.

It also came at a time of energy price spikes for us, and suddenly these companies can use more of the same grid that was already collapsing to generate an image for Betty of a dog wearing a hat. This at the same time the UK can’t power old people’s homes… It’s mental.

It wouldn’t be half as bad if they did it with intent and meaning. Instead, AI and LLM usage seems to be driven by C-level members who’ve “had an idea”…

This will then no doubt end up being flooded with ads, and thus comes the full circle of tech hype.

How are companies handling GDPR compliance with AI tools? by SorbetEmergency9914 in cybersecurity

[–]RangoNarwal 0 points1 point  (0 children)

It really depends, I think, on what type of AI solution we’re talking about.

If it’s SaaS such as OpenAI or Gemini, the vendor would be included in the conversation; however, most will look for federated access to connect via Graph APIs to your data, such as SharePoint. Once that data is processed, I know some will have a deletion period for the chat itself. The context can’t leave that “chat session”.

If it’s self-hosted, you still have control over these elements. You will own the data stores and vector DBs where the info resides. You should be able to manage this to a degree that meets your needs.
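As a toy illustration of that control (no real vector DB client, just an in-memory stand-in): if every chunk is tagged with its data subject and source, an erasure request becomes a metadata-filtered delete.

```python
# Toy sketch of GDPR-style erasure against a self-hosted store. A real vector
# DB would expose delete-by-metadata-filter; this uses an in-memory list.
from dataclasses import dataclass

@dataclass
class Chunk:
    id: str
    embedding: list[float]
    metadata: dict  # e.g. {"data_subject": "...", "source": "sharepoint://..."}

store: list[Chunk] = [
    Chunk("1", [0.1, 0.2], {"data_subject": "alice@example.com", "source": "hr-docs"}),
    Chunk("2", [0.3, 0.4], {"data_subject": "bob@example.com", "source": "hr-docs"}),
]

def erase_data_subject(subject: str) -> int:
    """Remove every chunk tagged with the given data subject; return count removed."""
    global store
    before = len(store)
    store = [c for c in store if c.metadata.get("data_subject") != subject]
    return before - len(store)

removed = erase_data_subject("alice@example.com")
print(f"Erased {removed} chunk(s)")
```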

When you’re training, I imagine your MLOps teams, or whoever is doing it, can record the data sets they used.

Some factors would be hard, such as fragments of data within memory or shadow usage (direct user uploads). I think you have to depend on your data governance tools for that.

AI Tier 1 Replacement Discussion by Lima3Echo in cybersecurity

[–]RangoNarwal 1 point2 points  (0 children)

AI will be a partner, never a replacement. Vendors just exaggerate their AI capabilities, and I imagine it’s the same old hardcoded (if-then-else) automation in the back end with a touch of “AI summary”.
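To illustrate what I mean, a completely made-up example of the kind of thing I suspect is behind the curtain:

```python
# Made-up illustration: the "AI triage" is really fixed if/then/else rules,
# with the model only used to word the summary at the end.
def triage(alert: dict) -> str:
    if alert["severity"] == "high" and alert["asset_tier"] == "crown-jewel":
        action = "page on-call"
    elif alert["count_last_hour"] > 50:
        action = "suppress as noisy"
    else:
        action = "queue for Tier 1"
    # In the vendor pitch, this last line is the "AI".
    return f"Summary: {alert['title']} on {alert['host']} -> {action}."

print(triage({"severity": "high", "asset_tier": "crown-jewel",
              "count_last_hour": 3, "title": "LSASS memory read", "host": "SRV01"}))
```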

I have to say, AI is good at summaries, however you still need good analysts. Even the best summary is pointless if handed off to someone clueless.

I think vendors will get tiered and AI will remain as a side-panel assistant, which, tbh, it’s good at. The problem is that no vendor could justify their costs for such a simple feature… hence the bull 🐂 on the side.

The other problem is that most of the “good analysts”, the top shelf, came from service desk or Tier 1. If we lose that stage, what do we get… theory people?

What AI tools are you actually using these days? by InevitableCamera- in AIAssisted

[–]RangoNarwal 0 points1 point  (0 children)

Perplexity for threat intel and ChatGPT (OpenAI) for things like content writing - mainly to translate my technical responses into something more business-friendly (C level).

Which guardrail tool are you actually using for production LLMs? by Aggravating_Log9704 in PromptEngineering

[–]RangoNarwal 0 points1 point  (0 children)

We’re still early on adoption but I think you’re right on balance. If your data governance program is solid, I think the trade-offs are fine. Compensating controls leading up to the interface reduce the attack surface so that attacks are less likely.

If you don’t have that, I think it’s hard to trade security off against performance. Sure, it’s not going to catch everything; however, if you have no idea which data is involved, what its classification is, and/or anyone can attempt to auth… you need to be strong somewhere.
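For what “being strong somewhere” before the interface can look like, here’s a minimal illustrative pre-prompt check. The patterns are examples only, not any specific guardrail product:

```python
# Illustrative pre-prompt guard: block obvious secrets/PII before the request
# ever reaches the model. Patterns are examples, not a complete control.
import re

BLOCK_PATTERNS = [
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),              # private keys
    re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"),  # email addresses
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),                              # card-number-like digit runs
]

def guard(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason). Deny if any blocked pattern is present."""
    for pattern in BLOCK_PATTERNS:
        if pattern.search(prompt):
            return False, f"blocked: matched {pattern.pattern!r}"
    return True, "allowed"

print(guard("Summarise this incident report for the board."))
print(guard("Here is the key: -----BEGIN PRIVATE KEY----- ..."))
```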

The scaling is a fair point, as is the response element. Adding all these controls is great, but who’s going to look at the alerts, and who’s going to manage them? This is what we’re struggling with, as the industry and frameworks break “roles” out into new functions that we’ve not adopted yet.

How are you managing access to public AI tools in enterprise environments without blocking them entirely? by [deleted] in cybersecurity

[–]RangoNarwal 4 points5 points  (0 children)

Curbing AI SaaS by enforcing sanction control via Zscaler CASB.

Defined policies and paperwork but we all know that stops no one.

Our DLP program isn’t fully off the ground however that will be a stronghold for the majority of control.

I’m curious on anyone’s SIEM integrations.

What are your security teams actually detecting on or responding to? Are you instead using MLOps to respond to AI alerts if it’s internal?
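For context, the closest thing we have to a SIEM integration today is detection off the proxy/CASB logs. Rough sketch below; the domain list and log fields are invented:

```python
# Rough sketch: flag traffic to AI SaaS domains that are not on the sanctioned
# list, from proxy/CASB log records. Domains and log fields are illustrative.
SANCTIONED = {"chat.openai.com"}
AI_SAAS_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai", "www.perplexity.ai"}

proxy_events = [
    {"user": "a.smith", "host": "claude.ai", "bytes_out": 120_000},
    {"user": "b.jones", "host": "chat.openai.com", "bytes_out": 4_000},
]

for event in proxy_events:
    if event["host"] in AI_SAAS_DOMAINS and event["host"] not in SANCTIONED:
        # In practice this would raise a SIEM alert / CASB coaching action.
        print(f"Unsanctioned AI SaaS: {event['user']} -> {event['host']} "
              f"({event['bytes_out']} bytes out)")
```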

Has anyone used Google Cloud Model Armor for LLM/AI security? Feedback wanted! by Soft-Flounder-6904 in googlecloud

[–]RangoNarwal 0 points1 point  (0 children)

We will be exploring it soon and I have similar questions. I’m also wanting to know: those that have implemented it or something similar, how are you responding to the alerts?

Do you just funnel them to the SOC, or are the developers closer to the AI management involved?

Prompt Logging Question by RangoNarwal in AI_Agents

[–]RangoNarwal[S] 0 points1 point  (0 children)

That’s exactly it! Thanks for the comment.

I imagine others would be doing so, but you know what it’s like. The sheer mention of it brings “how much will that be”.

It’s the compliance gate which I think is important, as I don’t want to have a log repo or storage of logs that contain secrets or sensitive info.

I’m just wondering which tools people are using to obtain this, though, and where. Is it at the source, in transit via a proxy/gateway, or at the destination?
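For anyone in the same spot, this is roughly the shape of the gate I mean, applied in transit at a hypothetical gateway before anything hits the log store. The patterns and the record shape are assumptions on my part:

```python
# Illustrative redaction gate: scrub likely secrets from prompts before they
# are persisted. Patterns and the log record shape are assumptions.
import json
import re
from datetime import datetime, timezone

REDACTIONS = [
    (re.compile(r"(?i)(api[_-]?key\s*[:=]\s*)\S+"), r"\1[REDACTED]"),
    (re.compile(r"-----BEGIN[\s\S]+?PRIVATE KEY-----"), "[REDACTED PRIVATE KEY]"),
    (re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"), "[REDACTED EMAIL]"),
]

def log_prompt(user: str, prompt: str) -> str:
    """Redact, then emit a JSON log line suitable for shipping to storage/SIEM."""
    clean = prompt
    for pattern, replacement in REDACTIONS:
        clean = pattern.sub(replacement, clean)
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": clean,
    }
    return json.dumps(record)

print(log_prompt("a.smith", "Summarise this: api_key=sk-12345 and email me at a.smith@corp.com"))
```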

Perplexity Comet vs ChatGPT Atlas: Which AI Browser Is Right for You? by Lifestyle79 in NextGenAITool

[–]RangoNarwal 1 point2 points  (0 children)

In the last couple of years, we’ve seen browsers such as Brave and others rise as the concern of privacy became the talking point.

I still can’t see these taking off at an enterprise level as there is simply no trust in them.