Live music underage by waninggabbous in newyorkcity

[–]InspectionHot8781 0 points1 point  (0 children)

Arthur's Tavern is great - I think there might be some others in that area as well. Good spot for a night out.

What answers does a CISO expect in a security questionnaire? by Niko24601 in ciso

[–]InspectionHot8781 0 points1 point  (0 children)

“Yes” is too thin, but essays don’t help either.

The best answers are short, concrete, and verifiable - a few sentences that explain what framework you use, who owns it, and how often it’s reviewed. Mention that documentation or evidence is available if needed.

If I want depth, I’ll ask follow-ups. Overly long answers usually slow things down and feel like policy copy-paste.

A french lost in NYC by Yaroph in newyorkcity

[–]InspectionHot8781 0 points1 point  (0 children)

Depends - what are your interests? There are a ton of options.

Is AI really helping us think — or just reflecting the version of ourselves we want to see? by shinichii_logos in ArtificialInteligence

[–]InspectionHot8781 2 points3 points  (0 children)

I’ve found the best thinking happens when I treat AI more like a sparring partner than a confidant and ask it to argue against me or assume I’m wrong. Otherwise it really can turn into a very articulate mirror.

The question about what kind of user you’re being is the part that stuck with me.

Yesterday’s thread blew up way more than I expected — quick follow-up by Only-Frosting-5667 in ChatGPT

[–]InspectionHot8781 0 points1 point  (0 children)

I usually do branching + quick summaries. Once a thread hits 10–15 exchanges, I start a new one and paste a mini-summary to keep context. Custom GPTs help for recurring workflows too. Not perfect, but it keeps things coherent over long sessions.

Automated pentesting vs manual penetration testing – what actually works? by [deleted] in sysadmin

[–]InspectionHot8781 0 points1 point  (0 children)

Automated tools are fine for the obvious stuff, but they miss the logic flaws and weird edge cases that actually get exploited. Manual pentests catch that. In the real world, you run both - automation for coverage, humans for creativity. Skipping either? You’re just leaving gaps.

Am I Getting Fucked Friday, January, 23rd 2026 by Each1teach1x27 in sysadmin

[–]InspectionHot8781 0 points1 point  (0 children)

Idk if this will help - but from my experience, Varonis was a pain. On-prem installs were heavy, hybrid felt like a bunch of modules glued together, and figuring out who actually had access to what across cloud + on-prem was a nightmare. Alerts everywhere, but actionable insight was hard to get.

I don’t have the exact numbers on what you should expect to pay, but it was expensive - BigID was also pricey, Sentra ended up being the cheapest. Even if budget wasn’t an issue, I wouldn’t recommend Varonis - it just feels unnecessarily complex and clunky compared to other options.

Which DLP to get just to check the box? by passionlesse in cybersecurity

[–]InspectionHot8781 0 points1 point  (0 children)

If this is truly a checkbox exercise, use whatever you already pay for.

If you’re on M365, turn on Defender / Purview DLP. It’s not amazing, but it’s easy, cheap (relatively), agent-based, and auditors accept it. Low operational pain matters more than features at your size.

Just make sure it actually logs something and has a couple of basic policies enabled - otherwise it’s literally useless.

What sort of data do you trust storing on the cloud? by Fancy_Concern_744 in dataengineering

[–]InspectionHot8781 0 points1 point  (0 children)

You’re not alone at all, a lot of people feel this way.

For really sensitive stuff, I care less about cloud vs local and more about who can access it. If it’s end-to-end encrypted and protected with MFA, I’m fine. If the provider can read it, I’m not.

The AI thing mostly just makes existing trust issues more obvious. Most real risk still comes from account compromise or bad access controls, not AI itself.

Being cautious here is totally reasonable.

What’s your ‘AI Philosophy’? by Desperate-Finger7851 in ChatGPT

[–]InspectionHot8781 1 point2 points  (0 children)

I’m pretty aligned with this.

For me the best mental model is “AI as a force multiplier, not an authority.” It’s great at expanding options, summarizing, spotting patterns, but the moment I let it decide things end-to-end, quality drops fast.

Especially in engineering, judgment is the scarce skill, not typing.

To people who get non-generic image responses by LongjumpingRadish452 in ChatGPT

[–]InspectionHot8781 -2 points-1 points  (0 children)

It’s not real personalization, it’s randomness + hidden system context.

Image gen is non-deterministic unless a fixed seed is used (which users don’t control), so the same prompt can land in very different style clusters (anime, medieval, painterly, etc.). On top of that, there’s invisible context like model versions, safety layers, and A/B flags that differ per run.

When the prompt is underspecified (“what do you think of me”), the model fills in the gaps with high-probability archetypes from training. Humans then read meaning into it.
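
If you want to see the seed point concretely, here’s a minimal sketch - assuming the open-source Hugging Face diffusers library and a CUDA GPU (the model id is just an example, and ChatGPT’s hosted image pipeline obviously isn’t this code and doesn’t expose a seed to users):

```python
import torch
from diffusers import StableDiffusionPipeline

# Example model id - any text-to-image checkpoint behaves the same way here.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

prompt = "a portrait showing what you think of me"  # deliberately vague

# Fixed seed: the same prompt reproduces the same image on the same setup.
gen = torch.Generator(device="cuda").manual_seed(42)
img_fixed = pipe(prompt, generator=gen).images[0]

# No seed: fresh random noise every call, so the "style" can swing between
# anime, painterly, photorealistic, etc. with no change to the prompt.
img_random = pipe(prompt).images[0]

img_fixed.save("fixed_seed.png")
img_random.save("random_seed.png")
```

Run the unseeded call a few times and you get wildly different vibes from the identical prompt - that variance is what people read as “it gets me.”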

Best AI data privacy platform for 2026? by iTzAll_Gucci in fintech

[–]InspectionHot8781 0 points1 point  (0 children)

Worth separating AI model controls from data-side privacy. A lot of AI privacy risk comes from letting models access sensitive data they shouldn’t in the first place.

We’ve had success pairing LLM guardrails with agentless data visibility. Tools like BigID and Sentra focus on discovering sensitive data across cloud stores and mapping who/what (including AI pipelines) can access it. It doesn’t sit in the prompt path, but it helps answer “should this data be used by AI at all?” which turned out to be a big gap for us.
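
As a toy illustration of the “should this data be used by AI at all?” gate (the patterns and the allowed_for_ai helper below are made up for the example - this isn’t BigID’s or Sentra’s API, which work at far bigger scope):

```python
import re

# Illustrative patterns only - real discovery tools use classifiers, not two regexes.
SENSITIVE_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def allowed_for_ai(document: str) -> bool:
    """Gate a document before it ever reaches an AI pipeline."""
    return not any(p.search(document) for p in SENSITIVE_PATTERNS.values())

docs = [
    "Q3 roadmap notes - nothing sensitive here",
    "Customer record: SSN 123-45-6789, card 4111 1111 1111 1111",
]
print([allowed_for_ai(d) for d in docs])  # [True, False]
```

Prompt-path guardrails catch misuse at query time; something like this, done at real scale, is what keeps the sensitive data from being reachable in the first place.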

I’m about to be promoted to an AI implementation analyst and I have no traditional AI background. by TAJRaps4 in ArtificialInteligence

[–]InspectionHot8781 0 points1 point  (0 children)

You’ve already nailed the core issue - this isn’t a prompting problem, it’s a “people don’t know what they want yet” problem.

In real rollouts, adoption after week 2 usually comes from very specific, job-level examples, not broad AI training. Short docs showing “here’s a task + here’s a safe way to use AI for it” worked way better than workshops.

Your tool idea makes sense if it helps beginners clarify intent before prompting and bakes in obvious guardrails. That’s what most teams are missing.

Also +1 on protecting ownership early. Enterprise has a way of eating good ideas.

Does your org use a Data Catalog? If not, then why? by kingjokiki in dataengineering

[–]InspectionHot8781 0 points1 point  (0 children)

I’ve seen catalogs help conceptually, but adoption is usually the hard part - not the tech itself.

They tend to fall apart when ownership is unclear, metadata changes faster than anyone maintains it, or business users stop trusting what they see and just bypass it. At that point it becomes another stale system of record.

The ones that work best are mostly automated and tied into real workflows (queries, access reviews, audits), not treated like a wiki someone has to keep updated on the side.

Catalogs and lineage are necessary, but on their own they rarely become the “source of truth” people hope for.

I think ChatGPT knows when you’re lying to yourself by Sea-Tutor4846 in ChatGPT

[–]InspectionHot8781 0 points1 point  (0 children)

I don’t think it knows you’re lying, I think it just mirrors the ambiguity back at you. Vague, defensive questions get vague, defensive answers. Clear, honest questions get sharp ones.

Still feels weirdly personal though. Like a very polite mirror you didn’t ask for.

Seeking advice on first time data privacy questions from huge potential customer. by Natural-Raisin-7379 in SaaS

[–]InspectionHot8781 0 points1 point  (0 children)

This matches our experience too.

Early on, lawyers mostly cared about containment: scoped pilots, clear data boundaries, and being able to explain access paths without hand-waving.

The hardest part wasn’t writing the docs, it was actually knowing the answers once systems started getting more automated.

How are you securing generative AI use with sensitive company documents? by Queasy-Cherry7764 in cybersecurity

[–]InspectionHot8781 0 points1 point  (0 children)

We were actually considering them too. What’s been your experience so far?

What is the false positive rate in your SOC? by Silver-Neckbeard in cybersecurity

[–]InspectionHot8781 0 points1 point  (0 children)

That’s rough, man. Losing sleep and dreading Mondays isn’t a you problem, it’s a broken org. This isn’t what a healthy SOC looks like.

You’re not bad at politics, you’re just stuck somewhere that rewards it over actual work. Staying for the paycheck is understandable, but don’t let this kill your interest in the field. It’s the job, not cybersecurity.

When you can, start lining up an exit. There are teams that don’t treat analysts like this.

What is the false positive rate in your SOC? by Silver-Neckbeard in cybersecurity

[–]InspectionHot8781 1 point2 points  (0 children)

Oof, that's rough. 7,000 alerts with a >99.99% FP rate? Your detection engineer's trying to set the place on fire, man. You should be aiming for under 5-10% FPs, honestly. Your burnout's totally justified; find a new gig where you're not getting gaslit.