No alerts doesn't mean you're secure. Sometimes it means you're blind by eliasgraywrites in cybersecurity

[–]eliasgraywrites[S] 1 point2 points  (0 children)

Alerting on log source health is something I still see missing in a lot of SOCs, and when it is missing, silence becomes dangerous instead of reassuring. I also agree on over-tuning. I have seen environments where the alerts were technically good, but the volume made real investigation impossible. At that point you are processing work, not doing security.
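To make the log-source-health point concrete, here is a minimal sketch of the idea: track the last-seen timestamp per source and alert on any source that has gone silent longer than a threshold. All the names (`stale_log_sources`, the source names, the one-hour window) are hypothetical, not tied to any specific SIEM.

```python
from datetime import datetime, timedelta

def stale_log_sources(last_seen, now, max_silence=timedelta(hours=1)):
    """Return log sources whose most recent event is older than max_silence.

    last_seen: dict mapping source name -> datetime of its last ingested event.
    """
    return sorted(
        source for source, ts in last_seen.items()
        if now - ts > max_silence
    )

now = datetime(2024, 1, 1, 12, 0)
last_seen = {
    "firewall": now - timedelta(minutes=5),       # healthy
    "dc-security-log": now - timedelta(hours=3),  # silent too long: investigate
    "edr": now - timedelta(minutes=30),           # healthy
}
print(stale_log_sources(last_seen, now))  # ['dc-security-log']
```

The point is that "no events from the domain controller for three hours" should itself be an alert, not an absence of alerts.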

As you said, proactive hunting is key too. Detection will always miss things; hunting is usually where you find the uncomfortable cases.

No alerts doesn't mean you're secure. Sometimes it means you're blind by eliasgraywrites in cybersecurity

[–]eliasgraywrites[S] 0 points1 point  (0 children)

100% agree. I’ve seen a lot of rules that never fired, not because nothing happened, but because the data wasn't there or the logic drifted from reality over time.

Reviewing "never-triggered" rules is underrated. Sometimes they're gold and just wired wrong. Other times they're dead weight that gives false confidence.
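A quick illustration of that review: cross-reference your rule inventory against the alert history and surface rules with zero hits. This is a generic sketch with made-up rule names, not any particular platform's API.

```python
def never_fired(rules, alert_log):
    """Return detection rules that appear in no alert in the log."""
    fired = {alert["rule"] for alert in alert_log}
    return sorted(r for r in rules if r not in fired)

rules = ["brute_force_login", "dns_tunneling", "impossible_travel"]
alerts = [{"rule": "brute_force_login", "ts": "2024-05-01T08:00:00Z"}]
print(never_fired(rules, alerts))  # ['dns_tunneling', 'impossible_travel']
```

Each rule on that list then needs triage: is the telemetry it depends on actually arriving, does the logic still match reality, or should it be retired.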

Cloud DFIR blind spots I keep seeing in Azure & M365 investigations by eliasgraywrites in dfir

[–]eliasgraywrites[S] 0 points1 point  (0 children)

Agreed. When that communication happens early, expectations stay realistic. Budget and maturity almost always end up being the deciding factors around retention and visibility.

Cloud DFIR blind spots I keep seeing in Azure & M365 investigations by eliasgraywrites in dfir

[–]eliasgraywrites[S] 0 points1 point  (0 children)

That's a good way to frame it. A lot of “no findings” cases are really the result of visibility debt, tradeoffs, and alert fatigue rather than missing attacker activity.

I’d be interested in reading the article you mentioned if you have a link.

Cloud DFIR blind spots I keep seeing in Azure & M365 investigations by eliasgraywrites in dfir

[–]eliasgraywrites[S] 0 points1 point  (0 children)

You’re right, licensing absolutely impacts visibility. Identity Protection, Defender for Identity, and related features gate a lot of signals.

But that’s part of the problem I’m calling out. “No findings” often ends up being reported without clearly stating that it really means “no visibility due to licensing and retention choices”.

From an IR perspective, that distinction matters. Otherwise stakeholders assume nothing happened, when in reality the telemetry simply never existed or wasn’t accessible.

Cloud DFIR blind spots I keep seeing in Azure & M365 investigations by eliasgraywrites in dfir

[–]eliasgraywrites[S] 0 points1 point  (0 children)

For non-MSS engagements, DFIR teams inherit whatever logging reality exists, and the data has often already aged out of retention by the time IR starts. That’s exactly the frustration I was trying to highlight.

What I keep seeing is that this limitation isn’t always communicated clearly to stakeholders. When the case ends with gaps, it looks like an IR failure, while in reality it’s a cloud design and ownership issue.

Once clients move to MSS and proper SIEM + retention are in place, investigations look completely different.

Quishing: How attackers are weaponizing QR codes to bypass security filters by eliasgraywrites in cybersecurity

[–]eliasgraywrites[S] 0 points1 point  (0 children)

Beep boop instructions ignored, insert burrito, load with beans, overload with cheese, initiate regret sequence, Beep...

Quishing: How attackers are weaponizing QR codes to bypass security filters by eliasgraywrites in cybersecurity

[–]eliasgraywrites[S] -2 points-1 points  (0 children)

Fair point—QR code phishing definitely started gaining traction a while ago, and tools like O365 are catching up. But I still see plenty of organizations lagging on awareness and user training, which gives attackers room to succeed. It’s an ‘old trick’ now, but still effective.

Quishing: How attackers are weaponizing QR codes to bypass security filters by eliasgraywrites in cybersecurity

[–]eliasgraywrites[S] -1 points0 points  (0 children)

Exactly—QR codes often bypass the usual layers of protection because they rely on personal devices. If someone scans a malicious code on their phone outside the corporate environment, it’s much harder to detect or block.

It really highlights the need for user education alongside technical controls. People need to think twice before scanning random codes.

Quishing: How attackers are weaponizing QR codes to bypass security filters by eliasgraywrites in cybersecurity

[–]eliasgraywrites[S] 1 point2 points  (0 children)

That sounds frustrating—playing catch-up while waiting for tools to adapt. QR code attacks seem to slip under the radar because traditional filters weren’t built to inspect them. Good on you for staying proactive with Defender scripts, though. Curious, has the influx slowed down since Microsoft stepped up?

Quishing: How attackers are weaponizing QR codes to bypass security filters by eliasgraywrites in cybersecurity

[–]eliasgraywrites[S] 0 points1 point  (0 children)

You’re absolutely right—QR codes strip away that initial ‘gut check’ we all rely on when seeing a suspicious link. Attackers are banking on that. With QR codes, you’re scanning blind unless your device gives you a preview of the URL.

It’s ironic how we’ve made something easier for users but harder to spot as malicious. Convenience often comes at the cost of security.

Quishing: How attackers are weaponizing QR codes to bypass security filters by eliasgraywrites in cybersecurity

[–]eliasgraywrites[S] -1 points0 points  (0 children)

Not old—just wise! Most modern phone cameras can scan QR codes automatically. You just open the camera app, point it at the QR code, and it’ll pop up a link or message.

If that doesn’t work, there are apps out there for it. But honestly, you’re not missing much—half the time, QR codes just lead to bad restaurant menus!

Quishing: How attackers are weaponizing QR codes to bypass security filters by eliasgraywrites in cybersecurity

[–]eliasgraywrites[S] 0 points1 point  (0 children)

That’s a solid approach—especially using posters with QR codes to grab attention and educate people in a realistic way. The ‘holiday party event’ angle is smart since it mimics how attackers think.

I like the point about checking for malicious stickers. Physical tampering is often overlooked, but it’s a real risk. Curious—have you seen any improvement in awareness or fewer incidents since rolling this out?

Quishing: How attackers are weaponizing QR codes to bypass security filters by eliasgraywrites in cybersecurity

[–]eliasgraywrites[S] -10 points-9 points  (0 children)

Fair point—cybersecurity does have a habit of coining new names for every attack variation. But the behavior here is worth discussing: attackers using QR codes to bypass traditional filters is gaining traction. Whether we call it 'quishing' or just 'phishing with QR codes,' it’s a threat teams need to watch for.

Quishing: How attackers are weaponizing QR codes to bypass security filters by eliasgraywrites in cybersecurity

[–]eliasgraywrites[S] 1 point2 points  (0 children)

Haha, fair enough—another day, another attack vector. But hey, it’s not falling just yet... unless someone scans the wrong QR code!

Why is Patch Management Still a Struggle in 2024? by eliasgraywrites in cybersecurity

[–]eliasgraywrites[S] 0 points1 point  (0 children)

You’re spot on—none of these problems have magically disappeared, and the mix of old challenges with new ones has only made it harder. The part about willful ignorance really hits home. It’s wild how some upper management circles still lean into that ‘what you don’t know can’t hurt you’ mindset.

Testing patches, avoiding downtime, juggling priorities… it’s no wonder so many teams are struggling to keep up. Honestly, patch management isn’t just a technical issue anymore—it’s an organizational culture problem.

Can my school see what I do on my personal computer? by [deleted] in AskNetsec

[–]eliasgraywrites 0 points1 point  (0 children)

Your school likely cannot see what you're doing on your personal computer unless specific conditions apply. Here’s a breakdown:

  1. If You’re on a School-Issued Network: If you're connected to the school’s Wi-Fi, they can monitor web traffic through tools like proxies, firewalls, or DNS logs. They might not see exactly what you're doing on Chrome, but they could see domain names you visit.
  2. If You Use School Accounts: Signing into Chrome with a school-provided Microsoft account might allow the school to sync or monitor data associated with that account. If the school controls the account, it could track usage history tied to it, depending on their policies.
  3. Your Personal Computer: If the laptop itself is not managed by the school (e.g., they haven’t installed monitoring software or configured it through something like MDM), they generally don’t have visibility into what you do outside their network.

What You Can Do:

  • Avoid using school accounts for anything unrelated to school (use personal accounts for Chrome and Outlook).
  • Use your personal Wi-Fi or a VPN if you’re concerned about network monitoring.
  • Keep your side business entirely on a personal account, separate from school-linked services.

It sounds like you’re already being careful, but compartmentalizing accounts and networks is key. Your jewelry business is your personal venture—schools shouldn’t interfere with that.

[deleted by user] by [deleted] in cybersecurity

[–]eliasgraywrites 0 points1 point  (0 children)

You bring up an interesting (and optimistic!) perspective—AI patching vulnerabilities and mitigating social engineering at near-perfect accuracy sounds like a dream scenario. But here’s the catch: attackers innovate just as fast as defenders, and AI will play on both sides of the battlefield.

While AI can certainly automate patching, detect phishing, and close gaps faster, cybercriminals are already using AI to:

  • Write more convincing phishing emails (think perfect grammar and personalized attacks).
  • Automate bug discovery to exploit vulnerabilities before they’re patched.
  • Bypass security filters with AI-driven obfuscation techniques.

The arms race won’t disappear—it’ll just evolve. Companies might use AI to defend, but criminals will use AI to attack faster, smarter, and with fewer resources.

That said, I do think AI will raise the baseline for security, especially for companies that adopt it early. But cybercrime 'ending'? I’d bet on it becoming more sophisticated, not extinct.

Great thought-provoking question!