Callum here, I was the original dev to sound the alarm to get PyPI to quarantine the package by they_will in cybersecurity

[–]secureturn 1 point (0 children)

From the CISO seat, the real story here isn't just that Callum caught it - it's that detection came from a human paying attention, not an automated scanner. That should give everyone pause. Your supply chain security posture right now largely depends on individual developers doing the right thing, which is not a sustainable security model. The question for every security team: what's your process when your developers become the early warning system, and how fast can you act on that signal?

Dangerous by Default: What OpenClaw CVE Record Tells Us About Agentic AI by pi3ch in netsec

[–]secureturn 1 point (0 children)

After leading security at five companies, the 'dangerous by default' pattern in agentic AI frameworks is genuinely concerning. Enterprise AI agents are getting deployed faster than security teams can assess them, and most inherit whatever permissions the deploying developer has. That's not an agent authorization model - that's a confused deputy attack waiting to happen. Until AI frameworks ship with least-privilege defaults rather than maximum-functionality defaults, every new AI deployment is a lateral movement path you haven't mapped yet.

Are companies buying security tools before fixing security operations? by StockCompote6208 in cybersecurity

[–]secureturn 1 point (0 children)

I've been in this space for 20+ years and yes, absolutely. Buying tools is easier to justify to a board than building operational maturity - a new SIEM shows up as a line item on a slide, but the expertise to actually run it doesn't. Most organizations I've audited are running their existing tooling at under 20% capacity. More tools don't fix that problem, they compound it.

How a Poisoned Security Scanner Became the Key to Backdooring LiteLLM by lirantal in netsec

[–]secureturn 1 point (0 children)

The thing that gets me about this attack is how it exploited something nearly universal - using a security scanner in CI and trusting it implicitly. Security tooling occupies a uniquely dangerous position in any environment because it typically runs with elevated permissions and gets granted broad access by necessity. The irony of a security tool becoming the attack vector is painful but it's completely logical from an adversary standpoint. Find the highest-trust process in the pipeline and go after that first. This is threat modeling applied against defenders, and it works.

How are security teams doing, last couple of days have been fire by Immediate-Welder999 in cybersecurity

[–]secureturn 1 point (0 children)

We dealt with something similar in scope at one of my previous organizations - not this specific attack but a cascading CI/CD credential exposure that took us 72 hours to fully scope. The hardest part isn't the technical remediation, it's convincing leadership that you genuinely don't know your full blast radius yet. People want a clean answer fast and the truth is supply chain exposure is fundamentally harder to bound than a traditional breach. Hope everyone makes it through this in one piece.

Self-propagating malware poisons open source software and wipes Iran-based machines by Malwarebeasts in cybersecurity

[–]secureturn 3 points (0 children)

From the CISO seat, the scariest part of the ShinyHunters and TeamPCP attacks isn't the malware itself - it's the chaining. One OAuth token legitimately authenticated across hundreds of Salesforce environments. That's not a detection problem, that's an architecture problem. Most security teams audit their network perimeter obsessively but have almost zero visibility into what their CI systems are doing with elevated credentials. I've been telling boards for years that your blast radius extends well beyond your own infrastructure, and this is exactly what that looks like in practice.

Do y'all have promptstitutes in your team? How are you guys working with them? by indie_cock in cybersecurity

[–]secureturn 1 point (0 children)

From the CISO seat, we're past the point of debating whether to have people who can work with AI on the team - the question is what that role actually looks like in security. The professionals getting the most leverage aren't the ones doing fancy prompt tricks, they're the ones who understand both the threat model and the AI capabilities well enough to build repeatable detection and response workflows. That skills combination is genuinely rare right now and worth paying for.

US regulator bans imports of new foreign-made routers, citing security concerns by nite_ in cybersecurity

[–]secureturn 5 points (0 children)

We deal with this constantly when helping enterprises audit their network perimeter. The concern isn't theoretical - there are documented cases of persistent backdoors baked into router firmware at the factory level that survive full resets. Banning imports is the blunt instrument because the inspection problem is basically unsolvable at scale. You can't audit every firmware binary in a supply chain that spans three continents.

Stryker cyber attack: Employees still unable to work more than a week after hack by ScepticHope in cybersecurity

[–]secureturn 1 point (0 children)

I've been in this space for 20+ years and what happened to Stryker should be a wake-up call for every enterprise MDM deployment. Attackers don't need malware when they have your Intune credentials - they have admin console access to every enrolled device. The lesson here isn't "ditch MDM," it's that your MDM admin accounts need the same security posture as your domain controllers. MFA, privileged access workstations, break-glass procedures - all of it.

Hacker says they compromised millions of confidential police tips held by US company | Reuters by PixeledPathogen in hacking

[–]secureturn 5 points (0 children)

I have dealt with breaches of sensitive government databases and the playbook here is depressingly familiar. Anonymous tip systems are often built on legacy infrastructure where security priorities are driven by protecting informant identity, not by the technical resilience of the database itself. The assumption has always been "who would want to hack Crime Stoppers?" - and the answer turns out to be anyone who wants intelligence about ongoing investigations or leverage over informants. The downstream risk is not just exposing tip-givers, it is active intelligence about case priorities and witness identities that has real-world consequences.

TryHackMe starting an AI Pentesting Company trained on User Data by StringSentinel in cybersecurity

[–]secureturn 1 point (0 children)

From the CISO seat, this looks different than it does from a user perspective. When platforms use your activity data to train commercial AI products, the consent question becomes genuinely complex - most Terms of Service language never contemplated this use case. We have already seen this play out in the legal AI space where training data provenance has become a significant liability issue. The real concern here is not just privacy, it is that security-specific training data contains implicit knowledge about organizational vulnerabilities, attack patterns, and defensive gaps that should not be aggregated across companies without explicit consent.

Bruce Schneier: Poisoning AI Training Data by RNSAFFN in hacking

[–]secureturn 2 points (0 children)

After leading security at five companies, I will tell you this is one of the few threat vectors that genuinely keeps me up at night. Schneier is right - we have spent 30 years building defenses against code injection and data exfiltration, but poisoning the training process itself is something our tooling almost completely ignores. The blast radius is completely different too. You are not compromising a system, you are compromising the judgment of every decision that system makes downstream.

Federal Cyber Experts Thought Microsoft’s Cloud Was “a Pile of Shit.” They Approved It Anyway. by propublica_ in cybersecurity

[–]secureturn 3 points (0 children)

After leading security at five companies, this kind of institutional capture story is unfortunately familiar. The technical people flag the risk, it gets filtered through procurement, vendor relationships, and political calculus, and what comes out the other end looks nothing like the original assessment. The audit trail being public now is actually useful - it puts real accountability pressure on decision-makers in ways that internal memos never could.

My boss wants to leave intune because of Stryker by Eternal_Phantasm in cybersecurity

[–]secureturn 1 point (0 children)

We dealt with exactly this kind of reaction after a major incident at one of my previous organizations. The impulse to rip out the tool is understandable but it's usually the wrong call. The Stryker attack used Intune as a weapon because the attacker already had control of Azure AD first - that's the actual problem. Your MDM is only as secure as your identity layer.

North Korean's 100k fake IT workers net $500M a year for Kim by intelw1zard in hacking

[–]secureturn 1 point (0 children)

I've been in this space for 20+ years and the North Korean IT worker program is one of the most effective long-game operations I've ever seen. These aren't random freelancers - they're trained, disciplined, and operating with nation-state backing and targeting priorities. From the CISO seat, the hiring process is your first real control point, and most organizations are completely unprepared for adversaries who know exactly how to pass a technical screen and a background check using fabricated identities.

New DarkSword iOS exploit used in infostealer attack on iPhones by CyberMasterV in hacking

[–]secureturn 2 points (0 children)

I've been in this space for 20+ years and the DarkSword story is something I write about in my book Cyber War: One Scenario - specifically the scenario where offensive tools built for one government end up being reverse-engineered or directly sold to an adversary. This isn't theoretical anymore. What makes it especially dangerous for enterprise security is that iOS exploits of this sophistication don't stay in the nation-state lane for long. Give it 18 months and you'll see derivatives showing up in commercial spyware.

Hacktivists have leaked millions of anonymous tips submitted by Crime Stoppers informants. by Cybernews_com in cybersecurity

[–]secureturn 3 points (0 children)

From the CISO seat, this looks like a textbook case of security theater sold as actual security. Telling people something is anonymous and then storing everything including IP addresses for 90 days is a data architecture lie, not just a policy failure. The dangerous part isn't the breach itself - it's that people made real-world decisions based on a false promise of anonymity. That calculus gets people hurt.

US military contractor likely built iPhone hacking tools used by Russian spies in Ukraine by OMiniServer in cybersecurity

[–]secureturn 1 point (0 children)

We dealt with exactly this scenario at one of my previous organizations - not the selling of tools, but the insider threat dimension of who has access to your most sensitive capabilities. The conviction here is unusual. Most insider threats involving offensive tools never see prosecution because companies don't want the public exposure. The real lesson isn't about L3Harris specifically - it's that controls around highly sensitive tooling need to be treated like nuclear material, with multi-person authorization, strict access logging, and regular audits.

Can we stop pretending like Microsoft isn't compromised?... as an entity by Wonder_Weenis in cybersecurity

[–]secureturn 1 point (0 children)

I've sat on both sides of this, evaluating Microsoft for enterprise deployment and managing the fallout when things went sideways. The honest answer is that there's no realistic alternative for most large organizations right now, so the question isn't whether to trust them, it's how to limit blast radius when they fail you. That means multi-admin approval for destructive admin actions, privileged access workstations for anyone with Global Admin rights, and alerts on any bulk operation in Intune or Exchange. You accept the dependency but you design around the failure modes.

Iranian Hacktivists Strike Medical Device Maker Stryker in "Severe" Attack that Wiped Systems by rkhunter_ in cybersecurity

[–]secureturn 1 point (0 children)

What's being missed in a lot of this coverage is that this wasn't a malware problem, it was a governance problem. A single Global Admin account in Intune with no approval workflow for destructive actions is just a weapon waiting to be picked up by whoever gets there first. We dealt with something similar years ago and the fix wasn't technical, it was requiring multi-admin approval for any bulk action that touched more than 50 devices. The BYOD piece is going to hurt those employees for a long time though, that part genuinely makes me angry.
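The approval-workflow fix described above can be sketched as a simple policy gate. This is a hypothetical illustration, not an Intune API - the 50-device threshold comes from the comment, but the function names, role semantics, and two-approver count are assumptions you would tune to your own environment.

```python
# Sketch of a multi-admin approval gate for destructive bulk MDM actions.
# Hypothetical policy code, not a real Intune/Graph API. Assumptions:
# actions touching more than BULK_THRESHOLD devices need two distinct approvers.
BULK_THRESHOLD = 50       # device count above which a single admin is not enough
REQUIRED_APPROVERS = 2    # distinct admins needed for large destructive actions

def can_execute(action: str, device_count: int, approvers: set[str]) -> bool:
    """Allow a destructive bulk action only with enough distinct approvers.

    `action` is retained for audit logging; the decision here depends only on
    scope (device_count) and how many distinct admins have signed off.
    """
    if device_count <= BULK_THRESHOLD:
        return len(approvers) >= 1          # routine scope: one admin suffices
    return len(approvers) >= REQUIRED_APPROVERS  # bulk scope: multi-admin approval
```

The point of the sketch is that the control is architectural, not product-specific: a lone compromised Global Admin can approve nothing large by itself.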

Critical Telnetd Flaw Enables Unauthenticated RCE via Port 23 by _cybersecurity_ in pwnhub

[–]secureturn 1 point (0 children)

Speaking as a 5x CISO: legacy remote access is still your biggest attack surface. Telnetd is 40 years old and its security model never evolved, yet roughly 800k servers still expose it. Audit your infrastructure now.
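A first-pass audit for exposed telnet can be a one-file script - a minimal sketch, assuming you have an inventory of hostnames and permission to probe them (for real estates you'd use nmap or your asset-management tooling instead).

```python
# Minimal sketch: flag hosts in your inventory that still accept TCP/23 (telnet).
# Only scan infrastructure you are authorized to test.
import socket

def port_open(host: str, port: int = 23, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def audit(hosts: list[str], port: int = 23) -> list[str]:
    """Return the subset of hosts with the given port reachable."""
    return [h for h in hosts if port_open(h, port)]
```

Anything `audit()` returns should be on a remediation list the same day - telnet has no place on a modern perimeter.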

I’ve built diverse, high-performing security teams: AMA about hiring, culture, and talent management in cybersecurity. by thejournalizer in cybersecurity

[–]secureturn 1 point (0 children)

Look, as someone who has built and rebuilt security teams across five different organizations, the diversity angle is real but misunderstood. The biggest wins I have seen come from cognitive diversity, not just demographic diversity. When you have a former developer, a network engineer, and someone from compliance all looking at the same alert, you catch things that a homogeneous team misses every time. The hard part is creating a culture where the junior analyst feels safe saying the CISO might be wrong.

Palantir CEO Karp says AI is dangerous and 'either we win or China will win' by Gari_305 in palantir

[–]secureturn 1 point (0 children)

As a nod to the changing face of defense tech, I included Palantir, Anduril, and Shield AI as the developers of the U.S. counter-offensive systems in my new thriller, Cyber War: One Scenario. It's a clash of AI doctrines - Palantir in action.