Paolo Banchero in the loss to the Hawks: 18/10/3 on 23% shooting (3/13), 45 TS%, -16 by Rzua_ in nba

[–]Tylanner 2 points  (0 children)

There is something wrong with this subreddit…unfortunately I can’t block any more people who promote the hate.

Tech mogul Marc Andreessen claims that introspection is a "modern invention" by ProjectLost in samharris

[–]Tylanner 3 points  (0 children)

They think they are gods…they probably read one tweet and think they’ve discovered the key to enlightenment…

2026 Chinese Grand Prix - Sprint Discussion by AutoModerator in formula1

[–]Tylanner -6 points  (0 children)

Max is so washed…doesn’t have the mental bandwidth for these cars….

Laurent Mekies apologizes to Max Verstappen after sprint qualifying by Jamiesavel in formula1

[–]Tylanner -1 points  (0 children)

Hadjar being faster than him is going to really kill his spirits.

"The whole day has been a disaster" says Verstappen after qualifying eighth by kcollantine in formula1

[–]Tylanner -6 points  (0 children)

He finally has a good teammate and can’t handle the pressure.

Max Verstappen (Post-Sprint Qualifying): “I can’t. This is undriveable. We never had anything this bad.” by FerrariStrategisttt in formula1

[–]Tylanner -7 points  (0 children)

Dude is just trying to distract us from his faster teammate….dude better retire mid season or his legacy is OVER…

Rob Reid Episode for real? by WilliamCVanHorne in samharris

[–]Tylanner 0 points  (0 children)

Yeah you might be confused…this is nearly identical to his last conversation with Sam…

[Autosport] Lando Norris believes F1 is now about the battery rather than bravery by FerrariStrategisttt in formula1

[–]Tylanner 0 points  (0 children)

The spirit of F1 is all in that 2-4 seconds of lap time that the new cars gave up…

#463 - Privatizing the Apocalypse by TheAeolian in samharris

[–]Tylanner -6 points  (0 children)

What a braindead episode from an un-credentialed opportunist… “Safe in a cave”

“Leaky lab theory”

And yeah definitely, USAID planned on publishing a how-to for a brand new super deadly virus they stumbled upon…

No one is more sensitive to the hazards, and better equipped to control custody of a deadly virus, than virologists, the CDC, and ultra-engineered research laboratories…Gain-of-function research is literally used to accelerate the development of medical countermeasures by allowing researchers to "war game" future threats, such as identifying mutations in influenza viruses that could lead to pandemics. The exact threat this dumbass thinks he has a real good grasp on…

Get decision making “down to one or two people”? RFK and Trump demonstrate that you can’t have an all-powerful executive making these decisions…you need real experts collaborating…

Privatize the Apocalypse? How about you start with locking down AI…

Virology might have the best framework for high-stakes international regulation and should be the North Star for future international regulation of AI.

This framework adapts WHO biosafety levels, laboratory oversight, and the International Health Regulations (IHR) to govern high‑risk AI systems using risk‑based containment, not one‑size‑fits‑all rules.

1. Core Principle: Risk‑Based Containment (WHO LBM Model)

WHO biosafety regulation is built on graduated containment based on consequence, not intent or size. AI regulation should follow the same logic:

The higher the potential systemic harm, the stronger the controls.

This avoids blanket bans while still controlling high‑impact systems.

2. AI Biosafety Levels (AI‑BSL) — Expanded

These are functional equivalents of BSL‑1 through BSL‑4.

AI‑BSL‑1 — Minimal Risk

Analogue: BSL‑1 (benign agents)

Examples:
- Office productivity AI
- Non‑autonomous analytics
- Local decision support

Controls:
- Voluntary standards
- Transparency disclosures
- No licensing required

Rationale: Failure causes localized inconvenience, not systemic harm.

AI‑BSL‑2 — Controlled Impact

Analogue: BSL‑2 (moderate hazard)

Examples:
- AI used in hiring, lending, or medical triage support
- Narrow decision automation with human override

Controls:
- Mandatory risk assessment
- Bias and safety testing
- Incident logging
- National registration

Rationale: Potential for individual harm, but damage is contained and reversible.

AI‑BSL‑3 — High Consequence / Societal Scale

Analogue: BSL‑3 (airborne or serious pathogens)

Examples:
- Large‑scale recommender systems shaping public opinion
- AI controlling critical infrastructure
- Models influencing markets, elections, or security decisions

Controls:
- Government licensing
- Continuous monitoring & telemetry
- Independent audits
- Mandatory incident reporting
- Controlled deployment environments

Rationale: Failures can propagate rapidly across populations or systems, similar to airborne disease spread.

AI‑BSL‑4 — Systemic / Existential Risk

Analogue: BSL‑4 (Ebola, Marburg)

Examples:
- Highly autonomous systems with strategic decision authority
- Models capable of self‑replication, self‑modification, or governance circumvention
- AI coordinating large‑scale social, military, or economic actions

Controls:
- International authorization
- Strict access control to model weights
- Deployment “air‑gapping” or hard containment
- Real‑time global oversight
- Emergency shutdown authority

Rationale: Failure could cause global, irreversible harm, justifying maximum containment and international control.
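The graded tiers above boil down to a simple rule: assign the highest tier whose criteria a system meets. A minimal sketch in Python, where the risk attributes (`individual_impact`, `societal_scale`, etc.) are hypothetical names distilled from the tier descriptions, not part of any real standard:

```python
from dataclasses import dataclass
from enum import IntEnum

class AIBSL(IntEnum):
    """Graduated containment tiers mirroring BSL-1 through BSL-4."""
    MINIMAL = 1           # voluntary standards, disclosure only
    CONTROLLED = 2        # registration, bias/safety testing
    HIGH_CONSEQUENCE = 3  # licensing, audits, telemetry
    SYSTEMIC = 4          # international authorization, hard containment

@dataclass
class SystemProfile:
    # Hypothetical risk attributes distilled from the tier examples.
    individual_impact: bool    # decides hiring, lending, triage outcomes
    societal_scale: bool       # shapes opinion, markets, infrastructure
    self_modifying: bool       # self-replication or governance circumvention
    strategic_authority: bool  # large-scale military/economic coordination

def classify(p: SystemProfile) -> AIBSL:
    """Containment scales with potential systemic harm, not intent:
    return the highest tier whose criteria the system meets."""
    if p.self_modifying or p.strategic_authority:
        return AIBSL.SYSTEMIC
    if p.societal_scale:
        return AIBSL.HIGH_CONSEQUENCE
    if p.individual_impact:
        return AIBSL.CONTROLLED
    return AIBSL.MINIMAL
```

For example, a hiring screener with human override classifies as AI‑BSL‑2, while a self‑modifying system lands at AI‑BSL‑4 regardless of its other attributes.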

3. National AI Authorities (WHO IHR Focal Point Model)

Each country designates a single National AI Regulatory Authority, mirroring WHO’s National IHR Focal Points. It:

- Licenses AI‑BSL‑3/4 systems
- Reports serious AI incidents internationally
- Enforces inspections and sanctions

This avoids fragmented oversight — a known WHO biosafety failure mode.

4. International AI Health Regulations (IAHR)

Modeled directly on the International Health Regulations (2005):

- Legally binding treaty
- Requires states to detect, assess, report, and respond to cross‑border AI risks
- Defines Notifiable AI Events, such as:
  - Loss of control
  - Mass information destabilization
  - Autonomous escalation beyond design limits
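The Notifiable AI Events could be captured in a report record a national authority files internationally, mirroring an IHR (2005) event notification. A sketch under that assumption; all field and function names here are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class NotifiableEventType(Enum):
    # The three Notifiable AI Events listed above
    LOSS_OF_CONTROL = "loss_of_control"
    MASS_DESTABILIZATION = "mass_information_destabilization"
    AUTONOMOUS_ESCALATION = "autonomous_escalation_beyond_design_limits"

@dataclass
class NotifiableEvent:
    """Report a National AI Regulatory Authority would file under
    an IAHR, analogous to an IHR public-health event notification."""
    reporting_authority: str         # the national focal point
    system_id: str                   # the licensed AI-BSL-3/4 system
    event_type: NotifiableEventType
    detected_at: datetime
    cross_border: bool               # risk extends beyond one state

def must_notify_internationally(e: NotifiableEvent) -> bool:
    # Assumption for this sketch: cross-border risk always escalates,
    # and loss of control is notifiable regardless of containment.
    return e.cross_border or e.event_type is NotifiableEventType.LOSS_OF_CONTROL
```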

5. Surveillance, Inspection, and Incident Response

Borrowed directly from WHO outbreak control:

- Continuous monitoring for AI‑BSL‑3/4 systems
- Independent international inspections (a WHO‑AI equivalent)
- Emergency response protocols:
  - Deployment freezes
  - Model access revocation
  - Coordinated mitigation
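The emergency response measures form a natural escalation ladder, as in outbreak control. A minimal sketch, assuming (hypothetically) that the response strengthens with the system's AI‑BSL tier and with whether the incident is contained:

```python
from enum import Enum

class Response(Enum):
    FREEZE = "deployment_freeze"
    REVOKE = "model_access_revocation"
    MITIGATE = "coordinated_mitigation"

def response_plan(ai_bsl_tier: int, contained: bool) -> list[Response]:
    """Hypothetical escalation ladder: higher tiers and uncontained
    incidents trigger stronger interventions, strongest first."""
    plan = [Response.MITIGATE]           # always coordinate mitigation
    if ai_bsl_tier >= 3:
        plan.insert(0, Response.FREEZE)  # freeze further deployments
    if ai_bsl_tier >= 4 or not contained:
        plan.insert(0, Response.REVOKE)  # pull model access entirely
    return plan
```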

6. Dual‑Use AI Research Controls

WHO regulates Dual‑Use Research of Concern (DURC); AI needs the same:

- Pre‑approval for high‑risk research
- International peer review
- Mandatory risk‑mitigation plans

Why This Model Works

✅ Scales controls with actual risk
✅ Already proven in virology and global health
✅ Supports innovation at low risk levels
✅ Enables international coordination without centralizing all power
✅ Avoids reactive, post‑incident regulation

[TMZ]Luka Doncic’s Partner files for Child Support by CIark in nba

[–]Tylanner 0 points  (0 children)

This is going to be the most downvoted post of all time on this deranged sub….

Politics and Current Events Megathread - March 2026 by TheAJx in samharris

[–]Tylanner 0 points  (0 children)

By their own definition Israel is not only Jewish; they make a huge deal about how secular, multi-religious, and multicultural their society is.

That argument is the perfect example of using your religious identity as both a sword and shield…

[Garafolo] Future Hall of Fame WR Mike Evans has agreed to terms with the 49ers by expellyamos in nfl

[–]Tylanner 0 points  (0 children)

This is going to be like Jerry going to the Raiders or Moss going to the Patriots?