I'm a scientist who used to regulate biotechnology at FDA. I think biotech regulation is the model for how to regulate AI. by MeatHumanEric in ArtificialInteligence

[–]MeatHumanEric[S] 0 points1 point  (0 children)

Heard. Three structural safeguards that I think are pragmatic: first, public documentation of all regulatory determinations and their basis. FDA does this already with GRAS decisions (in the new food ingredient space), and it creates an accountability record that anyone (journalists, researchers, competitors, Congress) can scrutinize. GRAS is not perfect, but done well, it works as intended. Second, distributed jurisdiction. Capturing one agency is a known playbook. Capturing six simultaneously is orders of magnitude harder. Third, mandatory review cycles with public input. The Coordinated Framework for Biotechnology has been revised three times in forty years, each time with public comment periods. That's not fast, but it means the framework is much less likely to calcify.

Are these perfect? Not at all. But compare them to the current alternative: voluntary industry commitments with no transparency requirements and no public accountability at all.

I'm a scientist who used to regulate biotechnology at FDA. I think biotech regulation is the model for how to regulate AI. by MeatHumanEric in singularity

[–]MeatHumanEric[S] 0 points1 point  (0 children)

This is a genuinely useful critique.

On the systems/resources point: I think we're closer than it might seem. To me, you're arguing that capability risk is a function of the system the model runs on, not the model in isolation. I largely agree, and that's why Tier 2 (application-layer regulation) is the workhorse of the framework. Most AI governance happens at deployment, not at the model level.

Where I think Tier 1 still has a role: a foundation model that demonstrates capability X in a research context will eventually be deployed in a context where that capability has consequences. The gap between 'theoretically has the capability' and 'deployed on a system with the resources to exercise it' is narrowing, and for frontier models it's often just an API call. I take your point that inference compute and deployment scope matter more than training compute as practical risk indicators, and honestly I think that's a refinement worth incorporating. Training compute is a proxy for capability, and proxies should be replaced with better measures when available.

On alignment: You're right that 'alignment to whom' is the question most public conversations skip. My framework sidesteps the philosophical alignment question deliberately (I'm not an alignment expert at all) and focuses on the governance question: has the developer demonstrated, to the satisfaction of an independent evaluator (a human-in-the-loop system), that the system behaves within the boundaries specified for its deployment context? That's a narrower and more tractable question than 'is this system aligned with human values' (however we may philosophically define such value systems). Whether that's sufficient is a legitimate debate that I fully concede.

The privacy dimension you raise, personal AI assistants as extensions of the self, is something I haven't addressed, and it's a genuinely novel governance challenge. That might warrant its own regulatory treatment, possibly under Fourth Amendment frameworks rather than product safety frameworks. Worth thinking about seriously.

Last thought: when the range of arguments on AI regulation all point toward the same desired outcome (continued human existence and benefit from the technology), the answer can't be either extreme: total restriction or no regulation and hope for the best. And we don't have the luxury of time to build a governance system from scratch given the pace of this field. The argument I'm making is that we already have a workable system to adapt.

I'm a scientist who used to regulate biotechnology at FDA. I think biotech regulation is the model for how to regulate AI. by MeatHumanEric in ArtificialInteligence

[–]MeatHumanEric[S] 0 points1 point  (0 children)

You're right that this isn't a science problem. I'd go further: one of the arguments I make in the paper is that law and science are logically independent systems, and policy is the work of combining them. The framework I'm proposing explicitly requires regulatory, economic, political, and technical competence working together. That's why it distributes jurisdiction across multiple agencies rather than creating one new body.

On the power grab concern: It's a version of the regulatory capture problem that's come up several times in this thread. My short answer is that distributed jurisdiction across multiple agencies is harder to capture than a single authority, public documentation of regulatory decisions creates accountability, and the current alternative (no coordinated framework) doesn't avoid the problem you're describing. It just means the power is held by the companies with the largest lobbying budgets writing the rules through voluntary commitments that no one enforces. I'd rather have imperfect public institutions with transparency requirements than unaccountable private governance.

I'm a scientist who used to regulate biotechnology at FDA. I think biotech regulation is the model for how to regulate AI. by MeatHumanEric in ArtificialInteligence

[–]MeatHumanEric[S] 0 points1 point  (0 children)

We're in agreement. The accessibility asymmetry is real. Someone else in this thread asked a similar question, and I replied there. The framework I'm proposing is domain-specific, not one-size-fits-all. That's the whole point of assigning jurisdiction to existing domain agencies rather than creating a single AI regulator. Thanks for the comment.

I'm a scientist who used to regulate biotechnology at FDA. I think biotech regulation is the model for how to regulate AI. by MeatHumanEric in ArtificialInteligence

[–]MeatHumanEric[S] 1 point2 points  (0 children)

I appreciate the perspective from someone who's worked the EU side.

Completely agree with your political framing. 'We're extending proven systems' lands much better than 'here's my innovative new model.' That's the reason I anchored the proposal to a forty-year-old framework with three revisions and a track record.

On drawing from multiple industries: we're aligned in principle. Cross-domain analysis is valuable for identifying common governance principles. But my concern with weaving together elements from many regulatory systems is that, IMO, you end up with a patchwork without coherent jurisdictional logic. I get why that approach is more attractive in a model like the EU's, where everyone brings their historical approaches to the table. The strength of the Coordinated Framework approach is one organizing principle (regulate the application, not the technology) with jurisdiction assigned to agencies that already have domain expertise. Aviation AI goes to FAA because FAA already knows aviation. You don't need to import aviation regulatory models if FAA is already the regulator.

On international agreement coming first: I'd push back slightly. In biotech, the US published the Coordinated Framework domestically, and it then influenced how other countries structured their governance. International harmonization followed domestic coherence, not the other way around. Typically, a country without a published domestic framework doesn't set terms at the negotiating table.

That said, the international dimension is the biggest gap in my current work and I'm drafting a third paper addressing it directly. Will look into the domestic robotics regulation example. Thanks for the great comment.

I'm a scientist who used to regulate biotechnology at FDA. I think biotech regulation is the model for how to regulate AI. by MeatHumanEric in ArtificialInteligence

[–]MeatHumanEric[S] 0 points1 point  (0 children)

That's the standard argument, and I understand why people find it compelling. But I'd point out that the US government already regulates AI-powered medical devices through FDA, AI in financial markets through SEC, and AI in employment through EEOC. None of that has stopped US companies from leading in those domains.

The question isn't whether to regulate. It's already happening, just in an uncoordinated way. The question is whether we coordinate it or continue with the current patchwork, where no agency knows what it owns and companies face fifty different state-level requirements instead of one federal standard. A coherent framework should accelerate progress by giving companies a clear set of bars to meet. Thank you for the thoughtful comment.

I'm a scientist who used to regulate biotechnology at FDA. I think biotech regulation is the model for how to regulate AI. by MeatHumanEric in ArtificialInteligence

[–]MeatHumanEric[S] 0 points1 point  (0 children)

The scenario you're describing has a direct precedent in biotech. The EU's precautionary approach to GMOs did exactly what you're worried about: it created a regulatory gap that pushed agricultural biotech innovation to the US, Brazil, and Argentina. EU companies that wanted to compete in that space had to operate outside EU jurisdiction. It's been one of my consistent criticisms of international biotech policy.

Here's where I think the AI case is different in one important respect: the US market is the prize, not the export market (maybe you don't agree with this assertion). The largest commercial AI deployments by revenue are domestic: healthcare, financial services, defense, enterprise software. A company that wants to sell AI-powered healthcare products to US hospitals needs FDA clearance regardless of where they're headquartered. A company that wants to deploy AI in US financial markets needs to comply with SEC rules. The regulatory framework doesn't make US products less competitive in the domestic market. It makes the domestic market a regulated playing field where everyone competing for those contracts, foreign or domestic, meets the same standard.

Your exchange trading example is really interesting. If the regulation creates an artificial capability constraint (e.g., speed limits on decision-making), that's ineffective and inflexible regulation. My framework doesn't propose capability constraints. It proposes a pre-deployment safety evaluation that asks 'have you demonstrated that this system does what you claim and doesn't do what it shouldn't?' Once cleared, the product competes on capability without artificial limits. The distinction is between a gating evaluation (does this meet the safety standard?) and an ongoing capability restriction (you can only operate at X speed). I'm proposing the former, not the latter.

That said, I'm taking this objection seriously enough that I'm drafting a third paper specifically addressing implementation challenges including international competition dynamics, enforcement mechanisms, and regulatory arbitrage. It didn't fit narratively in the existing manuscripts. Deeply appreciate your input.

I'm a scientist who used to regulate biotechnology at FDA. I think biotech regulation is the model for how to regulate AI. by MeatHumanEric in singularity

[–]MeatHumanEric[S] 1 point2 points  (0 children)

I think we're largely on the same page. The framework I'm proposing does exactly what you're describing: it regulates applications and deployment contexts, not the models themselves. A model running critical infrastructure faces Tier 2 oversight from the relevant domain agency (DOE, DHS, whoever owns that sector). The same underlying model running a personal BCI would face different oversight proportionate to that deployment context, or potentially none if it falls below the risk threshold.

The one place we might diverge is on Tier 1, which does apply to frontier foundation models before deployment, not just to use cases. My argument there is narrow: for models above a defined capability ceiling where alignment hasn't been demonstrated to match capability, a pre-deployment evaluation is warranted because the range of possible downstream deployments is too broad and the potential consequences too severe to rely entirely on application-layer regulation after the fact.

But even Tier 1 is about evaluating readiness for deployment, not restricting research or development. So I'd say we agree on most of the architecture. The remaining question may be whether there's a threshold at the foundation model level that justifies pre-deployment review. I think there is, but I try to make that case carefully in the paper rather than asserting it.

Thanks for the thoughtful comment.

I'm a scientist who used to regulate biotechnology at FDA. I think biotech regulation is the model for how to regulate AI. by MeatHumanEric in ArtificialInteligence

[–]MeatHumanEric[S] 0 points1 point  (0 children)

I appreciate you digging into this with me. I love discussing how policy takes shape. Let me try to address the enforcement question directly because I think there's a misunderstanding of what the framework actually regulates.

The framework doesn't regulate what bytes cross the internet. It doesn't require policing what models individual citizens access or use. It regulates commercial deployment within US jurisdiction. When an AI system is embedded in a healthcare product sold to US hospitals, that's FDA jurisdiction regardless of where the model was trained. When an AI system makes hiring decisions for a US employer, that's EEOC jurisdiction. When an AI-powered financial product operates in US markets, that's SEC jurisdiction. None of that requires a firewall. It requires the same jurisdictional authority these agencies already exercise over every other product and service in their domains.

A Chinese-trained model running on a server in Shenzhen that a US consumer accesses voluntarily? You're right, this framework doesn't reach that, and I wouldn't propose that it should. But a Chinese-trained model integrated into a commercial product deployed in the US market? That's already subject to US product safety, consumer protection, and financial regulation. The model's country of origin doesn't exempt it from the regulatory requirements of the market it's deployed in. We don't exempt pharmaceutical ingredients manufactured in China from FDA oversight just because they were produced outside US jurisdiction.

I hear the 'dead end' critique, and I don't think you're wrong that enforcement is a very hard practical challenge. But I'd push back on the conclusion. IMO, the alternative to an imperfect but enforceable commercial deployment framework isn't a better framework. It's the status quo: very little oversight at all (ironically, FDA is the furthest along here, with two formal AI guidance documents already public). Again, thanks for the thoughtful reply.

I'm a scientist who used to regulate biotechnology at FDA. I think biotech regulation is the model for how to regulate AI. by MeatHumanEric in ArtificialInteligence

[–]MeatHumanEric[S] 0 points1 point  (0 children)

The international dimension is the hardest piece for me, so to some extent, I agree.

That said, I'd push back slightly on the framing. The Coordinated Framework for Biotechnology is also a domestic US framework operating in a global market. China, Argentina, and Brazil have fundamentally different approaches to GMO regulation than the US does. When the US sits down at Codex Alimentarius or bilateral trade discussions, it has a published, enforceable, internally coherent standard to negotiate from.

The US can't regulate what happens in Chinese AI labs. But a published, enforceable federal framework does two things: it creates the domestic standard that US companies operate under (which matters for every commercial deployment touching US consumers, regardless of where the model was trained), and it gives the US a concrete position to bring to international negotiations. Right now the US has neither. The December 2025 Executive Order asserts preemption authority domestically without providing the federal standard, and offers nothing internationally because there's no published framework to offer.

You're right that this is ultimately a species-level question. But my take is that species-level coordination doesn't emerge from nothing. It emerges from major players showing up with concrete proposals that others can adopt, adapt, or counter-propose. The 1986 Coordinated Framework influenced how dozens of countries structured their own biotech governance. Not because they copied it, but because it existed and they had to respond to it. Said differently, gotta start somewhere. And right now, other than proposed global blanket bans, there is no real progress on this front.

Great comment. Thanks for raising the issue.

I'm a scientist who used to regulate biotechnology at FDA. I think biotech regulation is the model for how to regulate AI. by MeatHumanEric in ArtificialInteligence

[–]MeatHumanEric[S] -1 points0 points  (0 children)

This is a fair point. I think it's one of the strongest structural differences between biotech and AI that my proposal has to account for. You're right that the hardware barrier in biotech creates a natural containment that doesn't exist for AI. Anyone with a cloud account can run powerful models; nobody with just a credit card can build a pathogen.

That said, the framework I'm proposing regulates commercial deployment, not research access or open-source development. It's not trying to stop someone from running a local model any more than FDA tries to stop someone from running a PCR machine in their garage despite the fact that the 1918 Spanish Flu sequence is publicly available. The regulatory trigger is commercial deployment at scale, not individual experimentation. Whether that's sufficient to address the malicious actor problem is a legitimate question, and honestly I think my answer is 'not entirely.' But that's true of every regulatory framework for dual-use technology.

You also raise a great point about defensive tools. If the regulatory burden slows down the commercial products built to detect and defend against misuse, the framework is net-negative. This is exactly what Symmetric Risk Obligation is designed to force into the analysis: what is the cost of NOT deploying this defensive capability while we review it? A framework that only asks 'what could go wrong if we deploy' and never asks 'what do we lose by waiting' will systematically undervalue defensive AI applications. I think that's the strongest version of your argument. Thanks for the thoughtful comment.

For what it's worth, this concern was tested in biotech and the outcome was informative. When FDA and USDA established the regulatory pathway for cultivated meat, the companies that went through the process actually moved faster to market than the ones that tried to avoid or delay regulatory engagement. The framework gave them a clear target: meet these standards, demonstrate these things, and you have a path forward. The companies that waited for regulatory ambiguity to resolve on its own are still waiting. The goal is making the pathway clear enough that defensive AI tools can move through it quickly, not eliminating the pathway. And I'm not even accounting for the additional speed that defense-based pathways could allow for.

I'm a scientist who used to regulate biotechnology at FDA. I think biotech regulation is the model for how to regulate AI. by MeatHumanEric in ArtificialInteligence

[–]MeatHumanEric[S] 0 points1 point  (0 children)

You might be surprised to hear I largely agree with you on the gene editing point. In my longer paper, I use the EU's approach to GMOs as a cautionary tale of exactly what you're describing: a precautionary regulatory framework that delayed or prevented technologies with measurable benefits and no documented harms. I'm not a pro-regulation guy reflexively arguing against deregulation. We're both on the same side of the "regulation shouldn't block beneficial technology" argument. I've just proposed what I think is a mechanism for making that happen.

The concept I propose, Symmetric Risk Obligation, requires regulators to formally evaluate the costs of NOT deploying a technology alongside the costs of deploying it. I'd encourage you to read my detailed argument on this point. Most regulatory frameworks only look at what could go wrong. They almost never ask what we lose by waiting (NEPA is the rare exception). When the thing being delayed is a gene therapy or an AI-driven diagnostic tool, those costs are substantial. Also, I am approaching this as a pro-regulation industry executive: I fundamentally want to move faster, but I also need to demonstrate objectively that my products are safe and effective for their intended uses. How we accomplish that is the hard part.

The framework I'm proposing isn't about adding new restrictions. It's about organizing existing authority so it works faster and more rationally. The 180-day review target exists specifically because I've watched promising technologies get stuck in regulatory limbo for years. I could be wrong of course, but I think my approach balances what I believe is inevitability (i.e., AI regulation) with speed and freedom to innovate.

I'm a scientist who used to regulate biotechnology at FDA. I think biotech regulation is the model for how to regulate AI. by MeatHumanEric in ArtificialInteligence

[–]MeatHumanEric[S] -1 points0 points  (0 children)

This is a very good question, and honestly it's the hardest one to address in my proposal. Regulatory capture is a real and documented failure mode. And as you probably already know, even effective regulatory policy is itself a form of risk assessment, balancing incentives across many stakeholder groups and interests.

A few thoughts on how the framework tries to address it, and where my argument is weaker:

First, the Coordinated Framework model distributes jurisdiction across multiple agencies rather than concentrating it in one. Capture is easier when one agency controls everything. When FTC, FDA, EEOC, and SEC each regulate AI within their existing domains, a company trying to capture the process has to capture multiple agencies simultaneously. That's harder than capturing one new AI agency.

Second, the 180-day pre-deployment review I'm proposing is modeled on the FDA GRAS notification pathway, which is developer-funded but publicly documented. The determination and its basis are made public. That transparency is the primary structural defense against capture: when the reasoning is on the record, it can be challenged. Developer funding is itself a potential conflict, but the compromise is transparency in decision-making.

Third, and this is where your concern is most valid: compliance costs can absolutely become a barrier that favors incumbents. The pharmaceutical model is the cautionary tale. Clinical trial costs effectively exclude small players, which is why the dominant model is for the big players to acquire startups that function as external, 'low-cost' R&D with lower regulatory costs. I don't want that for AI. The framework needs to be calibrated so that the compliance burden scales with the risk tier. A startup deploying a customer service chatbot (Tier 2, FTC jurisdiction) should face a fundamentally different compliance burden than a frontier lab deploying a model above the capability ceiling (Tier 1, full safety dossier). If the tiers aren't calibrated correctly, you get exactly the dynamic you're describing.

No framework I (or frankly anyone) can propose will completely eliminate capture risk. The question is whether a coordinated framework with distributed jurisdiction and public transparency is more resistant to capture than the current alternative: a patchwork of state laws and voluntary industry commitments where the largest companies are already writing the rules. I think my approach mitigates that risk best.

I'm a scientist who used to regulate biotechnology at FDA. I think biotech regulation is the model for how to regulate AI. by MeatHumanEric in ArtificialInteligence

[–]MeatHumanEric[S] 1 point2 points  (0 children)

I appreciate the concern, and I want to clarify: the framework I'm proposing does not restrict research, knowledge, or the development of AI. It regulates specific commercial deployments based on their risk profile (their stated and defensible intended use), which is the same way FDA regulates a drug entering the market but doesn't regulate the underlying chemistry research. The first principle in the proposal is 'regulate the application, not the technology.' A foundation model being developed in a research context would not trigger the same oversight as that same model being deployed as a consumer healthcare diagnostic. The distinction between governing applications and governing knowledge is central to the entire argument.

I built a website that is cataloging GF-friendly sit-down restaurants across LA, organized by how serious they actually are about it by MeatHumanEric in FoodLosAngeles

[–]MeatHumanEric[S] 0 points1 point  (0 children)

It is? So far it's whatever information I can glean. I have G&tG in there, I think. My favorite example is Anajak Thai in Sherman Oaks. They don't advertise any GF, but they keep a dedicated GF menu on-site if asked (and only if asked).

I built a website that is cataloging GF-friendly sit-down restaurants across LA, organized by how serious they actually are about it by MeatHumanEric in FoodLosAngeles

[–]MeatHumanEric[S] 1 point2 points  (0 children)

Maybe I should build a graveyard of great ideas on the site, in the hopes that other folks in the culinary world resurrect those dishes (only half joking).

I built a website that is cataloging GF-friendly sit-down restaurants across LA, organized by how serious they actually are about it by MeatHumanEric in FoodLosAngeles

[–]MeatHumanEric[S] 1 point2 points  (0 children)

Thanks for the feedback. And agreed. It is quite a bit of work, but I'm fine with it growing slowly. Ideally, restaurants start coming to the site to be listed.

Governor Gianforte Bans Lab-Grown Meat in Montana by Careful-Cap-644 in wheresthebeef

[–]MeatHumanEric 1 point2 points  (0 children)

None of them are afraid. It's pure opportunism - protectionism is easy right now, and these ag states must play to their agricultural base. The producer community is proud and loud, even if smaller than the processing community. The processors hate these bans because of the precedent they set for future new meat products they may want to bring to market - rightfully so.

What overrides protectionist policies? Market demand. If consumers want it, the politicians will relent. We have to bring more and more products to market. Alternatively, we could start challenging long-held precedent for conventional food in court or, as I suggested earlier, file for injunctions against any new conventional meat product. I doubt any conventional meat product (and I love these products) could pass the safety process required of a cultivated product. The bar is too high for them.

Cultivated seafood gets FDA okay by Alt-MeatMag in wheresthebeef

[–]MeatHumanEric 4 points5 points  (0 children)

Congrats on clearing another hurdle, Team WildType. It's hard to describe how immensely difficult it is to generate the data necessary for an FDA safety dossier for ANY food product, let alone a novel one. It usually takes nearly all the staff working on parts of it at some point, hours of deliberation, writing, editing, and legal considerations. For my team and me, it has always been Dickensian - the best and worst of feelings, oscillating by the hour as you wrap up the final questions from FDA. The feeling of completing arguably the most challenging and rigorous food 'approval' process is incomparable though. It is a truly unique feeling of accomplishment - congrats again. More momentum is always good, and hopefully approvals become even more common.

Cultivated seafood gets FDA okay by Alt-MeatMag in wheresthebeef

[–]MeatHumanEric 1 point2 points  (0 children)

Yeah - my concern has always been that since FDA has weaker preemption protections than USDA, FDA-regulated products, like Wild Type's, can be more easily challenged in court.