
On March 20, the White House released its long-awaited National Policy Framework for Artificial Intelligence, a set of legislative recommendations addressed to Congress. The four-page document, described by the White House as a “comprehensive national legislative framework,” is thin on details and breaks no new ground. It does, however, clarify the Trump administration’s regulatory priorities regarding children’s safety, data center infrastructure, intellectual property, censorship, innovation, and workforce development. The framework also instructs Congress to “preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations, not fifty discordant ones.”
The AI framework marks the Trump administration’s latest move in an evolving campaign against state AI regulation. In the first wave, the AI industry argued that AI risks are already covered by existing laws and do not require AI-specific legislation. State policymakers, however, have roundly rejected that argument. Nearly 100 measures across 38 states reflect a legislative judgment that existing law is insufficient to manage the misuse and consequences of AI systems.
Having lost that argument, industry advocates have pivoted to preemption, arguing that Congress ought to preempt AI-specific legislation at the state level, and that generally applicable state laws are the only ones worth preserving. But this position draws on the same contestable logic: that AI can be effectively governed on the same terms as other digital technologies. Policymakers who rejected the first-wave argument should also reject this second-wave argument.
Preemption Without Precedent
When Congress expressly preempts state law, it typically includes statutory language specifying which types of state laws are displaced. The scope of preemption turns on the language Congress chooses, and courts have developed interpretive frameworks around recurring statutory formulations.
The White House’s preemption proposal borrows language from other areas of law, such as “undue burden,” that has no established meaning in federal preemption jurisprudence. Rather than leaving it for courts to sort out, policymakers should scrutinize the preemption proposals before codifying an AI federalism framework that would handcuff states’ ability to address AI’s sprawling social and economic impacts.
Preempting Unduly Burdensome State Laws
The AI framework targets state laws that impose an “undue burden”—a phrase drawn from Dormant Commerce Clause (DCC) doctrine. Under the Pike balancing test, a state regulation is unconstitutional if the burden it imposes on interstate commerce is clearly excessive in relation to the putative local benefits. Nothing prohibits Congress from adopting this test as a preemption standard. But doing so would leave a great deal of uncertainty.
When does a burden become undue? The Court’s DCC jurisprudence provides little guidance here, for two reasons.
First, under the DCC, the relevant burden is on interstate commerce. Under the White House formulation, by contrast, the relevant burden is on AI development and use. These are not necessarily the same. A state law that imposes compliance requirements might survive a DCC challenge because it does not unduly burden interstate commerce, yet be preempted under the White House’s proposal because it slows AI adoption.
Second, under Pike’s balancing test, courts must weigh the burden on interstate commerce against state regulatory interests. The White House’s undue burden standard contains no comparable offset. Without a balancing component, the standard is a one-sided inquiry into regulatory burdens without regard for state sovereign interests.
The framework extends the “undue burden” test to the “use of AI for activity that would be lawful if performed without AI.” In the abstract, this sounds reasonable. In practice, a preemption standard that treats AI-enabled conduct as equivalent to human conduct would prevent states from regulating AI risks that are different in degree and kind.
Two examples illustrate the problem. Setting one’s own prices is lawful. But when competitors feed proprietary data into a shared pricing algorithm, the software can facilitate coordination at a speed and scale that existing antitrust law was not built to reach. Providing emotional support is lawful. But AI systems that simulate therapeutic relationships create well-documented risks of dependency, emotional manipulation, and psychological harm.
Saving Generally Applicable State Laws
The White House framework acknowledges, as it must, that states retain “traditional police powers” to protect the health, safety, and well-being of their citizens. But the framework would limit that authority to the enforcement of laws of “general applicability,” including “particular laws to protect children, prevent fraud, and protect consumers.”
Like the “undue burden” preemption standard, the preservation of “generally applicable” state laws has no pedigree in preemption statutes. Presumably, it was borrowed from, or inspired by, constitutional doctrines that serve different functions. For example, under the First Amendment’s Free Exercise Clause, a law targeting religious conduct triggers strict scrutiny, but a “neutral, generally applicable” law survives challenge even when it burdens religious practice. In the Free Exercise context, the question is whether a law singles out religious conduct. In the AI preemption context, the question is whether a law singles out AI systems. Again, there is nothing inherently problematic about a sui generis preemption standard. But courts will give it meaning in ways that may upset legislative expectations.
The White House has a clear picture of what would be preempted. Chief among the targets are state AI safety laws that impose reporting requirements on frontier AI developers and potential liability for catastrophic harms to human life or infrastructure. Colorado’s AI Act would also be preempted because it expressly targets “high-risk” AI systems.
By contrast, generally applicable state laws would be saved. This includes traditional negligence, product liability, and anti-discrimination laws, provided they are not amended to single out AI technology for special treatment. That’s the rub. Generally applicable state laws may not address AI’s unique features, affordances, or risks. Industry advocates know this, and they exploit gaps in existing laws to avoid regulation, responsibility, and liability.
The industry strategy operates in two steps. First, it resists AI-specific legislation by arguing that generally applicable law already provides adequate safeguards. Second, when plaintiffs or regulators attempt to apply that law to AI systems, the industry argues in court that the law was not designed to reach the unique features of AI and therefore does not apply.
Lawmakers can prevent this litigation arbitrage by rejecting the “generally applicable” preemption standard that enables it. For a preview of the bait-and-switch, look to the pending litigation against AI chatbot developers in cases involving suicide and violence against others. The plaintiffs press state-law claims under generally applicable negligence and product liability law. The AI developers deny accountability under both. They argue that chatbots are services, not products, and thus fall beyond the scope of product liability laws. And they disclaim liability under negligence law, arguing they owe no duty of care to users who turn to chatbots for therapy and clinical advice.
Similar gaps and ambiguities pervade every area of law that AI touches. As another example, some states require employers who use automated hiring tools to conduct algorithmic bias audits. These laws target well-documented risks of racial and gender bias in algorithmic decision tools. Assessed at the level of the underlying anti-discrimination norm, such laws are generally applicable. Assessed at the level of the statutory text, they are AI-specific. In these and other cases, the White House framework provides no instruction on which level of generality governs. Assuredly, the industry will challenge any such law on the grounds that it is not generally applicable and thus preempted.
Shielding AI Developers
The White House preemption framework also singles out an entire category of AI activity to receive immunity—AI development. In practical terms, this would shield leading AI companies, such as Google, Microsoft, OpenAI, Anthropic, and xAI, from regulatory oversight and liability for their design choices and any downstream harms. The solicitude reflects the Trump administration’s judgment that any state regulation of AI development is impermissible “because it is an inherently interstate phenomenon with key foreign policy and national security implications.”
A blanket ban on all state regulation of AI development is considerably overinclusive. AI is indisputably an interstate phenomenon with geopolitical and security implications. But those are reasons why Congress can or should regulate AI—not reasons why states cannot.
The framework would prevent states from penalizing AI developers for a “third party’s unlawful conduct involving their models.” This provision would bar states from holding AI developers responsible when someone, for example, uses the developer’s model to commit fraud or generate illegal content. The practical consequences are significant. Many harms traceable to AI deployment originate in choices made during development. State tort and consumer protection laws have long held manufacturers accountable for defects that make foreseeable misuse more likely. But the White House framework would override that principle for AI. A state that cannot regulate AI development would, in many instances, be constrained to addressing symptoms rather than causes of AI-related harms.
Courts will also struggle to distinguish between laws that regulate AI development and those that regulate deployment. Consider a state law requiring bias audits of training data before a model is released to consumers. Does this regulate development or deployment? What about a transparency law requiring developers to report safety testing, but not mandating such testing? The boundary will only grow harder to maintain as the AI stack evolves. Agentic AI systems that autonomously retrieve data, fine-tune their own parameters, and execute multi-step tasks in real time do not neatly separate into development and deployment phases. Courts asked to classify state regulations along this axis would face line-drawing problems that the framework neither acknowledges nor resolves.
Abdication Is Not Regulation
Preemption is only half the equation. The framework asks states to cede regulatory authority, but what does it offer in return? Remarkably little. Child safety is the sole area in which the framework would impose obligations on AI developers, primarily around age assurance and sexualized content targeting minors.
More telling is what and whom the framework leaves out. It contains no provisions on algorithmic discrimination in employment, lending, housing, or insurance. No limits on AI-assisted surveillance. No standards for AI in healthcare diagnosis or treatment recommendations. No reporting requirements for AI-related job displacement. No pre-deployment safety evaluation for frontier models. No provisions addressing AI-generated disinformation. By omitting these subjects, the framework aims to leave the industry minimally regulated while preventing states from filling the void.
Federalism offers better paths. Federal and state governments can cooperate and coordinate in AI regulation. The White House framework forecloses that possibility. It asks Congress to shut down the only governments that are regulating AI, in exchange for a federal regime that would not. Politics points in a different direction. Polls repeatedly show widespread, bipartisan support for AI regulation, even if it means slowing development.
The spate of AI-specific legislation over the past two years reflects a broad political judgment that existing law is insufficient. Congress has shown little appetite for overriding those judgments. Nothing in the White House’s AI framework is likely to change that.
– David S. Rubenstein. Published courtesy of Just Security.
