An evaluation of AI policy recommendations by Anthropic’s CEO may help guide ongoing regulatory efforts at the state and federal level.
As the federal government and the 50 states debate how to proceed with artificial intelligence (AI) governance, the CEO of a major AI lab has published a thorough essay on the major risks he sees from continued AI advances.
Anthropic CEO Dario Amodei’s essay, “The Adolescence of Technology,” stresses a few key principles to safeguard against the worst-case AI outcomes. Application of these principles at the state and federal level may result in a more reasoned, evidence-driven approach to AI governance. Below, I evaluate Amodei’s approach and consider how it might be further strengthened.
Who is Amodei?
Amodei is the CEO of Anthropic. For those less familiar with the ins and outs of the few key players shaping the direction of AI progress in the United States (and the world), Amodei is near the top of the list. He’s been in high-ranking positions at leading AI firms for more than a decade, and his views on AI policy carry significant weight.
Amodei has been especially vocal about the risks posed by AI. Notably, he left OpenAI because he feared that the lab did not take the downsides of AI seriously enough. Consequently, he and his company have often made headline news:
- “Anthropic CEO Dario Amodei Predicts Half of All Entry-Level Office Jobs Will Disappear”
- “Anthropic’s Chief Executive Acknowledges Risks of Huge Spending on A.I.”
- “Amodei on AI: ‘There’s a 25% chance that things go really, really badly’”
Admittedly, Anthropic is not your average AI company. They seem to hold themselves to different standards—and have different goals—than other labs. Take their word for it:
Anthropic occupies a peculiar position in the AI landscape: we believe that AI might be one of the most world-altering and potentially dangerous technologies in human history, yet we are developing this very technology ourselves. We don’t think this is a contradiction; rather, it’s a calculated bet on our part—if powerful AI is coming regardless, Anthropic believes it’s better to have safety-focused labs at the frontier than to cede that ground to developers less focused on safety.
The upshot is that Amodei is a technically savvy, thoughtful individual leading a company that is conscious of both the positives and negatives of AI. Nowhere is this more apparent than in his most recent essay (analyzed here), which focuses on AI risks, and his previous essay, “Machines of Loving Grace,” which detailed the brighter future AI could bring about.
As policymakers search for a proper framework for governing AI, they should consider Amodei’s views—while maintaining a degree of healthy skepticism. Amodei has significant skin in the game (Anthropic’s valuation is skyrocketing) that may directly or indirectly alter his analysis of AI policy. Likewise, he brings a particular perspective to what ought to be an objective exercise. But it is safe to say that his two cents are worth paying attention to.
Principles
Amodei articulates several overarching principles that should guide AI policy:
Evidence-Driven Approach
AI risks ought to be discussed and governed in a “realistic, pragmatic manner,” according to Amodei. This approach—one that is “sober, fact-based, and well-equipped to survive changing tides”—has not always been followed. He notes that AI policy discussions have seemingly swung from an excessive focus on risks in 2023 and 2024 to an inflated celebration of AI’s potential benefits starting in 2025. The essay emphasizes that “Anthropic cautiously advocated for a judicious and evidence-based approach to these risks” regardless of whether addressing AI risks is politically popular.
Application of this approach would safeguard against premature action. Amodei observes that earlier AI policy debates were dominated by “some of the least sensible voices,” who managed to “[rise] to the top, often through sensationalistic social media accounts.” He continues, “These voices used off-putting language reminiscent of religion or science fiction, and called for extreme actions without having the evidence that would justify them.”
As I have recounted on a number of occasions (see the Scaling Laws podcast), policymakers are rushing ahead with AI regulation that imposes interventions with little to no empirical grounding. For example, legislators in Utah and Washington want to require AI companion companies to notify users that they are using such a tool at specific intervals—sometimes as frequently as every hour. It’s unclear whether this will work, and it’s possible that these incessant banners may actually cause users to disregard important notifications. Critically, many such state laws provide inadequate support for gathering data on the efficacy of their interventions and lack sunset clauses that would allow legislators to meaningfully evaluate whether the law is working as intended.
Humility and Acknowledgment of Uncertainty
“Acknowledge uncertainty,” urges Amodei. “There are plenty of ways in which the concerns I’m raising in this piece could be moot. Nothing here is intended to communicate certainty or even likelihood… No one can predict the future with complete confidence—but we have to do the best we can to plan anyway.” He goes on to emphasize that “the hunt for such evidence must be intellectually honest, such that it could also turn up evidence of a lack of danger.”
Again, many state and federal proposals treat AI as a static, uniform technology. Proposed definitions of AI within these bills may not capture tomorrow’s AI models, which are likely to be significantly different in terms of capability and reliability. Amodei notes on several occasions that the technology is complex, evolving, and incapable of being entirely understood. Legislators should heed and match his humility.
Supporting Innovation / Avoiding Harm to Smaller Players
Amodei repeatedly stresses that regulations should reduce hurdles imposed on smaller, nascent AI companies that are not operating on the frontier of AI. He contends that Anthropic has “put a particular focus on trying to minimize collateral damage, for example by exempting smaller companies unlikely to produce frontier models from the law.” He points to SB 53 and the RAISE Act as examples of shielding smaller labs from undue burdens by conditioning the applicability of their provisions on $500 million or more in annual revenue.
Importantly, however, startups contest the idea that these revenue carveouts free them from higher compliance burdens. It follows that adherence to this principle should focus less on any given bill and more on the actual experience of AI actors.
Surgical, Disciplined Intervention
“Intervene as surgically as possible,” advises Amodei. “Addressing the risks of AI will require a mix of voluntary actions taken by companies (and private third-party actors) and actions taken by governments that bind everyone.” Throughout the essay, he limits government intervention to instances of market failure—instances in which labs may face collective action problems. For instance, labs do not have a clear business interest in disclosing certain information about their training practices, yet such information is likely essential to improving AI governance (see the first principle).
Even when the government intervenes, it should strive to “impose the least burden necessary to get the job done.” As noted above, this runs counter to many state laws and bills that may have been crafted without consulting AI labs of all sizes, and that lack sunset clauses to ensure a chance to amend laws that are more burdensome than intended. Amodei speaks to this point when he cautions against “drawing lines that seem important ex-ante but turn out to be silly in retrospect. It is just very easy to set rules about the wrong things, when a technology is advancing rapidly.”
Avoiding “Doomerism”
Bluntly, Amodei directs policymakers to “[a]void doomerism.” “Doomerism,” as defined by Amodei, refers not just to “the sense of believing doom is inevitable (which is both a false and self-fulfilling belief), but more generally, thinking about AI risks in a quasi-religious way.” Doomer-driven policy advocates have “called for extreme actions without having the evidence that would justify them,” he notes.
Given my experience testifying before congressional committees on AI governance, this lesson has not yet been heeded on the Hill. I have received numerous questions from lawmakers across the political spectrum that regurgitate exaggerated policy positions advanced by AI skeptics, such as the immediate demise of entire categories of jobs. Yet, as Amodei himself notes, AI adoption is likely to occur on a much slower timeline than doomeristic perspectives commonly suggest.
Reading Between the Lines
I applaud Amodei for his willingness to put his principles on paper. It’s far easier for CEOs to stay quiet on important policy debates than it is to affirmatively outline views and policy suggestions. My hope is that Amodei will continue to share similar essays and perhaps issue shorter blog posts that apply these principles to current bills (at 38 pages, this essay is likely too long for popular consumption). As AI stakeholders respond to this essay and otherwise debate AI policy, I’d encourage them to take two practices into account.
First, distinguishing “Powerful AI”—defined by Amodei as AI that, “in terms of pure intelligence, is smarter than a Nobel Prize winner across most relevant fields—biology, programming, math, engineering, writing, etc.”—from what I refer to as “Boring AI,” or AI that is not powerful. Though he makes this point early in the essay, the distinction between Powerful and Boring AI is lost later on. This increases the odds of people coming across excerpts of the essay and thinking that Boring AI warrants the same treatment as Powerful AI.
Second, calling out bad AI policy. Amodei mentions a focus on AI water usage as a poor use of legislative resources and public attention. This is helpful in directing legislators’ attention away from distractions. It would be valuable to learn more about the areas on which Amodei thinks legislators should spend less time, either because the current issues with the technology will likely be addressed or because there’s simply no there there.
My Own Two Cents
If AI policy stakeholders are to handle the adolescence of AI like adults, they must avoid clumping all AI tools together, indiscriminately treating all AI as the source of looming catastrophes. Amodei’s call for an evidence-driven approach is a necessary rebuke to the vibes-based policymaking that characterizes the legislative hearings I’ve been a part of at many state capitals.
As Amodei makes clear, there’s a real risk of poorly crafted laws inadvertently smothering the very innovation required to solve our most pressing societal challenges. True “adult” governance requires the courage to prioritize permissionless innovation for the vast majority of “Boring AI” applications, ensuring that regulatory intervention is reserved only for proven, empirical risks at the frontier. By embedding humility into our statutes—through sunset clauses and rigorous data-gathering requirements—legislators can replace static, stifling mandates with a dynamic legal infrastructure that evolves alongside the technology.
Ultimately, the litmus test for any AI policy should be whether it strengthens or subverts our core democratic values. A regulatory environment that favors incumbents through high compliance costs or nebulous “safety” standards can exacerbate some of the risks at the top of Amodei’s list, such as concentrations of power and economic inequality.
My own two cents is that we should seek a “Republic of Innovation” where the law provides the predictable guardrails necessary for investment and discovery, rather than a thicket of untested or imprecise mandates—such as hourly notification requirements and revenue-based thresholds—that fail to move the needle on actual safety. If we listen to knowledgeable voices in the room, like Amodei, we can move past the era of doomer-driven reactionary laws and toward a sophisticated legal reform agenda.
– Kevin Frazier is a Senior Fellow at the Abundance Institute, Director of the AI Innovation and Law Program at the University of Texas School of Law, a Senior Editor at Lawfare, and an Adjunct Research Fellow at the Cato Institute. Published courtesy of Lawfare.

