Beyond a Manhattan Project for Artificial General Intelligence

The Apollo program, as a whole-of-society initiative, is a better model for an AGI mega-project than the narrowly focused Manhattan Project.

An artist’s illustration of artificial general intelligence. (Domhnall Malone, https://tinyurl.com/ykpzmwmt, Free to Use).

In November 2024, the U.S.-China Economic and Security Review Commission recommended that Congress establish a Manhattan Project-like program for developing artificial general intelligence (AGI). While U.S. leadership in AGI is vital to U.S. economic and national security, indexing an AGI mega-project to something as narrowly focused as the Manhattan Project would be a strategic mistake. Instead, modeling the approach on the broader Apollo program would provide a better template for whole-of-society competition, ensuring U.S. safety, security, and prosperity. While a large national program of this type might seem unrealistic in light of current efforts to shrink the federal government, in reality a whole-of-society effort is essential for accomplishing the Trump administration’s goal of enhancing America’s global AI dominance.

AGI’s potential to dramatically accelerate problem-solving across scientific, economic, and defense domains makes it a strategic imperative for maintaining America’s global leadership position. The United States needs AGI that achieves four critical goals: It must be trustworthy, reflect American values, broadly benefit Americans, and enhance national and economic security. Trustworthy AI is a necessary foundation for success across these goals. People will only adopt AI technologies they trust, and loss of public trust typically leads to over-regulation and poor adoption. People trust technology when they understand its risks and benefits, know how to mitigate the risks, and believe the benefits outweigh the costs.

The Manhattan and Apollo programs represent starkly different approaches to technological advancement. Both were massive government initiatives consuming approximately 0.4 percent of U.S. gross domestic product annually—equivalent to $100 billion today. However, the Manhattan Project was a classified, military-led effort focused on a single use case: building an atomic bomb. Its success ensured U.S. superpower status but was also marked by fear and destruction. In contrast, the Apollo program was a public, civilian-led, whole-of-society initiative that developed dual-use technologies—such as advanced guidance and propulsion techniques—that benefited both civilian and military applications. AI dominance will not come from leading in a single use case. Instead, the United States needs to lead in a wide range of use cases along the jagged frontier of AI.

The Dangers of a Manhattan-Project Approach

The nuclear power industry provides a cautionary tale for technological advancement: Despite causing only 11 deaths in the United States over 70 years, it lost public trust due to associations with nuclear weapons and fears about the invisible effects of radiation, dramatically slowing adoption. Indeed, the Manhattan Project frame would fundamentally undermine the trust goals articulated above. Democracy is predicated on the diffusion of power, and secretive, centralized AGI development would concentrate power in the hands of the few. There is no better way to amplify public suspicions about AI than by pursuing secret, militarized, and highly centralized AGI development.

Additionally, a Manhattan Project approach would contribute to a technological race with China that the United States might not win. Current AI development follows scaling laws, requiring massive investments in compute power, data centers, and energy infrastructure—precisely the kind of mega-projects in which China excels. China’s aggressive infrastructure development, including plans for 150 new nuclear reactors between 2020 and 2035 and annual power generation expansion five times greater than that of the United States, demonstrates its capability to rapidly scale AI resources. While U.S. export controls on advanced semiconductors impose costs on Chinese AI development, China can compensate through volume deployment of older chips, greater efficiency in GPU use, and abundant data resources enabled by looser intellectual property restrictions and its network of surveillance data. Export controls thus give the United States a clear advantage in compute, but the harder challenge lies in dramatically boosting energy output for AI.

The notion that AGI represents a “wonder weapon” that will, through its very existence, deliver decisive strategic advantage is both dangerous and misguided. This thinking—a problematic inheritance from the Manhattan Project framing—fails to recognize that AGI will be more like a general-purpose technology, valuable across countless fields and contexts.

As others have discussed, there is no doubt that AGI will be critical to national security and defense. That said, leading in defense applications is necessary but not sufficient for U.S. success. Ultimately, the safety, security, and prosperity of the United States will come not just from inventing transformative AI, but from becoming the country that most effectively applies transformative AI to solve problems and create value across society.

Since the emergence of transformer-based advances in 2017, global diffusion has been rapid and irreversible. Trying to lock up AGI knowledge will likely fail and stifle U.S. progress. Instead, the winning strategy is to lead in inventing and diffusing these technologies throughout society in ways that enhance trust, promote productivity, and solve concrete problems. This requires a whole-of-society mega-project reminiscent of the Apollo program—not the closed, secretive model of the Manhattan Project.

The Benefits of an Apollo Program Model

The Apollo program’s legacy is one of society-wide scientific innovation, transparency and openness, and U.S. space dominance. NASA’s educational outreach brought enthusiasm for space exploration to schools nationwide, introduced students to real-life engineering challenges, and opened STEM career paths for thousands through scholarship programs (including author Martell’s mother, who obtained a math degree in 1962 from an Apollo-related scholarship program). The program created numerous technological spin-offs and sowed the seeds of a vibrant commercial launch industry while inspiring global admiration for what an open, democratic society could accomplish.

An Apollo-style AGI program could pursue several key objectives. First, such a program could accelerate the diffusion of AI throughout American society. While targeted, narrow initiatives may seem more cost-effective in the short term, they fail to create the ecosystem-wide transformation necessary for true technological leadership. As Jeffrey Ding correctly argues, national technological power depends primarily on a country’s ability to diffuse technology throughout its economy and society. Inventing cutting-edge AI isn’t enough. The United States needs the widespread ability to apply those inventions to use cases across the economic and defense realms. This requires a population that can understand AI’s benefits, use AI tools effectively, comprehend and mitigate risks, and participate meaningfully in AI governance. AI diffusion will require expanding whole-of-society AI education, developing analogues to agricultural extension offices that help small businesses adopt AI, building industry vertical-specific shared infrastructure and tools, and similar initiatives that enable AI to spread broadly through the economy.

Second, an Apollo-style AGI program would focus on concrete, measurably effective civilian applications that improve American lives. While states and civil society organizations have crucial roles to play in implementation and local adaptation, a federal program would provide the necessary strategic coordination, baseline funding, and infrastructure that enables these stakeholders to maximize their impact. Rather than pursuing AGI solely as an abstract achievement or competitive milestone, the program would target specific use cases such as personalized education, AI-powered health care delivery, more effective and responsive governance, and advanced cyber defense. These practical objectives would focus national efforts while delivering tangible benefits to citizens.

Third, the program would fund diverse research initiatives beyond current commercial focuses. While private-sector investment in transformer-based architectures is substantial, it is heavily biased toward generative AI use cases. Government funding should support alternative technical approaches that might lead to more effective AGI. This diversification helps prevent over-indexing on current technological paradigms and can avoid strategic surprise if China pursues alternate, and ultimately more successful, technical paths.

Fourth, the program would invest heavily in evidence-based safety and security research. As models become more expensive to train—potentially reaching $100 billion by 2027—they become critical national assets, requiring protection from theft and manipulation. Additionally, preventing serious accidents through robust safety measures will maintain public trust and enhance adoption. Rigorous testing is also needed to assess and address national security threats, and the government will need to partner with the private sector to address these concerns.

Fifth, the program would build a comprehensive national data infrastructure to fully realize AGI’s potential. While current large language models excel at learning from vast amounts of text data, truly transformational AGI requires a broader foundation. This includes data from embodied AI systems interacting with the physical world and, crucially, structured information capturing how human experts solve complex problems in real-world scenarios. Building this sophisticated data layer would create a vital national asset that supports both fundamental AGI research as well as civilian and defense applications. While states such as New York and California are developing approaches to AI data (and governance generally), a patchwork of state-based AI rules and initiatives risks creating compliance and integration barriers to companies operating in multiple jurisdictions. A federal approach is needed to ensure coherence and interoperability. Just as the Apollo program required unprecedented advances in measurement and instrumentation, an AGI moonshot demands new approaches to collecting, curating, and managing the data that will power next-generation AI systems.

Sixth, the program would heavily invest in rapidly expanding America’s energy production capacity—particularly clean, high-density sources such as nuclear fission—so that the United States has the energy needed to lead the world in AGI. This initiative could also leverage its own AI-driven innovations—such as predictive modeling, real-time load management, and sophisticated efficiency analytics—to optimize everything from power plant design to the grid’s transmission and distribution.

Finally, the program would engage in careful AGI diplomacy to prevent dangerous escalatory dynamics with China. Indexing on the Manhattan Project could spark exactly the kind of militarized race the U.S. wants to avoid. By contrast, framing AGI development as a peaceful, scientific endeavor reduces the risk of conflict while enabling the United States to focus resources on enabling transformative AI across society.

***

The choice between these frameworks is not merely about branding—it will fundamentally shape how AGI develops, who benefits from these transformative technologies, and whether they create the society-wide conditions that ensure success across civilian and defense applications. If a national mega-project for AGI is launched, the best framing is an Apollo-style moonshot that protects, enriches, and unites the nation, demonstrating once again that democratic values and open innovation represent humanity’s best path forward.

Matt Chessen and Craig Martell. Published courtesy of Lawfare.


©2025 Global Cyber Security Report. Use Our Intel. All Rights Reserved. Washington, D.C.