Memorandum Outlines U.S. Government’s Role in AI Development

The memorandum provides guidance to government agencies—in coordination with the private sector—for responsible AI development. 

President Joe Biden (Photo: Adam Schultz/Flickr, https://www.flickr.com/photos/whitehouse/51420635724/, Public Domain)

On Oct. 24, President Biden released a National Security Memorandum on artificial intelligence (NSM on AI), along with an accompanying framework offering high-level recommendations. The NSM fulfills a requirement outlined in Section 4.8 of Executive Order 14110, the order that requires large AI developers and providers to share safety test results with the government, directs agencies to establish safety and testing standards, and calls for action to address the technology’s impact. The NSM applies to agencies within the intelligence community and those using National Security Systems (NSS); the framework serves as guidance for agencies in managing their “respective components/sub-agencies.” Both documents are intended to serve as the Department of Defense/intelligence community counterpart to the civilian-focused memorandum on AI (M-24-10) issued by the Office of Management and Budget—representing a comprehensive government approach to AI.

The administration describes the NSM as the “first-ever” such document, with an overarching goal of responsibly, safely, and securely ensuring an “edge over rivals seeking to leverage AI to the detriment of [U.S.] national security.” It outlined “three core principles”: securing American leadership in AI, harnessing AI for national security, and accelerating responsible adoption via “clear rules of the road.”

Contents of the Memorandum

Section 1: Policy. The NSM—and the annex that remains classified because it “addresses additional sensitive national security issues,” including how to counter adversarial uses of AI—provides guidance on the use of AI in national security settings. Recognizing the potential for U.S. leadership in technological transitions, the memorandum calls on agencies that operate or have a significant influence on NSS to work with the private sector to maintain that leadership but to do so responsibly. As part of that effort, there must be changes to how the government uses AI, including ending the use of “individual bespoke tools” and instead relying on systems with “multi-purpose AI capability.”

Section 2: Objectives. The objectives of the NSM include working with the public to “promote and secure” U.S. leadership in AI development; using powerful general-purpose models in a way that respects American ideals; and creating an internationally adopted framework for “safe, secure, and trustworthy AI development and use.”

Section 3: Promoting and Securing the United States’ Foundational AI Capabilities. The memorandum outlines the U.S. government’s strategic priorities for advancing AI technology securely and competitively. These policies aim to foster domestic AI innovation (Section 3.1); safeguard U.S. AI from foreign threats (Section 3.2); and ensure the safety, security, and reliability of AI (Section 3.3). 

3.1. Promoting Progress, Innovation, and Competition in United States AI Development

This section focuses on policies aimed at maintaining the United States’s leading position in AI by investing in the infrastructure and talent necessary to promote domestic AI growth. 

The NSM directs the assistant to the president for economic policy and the director of the National Economic Council to assess key competitive advantages in the private-sector AI ecosystem, including chip design, an adequate electricity supply, well-resourced technology platforms, and the availability of skilled workers. To address the latter, the NSM calls for the assistant to the president for national security affairs—in conjunction with other executive departments and agencies—to streamline the visa process for highly skilled noncitizens in AI and related fields.

To foster AI innovation, the NSM directs the Department of Energy and the intelligence community to integrate large-scale AI capabilities into the design and renovation of computational facilities. The White House chief of staff, in coordination with other agencies, is tasked with streamlining the permitting and approval process for AI-enabling infrastructure such as clean energy and high-capacity data links. 

The NSM instructs the National Science Foundation to use the National AI Research Resource to support U.S. competitiveness in AI research by ensuring that universities, nonprofits, and independent researchers have broad access to AI resources and data. Finally, it directs multiple government agencies, including the departments of State, Defense, Energy, and Commerce, and the intelligence community, to invest in AI technologies.

3.2. Protecting U.S. AI from Foreign Intelligence Threats

This section highlights the critical need to protect the U.S. AI ecosystem from foreign interference, including investment schemes, cyber espionage, and other methods.

The NSM directs the National Security Council and the Office of the Director of National Intelligence (ODNI) to issue recommendations for the President’s Intelligence Priorities, ensuring foreign intelligence threats to U.S. AI are effectively addressed. Additionally, the NSM calls on the ODNI, in coordination with other government agencies, to identify and safeguard vulnerable points within the AI supply chain from potential foreign intrusion.

The NSM addresses the use of “gray-zone methods” by foreign actors—tactics that do not trigger a clear legal response—to acquire proprietary AI information. It directs the Committee on Foreign Investment in the United States to closely scrutinize transactions that could grant foreign actors access to sensitive AI information through means such as technology transfers, data localization, or other indirect strategies. Such scrutiny is especially important for critical technical artifacts (CTAs), which can significantly decrease the cost of obtaining, recreating, or using AI capabilities.

3.3. Managing Risks to AI Safety, Security, and Trustworthiness

This section outlines key strategies for mitigating the risks to public safety, national security, and individual rights posed by developing AI technologies, while preserving U.S. leadership in the field.

Recognizing the risks of deliberate misuse and accidents—such as offensive cyber operations, unauthorized extraction of sensitive information, and harassment—the NSM outlines comprehensive frameworks for mitigating these threats through the testing and evaluation of AI systems. Specifically, it calls upon the Department of Commerce—through the AI Safety Institute (AISI), within the National Institute of Standards and Technology (NIST)—to collaborate with private-sector developers in facilitating voluntary prerelease safety testing of frontier AI models. These tests will focus on cybersecurity, biosecurity, chemical weapons, and system autonomy. Additionally, the NSM directs AISI to issue guidelines for managing such risks and to establish benchmarks for evaluating AI capabilities.

The NSM proposes that agencies develop sector-specific testing mechanisms to assess the ability of AI systems to “detect, generate, and/or exacerbate” risks. The National Security Agency will focus on cyber threats; the Department of Energy will address radiological and nuclear risks; and the Department of Homeland Security, in coordination with other relevant agencies, is responsible for chemical and biological threats.

The NSM emphasizes the need for agencies to prioritize research concerning AI safety, security, robustness, and trustworthiness. It calls for the publication of guidance on known AI vulnerabilities and best mitigation practices, ensuring that the U.S. is prepared for emerging AI threats.

Section 4. Responsibly Harnessing AI to Achieve National Security Objectives. Section 4 details the government’s plans to manage and oversee AI use within national security agencies, both as a means to further national security interests (Section 4.1) and in keeping with the United States’s human rights commitments (Section 4.2). This includes ensuring that the government acquires useful and usable AI systems and that their use aligns with national security interests as well as human rights and civil rights considerations.

4.1 Enabling Effective and Responsible Use of AI. According to the NSM, the government seeks to expand its AI procurement, its hiring of AI expertise, and its collaboration with contractors and private industry. The acquisition process should establish objective metrics to measure safety and security; accelerate procurement and increase competition; share AI systems across agencies as much as possible while respecting their distinctive needs; consider amendments to the Federal Acquisition Regulation; and engage with private industry throughout.

The section also discusses the general principles governing procured AI systems. The Defense Department and Justice Department will work to review and revise their policies, ensuring that any AI systems used by federal agencies are consistent with frameworks protecting civil and human rights. This entails special consideration for models trained on personal, traceable information; constitutional considerations; issues with classification, compartmentalization, and bias; threats to the integrity of analyses conducted using AI tools; a lack of interoperability or human rights safeguards; and any barriers that may exist to sharing AI models and insights, whether with allies or as part of international treaties and commitments.

To improve internal coordination and share AI resources effectively, the memorandum recommends agencies adopt organizational practices applicable to multiple agencies; consolidate research, development, and procurement; align policies across agencies; and develop policies to share information with the Defense Department when an AI might pose a risk to safety, security, or trustworthiness.

4.2 Strengthening AI Governance and Risk Management. In this section, the NSM continues discussing human rights, civil rights, privacy, and safety, with a focus on ensuring that a human remains involved in the decision-making process. Alignment with democratic values is fundamental to the government’s use of AI, requiring robust guidelines and risk management frameworks.

According to the memorandum, these risk management frameworks must be structured but adaptable; consistent, while respecting the distinctive nature of each department and agency; designed to enable innovation; transparent but limiting classified information; integrated with human and civil rights concerns; and reflective of the United States’s commitment to global norms and best practices. Agency heads must monitor and mitigate the risks of their AI use, including threats to physical safety, privacy harms, algorithmic bias, a lack of operator knowledge, a lack of transparency and accountability, data spillage, poor performance, and the potential deliberate misuse of any AI system.

The NSM emphasizes that these frameworks should include specific requirements. Each covered agency must have a chief AI officer, responsible for AI oversight, coordination with other agencies, promotion of innovation, and risk mitigation. The frameworks must establish risk-level guidance, as well as specific guidance for “high-risk activities,” including decisions made by AI that could have a substantial impact on national security, human rights, or other democratic values. In addition, the frameworks must ensure that AI systems are sufficiently transparent and respectful of privacy and civil liberties; that human operators receive adequate training; that an annual inventory of high-impact AI is maintained; that sufficient whistleblower protections are in place; that the use of high-risk AI is justified through a waiver program; and that all security guidance issued by the national manager for national security systems is implemented.

Section 5. Fostering a Stable, Responsible, and Globally Beneficial International AI Governance Landscape. This section outlines the U.S. government’s commitment to global leadership in establishing AI governance norms that promote safety, security, and alignment with democratic values.

Section 6. Ensuring Effective Coordination, Execution, and Reporting of AI Policy. This section establishes mechanisms for internal, consistent coordination on the execution of AI policy and sets forth reporting requirements for agencies’ AI activities. It mandates the creation of an AI National Security Coordination Group composed of the chief AI officers (CAIOs) who are members of the Committee on National Security Systems. Similar to the CAIO Council established by M-24-10, the group is tasked with harmonizing the AI policies of various agencies, ensuring that agencies’ use of AI is aligned, and establishing a committee to acquire AI-enabling talent.

Section 7. Definitions. This section provides definitions for the memorandum. While most are pulled from Executive Order 14110, there are a few additions. “AI safety,” for example, is defined as the mechanisms by which harms are minimized or mitigated. “AI security” consists of the practices meant to protect AI systems from “cyber and physical attacks, thefts, and damage.” “Critical technical artifacts (CTAs)” are defined as information specific to a single model or group of models that, when “possessed by someone other than the model developer,” would allow for a significant reduction in the cost of using the model’s capabilities. “Frontier AI model” refers to “cutting-edge” models, and an “open-weight model” is one that has “weights that are widely available”; while this availability usually results from “public release,” the definition accounts for other circumstances. Notably, “AI trustworthiness” is not defined, granting individual agencies leeway in setting the minimum standards to which their AI systems must adhere.

Section 8. General Provisions. This section establishes standard limitations, ensuring the memorandum does not inhibit the functioning or authority of federal agencies.

Background and Reactions

National Security Adviser Jake Sullivan stated that the NSM “will define the future,” requiring new capabilities, tools, and doctrine. He highlighted the speed at which AI has developed, uncertainty about AI’s growth trajectory, and private-sector leadership in developing AI. Sullivan pointed to the importance of extending the lead American companies have built, especially as they go “head-to-head with [China-based] companies” to become “the technology partner of choice for countries around the world.” He emphasized that while the AI revolution has been led by private companies, the U.S. government must ensure that its development respects both human rights and national security interests. Sullivan also noted the need to improve the visa process for AI and AI-adjacent workers and to streamline domestic chip production, arguing that the prodigious economic energy of the federal government should be used to ensure America’s continued dominance in the AI space.

In a statement, National Economic Adviser Lael Brainard recognized the economic impact of the NSM, noting that there have been past instances in which “critical technologies and supply chains that were developed and commercialized” domestically ended up overseas due to a “lack of critical public sector support.” Pointing to the NSM’s requirement that the National Economic Council assess U.S. AI competitiveness, Brainard identified a need for “strong domestic foundations in semiconductors, infrastructure, and clean energy” to ensure adequate access to the necessary computing resources. With the private sector’s “significant investments in AI innovation,” she said, the NSM serves as the U.S. government’s attempt to provide the “policy changes and support necessary to enable rapid AI infrastructure growth over the next several years.”

Reactions to the NSM varied. The ACLU expressed concern that the lack of transparency around reporting requirements makes it difficult to understand what AI tools are being deployed and for what purposes, arguing that national security agencies are being “left to police themselves” without appropriate oversight. The trade association ITI, which represents technology companies, including developers of AI solutions, praised the NSM’s effort to establish a prominent role for the U.S. AI Safety Institute. ITI emphasized that the institute will create a collaborative research and development environment and establish the type of public-private partnership necessary to provide lawmakers with recommendations to “improve the cybersecurity of AI models and systems.”

Janneke Parrish, Megan Thomas, and Omid Ghaffari-Tabrizi. Published courtesy of Lawfare.

