Regulatory Approaches to AI Liability

Federal agencies wield crucial tools for regulating AI liability but face substantial challenges in effectively overseeing this rapidly evolving technology.

Editor’s note: This essay is part of a series on liability in the AI ecosystem, from Lawfare and the Georgetown Institute for Technology Law and Policy.

As artificial intelligence (AI) rapidly transforms industries and permeates daily life, there is a clear need for effective and nuanced regulation by federal agencies. Through legislative frameworks created by Congress, agencies have the power to develop detailed regulations, enforce compliance, and adjudicate disputes in specialized areas where Congress lacks the necessary expertise and resources.

This primer explores the capabilities and limitations of agency regulation in the context of AI liability, examining how agencies such as the Federal Trade Commission, Department of Justice, Department of Commerce, and Securities and Exchange Commission navigate the intricate landscape of AI governance. By understanding the tools at agencies’ disposal—from rulemaking and adjudication to investigation and enforcement—we describe the potential for effective oversight in this rapidly evolving field. At the same time, we examine the challenges agencies face, including resource constraints, political pressures, and the risk of regulatory overreach.

What Agencies Can and Can’t Do

Federal agencies get their authority from Congress, which delegates specific powers through “organic statutes”—statutes that grant agencies the legal authority to regulate particular areas and outline the scope of each agency’s jurisdiction in that area. This delegation of power is based on the understanding that Congress, with its broad and general legislative responsibilities, lacks the detailed expertise and resources needed to manage every nuanced aspect of governance—it sets overarching goals and hands the fine print of execution to dedicated bodies. By delegating authority and allocating resources to agencies, Congress ensures that areas requiring specialized knowledge, like AI, are overseen by entities capable of understanding and addressing their unique challenges and risks.

Agency Powers

Federal agencies are equipped with a range of powers that enable them to fulfill their regulatory responsibilities.

Rulemaking

Arguably the most important power that agencies have is the ability to create legally binding rules and regulations. Rulemaking procedures can be either formal or informal. Formal rulemaking involves a trial-like hearing where stakeholders can present evidence and arguments, an often lengthy and complex process. However, by far the more common approach is informal rulemaking, sometimes called “notice-and-comment.” During this process, the agency publishes a proposed rule in the Federal Register and invites public comments. After reviewing these comments, the agency must respond and may revise the proposed rule before issuing the final version. This process allows agencies to tap into the collective expertise of industry participants, academic researchers, consumer advocates, and other interested parties such as members of the broader public. By incorporating this diverse input, agencies can craft more informed and balanced regulations that are better suited to the complexities of advanced technologies. This is especially important in fields like AI, where the technology is not only rapidly advancing but also intersecting with a wide range of sectors, from health care to finance to national security.

Adjudication

In addition to rulemaking, agencies have the power to adjudicate disputes, which involves applying existing laws and regulations to specific cases. This power allows agencies to interpret and enforce regulations in real-world situations, setting precedents that guide future regulatory actions. In the AI sector, where the implications of technology use can vary widely depending on the context, the ability to adjudicate individual cases is vital for ensuring that regulations are applied appropriately and consistently. Adjudication also allows agencies to address issues that may not yet be fully covered by existing regulations. AI is a rapidly advancing field, and it is possible that new applications or unforeseen consequences will arise that were not anticipated when regulations were first drafted. Through adjudication, agencies can fill these gaps by interpreting and applying laws in ways that address emerging issues, even in the absence of specific rules.

Furthermore, regulatory adjudication helps agencies build a body of precedent through reasoned, published decision-making that can guide future AI development and compliance efforts. While agencies are not bound by precedent in the way that courts are, the decisions made through adjudication can still provide valuable guidance for companies and other stakeholders. These decisions clarify how existing laws and regulations apply to AI, helping to shape industry practices and set expectations for responsible AI development and use. As AI technologies continue to evolve, the precedents established through adjudication can serve as a living framework that adapts to new challenges as they arise on the ground and informs the ongoing regulatory process.

Investigation

Agencies also have extensive investigatory powers, enabling them to monitor compliance with regulations and investigate potential violations. This includes the authority to issue subpoenas, conduct audits, and require the production of documents. For AI technologies, which often involve complex data processes and algorithms, the ability to conduct thorough investigations is essential for detecting and addressing issues such as bias, discrimination, or breaches of privacy. Moreover, investigations can lead to the discovery of practices that were not previously recognized as problematic. In the fast-evolving field of AI, new risks and ethical dilemmas emerge regularly. Agency investigations are often the first step in identifying these issues and bringing them to the attention of regulators and the public. For example, an investigation might reveal that an AI system is disproportionately affecting a particular demographic group, leading to regulatory actions aimed at preventing discrimination. Such discoveries are critical for developing informed and effective regulatory responses that keep pace with technological advancements.

Agency investigation is also a crucial information-gathering process for regulating AI because it provides the government with the necessary tools to uncover and understand practices that are otherwise shielded from public view by corporate confidentiality. In the AI industry, much of the critical information about how AI systems are developed, trained, and deployed is held privately by companies. This includes proprietary algorithms, datasets, and internal decision-making processes, all of which are typically considered trade secrets and are therefore not publicly disclosed. Without the investigatory powers of regulatory agencies, the government would have limited visibility into these practices, making it difficult to assess whether AI technologies are being used responsibly and in compliance with the law. Agency investigatory powers allow the government to pierce this veil of confidentiality and gain access to the information needed to regulate effectively.

For instance, an agency like the Federal Trade Commission (FTC) might use its investigatory powers to examine how a company’s AI algorithms handle consumer data. This could involve scrutinizing the datasets used to train the AI, the processes for anonymizing data, or the decision-making logic embedded in the algorithm. Without such investigations, the government would have to rely on external sources like whistleblowers to obtain this information. However, these sources are problematic: Whistleblowing carries significant personal and professional risks for the individual involved, often leading to career-ending consequences or legal retaliation.
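
To make the kind of dataset scrutiny described above more concrete, the sketch below shows one simple check an investigator could run on nominally anonymized training data: counting how many records remain unique on a handful of quasi-identifiers. This is only an illustrative sketch; the records, field names, and threshold are hypothetical and do not describe any actual FTC audit procedure.

```python
# Illustrative sketch (hypothetical data, not an actual FTC audit tool):
# a simple k-anonymity check. Records that are unique on a set of
# quasi-identifiers (here, ZIP code, birth year, and gender) remain
# candidates for re-identification even after direct identifiers are removed.
from collections import Counter

# Hypothetical records after a company's anonymization step.
records = [
    {"zip": "20001", "birth_year": 1984, "gender": "F", "purchases": 12},
    {"zip": "20001", "birth_year": 1984, "gender": "F", "purchases": 3},
    {"zip": "20002", "birth_year": 1975, "gender": "M", "purchases": 7},
    {"zip": "20003", "birth_year": 1990, "gender": "F", "purchases": 1},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "gender")

def k_anonymity_report(rows, quasi_ids=QUASI_IDENTIFIERS, k=2):
    """Count how many records fall in quasi-identifier groups smaller than k."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    at_risk = sum(count for count in groups.values() if count < k)
    return {"groups": len(groups), "records_below_k": at_risk, "k": k}

print(k_anonymity_report(records))
# {'groups': 3, 'records_below_k': 2, 'k': 2} -> two records are unique on
# the quasi-identifiers and thus potentially re-identifiable.
```

A check like this is deliberately crude, but it captures the basic point: whether "anonymized" data is actually anonymous is an empirical question that an investigator with access to the dataset can test directly.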

Enforcement

Enforcement is another critical power of federal agencies, allowing them to take action against entities that violate regulations. Enforcement actions can range from issuing warnings and fines to negotiating consent decrees and filing lawsuits in federal court. In the AI industry, where the potential for harm can be significant—whether through flawed decision-making algorithms or breaches of data security—effective enforcement is crucial for deterring misconduct and ensuring that companies adhere to regulatory standards.

“Soft” Law

Agencies like the National Institute of Standards and Technology and the Cybersecurity and Infrastructure Security Agency play a crucial role in shaping the development and deployment of cutting-edge technologies like AI, even though they lack direct investigatory, rulemaking, or enforcement powers. These agencies influence the industry primarily through the creation of standards, guidelines, and best practices, which, while not legally binding, carry significant weight within the industry and are often voluntarily adopted by companies seeking to align with recognized benchmarks of quality, safety, and security.

Advantages of Agency Regulation in Holding AI Accountable

Expertise

One of the primary advantages of agency regulation in holding AI accountable is the technical expertise that agencies can bring to the table, especially when compared to Congress or the courts. Federal agencies are staffed with experts who possess deep knowledge of the industries they regulate, and this expertise is critical in the context of AI, a field characterized by its technical complexity and rapid innovation. For instance, the FTC employs technologists and data scientists who understand AI algorithms and can assess the risks associated with their deployment.

Speed

Another significant advantage of agency regulation is the speed and flexibility with which agencies can act. Unlike Congress, which often moves slowly due to the complexities of the legislative process, agencies can respond more swiftly to emerging issues. This is particularly important in the AI sector, where technology evolves rapidly and new risks can emerge suddenly. Agencies can use their rulemaking and enforcement powers to address these issues in a timely manner, preventing harm before it occurs. For example, if a new AI application is found to have potentially harmful effects, an agency like the FTC or the Food and Drug Administration can quickly investigate and, if necessary, take action to mitigate the risk.

Agility

One significant value of agencies is their ability to amend or even wholly repeal their own legally binding regulations or informal standards. Unlike courts, which are often constrained by the principle of stare decisis and must adhere to established precedent, agencies possess the flexibility to update or discard regulations in response to new information, changing circumstances, or advancements in technology. This adaptability is particularly beneficial in the realm of technology, where rapid innovation can quickly render existing regulations obsolete or counterproductive. As new technologies like AI evolve, agencies can revise their regulations to address emerging risks, incorporate the latest research, and better align with current industry practices.

Scope

Agencies also operate at the federal level, which allows them to set consistent standards across the United States. This is especially important in the context of AI, where inconsistent state-level regulations could create a fragmented landscape that complicates compliance and stifles innovation. By establishing federal standards, agencies can provide clarity and predictability for AI developers and users, ensuring that AI technologies are regulated in a manner that supports innovation while protecting the public. For instance, federal standards for data privacy or algorithmic transparency would create a uniform framework that companies can follow, reducing the burden of navigating multiple, potentially conflicting state regulations.

Another advantage of agency regulation is its ability to bypass certain legal barriers that can limit the effectiveness of civil litigation. In civil lawsuits, plaintiffs must demonstrate that they have suffered a concrete injury to have standing to sue. This requirement can be a significant barrier in cases involving AI, where harms may be diffuse, indirect, or aggregate. Agencies, however, are not bound by the same standing requirements. They can investigate and take action based on potential or systemic risks, even in the absence of a specific injured party. This proactive approach is particularly important for regulating AI, where the full extent of potential harms may not become apparent until it is too late to prevent them through traditional legal channels.

Furthermore, agencies are uniquely positioned to provide coordinated, holistic oversight of AI technologies, which often intersect with multiple regulatory domains. AI’s applications are diverse, ranging from health care to finance to national security, each of which falls under the jurisdiction of different federal agencies. By collaborating and sharing information, agencies can provide more comprehensive oversight than any single entity could achieve on its own. For example, the FTC might work with the Justice Department and the Securities and Exchange Commission to address AI-related issues that span consumer protection, antitrust, and securities law. This coordinated approach ensures that AI is regulated in a holistic manner, taking into account the full spectrum of potential risks and benefits, and preventing regulatory gaps that could be exploited by bad actors.

Democratic Responsiveness

The notice-and-comment rulemaking process used by agencies also provides significant advantages in the regulation of AI. This process allows agencies to benefit from the expertise of external stakeholders, including industry participants, consumer advocates, academic experts, and other interested parties. By soliciting and considering public comments on proposed rules, agencies can ensure that their regulations are well informed and balanced, addressing the concerns of both industry and the public. It also gives the public a formal channel to shape the rules that will govern them, lending democratic legitimacy to agency action. In the context of AI, where the technology is evolving rapidly and its impacts are not always fully understood, this collaborative process is crucial for developing regulations that are both effective and adaptable to future developments.

Shortcomings of Agency Regulation of AI

Despite the many advantages of agency regulation, there are also significant shortcomings that can limit its effectiveness in holding AI accountable.

Resource Constraints

Agencies are reliant on Congress for their budgets, and many are chronically underfunded. This financial constraint limits their ability to hire the necessary experts, conduct thorough investigations, and enforce regulations effectively. In the AI sector, where cutting-edge expertise is crucial for understanding and regulating complex technologies, underfunding can severely hamper an agency’s ability to keep pace with technological advancements and emerging risks. For example, without adequate funding, an agency like the FTC may struggle to hire enough technologists or data scientists to fully assess the implications of new AI products, leading to gaps in regulatory oversight.

Even when funding is available, agencies often face challenges in hiring the experts they need to effectively regulate AI. The private sector, with its higher salaries and competitive benefits, can attract top talent away from public service. This talent gap is particularly problematic in the AI field, where the demand for skilled professionals far exceeds the supply. Agencies may find it difficult to compete with tech companies for the best AI researchers and engineers, which can limit their ability to understand and address the complex issues that arise in this rapidly evolving field. Without access to top-tier expertise, agencies may struggle to craft effective regulations or to anticipate and mitigate the risks associated with AI technologies.

Political Constraints

Agencies are also vulnerable to changes in political leadership, which can significantly impact their regulatory effectiveness. Many federal agencies are led by politically appointed officials, and shifts in political administration can result in changes to agency priorities, regulatory approaches, and even the enforcement of existing rules. This lack of continuity can undermine long-term regulatory strategies and create uncertainty for regulated entities. For example, an agency that takes a proactive stance on AI regulation under one administration might shift to a more laissez-faire approach under the next, disrupting ongoing regulatory efforts and leading to inconsistent enforcement. This political volatility can be particularly challenging in the AI sector, where long-term oversight is necessary to manage the risks associated with rapidly evolving technologies.

Industries affected by agency regulations, including those involving AI, often resist new rules, arguing that they stifle innovation or impose undue burdens. This resistance can slow down the regulatory process, lead to watered-down regulations, or result in costly legal battles. Moreover, powerful industry lobbyists can influence Congress to limit an agency’s regulatory authority, further constraining its ability to act. In the AI sector, where rapid innovation is a key driver of economic growth, industry resistance to regulation is particularly strong. Companies may argue that stringent regulations will hinder their ability to develop new technologies and bring them to market, potentially leading to a regulatory environment that favors industry interests over public safety and ethical considerations.

Legal Constraints

Moreover, the power of federal agencies is increasingly being challenged by the judiciary, particularly by conservative judges who are skeptical of the administrative state. A significant development in this regard is the recent overturning of the Chevron doctrine, under which courts historically deferred to agencies’ reasonable interpretations of the ambiguous statutes they administer. With that deference gone, agencies’ ability to regulate AI effectively could be severely restricted. Agencies might face more legal challenges to their rules, leading to prolonged litigation and uncertainty in the regulatory environment. This judicial pushback could also embolden industry actors to resist regulatory oversight, further complicating efforts to ensure that AI technologies are developed and deployed responsibly.

Another limitation of agency regulation is the jurisdictional constraints imposed by organic statutes, which define the specific areas that each agency can regulate. This can be a significant limitation in the context of AI, a technology that transcends traditional regulatory boundaries. For example, the Federal Communications Commission might have authority over AI as it relates to telecommunications, but not over AI applications in health care or finance. This jurisdictional fragmentation can lead to gaps in regulation, where certain aspects of AI are not adequately covered by any single agency. This fragmentation may be exacerbated by the increasing tendency of courts to read organic statutes narrowly, requiring clear congressional authorization before agencies may decide “major questions” of policy.

Overreach

Finally, there is the risk of regulatory overreach, where agencies impose burdensome regulations that stifle innovation or create unnecessary barriers to market entry. While the goal of regulation is to protect the public interest, overly stringent rules can have the unintended consequence of slowing the development and adoption of beneficial AI technologies. For example, if regulations are too restrictive, they could discourage investment in AI research and development, limiting the potential of these technologies to address critical challenges in areas like health care, education, and environmental sustainability. Balancing the need for effective oversight with the need to support innovation is a constant challenge in the regulation of AI, and agencies must be careful to avoid stifling the very technologies they are tasked with overseeing.

What Specific Agencies Can Do to Regulate AI

As AI continues to integrate into various sectors, the role of federal agencies in regulating AI technologies becomes increasingly critical. The regulatory landscape for AI is complex and involves multiple agencies, each with its own mandate and area of expertise. While some observers have argued for a single agency to govern AI across the board, that approach has serious drawbacks and there is no short- or medium-term prospect for the creation of such an agency. Here we highlight some of the main agencies that are already playing a role in regulating AI generally.

To be sure, this is not an exhaustive list—for example, in the health care sector, the Department of Health and Human Services, through agencies like the Food and Drug Administration and the Centers for Medicare & Medicaid Services, plays a crucial role in regulating AI. And other agencies, such as the Department of Defense, are playing a major role in developing the AI technologies themselves. It is likely that, either currently or in the near future, every agency will affect AI at least to some extent.

Federal Trade Commission

The Federal Trade Commission plays a particularly important role in AI regulation through its mandate to protect consumers and promote competition.

Consumer Protection

The FTC plays a crucial role in protecting consumers from the potential harms associated with AI. As the primary federal agency responsible for consumer protection, the FTC is tasked with preventing deceptive, unfair, or otherwise harmful business practices, including those involving AI technologies. AI presents unique threats to consumers that necessitate the FTC’s intervention, ranging from privacy violations and discrimination to deceptive advertising and lack of transparency.

One of the most significant threats AI poses to consumers is the potential for privacy violations. AI systems often require large amounts of personal data to function effectively. This data can include sensitive information such as financial records, health data, location tracking, and even biometric identifiers like facial recognition. If companies do not handle this data responsibly—by securing it adequately, obtaining proper consent, or using it only for stated purposes—consumers’ privacy can be severely compromised. The FTC has a long history of addressing privacy issues, and it can use its authority to enforce privacy standards in the AI context, ensuring that companies respect consumer data rights.

AI systems are also prone to reflecting and even amplifying biases present in the data they are trained on. This can lead to discriminatory outcomes in areas such as lending, employment, health care, and law enforcement. For example, an AI-driven hiring tool might inadvertently favor certain demographic groups over others, or a facial recognition system might perform poorly on individuals with darker skin tones, leading to unfair treatment. The FTC can step in to investigate and address these issues, ensuring that AI systems do not perpetuate or exacerbate discrimination and that companies are held accountable for the fairness of their AI-driven decisions.
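
As a hypothetical illustration of how such a disparity might be measured, the sketch below computes selection rates by demographic group and flags any group whose rate falls below the four-fifths (80 percent) benchmark long used in employment-discrimination analysis. The numbers, group labels, and function names are invented for illustration, not drawn from any actual enforcement matter.

```python
# Illustrative sketch (hypothetical data): a disparate-impact check of the
# kind a regulator or auditor might run on an AI hiring tool's outcomes.
# The EEOC's "four-fifths rule" is a common reference point: a group's
# selection rate below 80% of the highest group's rate is treated as
# evidence of adverse impact warranting further scrutiny.
def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total_applicants)."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}

def adverse_impact(outcomes, threshold=0.8):
    rates = selection_rates(outcomes)
    best = max(rates.values())
    # Ratio of each group's selection rate to the highest group's rate.
    ratios = {g: rate / best for g, rate in rates.items()}
    flagged = {g: r for g, r in ratios.items() if r < threshold}
    return rates, ratios, flagged

# Hypothetical screening results from an AI resume-ranking system.
outcomes = {"group_a": (90, 300), "group_b": (45, 250)}
rates, ratios, flagged = adverse_impact(outcomes)
print(rates)    # {'group_a': 0.3, 'group_b': 0.18}
print(flagged)  # {'group_b': 0.6} -> below the 0.8 benchmark
```

A flagged ratio is not by itself proof of unlawful discrimination, but it is exactly the kind of quantitative signal that can trigger an agency investigation into how the underlying model was trained and deployed.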

Drawing on vast troves of data, AI can be used to create highly sophisticated and targeted advertising, sometimes in ways that are deceptive or misleading. For instance, AI-driven algorithms might be used to create deepfake videos or to generate fake reviews that mislead consumers about a product’s quality or a service’s reliability. The FTC’s role is to ensure that advertising practices are truthful and not misleading, regardless of the technology used. This includes monitoring and regulating the use of AI in advertising to protect consumers from deception.

AI systems often operate as “black boxes,” making decisions in ways that are not transparent or understandable to users. This lack of transparency can be problematic when AI systems are used in critical decision-making processes, such as determining eligibility for credit, insurance, or employment. Consumers may not know how or why decisions are being made, and they may have little recourse if they believe they have been treated unfairly. The FTC can advocate for greater transparency in AI systems, requiring companies to provide clear explanations of how AI-driven decisions are made and ensuring that consumers have the ability to challenge or appeal those decisions if necessary.
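
One widely used, model-agnostic way to probe a black box is permutation importance: shuffle one input at a time and measure how much the system's accuracy degrades. The sketch below applies the idea to a stand-in approval function and invented applicant records; it illustrates the technique in general, not any particular company's system or any required regulatory method.

```python
# Illustrative sketch: permutation importance as a simple window into a
# black-box decision system. The model and data are hypothetical stand-ins
# for a proprietary credit-approval system that can only be queried.
import random

random.seed(0)

def black_box_approve(income, debt_ratio, zip_risk):
    """Stand-in for an opaque approval model we can only query."""
    return income > 50_000 and debt_ratio < 0.4

# Hypothetical applicant records: (income, debt_ratio, zip_risk, approved_label)
data = [
    (62_000, 0.30, 0.2, True),
    (48_000, 0.25, 0.1, False),
    (80_000, 0.45, 0.7, False),
    (55_000, 0.35, 0.9, True),
    (70_000, 0.20, 0.5, True),
    (40_000, 0.50, 0.3, False),
]

def accuracy(rows):
    return sum(black_box_approve(*r[:3]) == r[3] for r in rows) / len(rows)

def permutation_importance(rows, feature_index, trials=100):
    """Average accuracy drop when one feature column is shuffled."""
    base = accuracy(rows)
    drops = []
    for _ in range(trials):
        column = [r[feature_index] for r in rows]
        random.shuffle(column)
        shuffled = [r[:feature_index] + (v,) + r[feature_index + 1:]
                    for r, v in zip(rows, column)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / trials

for i, name in enumerate(["income", "debt_ratio", "zip_risk"]):
    print(f"{name}: importance ~ {permutation_importance(data, i):.2f}")
# A near-zero score for zip_risk suggests the decision does not hinge on it;
# larger drops for income and debt_ratio show which inputs drive outcomes.
```

Techniques like this do not fully open the black box, but they give consumers, auditors, and regulators a concrete basis for asking why a system reached a particular decision and whether the inputs driving it are appropriate.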

AI systems can also introduce new security vulnerabilities. For example, AI-driven devices connected to the internet might be susceptible to hacking, which could lead to unauthorized access to personal data or the manipulation of critical systems. The FTC can enforce security standards and hold companies accountable if their AI products are found to be insecure or if they fail to protect consumer data adequately.

Competition

AI, by its nature, presents several challenges that can invite anti-competitive practices. For one, AI systems rely heavily on large datasets for training and improving algorithms. Companies with access to vast amounts of data can develop more sophisticated AI models, creating a significant competitive advantage. This can lead to data monopolies in which dominant players control the critical resource—data—needed to compete effectively. The FTC monitors such practices to prevent dominant firms from using their data advantage to exclude competitors or create barriers to entry.

Relatedly, AI technologies often exhibit strong network effects, in which the value of a service increases as more people use it. For instance, AI-driven platforms such as recommendation engines or digital assistants become more effective with more users, which attracts even more users, reinforcing the dominance of the leading platforms. The FTC scrutinizes such scenarios to ensure that these network effects do not result in the creation of insurmountable barriers for new entrants or smaller competitors.

AI systems, particularly those used in pricing strategies, can learn to collude with other algorithms to fix prices or divide markets without explicit human direction. This is a form of anti-competitive behavior that is difficult to detect and regulate using traditional antitrust tools. The FTC actively investigates such cases to ensure that AI-driven collusion does not harm consumers by artificially inflating prices or reducing choices in the market.

The AI era has seen an explosion of strategic investments and mergers and acquisitions—large technology companies often acquire smaller AI startups to consolidate their market position. While these acquisitions can lead to innovation and growth, they can also reduce competition by eliminating potential rivals. The FTC reviews such mergers and acquisitions to determine whether they would significantly reduce competition or lead to the creation of monopolies. If a merger is deemed likely to harm competition, the FTC can block it or require the companies to divest certain assets to maintain a competitive market.

Department of Justice

Alongside the FTC, the Justice Department is responsible for enforcing antitrust law. The FTC and the Justice Department recently agreed to divide antitrust enforcement in the AI industry, with the FTC leading the investigation into the conduct of OpenAI and Microsoft, and the Justice Department investigating Nvidia, the largest maker of AI chips. In addition, the Justice Department, unlike the FTC, can bring criminal antitrust cases.

The Justice Department also plays an important role in regulating the national security implications of AI. It is responsible for enforcing the export controls that the State Department and the Commerce Department impose on AI hardware and software. It also plays an important role in the Committee on Foreign Investment in the United States, an interagency group that controls foreign investment in sensitive areas of the U.S. economy, including high technology like AI.

Department of Commerce

The Commerce Department is playing an increasingly important role in regulating AI. In particular, the National Institute of Standards and Technology (NIST) is helping set AI standards and guidelines. While NIST’s standards are not legally binding, they are highly influential and often adopted by federal agencies and private industry—indeed, they are often cited by courts in legal disputes as informing a standard of care. NIST’s work in developing standards for AI, including those related to cybersecurity and algorithmic transparency, provides a critical foundation for ensuring that AI technologies are developed and deployed in ways that are ethical, secure, and reliable.

The Commerce Department has emerged as a key part of the Biden administration’s AI strategy, especially as it pertains to implementing the Biden executive order on AI. NIST has been tasked with developing standards for the safe development of AI, the National Telecommunications and Information Administration with analyzing the risks of open-source AI models, the Bureau of Industry and Security with using the Defense Production Act to require AI labs to provide security information about their models to the federal government, and the U.S. Patent and Trademark Office with clarifying the applicability of intellectual property law to AI.

Securities and Exchange Commission

The Securities and Exchange Commission (SEC) has emerged as a significant force in tech regulation, leveraging its authority under securities law to hold technology companies accountable for irresponsible behavior. The SEC can use the tools of securities law—corporate governance, disclosure, and investor protection—to enforce accountability in the tech sector, a strategy that is increasingly relevant as AI becomes more integral to business operations. In addition, the SEC can address public issues such as privacy, bias, discrimination, and security by enforcing corporate responsibilities to act in the best interests of shareholders, especially when their operations have far-reaching impacts. For instance, if an AI company engages in discriminatory practices or violates privacy, it harms not just its reputation but also its long-term shareholder value, prompting SEC action for failing to meet fiduciary duties.

The SEC’s control over mandatory disclosures also plays a vital role in regulating tech. Public companies must disclose material risks, including cybersecurity breaches, privacy concerns, and biases in AI systems. For example, if a company uses AI systems that inadvertently discriminate against certain demographic groups, the SEC may require that the company disclose these risks to its shareholders. Failing to disclose such material risks could lead to allegations of fraud or misrepresentation, as the company would not be providing investors with a full and fair picture of the potential liabilities it faces.

Additionally, the SEC ensures that companies have proper internal controls to manage AI-related risks, especially when AI handles sensitive data or makes automated decisions affecting people’s rights. The SEC can require stricter governance, such as ethics boards or third-party audits, to address risks such as bias or data misuse. By enforcing these standards, the SEC not only protects investors but also ensures companies uphold public welfare in the face of AI’s potential harms.

Conclusion

The regulation of AI through federal agencies presents a complex interplay of opportunities and challenges. On the one hand, agencies bring technical expertise, speed, and a federal-level perspective that are essential in managing the complexities and rapid developments of AI technology. Their ability to influence industry practices informally, bypass certain legal barriers, and coordinate across multiple domains offers a powerful mechanism for holding AI accountable and ensuring that its benefits are realized while minimizing its risks.

On the other hand, agencies face substantial hurdles, including financial and political constraints, challenges in attracting necessary expertise, and increasing judicial scrutiny. These challenges can limit their effectiveness and create uncertainty in the regulatory landscape. As AI continues to evolve, the role of agencies in regulating this transformative technology will likely remain a dynamic and contested space, requiring ongoing adaptation and engagement from all stakeholders involved.

While agency regulation is not a panacea for all the challenges posed by AI, it remains a critical tool in the broader effort to ensure that AI is developed and deployed in ways that are safe, ethical, and beneficial to society. Strengthening and supporting agencies in their regulatory missions, while also remaining vigilant to the risks of overreach, will be essential for navigating the complex and rapidly changing landscape of AI governance.

Chinmayi Sharma and Alan Z. Rozenshtein. Published courtesy of Lawfare.
