The Chaos at OpenAI is a Death Knell for AI Self-Regulation

Sam Altman speaking in San Francisco, September 2015. (TechCrunch).

If society wants to slow down the rollout of this potentially epochal technology, it will have to do it the old-fashioned way: through top-down government regulation.

It was a wild week at OpenAI, the artificial intelligence (AI) company most famous for its immensely successful ChatGPT service. Sam Altman, the company’s CEO and arguably the most important person in the race to develop artificial general intelligence (AGI), was fired by OpenAI’s nonprofit board, which (although details are still sketchy) was concerned that Altman was not moving cautiously enough in light of the dangers that AI could pose to society.

But the board’s actions appear to have backfired badly. No sooner was Altman fired than Microsoft, which has a close partnership with OpenAI, announced that it was hiring him to head an internal AI research division. And, in the face of a near-total revolt by OpenAI’s employees, the board ultimately agreed to bring Altman back as CEO, with several of the members who fired him in the first place resigning.

It’s been a dramatic story—equal parts entertaining and baffling—and one that will no doubt be discussed for years within the technology industry. But beyond its ramifications within Silicon Valley, it also provides an important lesson to would-be regulators of AI: The possibility of meaningful self-regulation, especially through clever corporate forms, is a chimera. The power struggle within the company and the ultimate failure of the nonprofit board to maintain control of an increasingly commercialization-minded company is a reminder that, if society wants to slow down the rollout of this potentially epochal technology, it will have to do it the old-fashioned way: through top-down government regulation.

OpenAI is notable not only for its incredible technological advances but also for its complicated and unusual corporate form—and for the unprecedented decisions that form has produced. When OpenAI was founded in 2015, it was registered as a nonprofit research organization meant to help develop AGI in a way that would benefit humanity.

But AI, especially AGI, is expensive. As it became clear that a pure nonprofit entity would not have enough resources to spend on the data and computation needed to train and run highly sophisticated models, OpenAI restructured in 2019, launching a “capped” for-profit subsidiary legally required to uphold the nonprofit’s mission—building safe AGI that “benefits all of humanity”—and limited in the financial returns it can provide to investors. The guarantor of this mission, unusual at least by Silicon Valley standards, was the nonprofit’s board, which controlled the nonprofit and, through it, the for-profit subsidiary. The board was not the usual crew of tech boosters; it included serious critics of rapid AI development, most notably Helen Toner.

Over time, however, OpenAI’s values began to shift. According to its IRS filings, by 2018 the company was no longer touting its commitment to “openly share our plans and capabilities along the way”; and by 2021 its goal had become to “build general-purpose artificial intelligence,” a phrase that signals commercial productization, rather than the more open-ended, research-oriented mission to “advance digital intelligence.”

Concurrent with the creation of its for-profit subsidiary, OpenAI entered into a “strategic partnership” with Microsoft, under which Microsoft has invested upward of $13 billion in OpenAI, in large part in the form of computing credits on Azure, Microsoft’s cloud platform. Microsoft reportedly owns 49 percent of the for-profit arm and, until it recoups its investment, will receive 75 percent of OpenAI’s profits. The relationship has only deepened since—in 2020, Microsoft purchased an exclusive license to GPT-3, the family of models underlying ChatGPT. With a for-profit subsidiary and a tech giant as partial owner, suffice it to say OpenAI is no longer purely a research organization.

If all of this sounds a touch elaborate—incoherent, even—it’s because it is. It is a Russian doll of alternating research and profit-driven entities. Nonprofit startups may not be uncommon, but the tension between the goals of different arms can certainly create chaos. Once OpenAI went down the path of commercialization—even in service of the lofty goal of researching how to make AGI safe for humanity—it was inevitable that these two motivations would slam into each other down the line. 

OpenAI’s labyrinthine corporate structure aside, the church of innovation has always housed two schools of thought: those driven primarily by science and those driven primarily by bringing products to market. Throw in AI, and the features and bugs of each approach become existential and cataclysmic—saving humanity versus preventing the end of the world. These tensions within OpenAI between “boosters,” who want to accelerate the deployment of powerful AI systems, and “doomers,” who worry about the systems’ existential dangers, had been simmering below the surface long before Altman’s ousting. The issue first came to a head in 2020, when 11 OpenAI employees, led by then-vice president of research Dario Amodei and disillusioned with the company’s shift away from safety-minded research, left to establish Anthropic, an AGI research company that claims to embody the cautious approach the ex-employees felt OpenAI had abandoned. This schism foreshadowed the second crescendo of OpenAI’s faction warfare when, in 2022, the company released ChatGPT, alienating a portion of employees who felt the decision was premature and irresponsible.

These doomers, including OpenAI’s chief scientist Ilya Sutskever, have understandably been concerned that, once OpenAI started making billions of dollars and had customers to satisfy, business imperatives would crowd out OpenAI’s original mission of cautious research. And the looming presence of Microsoft—famously cutthroat even by Silicon Valley’s standards, and eager to embed AI in as much of its ecosystem as possible, no matter the risks—couldn’t have made the position of Sutskever and other go-slow advocates within OpenAI easier.

On the other side, the boosters, led by Sam Altman himself, wanted to bring this powerful new technology to the public quickly while continuing to progress toward true AGI. As the former president of Y Combinator, a start-up accelerator whose specific purpose is to churn out high-impact, disruptive technology companies as quickly as possible, Altman was seen as a driving force behind ChatGPT’s public release and the rapid pace of OpenAI’s subsequent productization of foundation models. Ironically, ChatGPT’s rapid release was reportedly motivated in part by fears that safety-minded Anthropic was developing its own chatbot. Altman beat them to the punch.

Nor did Altman’s AI evangelism stop with OpenAI; his deals outside the company appear to contradict its original mission of slow, cautious research. In the weeks prior to his firing, Altman was reportedly seeking funding for a new AI chip venture. A larger supply of AI chips would jump-start AI development, further incentivizing companies to release new models faster.

The internal game of chicken between doomers and boosters came to its third climax when the board fired Altman on Nov. 17. While the precipitating factor is still unclear, shortly before firing Altman, the board reportedly received a letter from several OpenAI researchers warning that an algorithmic breakthrough known as Q* could dramatically shorten the time needed to achieve AGI. Perhaps this sudden realization of how far OpenAI’s research had advanced was one reason some members of the board felt that Altman had not been “consistently candid in his communications with the board” (though it remains unclear how the board could have remained in the dark about OpenAI’s research, given that one of the board members was Sutskever himself).

Even before the letter, some members of the board appeared to have lost confidence in OpenAI’s commitment to safety and cautious research, with its key skeptic, Helen Toner, authoring a paper lauding Anthropic’s approach over OpenAI’s. A festering lack of faith in OpenAI’s responsible practices, combined with the realization that AGI wasn’t nearly as far off as originally thought, fueled doubts about Altman’s leadership. To underscore how much the decision was driven by safety concerns, the board reportedly approached Amodei, Anthropic’s CEO, about taking over as OpenAI’s CEO and about a potential merger between the two companies.

In the end, rightly or wrongly, the board decided that Altman was not serving OpenAI’s mission of cautious AI research—emphasis on “cautious”—and fired him, as was its right. Altman himself has emphasized how “important” it is that the board could fire him, since “no one person” should be trusted to control AI. 

But it’s pretty clear that the board’s attempt to depose Altman has backfired badly. Altman quickly received almost universal support from the rest of Silicon Valley. And, most tellingly, over 700 of OpenAI’s 770 employees—a remarkable proportion considering that many doomers had left OpenAI to join Anthropic—signed a letter demanding that Altman be reinstated and that the board resign. Bizarrely, Sutskever himself, said to have played a key role in Altman’s ouster, also signed the letter (Sutskever has since said that he “deeply regrets” his participation and “will do everything I can to reunite the company”). 

The board’s decision backfired in more ways than one. If its primary motivation was to preserve OpenAI’s mission of cautious research, presumably to protect the public from the release of harmful AI, then it clearly did not anticipate the way the weekend would unfold. Shortly after Altman was fired, Microsoft first stated that it had no knowledge of the coup and then quickly swooped in to hire Altman and former OpenAI president Greg Brockman, who had resigned in solidarity with Altman and had previously had his own skirmishes with the board. The board’s decision thus threatened to move an AI evangelist out of an environment with some built-in checks on his ambitions for deploying the technology and into one in which, backed by corporate resources, he would be free—indeed encouraged—to build and release foundation models as fast as possible.

In the end, Altman did not go to Microsoft, but that’s only because the board, bowing to the inevitable, hired Altman back. Not only that, but two of the board members who voted to fire Altman—Toner and technology entrepreneur Tasha McCauley—resigned (a third member who ousted Altman, Quora CEO Adam D’Angelo, remains on the board for now). The fall of the public face of responsible AI research and his reemergence, first as a prospective hire of an aggressively for-profit technology company and then back at the helm of OpenAI (now with a neutered board), shows the fragility of relying on industry self-governance and casts serious doubt on the efficacy and reliability of the myriad voluntary commitments companies have made toward responsible AI. A company that values both AI safety and bringing AI to market will always have to adjudicate between the two, and commercial imperatives are hard to resist in the long term, both within a company and, especially, in the broader market.

The episode also has broader implications for where AI talent and investment go. When the choice is between questionable job security at an organization like OpenAI, whose board admitted that destroying the company might be “consistent with the mission” of ensuring AI safety, and guaranteed job security at a company that needs all the talent it can get to push out more products faster, it’s not hard to guess where the talent will go. Beyond talent, the episode may influence where venture funding flows. Less visible, though just as important, if not more so, is its effect on how much the AI industry heeds the warnings of doomers and respects the principles of cautious research moving forward—or whether bias, security, and privacy get lost in the shuffle.

So far, the chaos at OpenAI is most relevant for the company, its employees and investors, and the broader AI community. But lessons for would-be regulators are already coming into focus. Specifically, it’s clear that, to the extent that AI poses systemic risks to society, relying on AI companies to self-regulate is not a realistic option. The pressures—whether scientific or financial—to keep pushing the envelope are too great, and there’s simply no credible way for AI companies to commit to moving slowly and not breaking too many things.

To date, in the United States, the federal approach to AI has consisted of voluntary commitments, guidelines, and an executive order that mostly seeks to kick off more studies and guidance, while imposing some requirements on federal contractors. Several of the most hard-hitting provisions apply only to models that have yet to be developed. That is not a bad place to start, but it remains uncertain whether, when, and how binding obligations will be developed. It’s not that such obligations are not being considered—the same document that outlines the voluntary AI commitments describes them as “only a first step in developing and enforcing binding obligations to ensure safety, security, and trust.” To properly regulate AI, “new laws, rules, oversight, and enforcement” will be required.

This approach is reminiscent of another area of tech regulation: cybersecurity. For decades, technology companies were shielded in a variety of ways from accountability for shipping insecure products. Even after the societal threat of poor cybersecurity became abundantly clear—to the average individual or business and to critical infrastructure as a whole—the government opted to push voluntary frameworks rather than concrete requirements for improved cybersecurity. This approach has been widely considered a failure, a recognition that culminated in the Biden administration’s National Cybersecurity Strategy, which called for greater accountability for software manufacturers. The document underscores the conclusion the security community came to long ago: Industry incentives are not only inadequate to foster strong cybersecurity; they often run contrary to it. It doesn’t pay to slow down development in the interest of secure design. The same lesson applies to AI—when industry incentives and the public interest are misaligned, even the best-intentioned companies cannot resist the pull of market forces.

None of this is to comment on what top-down AI regulation should look like, or even whether top-down regulation is, all things considered, the appropriate strategy. AI accelerationists, focused on the potentially massive social, economic, and technological upsides of AI, plausibly worry that regulation will simply get in the way of those benefits. This is not the place to adjudicate between the boosters and the doomers.

But whatever the right answer, society can’t just trust the technology industry to arrive at it. If society decides that AI needs to be regulated, the recent meltdown at OpenAI demonstrates why this regulation will have to come from the top. 

– Published courtesy of Lawfare. By Eugenia Lostri, Alan Z. Rozenshtein, and Chinmayi Sharma.
