The era of overbroad facial challenges to tech regulation is over.
The Supreme Court’s recent decision in Moody v. NetChoice was, for many a court watcher, something of a disappointment. The Court did not, as many (ourselves included) hoped, rule on the merits of whether the Florida and Texas laws limiting the content moderation decisions of social media platforms violated the First Amendment. A majority of the Court reaffirmed that content moderation is a kind of editorial discretion protected by the First Amendment, and it strongly suggested that, on remand, the lower courts should find that the Texas law (and likely the Florida law as well) violates the large social media platforms’ First Amendment right to moderate their public news feeds. But that part of the Court’s opinion was, as Justice Samuel Alito noted in his concurrence, “nonbinding dicta.”
That is because the actual holding of Moody is a more technical one about what is required for a law to be struck down “on its face.” In constitutional litigation, courts prefer to hear claims that a law is unconstitutional “as applied” to a specific set of facts—in other words, that a particular enforcement scenario would be unconstitutional and that enforcement in that scenario should be prohibited. By contrast, a facial challenge seeks to strike down a law in its entirety, typically because it is unconstitutional in all of its applications. In First Amendment cases, the standard for a facial challenge is somewhat lower, requiring, as the Moody Court explained, only that “a substantial number of [the law’s] applications are unconstitutional, judged in relation to the statute’s plainly legitimate sweep.” This is how NetChoice chose to challenge the Florida and Texas laws—as generally unconstitutional.
The problem, in the Court’s eyes, was that neither NetChoice nor the district and circuit courts actually performed the requisite facial-challenge analysis. Rather, throughout both sets of litigation, everyone involved treated the cases as if they had been brought as applied to the narrow “heartland applications” of public news feeds on large social media platforms. This left open questions about many other important applications, such as whether the laws even applied to email or direct messaging services like Gmail and WhatsApp, or to sites like Etsy and Uber, where hosting user content is secondary to the main business—and, if they did apply, whether they were constitutional. The record as it stood was thus insufficient for the Supreme Court to say whether, as a facial challenge requires, the unconstitutional applications of the statute outweighed the constitutional ones.
Moody may therefore seem like merely a “procedural” ruling, but (and not just because we are law professors) this hugely undersells its importance. Whatever happens on remand—whether or not the Texas and Florida laws are actually adjudged unconstitutional when applied to public social-media news feeds—the Court’s holding as to what is required for First Amendment facial challenges against laws regulating digital technology is significant, and will limit the technology industry’s ability to resist the growing tide of government regulation.
That is because narrow, as-applied First Amendment litigation is probably incapable of delivering tech companies the laissez-faire regulatory environment they have come to expect. Indeed, the tech industry may be uniquely dependent on this kind of broad, preemptive judicial protection. Internet-based services are constantly rolling out new features. This state of constant self-overhaul is a major advantage for tech companies looking to achieve “disruption” by evading static regulatory frameworks and eating competitors’ lunches. But perpetual reinvention becomes a disadvantage for a company trying to lock in a solid regulatory shelter against a broadly written law. Applying the law to the new features may raise new questions and call for new analysis under the First Amendment. In other words, a shapeshifting tech product that wins an as-applied challenge is protected only until it shapeshifts again. In this sense, a company will struggle to engage in a strategy of regulatory or market disruption without disrupting its own legal security in the process.
And as-applied challenges may also have serious limits even for platforms that don’t change much over time. This is because each platform is a microcosm containing a wide variety of activities and situations. When a law is written broadly—as were Florida’s and Texas’s—and the range of covered activities even within one platform is kaleidoscopically complex, it is doubtful that an as-applied challenge with an appropriately narrow focus can offer much security to the whole operation.
For decades, tech companies have been able to get tort claims dismissed on a basically automatic basis under Section 230. A group like NetChoice could be forgiven for assuming until very recently that First Amendment litigation would work in a similar way, allowing tech interests to get quick, paint-by-numbers court orders at an early enough stage to stop new regulation from getting off the ground. After all, the Supreme Court’s few previous First Amendment cases dealing with direct regulation of the internet all involved resoundingly successful facial challenges. And other recent First Amendment cases displayed a business-friendly, state-hostile approach that seemed highly favorable in spirit to the tech platforms’ interests.
It is not surprising, then, that First Amendment challenges brought by online platforms or their surrogates against new internet regulations in recent years have generally come as facial challenges with requests for pre-enforcement injunctive relief. NetChoice and other groups have won pre-enforcement preliminary injunctions in cases challenging state children’s “design code” laws, state age-verification laws for porn sites and social media sites, and multiple attempts to “ban” TikTok or force it to divest (see also the latest litigation in the D.C. Circuit).
But, as others have noted, Moody puts a large stumbling block in front of such strategies. As the Moody Court observed, “The online world is variegated and complex, encompassing an ever-growing number of apps, services, functionalities, and methods for communication and connection.” And that’s not all—even a single service may offer a whole variety of subservices and host a wide range of third-party activities. Enforcement of a law that ranges over all these situations is likely to apply in many distinct ways that all require their own separate First Amendment analysis.
Justice Amy Coney Barrett, for example, noted in her concurring opinion that there may not be a single answer to the scope of First Amendment protections even when it comes to public social-media news feeds. She noted that when algorithms implement human-designed content policies, they likely enjoy First Amendment protection. But she then questioned whether purely automated algorithms that simply show users content similar to what they’ve engaged with before would receive the same protection.
None of this means that Moody is fatal to facial challenges. For example, a district court recently sustained NetChoice’s facial challenge to a Mississippi social-media age-verification law, even after citing Moody. But the decision is clearly making such challenges harder—in July, for example, a district court rejected a First Amendment facial challenge to a state statute restricting how digital advertising platforms pass along a tax to consumers, citing Moody throughout. Moody also played a central role in the recent Ninth Circuit oral argument in NetChoice v. Bonta, a facial challenge to California’s children’s “design code” law. Citing Moody, several of the judges seemed skeptical of NetChoice’s facial-challenge strategy.
Moody will likely have wider-ranging effects than just leaving Silicon Valley in a worse litigating posture (though that alone would be notable)—it may also advance First Amendment doctrine itself. In principle, a facial challenge under the First Amendment requires a court to take stock of the full range of a law’s applications, both constitutional and unconstitutional. But that is a hard thing for courts to do in practice, all the more so at a preliminary stage, before the law has gone into effect. Lawyers are trained to spot issues, not non-issues, and naturally enough, a set of hypothetical applications that would violate the Constitution can focus the judicial mind in a way that a much broader set of “nothing to see here” hypotheticals might not. So it is easy to see why a court, once it is sufficiently concerned that some substantial number of a law’s applications would likely violate the First Amendment, might be tempted to condemn the whole provision without thoroughly exploring the landscape of hypothetical non-events that would not violate the First Amendment.
But rushing to judgment in this manner can produce a lopsided body of case law that primarily defines what governments can’t do rather than clarifying what they can do within constitutional bounds. The resulting “negativity bias” hands a huge strategic advantage to tech litigants hoping to fend off new regulation, but it has probably stunted the development of First Amendment jurisprudence. Moody’s insistence on a more rigorous analysis of all of a law’s applications may therefore lead to a more nuanced and balanced First Amendment jurisprudence, particularly as it relates to the regulation of digital platforms.
By forcing courts to consider and rule on the constitutionality of specific applications, Moody challenges courts to develop clearer guidelines on permissible regulation in the digital sphere, potentially allowing for more effective and constitutionally sound legislation in the future. It remains to be seen how much judicial clarity will ultimately emerge. But one thing we can already say confidently after Moody is that tech’s First Amendment stock appears to have been overvalued—and that some kind of correction is underway.
– Kyle Langvardt and Alan Z. Rozenshtein. Published courtesy of Lawfare.