The article is here; the Introduction:
Over the past several decades, a combination of a laissez-faire regulatory environment and Section 230’s statutory protections for platform content-moderation decisions has largely foreclosed the development of First Amendment doctrine on platform content moderation. But the conventional wisdom has been that the First Amendment would protect most platform operations even if this regulatory shield were stripped away. The simplest path to this conclusion follows what we call the “editorial analogy,” which holds that a platform deciding what content to carry, remove, promote, or demote is in basically the same position—with the same robust First Amendment protections—as a newspaper editorial board deciding which op-eds to run.
While formally appealing, this analogy operates at such a high level of abstraction that one might just as plausibly characterize platforms as more akin to governments—institutions whose power over speech requires democratic checks rather than constitutional protection. These competing analogies point in opposite directions: one treats platforms as democracy-enhancing speakers deserving autonomy; the other as institutional censors warranting regulation.
A circuit split over which analogy to follow prompted the Supreme Court’s decision last Term in Moody v. NetChoice, LLC. The Eleventh Circuit had invalidated Florida’s content-moderation law as an unconstitutional interference with platforms’ editorial discretion. The Fifth Circuit upheld Texas’s similar law based on the traditional understanding that common carriers—in this case social platforms—are appropriately subject to anti-discrimination requirements.
The Court found both of these stories too tidy.
All of the Justices agreed that some platform moderation decisions are “editorial” and speech-like in nature. But they also agreed that this protection might vary across platforms, services, and moderation methods. Unable to resolve these nuances on a sparse record, the Court remanded for more detailed factual development about how these laws would actually operate.
While Moody can fairly be characterized as a punt—merely postponing hard constitutional questions—its very reluctance to embrace categorical analogies marks a significant shift. Simply by characterizing direct regulation of platform content moderation as a complex question requiring close, fact-specific analysis, Moody upsets tech litigants’ basic strategy and suggests a more nuanced First Amendment jurisprudence than many expected. Moreover, the Justices’ various opinions offer revealing glimpses of why traditional analogies fail to capture platforms’ novel characteristics.
This Article examines Moody’s implications for platform regulation. Part I traces the development of the First Amendment’s protections for “editorial discretion” and the political controversies that prompted the state laws. Part II analyzes the Justices’ competing approaches. Part III explores Moody’s immediate impact on litigation strategy, explaining how its skepticism toward facial challenges will reshape tech-industry resistance to regulation, while arguing that the decision leaves surprising room for carefully designed rules that can withstand more focused constitutional scrutiny. Part IV proposes moving beyond editorial analogies to focus on platforms’ actual effects on user speech—an approach that we have endorsed elsewhere and that we believe better serves First Amendment values in the digital age.