Welcome to the first Issue of Volume 76 of the Federal Communications Law Journal, the nation’s premier communications law journal and the official journal of the Federal Communications Bar Association (FCBA). We are excited to present this Issue, which showcases the diverse range of issues encompassed by technology and communications law. This Issue provides analysis and insight into the future regulation of facial recognition technology and social media companies, as well as the implications of generative Artificial Intelligence (AI) in the art world.
This Issue begins with an article from Lawrence J. Spiwak, Esq., President of the Phoenix Center for Advanced Legal & Economic Public Policy Studies. His Article analyzes the ongoing proposal to consider social media companies to be “common carriers” from a regulatory perspective, filling the analytical gap of how such a regime might work and examining the intended and unintended consequences of such a proposal.
This Issue also features four student Notes, all of which explore innovative ways to apply existing frameworks to novel technology issues.
First, Ileana Thompson explores how the multi-district litigation against opioid manufacturers for their role in the opioid epidemic may serve as a framework for future mass tort litigation against social media companies whose algorithms are designed to drive social media addiction.
In our second Note, Katherine Wirvin argues for the adoption of a slightly modified Coogan Law to protect the financial interests of minors who are YouTube stars (or, “KidTubers”).
In our third Note, Catherine Ryan explores the threat to individual rights posed by facial recognition technology and advocates for an expansion of human rights to include the right to one’s own facial biometric data.
Finally, David Silverman proposes that the fair use doctrine used in the Supreme Court decision Google v. Oracle be applied to AI image generation in the world of art.
The Editorial Board of Volume 76 would like to thank the FCBA and The George Washington University Law School for their continued support of the Journal. We also appreciate the hard work of the authors and editors who contributed to this Issue.
The Federal Communications Law Journal is committed to providing its readers with in-depth coverage of relevant communication law topics. We welcome your feedback and encourage the submission of articles for publication consideration. Please direct any questions or comments about this Issue to email@example.com. Articles can be sent to firstname.lastname@example.org. This Issue and our archive are available at http://www.fclj.org.
By Lawrence J. Spiwak
The debate over how internet platforms moderate content has reached a fever pitch. To get around First Amendment concerns, some proponents of content moderation regulation argue that internet platforms should be regulated as “common carriers”—that is, internet platforms should be legally obligated to serve all comers without discrimination. As these proponents regularly point to communications law as an analytical template, it appears that the term “common carrier” has become a euphemism for full-blown public utility regulation complete with a dedicated regulator. However, proponents of common carrier regulation provide no details about how this regime would work. Viewing the question through a regulatory—as opposed to a First Amendment—lens, the purpose of this paper is to offer a few insights on how to fill that analytical gap, and to ask if we will be happy with the inevitable consequences (intended and unintended) if we proceed down that road. To provide context, this paper begins with a brief overview of the legal origins of the “internet platforms are common carriers” argument as a strategy to overcome First Amendment concerns. Next, this paper reviews the prominent academic literature arguing for internet platforms to be treated as common carriers, which draws upon direct analogies to the communications industry. However, if communications regulation is to provide the analytical template for internet platform regulation, then a more accurate understanding of communications law is required. Following this discussion, this paper reviews Justice Clarence Thomas’s concurrence in Biden v. Knight Foundation, along with the two cases—one from the Eleventh Circuit and one from the Fifth Circuit—in which, at the time of this writing, the Supreme Court has just granted certiorari and where the question of whether internet platforms may be treated as common carriers is at the heart of the dispute. 
The penultimate section of this paper outlines some of the important—yet unaddressed—legal questions that will arise should the Supreme Court ultimately rule that internet platforms are common carriers that could eventually be subject to some sort of public utility regulation. Concluding thoughts are at the end.
By Ileana Thompson
As the COVID-19 global pandemic forced everyone into isolation, social media use increased across all generations, particularly for individuals aged 18-24 years old. There is a growing body of scientific research studying the effects of social media use, and as that use continues to increase, the negative effects that ensue worsen. Notably, there is growing evidence that social media companies design their algorithms in ways that are intended to encourage continuous and excessive use of their product. When such use becomes excessive, the user may experience symptoms that mirror the behaviors associated with other forms of addiction, like opioid addiction. The addiction-like behaviors that result from excessive social media use have been described as social media addiction. If excessive social media use, and thus social media addiction rates, continue to increase, social media companies may be vulnerable to mass tort litigation for their role in the rise of social media addiction. The multi-district litigation against several of the major opioid manufacturing and retail companies for their role in the opioid addiction crisis provides a framework for how similar litigation may play out with respect to social media companies. Specifically, this Note will examine how social media companies and opioid manufacturing and/or retail companies share similar market structures, affect the brain and the individual in similar ways, and operate in similarly carte blanche regulatory regimes to propose that social media companies are similarly poised to face mass tort litigation for their role in the growing rates of social media addiction.
By Katherine Wirvin
Many child YouTube sensations have gained micro-celebrity status by garnering online followings through appearances on their parents’ “family vlogging” YouTube channels (known as the “children of family vloggers”). For other children, their influence comes from YouTube channels that feature the child opening and reviewing toys (known as “kidfluencers”). This Note nicknames these two types of social media child stars “KidTubers.” Regardless of how these kids gain their following, they generate income and opportunities for their families. However, these KidTubers do not have any legal protections entitling them to any of the income they generate through brand deals or monetized videos unless they live in Illinois, which just passed an amendment to its Child Labor Law, effective in 2024. The Fair Labor Standards Act (FLSA) exempts child stars from the Act’s protection, and even in states that have established protections for child actors, those protections do not extend to social media stars (barring Illinois). This Note examines the lack of income protections for KidTubers, both federally and state-to-state, and how most state protections for traditional child actors do not explicitly extend to social media stars. This Note then proposes a framework for expanding child actor labor laws to KidTubers through a federal child labor law. Specifically, it proposes a federal, Coogan Law-inspired child labor law that mirrors Pennsylvania’s current law (with slight modifications), which would allow KidTuber content to fall within the law’s already protected class of child performers.
By Catherine Ryan
Human rights were not bestowed upon humanity from some higher power, nor are they the result of impromptu global benevolence. They come about through grassroots advocacy for action in the face of some common practice that upends deeply held notions of humanity. In their ultimate form, they result in international coordination to identify the violation of rights and protect against it. This Note will argue that, through existing human rights conventions and customary international practices, the right to privacy of one’s facial biometric data is a human right, and facial recognition technology represents a serious threat to that right. This Note will then assert that the immediacy of the threat requires a coordinated effort to regulate the collection, storage, use, and sale of facial biometric data through domestic legislation, executive branch action, and international agreements borrowing from the United Nations Guiding Principles on Business and Human Rights, better known as the Ruggie Principles.
Burying the Black Box: AI Image Generation Platforms as Artists’ Tools in the Age of Google v. Oracle
By David Silverman
Though the advent and proliferation of art-generation platforms powered by artificial intelligence (“AI”) are relatively new hurdles with which modern artists must contend, these platforms have already had a profound impact on the world of art. Image-generation platforms interpret user-inputted text prompts by learning from millions of points of image data related to the prompts, then teaching themselves to synthesize and “unscramble” that data into one cohesive image. Under current copyright law, the doctrine of fair use protects works that use aspects or elements of copyrighted work, provided the new work is transformative of the original. Although courts have interpreted the term transformative to include an element of creative choice, how should courts view the data-gathering mechanism of an AI, which treats its data points more like code than artistic inspiration? The solution may lie in the Supreme Court’s decision in Google v. Oracle, where the Court held that Google’s use of a Sun Java API in its software development was protected by fair use because its use of the API was proportionally small, and the final product in which the API was included was distinguishably different from the API standing alone. AI image generators, as they currently exist, are black boxes, meaning that neither user nor programmer can know exactly what images the AI uses to teach itself how to generate an image based on a specific text prompt. This Note argues that as computer scientists learn how to determine exactly what points of data an AI uses to generate an image, the Supreme Court’s fair use analysis in Google v. Oracle should serve as the model for fair use analysis as applied to AI-generated art.