
Kennedy School Review


How Congress Should Regulate AI in the Short-term

AI presents a perfect regulatory storm. Like nuclear weapons, it could end humanity. Like bioweapons, non-state actors can develop and deploy it. Like social media, regulators and policymakers appear unwilling or unable to grasp the seriousness of its short- and long-term risks. This storm will not subside; AI development will continue regardless of the regulatory actions Congress takes. The proper regulatory response, then, should not include futile attempts to stop inevitable research and development or discourage beneficial and innovative uses of AI. Instead, Congress should focus on making critical infrastructure more resilient to the worst-case scenarios AI presents.

In the short run, this is a relatively straightforward task that legislators should accomplish sooner rather than later, but the regulatory clock is already ticking. Congress should prioritize establishing regulations that accomplish two goals: first, informing stakeholders—private, public, and nongovernmental—of the likelihood and severity of such scenarios; second, penalizing companies that inadequately monitor and respond to actions that make those scenarios more likely. In other words, the sooner Congress starts measuring and monitoring the risks posed by AI, the sooner it can develop durable regulatory frameworks to mitigate those risks.

We will save for another article a discussion of how Congress should act once it better understands AI risks and how it should go about mitigating them.

Thankfully, Congress can look to a pre-existing risk-based regulatory framework to kickstart this effort: data breach laws. Cyberattacks, like disruptions caused by AI, are inevitable—regulators have accepted that bad actors have greater technological capacity and resources than their targets. Rather than attempt to play defense against every attack, Congress and state legislatures have enacted data breach laws intended to reduce the odds and severity of such attacks, shifting some of the onus onto companies to proactively institute reasonable security and to respond effectively to protect consumers when attacks inevitably occur. This risk-based approach to regulation has several downsides, but it nevertheless offers important lessons for AI regulation.

Before diving further into the merits of applying a data breach framework to AI, it is important to outline why Congress should approach AI regulation from a risk-reduction standpoint. AI poses existential risk—defined by Oxford philosophy professor Nick Bostrom as a risk that “threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development.” Inadequate regulation of AI may manifest in the following worst-case scenarios, all of which present existential risk according to the AI research non-profit the Center for AI Safety:

  • Weaponization of AI by malicious actors
  • Undue and excessive delegation of human tasks to AI, causing humans to lose control over their own institutions, economies, and cultures
  • Widespread disinformation that undermines the ability of humans to take collective action and imperils the authority of governing institutions
  • Power-seeking AI that seizes control over major private and public institutions
  • Autocratic regimes gaining greater influence and power through oppressive uses of AI
  • AI unexpectedly and rapidly developing more power than anticipated, leaving stakeholders with too few means and too little time to mitigate its risks
  • AI developing the capacity to evade and trick efforts to monitor and understand its activities

These risks differ in scale from the risks commonly associated with AI, such as job loss from increased automation. The latter merit attention, but it is the existential risks that demand Congress institute a risk-reduction approach to AI in the coming months.

The existential risks outlined above may seem fantastical. Isn’t “power-seeking AI” the plot of myriad Hollywood movies? Haven’t Luddites been warning about a technological takeover since the telegraph? Why is this time any different?

It may not be.

Over time, fear of the existential risks outlined above may prove to have been overblown. For now, Congress cannot take that gamble. Legislators have a responsibility to develop a better understanding of such risks before worst-case scenarios become unavoidable futures. Tellingly, the creators of this technology agree that now is not the time to underestimate its risks. Industry leaders, such as Sam Altman of OpenAI, openly share a fear that they are creating a technology that will cause significant harm to the world. Altman went so far as to testify before Congress to ensure elected officials understood the scale and magnitude of AI risks and the myriad harms the technology could bring about.

Adapting a Data Breach Regulatory Framework

As cyberattacks proliferated and increasingly impacted private companies and consumers, California became the first state to enact a data breach law in 2003, mandating notice to individuals and state regulators following discovery of a breach. In the twenty years since, every state has enacted its own data breach law. These laws come in every shape and size, creating a true regulatory patchwork: they define covered personal information differently, require notifications at different thresholds, and provide for different exceptions and penalties. Their impetus is generally the same, however. They take a risk-reduction approach to the ever-present threat of cyberattacks, offering incentives for proactive preparation and the formalization of information security best practices, as well as penalties to account for consumer harm caused by data breaches.

Data breach laws have a few key characteristics that should inform AI regulation. 

First, the majority (34) of state data breach laws incentivize having written information security policies in place. Data breach statutes vary in their notification requirements, with most requiring written, telephonic, or electronic notification and others specifying exactly what must be included in the notification to individuals. An information security exemption gives an entity some reprieve from these prescriptive notification requirements. For example, Virginia’s data breach statute includes the following provision: “An entity that maintains its own notification procedures as part of an information privacy or security policy for the treatment of personal information that are consistent with the timing requirements of this section shall be deemed to be in compliance with the notification requirements of this section if it notifies residents of the Commonwealth in accordance with its procedures in the event of a breach of the security of the system.”

Information security policy exemptions function as an incentive to entities subject to data breach laws. They encourage the adoption of such policies, which can help an entity understand and organize its business around certain cybersecurity risks. Information security policies help entities prepare for the worst, and regulators in many states reward that preparation by allowing entities to rely on their internal procedures rather than conform to the state’s notification requirements. 

Second, state data breach laws commonly exempt certain regulated sectors, including healthcare and finance, from notification requirements. Both are governed by strict data privacy and security regulations: the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the Gramm-Leach-Bliley Act (GLBA), respectively. Certain states lean on those existing regulatory frameworks, allowing entities in compliance with HIPAA, GLBA, or another approved primary regulator to forgo the state’s notification requirements. These provisions recognize the limits of legislating a rapidly evolving landscape. Data breaches and security incidents don’t look the same as they did in 2003; neither does the security we expect companies to provide for our personal data. By leaning on existing regulatory structures, states give themselves more flexibility. Those regulatory regimes, which often have more onerous security requirements, are closer to the entities they regulate and likely better able to adapt to changing technologies. If state legislatures tried to amend data breach laws to keep up with technology, there would be no winners: laws would consistently lag behind the technology, and lawmakers would spend valuable time and capital inefficiently.

Particularly as AI continues to develop, the technology will have vastly different impacts in different sectors. Privacy and security concerns stemming from AI in the healthcare industry will look different than in the critical infrastructure sector. Executive rulemaking can help bridge the gap and be more responsive to specific industry concerns, while an overarching law, like the data breach statutes or a congressional AI bill, can address incentives and risks across sectors.

Additional Policy Considerations

Though data breach laws provide a useful framework for the development of AI regulations, there is still room for improvement. If Congress opts to emulate its approach to data breaches in the AI context, it should strengthen that approach in the following ways.

First, expanding the technical competence of regulators. If the gap in technical expertise between industry leaders and regulators becomes too large, then all the risk detection in the world will have a limited effect on Congress actually mitigating those risks. Congress should take companies like OpenAI up on their supposed willingness to collaborate with regulators in the creation of responsive policies by educating a cohort of government AI experts. These experts could spend months, if not years, embedded within companies learning from the world’s foremost experts in what could be world-ending technological development. State data breach laws have been criticized for their unresponsiveness to the changing technical landscape. Incorporating the expertise of companies developing AI should be a regulatory priority. 

Second, increasing incentives to err on the side of caution. In short, Congress should create a fine and penalty regime that spurs compliance among regulated entities. Despite industry leaders such as Altman admitting that they do not know just how risky their technology may be, AI developers have not slowed their R&D. One lesson from consumer protection laws generally is that some companies treat fines resulting from a breach as a cost of doing business they can stomach and, therefore, comply only minimally with regulations. The existential risks of AI mean the size of penalties and the likelihood of enforcement must prevent a similar willingness to squeak by—too many minimally compliant companies would raise existential risk to an unacceptable degree.

Third, setting a federal legislative standard instead of leaving the door open for a patchwork approach. Though the state-by-state approach to data breach governance has some benefits, the costs may be higher. On the positive side of the ledger, states can more quickly implement and amend their regulations to reduce risks from emerging technology. Perhaps unintentionally, the patchwork approach itself makes those risks less likely because regulated entities must pay closer attention to the nuances of each state’s approach. This increased attention to regulation necessarily forces companies to move slower and break fewer things.

On the negative side, this patchwork approach may delay or undermine a more comprehensive and sustainable approach to risk reduction. The ongoing struggle to pass a nationwide privacy law demonstrates this negative side effect: states with more onerous requirements have objected to federal legislation that may preempt and weaken those requirements. Both AI developers and the public would benefit from avoiding a similar struggle. A federal law could provide developers with the sort of legal clarity and stability required to innovate, and it could ensure all Americans have the necessary protection from AI’s existential risks rather than allowing some AI-risk-tolerant states to serve as the breeding ground for the proper policy response.

Conclusion

Now is not the time for novel regulatory approaches. Congress must take the existential risks of AI seriously and deploy pre-existing risk assessment tools and regulations to better understand, monitor, and mitigate those risks. Radical ideas have a time and place—right now, though, such ideas will only increase the odds of prolonged congressional debate.

Congress has already taken action in line with a risk-reduction approach to AI; in 2021, it directed the National Institute of Standards and Technology to develop a risk framework for AI systems. In an era marked by partisanship and gridlock, and on an issue that presents as many and as significant risks as AI, the path of least regulatory resistance ought to be followed, at least in the short term.


Kevin Frazier is an incoming Assistant Professor at the Benjamin L. Crump College of Law at St. Thomas University, where he’ll continue his research on the intersection of democratic governance, emerging technology, and the law. Prior to joining academia, Kevin served as a judicial clerk on the Montana Supreme Court. He is a proud graduate of the UC Berkeley School of Law and Harvard Kennedy School. He tweets using @KevinTFrazier.

Mari Dugas is an attorney at Cooley LLP, focused on privacy, data protection, and cybersecurity. She is a Certified Information Privacy Professional for the US (CIPP/US). Before joining Cooley, Mari served as a legal intern in the Office of the Staff Judge Advocate of the US Cyber Command, and was an author and managing student staff editor for the online legal forum Just Security. Before attending law school, Mari worked in cybersecurity and election security policy at the Harvard Kennedy School’s Belfer Center.

Photo credit: Harold Mendoza via Unsplash