Regulating the U.S. Government’s Use of Artificial Intelligence

BY: Henry Rood

Artificial intelligence (“AI”), specifically large language models such as ChatGPT, is developing rapidly. AI’s immense utility has ensured mass adoption by businesses and the public alike. However, the United States government is only beginning to regulate its internal use of AI. What follows is an overview of recent efforts to regulate government use of AI and some major developments expected in the near future.

 

Signed by President Trump on December 3, 2020, Executive Order 13960 sets out principles for the internal use of AI. Order 13960 requires that government use of AI be lawful, purposeful, and effective. It also requires that the AI used be safe, secure, regularly monitored, and accountable. These principles are described only in broad terms, but they were intended to guide efforts to regulate the internal use of AI to “foster public trust and confidence while protecting privacy, civil rights, civil liberties, and American values, consistent with applicable law. . . .”

 

Following Order 13960, on December 27, 2020, Congress passed the AI in Government Act of 2020 as part of the Consolidated Appropriations Act of 2021. The Act created the AI Center of Excellence within the General Services Administration (GSA) to facilitate government adoption of AI. The Act also requires that the Office of Management and Budget (OMB), in coordination with the Director of the White House Office of Science and Technology Policy (OSTP) and the GSA Administrator, issue memoranda regulating government use of AI. OMB is expected to publish a draft memorandum in the Federal Register for public comment soon. The draft memorandum is expected to establish an AI governance board and to require that each agency name a chief AI officer. Most notably, the draft memorandum is expected to prohibit agency use of AI that impacts public safety or civil rights unless a waiver is obtained. The Biden-Harris administration has identified several areas susceptible to the harmful use of AI, including surveillance, voting, education, housing, employment, healthcare, and finance. The requirement serves to maintain full human oversight over important functions of government and to prevent unintended automated discrimination and misapplication of the law.

 

In accordance with the AI in Government Act, on July 6, 2023, the Office of Personnel Management (OPM) identified key skills and competencies needed for AI-related positions, such as sociotechnical competence, software engineering, application development, systems design, and data analysis. The memorandum shows that OPM is considering how to recruit the talent needed to improve the implementation and use of AI in the civil service.

 

In October 2022, the OSTP released the non-binding Blueprint for an AI Bill of Rights to promote safe and effective systems, algorithmic discrimination protections, and data privacy, among other considerations. Then, in July 2023, the Biden-Harris administration secured voluntary commitments from leading AI companies, such as Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, to manage the risks posed by AI. The voluntary commitments included: (1) internal and external security testing of AI systems before their release; (2) information-sharing across the industry and with governments, civil society, and academia; (3) facilitation of third-party discovery and reporting of vulnerabilities; and (4) public reporting of AI system capabilities, limitations, and areas of appropriate and inappropriate use. The voluntary commitments are intended to ensure the development of secure and trustworthy AI. While many have supported this measure, critics point to the commitments’ limited language, their lack of enforcement mechanisms, and the potential negative effects they may have on market competition.

 

Building on Executive Order 13985, which promotes racial equity and support for underserved communities, on February 16, 2023, President Biden issued an executive order focused on combating algorithmic discrimination. The content an AI system generates is shaped by the data on which it was trained, so careful review and curation of training data can mitigate AI bias. On September 27, 2023, President Biden announced his plan to sign an executive order this fall to ensure responsible innovation in the artificial intelligence industry.

 

In light of these recent efforts to regulate government use of AI, the question remains: are these vague executive orders and voluntary commitments currently enough?

 

Despite all the aspirational language, these efforts do little to regulate government use of AI or to establish procedures and remedies for harms resulting from its use. If the government continues to use AI while the technology develops at a rapid pace, the courts will need to fill in the gaps. Ultimately, the coming OMB regulations will determine the pace of regulation of government use of AI. The public should use the memorandum comment period to voice its concerns and to check the staunchly anti-regulation technology industry. The potential automation of vital governmental functions is at stake, and such automation could violate rights and cause harm at an unprecedented scale. The public should also be mindful of how government use of AI could affect the most vulnerable groups in society.

 

AI has the potential to revolutionize productivity in the civil service. However, there is still uncertainty about how best to implement and use AI while protecting privacy, civil rights, and civil liberties. For now, the government is encouraging the internal use of AI subject to the little regulation and oversight that currently exists. There is broad bipartisan support for further AI regulation; what remains to be seen is the specific form those regulations will take. The coming months will be filled with developments as AI continues to advance and the government continues to play catch-up.
