The use of artificial intelligence (AI) and other algorithmic tools is changing how government agencies do their work. As the Administrative Conference has recognized, these tools “hold out the promise of lowering the cost of completing government tasks and improving the quality, consistency, and predictability of agencies’ decisions.” At the same time, these tools “raise concerns about the full or partial displacement of human decision making and discretion.”[1] The Conference adopted Statement #20, Agency Use of Artificial Intelligence, in 2020 to help agencies consider when and how to use algorithmic tools appropriately.[2] More recently, it adopted specific recommendations addressing the use of algorithmic tools to review regulations,[3] manage public comments,[4] and provide guidance to the public.[5]
In this Recommendation, the Conference turns to the use of algorithmic tools in regulatory enforcement. An algorithmic tool is a computer-based process that “uses a series of rules or inferences drawn from data to transform specified inputs into outputs to make decisions or support decision making.”[6] Many agencies engage in regulatory enforcement—that is, detecting, investigating, and prosecuting potential violations of the laws they administer. These agencies are often “faced with assuring the compliance of an increasing number of entities and products without a corresponding growth in agency resources.”[7] As agencies seek ways to make regulatory compliance “more effective and less costly,”[8] many are considering how they can use algorithmic tools to perform regulatory enforcement tasks such as monitoring compliance; detecting potential noncompliance; identifying potential subjects for investigation, inspection, or audit; and gathering evidence to determine whether corrective action against a regulated person is warranted. Indeed, a report to the Conference analyzing the use of AI in federal administrative agencies found that “AI has made some of its most substantial inroads in the context of agency enforcement activities.”[9]
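By way of illustration only, even a very simple rule-based screening program falls within this definition. The following sketch, which is not drawn from any agency system and uses entirely hypothetical filing fields, rules, and thresholds, shows how such a tool transforms specified inputs (data about a regulatory filing) into an output that supports, rather than replaces, a human enforcement decision:

```python
# Illustrative sketch only. The field names, rules, and thresholds below are
# hypothetical; they do not reflect any actual agency's enforcement criteria.
from dataclasses import dataclass


@dataclass
class Filing:
    filer_id: str
    reported_amount: float  # hypothetical dollar amount reported in a filing
    prior_violations: int   # hypothetical count of the filer's past violations
    days_late: int          # hypothetical days the filing was past its deadline


def screen(filing: Filing) -> str:
    """Apply a fixed series of rules to a filing and return a suggestion
    that supports, but does not replace, a human enforcement decision."""
    # Rule 1: large, substantially late filings warrant human review.
    if filing.reported_amount > 1_000_000 and filing.days_late > 30:
        return "refer to human reviewer"
    # Rule 2: repeat violators who file late at all are also flagged.
    if filing.prior_violations >= 3 and filing.days_late > 0:
        return "refer to human reviewer"
    # Default: these rules found no indicator of potential noncompliance.
    return "no action suggested"


print(screen(Filing("ACME-001", reported_amount=2_500_000,
                    prior_violations=0, days_late=45)))
# -> refer to human reviewer
```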
The use of algorithmic tools in regulatory enforcement presents special opportunities for agencies. When used appropriately, such tools may enable agencies to perform enforcement tasks more efficiently, accurately, and consistently. Algorithmic tools may be particularly useful in performing many of the most time- and resource-intensive tasks associated with regulatory enforcement, such as synthesizing voluminous records, discerning patterns in complex filings, and identifying activities that might require additional review by a human being.
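To make the last of these tasks concrete, here is a minimal sketch, again using hypothetical data and a hypothetical threshold, of a statistical triage step that flags outlier filings for additional review by a human being rather than deciding anything on its own:

```python
# Illustrative sketch only: a z-score outlier check that routes unusual
# filings to a human reviewer. The data and threshold are hypothetical, and
# a deployed tool would need the testing and oversight discussed below.
from statistics import mean, stdev


def flag_for_review(amounts: list[float], threshold: float = 2.5) -> list[int]:
    """Return the indices of filings whose amounts are statistical outliers."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []  # no variation among filings, so nothing stands out
    return [i for i, x in enumerate(amounts) if abs(x - mu) / sigma > threshold]


# Hypothetical usage: twelve filings cluster near 10,000; the last does not.
filings = [9_800.0, 10_100.0, 9_950.0, 10_200.0, 9_900.0, 10_050.0,
           9_875.0, 10_150.0, 9_925.0, 10_075.0, 9_990.0, 10_010.0, 87_000.0]
print(flag_for_review(filings))  # -> [12]; a human then reviews that filing
```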
At the same time, significant challenges and concerns arise in agencies’ use of algorithmic tools in regulatory enforcement.[10] The Conference has previously identified possible risks associated with agencies’ use of algorithmic tools, including insufficient transparency, internal and external oversight, and explainability;[11] the potential to unintentionally create or exacerbate “harmful biases” by encoding and deploying them at scale;[12] and the possibility that agency personnel will devolve too much decisional authority to AI systems.[13] Such risks are heightened when, as in the regulatory enforcement context, agencies use algorithmic tools to make decisions or take actions that affect a person’s rights, civil liberties, privacy, safety, equal opportunities, or access to government resources or services.[14]
Since the Conference issued Statement #20, Congress has enacted the AI in Government Act, which directs the Director of the Office of Management and Budget (OMB) to provide agencies with guidance on removing barriers to agency AI use “while protecting civil liberties, civil rights, and economic and national security” and on best practices for identifying, assessing, and mitigating harmful bias.[15] Executive Order 13,960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, identifies principles for agencies when designing, developing, acquiring, and using AI and directs agencies to inventory their uses of AI and make those inventories publicly available.[16] Executive Order 14,110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, requires agencies to designate Chief AI Officers, who have primary responsibility for overseeing their agencies’ AI use and coordinating with other agencies, and establishes the Chief AI Officer Council to coordinate the development and use of AI across agencies.[17] OMB Memorandum M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, which implements the AI in Government Act and Executive Order 14,110, provides guidance to agencies on strengthening the effective and appropriate use of AI, advancing innovation, and managing risks, particularly those related to rights-impacting uses of AI.[18] Memorandum M-24-10 further prescribes risk-management practices, derived from the Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework, for agency uses of AI that affect people’s rights.[19] Those practices include “conducting public consultation; assessing data quality; assessing and mitigating disparate impacts and algorithmic discrimination; providing notice of the use of AI; continuously monitoring and evaluating deployed AI; and granting human consideration and remedies for adverse decisions made using AI.”[20] Additionally, OMB issued Memorandum M-24-18, Advancing the Responsible Acquisition of Artificial Intelligence in Government, which “integrat[es] these considerations for AI risk management into agency acquisition planning.”[21]
Consistent with these authorities, this Recommendation provides a framework for using algorithmic tools in regulatory enforcement in ways that promote the efficient, accurate, and consistent administration of the law while also safeguarding rights, civil liberties, privacy, safety, equal opportunities, and access to government resources and services.
RECOMMENDATION
1. When considering possible uses of algorithmic tools to perform regulatory enforcement tasks, agencies should consider whether and to what extent such tools will:
a. Promote efficiency, accuracy, and consistency;
b. Create or exacerbate unlawful or harmful biases;
c. Produce an output that agency decisionmakers can understand and explain;
d. Devolve decisional authority to automated systems;
e. Adversely affect rights, civil liberties, privacy, safety, equal opportunities, and access to government resources or services;
f. Inappropriately use or publicly reveal, directly or indirectly, confidential business information or trade secrets; and
g. Affect the public’s perception of the agency and how fairly it administers regulatory programs.
2. When agencies use algorithmic tools to perform regulatory enforcement tasks, they should assess the risks associated with using such tools, including those identified in Paragraph 1, and put in place oversight mechanisms and data quality assurance practices to mitigate those risks. During the risk assessment process, agencies should consider, among other things, the:
a. Ability to customize tools and systems to the agency’s ongoing needs and to specific use cases;
b. Tendency of such tools to produce unexpected outputs that could exceed their intended uses or yield biased or harmful outcomes;
c. Training and testing methodologies used in developing and maintaining such tools;
d. Quality assurance practices available for data collection and use, including the dependency of such tools on the completeness and veracity of the underlying data on which they rely; and
e. Oversight procedures available to the agency and the public to ensure responsible use of such tools.
3. When agencies use algorithmic tools to perform regulatory enforcement tasks, they should ensure that any agency personnel who use such tools or rely on their outputs to make enforcement decisions receive adequate training on the capabilities, risks, and limits of such tools and understand how to appropriately assess their outputs before relying on them.
4. When agencies provide notice to regulated persons of an action taken during an investigation, inspection, audit, or prosecution, they should specify whether an algorithmic tool provided a meaningful basis for taking that action, consistent with existing legal requirements.
5. Consistent with legal requirements, agencies should notify the public on their websites of algorithmic tools they meaningfully use to investigate, inspect, audit, or gather evidence to discover noncompliance by regulated entities, along with information about the sources and nature of the data used by such tools.
6. Agencies that meaningfully use or are considering using algorithmic tools in regulatory enforcement should engage with persons interested in or affected by the use of such tools to identify possible benefits and harms associated with their use.
7. Agencies that use algorithmic tools to perform regulatory enforcement tasks should provide effective processes by which persons can voice concerns or file complaints regarding the use of such tools or outcomes resulting from their use, so that agencies may respond or take corrective action.
8. The Chief AI Officer Council should facilitate collaboration and the exchange of information among agencies that use or are considering using algorithmic tools in regulatory enforcement.
[1] Admin. Conf. of the U.S., Statement #20, Agency Use of Artificial Intelligence, 86 Fed. Reg. 6616 (Jan. 22, 2021).
[2] Id.
[3] Admin. Conf. of the U.S., Recommendation 2023-3, Using Algorithmic Tools in Retrospective Review of Agency Rules, 88 Fed. Reg. 42,681 (July 3, 2023).
[4] Admin. Conf. of the U.S., Recommendation 2021-1, Managing Mass, Computer-Generated, and Falsely Attributed Comments, 86 Fed. Reg. 36,075 (July 8, 2021).
[5] Admin. Conf. of the U.S., Recommendation 2022-3, Automated Legal Guidance at Federal Agencies, 87 Fed. Reg. 39,798 (July 5, 2022).
[6] Recommendation 2023-3, supra note 3. For purposes of this Recommendation, “algorithmic tools” includes AI technologies but not basic scientific or computing tools.
[7] See Admin. Conf. of the U.S., Recommendation 2012-7, Agency Use of Third-Party Programs to Assess Regulatory Compliance, 78 Fed. Reg. 2941, 2941 (Jan. 15, 2013).
[8] Id. In Recommendation 2012-7, the Conference noted that agencies “may leverage private resources and expertise in ways that make regulation more effective and less costly.” Id.
[9] David Freeman Engstrom, Daniel E. Ho, Catherine M. Sharkey & Mariano-Florentino Cuéllar, Government by Algorithm in Federal Administrative Agencies 22 (Feb. 2020) (report to the Admin. Conf. of the U.S.); accord Cary Coglianese, A Framework for Governmental Use of Machine Learning 31 (Dec. 8, 2020) (report to the Admin. Conf. of the U.S.).
[10] Michael Karanicolas, Artificial Intelligence and Regulatory Enforcement (Dec. 9, 2024) (report to the Admin. Conf. of the U.S.); cf. Recommendation 2023-3, supra note 3; Admin. Conf. of the U.S., Recommendation 2021-10, Quality Assurance Systems in Agency Adjudication, 87 Fed. Reg. 1722 (Jan. 12, 2022); Recommendation 2021-1, supra note 4; Statement #20, supra note 1; Admin. Conf. of the U.S., Recommendation 2018-3, Electronic Case Management in Federal Administrative Adjudication, 83 Fed. Reg. 30,686 (June 29, 2018).
[11] “Explainability” allows those using or overseeing AI systems to “gain deeper insights into the functionality and trustworthiness of the system, including its outputs,” and helps users understand the potential effects and purposes of an AI system. Nat’l Inst. of Standards & Tech., Artificial Intelligence Risk Management Framework (AI RMF 1.0) 16 (2023) [hereinafter AI RMF 1.0].
[12] Statement #20, supra note 1, at 6617.
[13] See id. at 6618.
[14] See Off. of Mgmt. & Budget, Exec. Off. of the President, M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence 29 (2024) (providing a comprehensive definition of “rights-impacting” uses of AI).
[15] Pub. L. No. 116-260, div. U, tit. I, § 104 (2020) (codified at 40 U.S.C. § 11301 note).
[16] See Exec. Order No. 13,960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, 85 Fed. Reg. 78,939 (Dec. 3, 2020).
[17] Exec. Order No. 14,110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, § 10.1(b), 88 Fed. Reg. 75,191, 75,218 (Oct. 30, 2023); OMB Memorandum M-24-10, supra note 14.
[18] See OMB Memorandum M-24-10, supra note 14, at 29.
[19] Id.; see also Off. of Sci. & Tech. Pol’y, Exec. Off. of the President, Blueprint for an AI Bill of Rights (2022); AI RMF 1.0, supra note 11.
[20] Exec. Order No. 14,110, supra note 17, § 10.1(b)(iv).
[21] Off. of Mgmt. & Budget, Exec. Off. of the President, M-24-18, Advancing the Responsible Acquisition of Artificial Intelligence in Government 1 (2024).