This article was authored by Cary Coglianese, Professor of Law at the University of Pennsylvania, and Todd Rubin, an Attorney Advisor at ACUS.
This article first appeared in the Regulatory Review's series on "Five Recommendations for Improving Administrative Government" that focuses on the ACUS Recommendations adopted at the December 2017 Plenary Session. Reposted with permission. The original may be found here, and the Regulatory Review's entire series may be found here.
A little scientific rigor can go a long way when it comes to designing government regulation. That is the message of a recent recommendation issued by the Administrative Conference of the United States (ACUS), a government agency dedicated to finding ways to improve administrative processes in the federal government. ACUS adopted its new recommendation, entitled Learning from Regulatory Experience, to help regulatory agencies think more carefully and systematically about making their regulations work better.
For many of the same reasons that pharmaceutical companies test new drugs to make sure they work as intended, it makes sense for government agencies to test and measure the impact of the rules they impose on the economy. The stakes to society are often just as high for regulation as they can be for new treatments for cancer and heart disease.
When it comes to new medicines, pharmaceutical researchers follow a rigorous scientific process. These researchers consider multiple candidate treatments and then test one or more in a laboratory setting to learn whether they might actually work. Once a potential treatment looks promising, researchers prepare to test it in humans for safety and efficacy. They develop a protocol that lays out a hypothesis, sets the duration of the study, identifies treatment and control groups, specifies what data will be collected, and outlines how the data will be analyzed. Only once this protocol has been approved can researchers proceed to test the treatment on humans.
It can help to think about the development and testing of new regulations along similar lines. As with the pre-testing discovery and development work that goes into finding new pharmaceuticals, agencies can benefit from examining various possible regulatory “treatments” before settling on a potential approach. Granted, regulators seldom can conduct laboratory research of the kind involved in the development of drugs. But when developing new regulations, regulators might be able to study and learn from existing variation in regulation by comparing approaches in different jurisdictions, analyzing peer-reviewed studies, and, as appropriate, conducting their own voluntary pilot programs.
After a potential policy design emerges from this discovery and development work, regulators should take further steps to prepare to “test” a new regulation. Of course, for practical and ethical reasons, it will not always be possible to test regulations through double-blind, randomized controlled trials, as with new medicines. But regulators would still do well to treat the rollout of any new regulation as a learning opportunity. To guide their learning, regulators should carefully consider and answer questions such as the following:
- What is the “hypothesis”? For example, an environmental regulator might try to specify and clearly state the level of harm reduction expected from a new rule.
- How should comparison and treatment groups be defined? One simple way to define these groups is through a before-and-after research design: the world after the regulation is the treatment group, to be compared with the world before the regulation. Still more rigorous designs take advantage of statistical methods such as “difference-in-differences” and “regression discontinuity” analysis, which can call for somewhat harder thinking about comparison and treatment groups but can yield more credible findings (see the sketch after this list).
- To whom will the regulation apply? To the extent that regulatory control or stringency can be varied, the regulator can create more opportunities to learn. Of course, variation—such as by random assignment—can raise legitimate concerns about fairness and equal treatment under the law. But sometimes these concerns can be adequately addressed. Moreover, pilot programs or voluntary initiatives can also give regulators opportunities to learn what works.
- What data will be collected? For regulators to know with confidence what works, they need evidence and analysis. And any high-quality analysis requires sufficient, reliable data.
- How will the data be analyzed? To learn whether a regulatory “experiment” makes a difference, analysts need to control for possible confounding variables—that is, other factors that might influence observed outcomes. Fortunately, a suite of modern statistical techniques—such as, again, difference-in-differences, regression discontinuity, and instrumental variables—can often permit the analyst to control for confounders.
- What is the target time frame or frequency with which the regulation, once in effect, should be assessed? Regulations might not always lead to immediate changes in the problems they are designed to address. As regulators develop new regulations, they should specify a realistic time frame within which improvements can be expected to begin.
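To make the comparison-group and data-analysis questions above more concrete, here is a minimal sketch of a difference-in-differences estimate, written in Python using the pandas and statsmodels libraries. Everything in it—the entities, the outcome measure, and the assumed effect size—is a hypothetical illustration, not anything drawn from ACUS’s recommendation.

```python
# Minimal difference-in-differences sketch (illustrative only).
# Assumes a hypothetical panel dataset: one row per regulated entity
# per period, with an outcome measure (e.g., emissions), a flag for
# entities subject to the new rule, and a flag for post-rule periods.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # hypothetical number of regulated entities

df = pd.DataFrame({
    "entity": np.repeat(np.arange(n), 2),
    "treated": np.repeat(rng.integers(0, 2, n), 2),  # subject to new rule?
    "post": np.tile([0, 1], n),                      # before/after the rule
})
# Simulated outcome: a shared time trend plus a true rule effect of -5
# that applies only to treated entities after the rule takes effect.
df["outcome"] = (
    50 - 2 * df["post"] - 5 * df["treated"] * df["post"]
    + rng.normal(0, 3, len(df))
)

# The coefficient on the treated:post interaction is the
# difference-in-differences estimate of the rule's effect,
# net of the time trend shared by both groups.
model = smf.ols("outcome ~ treated * post", data=df).fit()
print(model.params["treated:post"])
```

In this design, the interaction term estimates the rule’s effect net of changes affecting everyone; the simple before-and-after design described above amounts to looking at the post-period change alone, with no comparison group to absorb changes that would have happened anyway.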
When regulators contemplate these questions for any new regulation, they can benefit from outside input, whether through peer review processes, advisory committees, public hearings, listening sessions, or public comments. Agency officials should make sure they take all affected interests’ views into account when defining a rule’s objectives and specifying the type of subsequent research that would best determine whether the rule is meeting those objectives.
Opportunities for learning from regulatory experience abound. Not only should regulators develop plans for future study of the new rules they adopt, but they should identify older rules for evaluation too. As Executive Order 13,563 declares, government agencies must find ways to “measure, and seek to improve, the actual results of regulatory requirements,” searching in particular for those existing rules that may be “outmoded, ineffective, insufficient, or excessively burdensome.”
In 2004, the U.S. Securities and Exchange Commission (SEC) varied the application of its “Uptick Rule,” which had been in place since the 1930s. As Zachary Gubler describes in a thoughtful report he prepared for ACUS that helpfully informed the Conference’s recommendation on regulatory learning, the SEC randomly selected 1,000 firms in the Russell 3000 index to which it declared the rule would no longer apply. Based on the results of its experiment, which failed to find any substantial increase in market efficiency from the Uptick Rule, the SEC rescinded the rule.
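Gubler’s report describes the SEC’s actual methodology; purely as an illustration of why random assignment makes a pilot like this comparatively easy to analyze, the sketch below compares an invented outcome measure between exempted and non-exempted firms. The sample sizes, outcome variable, and data are all hypothetical.

```python
# Illustrative evaluation of a randomized regulatory pilot (hypothetical
# data; not the SEC's actual methodology or outcome measures).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Suppose roughly 3,000 firms, with 1,000 randomly exempted from a rule.
firms = np.arange(3000)
exempt_ids = rng.choice(firms, size=1000, replace=False)
is_exempt = np.isin(firms, exempt_ids)

# Hypothetical market-efficiency proxy for each firm; here both groups
# are drawn from the same distribution, i.e., the rule has no effect
# in this simulation.
outcome = rng.normal(10.0, 2.0, size=firms.size)

# Because assignment was random, a simple two-group comparison gives an
# unbiased estimate of the rule's effect.
diff = outcome[is_exempt].mean() - outcome[~is_exempt].mean()
t_stat, p_value = stats.ttest_ind(outcome[is_exempt], outcome[~is_exempt])
print(f"difference in means: {diff:.3f}, p-value: {p_value:.3f}")
```

The point of the sketch is simply that randomization, where fair and feasible, removes the need for the more elaborate statistical controls described above.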
Of course, not all existing rules can be studied in this way, but agencies can draw on a variety of quasi-experimental methods widely used in other contexts to study their existing stock of regulations and learn systematically from the experience with these rules.
Agencies may also sometimes be able to learn about how their rules are working by making careful decisions, in appropriate instances, to exempt some regulated entities from certain existing requirements in order to try out new approaches. The U.S. Department of Health and Human Services (HHS), for example, has used demonstration waivers to allow states to adopt alternative approaches in areas such as child support enforcement and child welfare—specifically so HHS can learn from experience.
Government officials increasingly recognize the value in experimentation and evaluation when it comes to a variety of government programs. A recent report of the bipartisan Commission on Evidence-Based Policymaking, for example, emphasized the need to build “a future in which rigorous evidence is created efficiently, as a routine part of government operations, and used to construct effective public policy.” Speaker of the U.S. House of Representatives Paul Ryan (R-Wis.) and U.S. Senator Patty Murray (D-Wash.) have introduced bipartisan legislation that would support the collection of data needed to evaluate and improve government programs.
These affirmations of evidence-based policymaking ought to apply to regulation as well. ACUS’s latest recommendation on Learning from Regulatory Experience, especially when combined with its 2014 recommendation on Retrospective Review of Agency Rules, makes a compelling case for bringing scientific rigor to decision makers’ understanding of what makes for effective and efficient government rules.