Anti-Regulatory Studies Found Deceptive

A series of influential studies purporting to show that federal regulation is broadly irrational are based on data that is highly misleading and frequently manufactured to fit a preconceived point of view, according to an investigation by Richard Parker, a law professor at the University of Connecticut, who presented his findings on October 17 at a conference of the American Bar Association. These studies are familiar to anyone who has followed the debate over regulatory effectiveness. They are frequently invoked in calls for process “reforms” -- both legislative and administrative -- designed to limit regulatory output and elevate the use of monetized cost-benefit analysis. Yet it turns out they are frauds. Specifically, Parker takes aim at:
  • A 2000 study by Robert Hahn, co-director of the AEI-Brookings Joint Center for Regulatory Studies, which concluded, “using the government’s numbers,” that less than half the major rules issued between 1981 and 1996 pass a cost-benefit test (that is, have monetized benefits exceeding costs; a short sketch after this list illustrates the arithmetic). Yet Hahn’s study did not disclose the names of these rules, making the findings unverifiable. “Merely getting the list of rules -- and the corresponding tabulation of costs and benefits -- required months of supplication,” Parker writes in a summary paper distributed at the conference. “When I finally obtained the spreadsheet, I immediately made a startling discovery. Forty-one of the 136 rules in his database -- fully 30 percent of all the rules -- are assigned a zero benefit.” These include a rule requiring tankers to devise response plans for large oil spills; a rule requiring manufacturing facilities to publicly disclose toxic releases; and three rules limiting toxic pollutants in drinking water. Clearly such rules carry significant benefits, many of which are difficult, if not impossible, to monetize. Yet according to Parker, “It turns out that Hahn, with a few narrow and limited exceptions, has assigned a zero value to any benefit which the government’s regulatory impact assessment [RIA] does not quantify and monetize. Hahn, amazingly, also zero-values even benefits that are quantified and monetized in an agency RIA, unless they happen to fall into one of his select categories of recognized benefit -- even as he insists that he is using the government’s numbers.” For instance, EPA promulgated a rule in 1992 to protect 3.9 million agricultural workers from pesticides, which the agency estimated would yield substantial benefits, including the prevention of serious developmental defects, stillbirths, and acute pesticide poisoning. Yet Hahn’s study recognized only the health benefits of “reducing the risk of cancer, heart disease, and lead poisoning.” As a result, Hahn scores EPA’s rule as having no benefit.
  • A 1995 study by Tammy Tengs and John Graham, the current administrator of OMB’s Office of Information and Regulatory Affairs (OIRA), which found that 60,000 additional lives could be saved each year if the government redirected resources from current regulatory interventions to more “cost-effective” options. Graham has provocatively labeled this “statistical murder.” Yet Parker points out the logical fallacy of this claim, which assumes a fixed regulatory budget in which a dollar spent on Risk A is a dollar less for Risk B. “In fact, there is no such budget, and no such tradeoff,” Parker writes. Even setting that aside, Parker notes that Graham’s hypothetical reallocation relies on adopting two interventions -- influenza vaccines for all citizens and continuous (vs. nocturnal) oxygen for hypoxemic obstructive lung disease -- that alone account for more than 42,000 of the 60,000 additional lives saved. “Are we to believe that the nation’s failure to [adopt these interventions] is somehow related to the allegedly excessive regulation of benzene or other interventions at the cost-ineffective bottom of his list?” Parker asks. “If not, where is the statistical murder?” Ironically, “Graham’s re-allocation works by finding one or more instances of under-regulation to match every instance of over-regulation,” Parker writes. “But under-regulation, of course, is not the lesson that regulatory critics choose to draw from the Graham study.” Another central point of Graham’s study is that regulation of toxics is frequently cost-ineffective compared with interventions unrelated to toxic control. Yet, amazingly, Graham’s own data suggest the exact opposite: Parker points out that only 4 percent of expenditures on toxic regulations exceed $8 million per life saved, while 63 percent of the funds devoted to non-toxic interventions exceed that threshold. (The second sketch after this list illustrates the cost-per-life-saved arithmetic behind such rankings and the fixed-budget assumption built into the reallocation.)
  • A 1987 study by John Morrall, a senior economist at OIRA, which presented a table of 44 regulations and concluded that one-third cost more than $100 million for every life saved. Yet Morrall’s study is just as misleading. Like Hahn and Graham, Morrall relies on agency cost-benefit data, but he revises those estimates “whenever he disagrees with them -- often by several orders of magnitude, and always in the direction of higher costs and lower benefits,” according to Parker. Moreover, Morrall offers no supporting documentation for his changes, making his findings impossible to replicate. Indeed, Morrall admitted to Parker that his assumptions and calculations are “scattered around in filing cabinets.”
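
To make Parker’s first point concrete, here is a minimal sketch, using entirely hypothetical figures (they are not Hahn’s or EPA’s numbers), of how zero-valuing a rule’s unmonetized benefits can flip the outcome of a net-benefit test:

    # Hypothetical rule: $100M in annual costs, $60M in monetized benefits,
    # plus benefits the agency described but did not monetize (assumed worth $80M).
    def net_benefit(monetized, unmonetized, costs, count_unmonetized=True):
        """Benefits minus costs; optionally drop benefits that were never monetized."""
        benefits = monetized + (unmonetized if count_unmonetized else 0.0)
        return benefits - costs

    costs, monetized, unmonetized = 100e6, 60e6, 80e6
    print(net_benefit(monetized, unmonetized, costs))         # +40,000,000 -> rule passes
    print(net_benefit(monetized, unmonetized, costs, False))  # -40,000,000 -> rule "fails"

On identical inputs, the rule passes or fails depending solely on whether the unmonetized benefits are counted -- the methodological choice Parker says drives Hahn’s results.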
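
Similarly, the cost-per-life-saved ranking that underlies both the Tengs-Graham league table and Morrall’s $100 million threshold is simple division, and a “reallocation” gains lives only if money withdrawn from one intervention is actually spent on another. A second sketch, again with invented figures not drawn from either study:

    # Invented interventions: (annual cost in dollars, lives saved per year).
    interventions = {
        "regulation_A": (500e6, 5),       # $100,000,000 per life saved
        "intervention_B": (50e6, 1_000),  # $50,000 per life saved
    }

    for name, (cost, lives) in interventions.items():
        print(f"{name}: ${cost / lives:,.0f} per life saved")

    # The reallocation argument assumes a single fixed budget: cancel regulation_A
    # and spend its $500M on intervention_B instead. Parker's objection is that no
    # such common budget exists, so the freed money never reaches intervention_B.
    freed = interventions["regulation_A"][0]
    cost_per_life_B = interventions["intervention_B"][0] / interventions["intervention_B"][1]
    print(f"Lives gained only if the freed funds are actually redirected: {freed / cost_per_life_B:,.0f}")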
“All three studies rely on undisclosed data and non-replicable calculations,” Parker concludes. “They misrepresent ex ante guesses about the costs and benefits of future or hypothetical regulations as actual measurements of ‘the’ costs and benefits of regulation. They grossly under-estimate the value of lives saved, or the number of lives saved, or both.” Parker’s complete findings will be revealed in a forthcoming 90-page paper.