
Agencies "Adapt" Data Quality Guidelines
by Guest Blogger, 5/15/2002
Federal agencies across the board recently released draft guidelines on data quality in response to a Fiscal Year 2001 appropriations rider. For most, the mere mention of "data quality" is likely to inspire a yawn. Yet for business interests, like the U.S. Chamber of Commerce, these guidelines represent an important new vehicle to challenge federal regulation, by challenging the information that supports it. As the Chamber's William Kovacs told BNA (a Washington trade publication), "This is the biggest sleeper there is in the regulatory area and will have an impact so far beyond anything people can imagine."
Agency Guidelines
Environmental Protection Agency
Department of Education
Health & Human Services
Food & Drug Administration
Office of Management and Budget (Specific)
Office of Management and Budget (Implementing)
Centers for Disease Control
Centers for Medicare and Medicaid Services
National Institutes of Health
Department of Justice
State Department
Internal Revenue Service
Department of Labor
Department of Transportation
National Archives and Records Administration
National Science Foundation
Consumer Product Safety Commission
Federal Energy Regulatory Commission
Nuclear Regulatory Commission
Of particular interest in this regard are the implications of the guidelines for agency risk assessments, which generally serve as the foundation and justification for health, safety, and environmental regulation. In laying out government-wide parameters for the guidelines, as directed by Congress, OMB's Office of Information and Regulatory Affairs (OIRA), under the leadership of John Graham, went far beyond the congressional mandate. In addition to requiring that information disseminated to the public meet standards of "quality, objectivity, utility and integrity," OIRA asked agencies to "adapt or adopt" principles for risk assessment laid out in the Safe Drinking Water Act (SDWA) and to ensure that "influential information," such as risk assessment, is independently "reproducible." Potentially, this sets up an extremely high burden of proof for regulatory action.
Yet overall, agencies clearly sought to minimize the impact of the guidelines in this respect. For instance, no agency (even EPA) chose to "adopt" the SDWA principles, and those that indicated they would "adapt" did not clarify what this means or commit to any concrete changes. At least one agency, the Dept. of Transportation, did not even address the issue. Agencies also sought to preserve leeway and discretion in deciding on challenges to their data quality, and most seem to believe that none of this is subject to judicial review.
Critically, however, this last point remains unsettled. As Graham told a workshop at the National Academy of Sciences (NAS) on March 21, "[T]here are as many legal theories about how these issues can be litigated as there are lawyers. My personal hope is that the courts will stay out of the picture, except in cases of egregious agency mismanagement. Yet it will probably take a few critical court decisions before we know how this law and the associated guidelines will be interpreted by judges." Given the nature of the guidelines, and the interest of industry in using them in nefarious ways, the outcome could potentially have profound implications for the future of risk assessment.
In virtually any risk assessment, there is a great deal of scientific uncertainty. Sometimes an agency may be confronted with conflicting studies, and in almost all cases, it is extremely difficult to pinpoint exactly how much risk flows from a particular hazard. To deal with this inevitable uncertainty, agencies are forced to make certain default assumptions, which frequently point the agency in the direction of caution -- that is, a more protective standard.
The data quality guidelines, however, could be interpreted as leaving little room for uncertainty or such assumptions -- as Joe Rodricks, a principal at ENVIRON International Corp., pointed out at the NAS workshop on March 22 -- and that seems to be what has captured industry's interest.
Each agency, as directed by Congress and OMB's implementing guidelines, has proposed an administrative process for the "correction of information" in their draft guidelines, allowing for challenges of data quality by affected parties, including an appeals process for those unhappy with an agency judgment. (The appeals process was added at the last second by OMB and was not contemplated by Congress.) These challenges, for now, are ultimately decided by the agency itself, which might leave the impression that this is all relatively benign; after all, it seems unlikely an agency would turn against the assumptions in its own risk assessment.
Yet this is what makes the unresolved question of judicial review so critical -- because it could take ultimate decision-making authority out of the agency's hands. Moreover, according to Graham, "If agencies do not develop an objective appeals process, I predict that there will be efforts down the road to authorize appeals outside the agency."
In such a scenario, there are a number of ways industry might successfully challenge an agency risk assessment. For instance, all "influential information" -- and by any measure, risk assessment would fall under this category -- must be "reproducible," that is, the same result would be achieved upon reanalysis. Yet a risk assessment can be extremely complex, drawing from a vast range of studies and data sets (each subject to its own separate data quality challenges). As Rodricks asked, "Can all people look at all the information on dioxin and cancer and reach the same conclusion that EPA has reached (or at least tentatively reached) about dioxin and cancer?" Indeed, it might be possible to look at the studies and data used by the agency and draw completely different conclusions. Does this mean the risk assessment fails the "reproducible" test, that the agency's information lacks sufficient quality?
In its draft guidelines, EPA, like other agencies, suggests that "a high degree of transparency about data and methods" will be sufficient to demonstrate "reproducibility." This is helpful in that it gives clarity to the standard, and seems to be a reasonable and realistic demand. However, even under this more limited definition of "reproducibility," an agency could run into trouble on a data quality challenge. A risk assessment generally does not produce new information; rather, it draws upon existing information, seeking to bring together a wide variety of data and studies. In formulating its 1997 clean air standards, for instance, EPA relied heavily on research from Harvard University -- conducted with a grant from the National Institutes of Health -- that linked air pollution with asthma and other adverse health effects. EPA publicly provided aggregated data used in its risk assessment, but did not provide the underlying data, which was retained by Harvard -- much to the consternation of industry, which pressed to have it turned over. Could the data quality guidelines be used to discourage EPA from using such a study? On this basis, could EPA's risk assessment be found to lack a "high degree of transparency?"
Likewise, information must also meet standards of "objectivity," meaning it is "presented in an accurate, clear, complete, and unbiased manner, and as a matter of substance, is accurate, reliable, and unbiased." Yet the inherent uncertainty of risk assessment, and the assumptions it necessitates, could potentially be found to conflict with this principle. The presence of a particular risk might be easily established (i.e., dioxin causes cancer), but measuring the extent of that risk to determine the proper level of regulation -- which is what risk assessment is meant to do -- is extremely difficult (i.e., what levels of dioxin cause cancer, and at what rate?). In making such determinations, agencies are often forced to employ assumptions and take educated guesses. If an agency's assumptions lead to a recommendation of caution, or a more protective regulation, affected industry could potentially challenge the finding as biased and lacking sufficient "objectivity."
On top of this, agencies are directed to adapt or adopt principles laid out in the Safe Drinking Water Act, perhaps the most rigorous standards for risk assessment written into statute. Previously, Graham had issued a government-wide memo on regulatory analysis that also pressed SDWA principles for risk assessment, saying that agency proposals employing these methods would be viewed more favorably by OIRA -- which must grant clearance to all health, safety, and environmental protections before they can take effect. Graham seized on the data quality guidelines to achieve formal adoption of these risk assessment principles across agencies.
The SDWA places particular emphasis on "peer-reviewed science and supporting studies" and asks for very detailed information about the risk being examined. For instance, the agency is to identify "each population" affected, the "expected risk" for each of these populations, and "each significant uncertainty" that emerges in the risk assessment. Graham has said such rigor, specifically the practice of agency peer review, should satisfy the "objectivity" requirement of the guidelines.
Yet risk assessment frequently relies on information that is not peer reviewed, which could itself become grounds for a data quality challenge. OSHA, for instance, is instructed by statute to base its risk assessment on the "best available information." This, of course, includes peer-reviewed studies, but it also can include data provided by industry and labor unions, or information gathered during OSHA inspections and site visits. "Is it always peer-reviewed information?" asked William Perry, director of OSHA's Office of Risk Reduction Technology, at the NAS workshop. "No, it can't be. Is it always scientific information in the sense of data collected through hypothesis testing? No, it can't be. If we restrict ourselves to that we can't tell the decision maker anything." Perry nonetheless said he believed OSHA's practice for risk assessment was already consistent with the SDWA: "I think that idea of getting and using the best evidence available is really the underlying principle that the Safe Drinking Water Act was trying to get at."
However, taken to this level of generality, Graham's insistence on "adapting or adopting" the SDWA does not seem likely to mean much. To advance Graham's push, OMB recently convened an interagency working group to discuss how to apply the SDWA principles beyond the context of safe drinking water. Yet after serving on this working group, Perry observed, "There is no way agencies are going to just agree on some kind of common or even very similar language for adapting the Safe Drinking Water Act principles to their own agency-specific guidelines. I think our statutes are too different. Our histories are too different. Our regulatory and perhaps non-regulatory policies are too different. We just have too many different things that we have to consider."
Indeed, the agency draft guidelines seem to support this conclusion. The guidelines of the Department of Health and Human Services (HHS), for instance, give risk assessment perfunctory treatment, simply stating an intent to adapt the SDWA principles. EPA, which operates under the SDWA, also chose to adapt the principles for health risk assessments, but questioned their applicability to assessments of environmental and safety risks; safety risk assessment, for example, is common at OSHA. Perhaps this is why the Dept. of Transportation, which is responsible for highway safety, ignored any mention of risk assessment in its draft guidelines.
Other agencies, such as the Food and Drug Administration and the Centers for Disease Control, more openly resisted the SDWA principles -- which were written for health risk assessment, specifically with cancer in mind -- as inconsistent with certain functions. FDA points out that many of its actions related to non-cancer-causing hazards -- for example, actions related to adverse effects from drugs -- are based on the judgment of scientific experts, and are essentially qualitative. "Although we analyze the economic costs of the regulations and consider alternatives, regulations like these do not lend themselves to the types of quantitative risk assessments contemplated by the Safe Drinking Water Act principles," FDA states.
Yet while the ultimate significance of the SDWA issue remains unclear, Graham's advocacy of its principles is consistent with his general approach to data quality in OMB's implementing guidelines, now reflected in the agency draft guidelines. These guidelines seem to demand a new level of scientific certainty for the inherently uncertain practice of risk assessment. If such certainty is not achieved, an agency may be subject to challenge, a prospect that is especially dangerous if such challenges are open to industry litigation.
In general, it is surprising that OMB's guidelines provide such extensive discussion of risk analysis. The subject was never debated by Congress in the context of the data quality rider; where it was debated, through regulatory legislation presented in the Contract with America, it was rejected. The emphasis on the SDWA makes it even clearer that this is a personal priority of John Graham's, not necessarily good or even needed policy.
In the end, this could curtail the ability of an agency to take swift and necessary action, as it must contend with such concepts as "objectivity" and "reproducibility." As an analogy, if someone is being hit over the head with a hammer, the logical thing to do is seize the hammer; it's obvious enough that damage is being done. Under the new data quality regime, however, an agency could be forced to sit on the sidelines measuring the precise extent of the damage.
