National Academies Report Takes PART To Task

The National Academy of Sciences (NAS) released a report yesterday entitled "Evaluating Research Efficiency in the U.S. Environmental Protection Agency," reviewing the way PART evaluates federal research and development programs. The Environmental Protection Agency (EPA) requested the review in 2006 to assist the agency in "developing better assessment tools to comply with PART, with emphasis on efficiency," according to the report's preface. I don't know for certain, but I suspect EPA requested this study because it is frustrated with the poor ratings and inflexibility of the PART for EPA research and development programs and tired of feeling like the ugly duckling of the federal government, at least in OMB's eyes.

As it turns out, the NAS study draws many of the same conclusions we have promoted about the PART, particularly its inability to correctly evaluate and capture the work of R&D programs. For instance, NAS finds that measuring research programs based on outcomes (e.g., does research on health policy make people healthier?) is neither achievable nor valid. It further finds that efficiency should be only one part of evaluating the quality, relevance, and effectiveness of research programs. These conclusions lead NAS to make three strong recommendations to modify how the federal government assesses the efficiency of research at EPA and other federal agencies:
    1) Adding needed context to PART reviews, NAS recommends the federal government use multiple understandings of "efficiency," introducing the ideas of investment efficiency and process efficiency. NAS defines investment efficiency as "doing the right research and doing it well." This metric should focus on whether "the R&D portfolio, including the budget, is relevant, of high quality, matches the agency's strategic plan, and is adjusted as new knowledge and priorities emerge." Process efficiency is defined as "how program managers exercise skill and prudence in using and conserving resources." This metric should focus on inputs and outputs: how the agency implements its research plan, including the budget and the number of people it takes to do so, how many grants or reports it produces, etc. NAS states that process efficiency can be evaluated using quantitative measures such as Earned Value Management (EVM), but that investment efficiency should be determined by an expert panel.

    2) NAS draws the correct conclusion that measuring program performance, particularly for R&D programs, against the standards used in the PART (what NAS calls ultimate outcomes) is neither achievable nor valid. Such outcomes are "far in the future and are highly dependent upon actions taken by many other people who may or may not use the research findings." NAS found that no agency in the federal government has found a way to show efficiency based on ultimate outcomes. This finding recognizes the unique role research plays in accumulating the knowledge used to make more efficient policies and decisions over time, as well as the many steps multiple actors must take between producing outputs and achieving broad outcomes. NAS recommends introducing intermediate outcomes, a metric separate from the ultimate outcomes sought by OMB. These intermediate outcomes act as an actually measurable bridge between outputs and outcomes. This would also help to acknowledge that research often tells us more by failing than by succeeding.

    3) Finally, NAS found the PART is being implemented differently for different agencies and programs across the federal government. In particular, NAS states, "OMB has rejected some methods for measuring research efficiency when proposed by EPA, but accepted them when proposed by other agencies." Specifically, the NAS study found OMB has encouraged EPA to use Earned Value Management to measure the efficiency of research but has not applied the same standard to other agencies. This is exactly the type of inconsistency that housing such a review tool at OMB, in the hands of people of varying influence and power, leads to.
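For readers unfamiliar with the measure OMB pushed on EPA: the report does not spell out the EVM mechanics, but the standard textbook formulas compare planned value (PV), earned value (EV), and actual cost (AC) of work to date. Below is a minimal sketch using those standard formulas; the function name and all dollar figures are invented for illustration.

```python
# Minimal sketch of the standard Earned Value Management (EVM) indices.
# EVM compares planned value (PV), earned value (EV), and actual cost (AC)
# for work completed to date. All numbers below are illustrative.

def evm_metrics(pv, ev, ac):
    """Return the standard EVM variances and performance indices."""
    return {
        "cost_variance": ev - ac,      # CV > 0 means under budget
        "schedule_variance": ev - pv,  # SV > 0 means ahead of schedule
        "cpi": ev / ac,                # cost performance index (EV / AC)
        "spi": ev / pv,                # schedule performance index (EV / PV)
    }

# Example: a program planned $500k of work to date, completed $450k
# worth of it, and spent $480k doing so.
m = evm_metrics(pv=500_000, ev=450_000, ac=480_000)
print(m["cpi"])  # 0.9375: about 94 cents of planned work per dollar spent
```

The sketch also hints at why NAS resists applying EVM to research: the formulas presuppose that the "value" of work can be planned and priced in advance, which is exactly what exploratory R&D cannot do.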
We are still reviewing the entire report, which makes some interesting recommendations about ways to fix the limitations above (including evaluating programs on "intermediate outcomes," which, interestingly enough, it recommends would be best measured by a panel of experts and not just a formula. Couldn't have said it better myself, NAS). I'll post more reactions and highlights as I work my way through the report, but overall this is a strong study from a highly respected body that continues to blow holes in the PART.