Program Assessment And Budget Cuts Ahead

This Administration has not made reducing the size and effectiveness of government a stated goal; however, the strides being made to devolve responsibilities to the states, to privatize government functions, to deregulate and limit government oversight, and to defund government by reducing federal (and often state) revenue through huge tax cuts make an explicit statement unnecessary. One new and potentially effective tool in this effort to limit the role of the federal government is the “Program Assessment Rating Tool,” or “PART.”

What is the President’s Management Initiative?

The President’s Management Initiative consists of five initiatives:

  • Strategic Management of Human Capital (making sure that the federal workforce has the right people with the right skills to do their work).
     
  • Competitive Sourcing (opening up a “sufficient” number of commercial activities to competition). An example of how “competition saves the taxpayer money” is cited: OMB requested bids on the printing of the FY 2004 budget. The result? The Government Printing Office did the printing at 23% less cost than the year before.
     
  • Improved Financial Performance (enhancing the quality and timeliness of financial information and preventing waste, fraud and abuse). The largest abusers cited are all programs for low-income people: Medicare, the EITC, Housing Subsidy Programs, and Supplemental Security Income.
     
  • Expanded Electronic Government.
     
  • Budget and Performance Integration (to “build a results-oriented government that funds what works and reforms or ends failing federal programs, redirecting or recapturing their funding.”) This linkage of program performance and budget decisions is being accomplished by the PART.

A scorecard is issued each quarter and posted on the Results.gov website, with “status” scores of green (agency meets all the standards for success), yellow (agency achieves some, but not all, of the criteria), and red (agency has any number of serious flaws), and “progress” scores of green (implementation is proceeding according to plans), yellow (slippage in the implementation schedule), and red (initiative in serious jeopardy). The FY 2004 Budget also includes a scorecard.

Finding that the Government Performance and Results Act (GPRA) has fallen short of its goals, however well-intentioned -- most specifically in failing to link performance data and budget decisions -- President Bush has rapidly moved forward with his own performance budgeting effort. Since GPRA was signed into law in 1993, and thus cannot just be scuttled, an evaluation questionnaire called the Program Assessment Rating Tool (PART) is being used by the Administration to “implement the objectives of GPRA.” Already 20% of all federally funded programs, a total of 234 programs, were reviewed using the PART during preparation of the FY 2004 budget. The plan is to evaluate an additional 20% of programs each year until the PART becomes government-wide. The President added an entirely new volume to the FY 2004 Budget containing the performance and management assessments that have already been completed. In addition, two chapters in the main Budget volume, “Governing with Accountability” and “Rating the Performance of Federal Programs,” discuss the overall President’s Management Agenda.
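
Taken at face value, the 20%-a-year schedule implies a rough timetable for full coverage. Below is a minimal arithmetic sketch, assuming the 234 programs reviewed for the FY 2004 budget really do represent one fifth of the program universe (an inference from the figures above, not a total published by OMB):

    # Rough arithmetic implied by the Administration's own figures (illustrative only).
    reviewed_fy2004 = 234   # programs assessed with the PART for the FY 2004 budget
    share_per_year = 0.20   # stated plan: 20% of programs per budget cycle

    implied_total = reviewed_fy2004 / share_per_year    # roughly 1,170 programs overall
    rounds_to_full_coverage = int(1 / share_per_year)   # 5 annual rounds

    print(f"Implied program universe: roughly {implied_total:.0f} programs")
    print(f"Full PART coverage after about {rounds_to_full_coverage} budget cycles, "
          f"i.e., around the FY {2004 + rounds_to_full_coverage - 1} budget")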

Despite all the hype, the PART cannot be characterized as a refined or sophisticated effort to gauge government performance. Rather, it gives the impression of the grade-school sticker method used to reward good work or punish bad work. Its very simplicity, however, makes it a potentially powerful method to justify budget cuts or increases. In spite of vocal protests that the President’s agenda is not to downsize government or reduce its role, there are clear indications to the contrary. Huge tax expenditures that reduce government revenue are one such indication. Another is the current effort to devise a simple way to link the “performance” of government programs and services to the budgeting process.

While no one can argue that every government program is useful and operating at peak effectiveness, determining performance is a difficult process fraught with ambiguity. Linking performance evaluations with budget decisions also brings into play underlying ideological positions about the role of government and of particular programs, depending on which side of the aisle you sit on.

Generally, the field of performance evaluation is based on judging not the output, the actual work done, but the outcome, i.e., the actual effects of a program or service on its target audience. Quantifying the outcome, while it makes good common sense (it doesn’t matter whether a program distributes 100,000 brochures about good nutrition; it matters whether the information is useful to people and changes their eating habits), is never going to be easy, and in some cases is impossible. If you provide quality job training to “x” number of people (output), but not very many of them get jobs (outcome), your performance is not going to be found effective, no matter what intervening reasons might account for your clients’ failure to get jobs. How exactly do you show that your provision of HIV/AIDS education to children (output) has lowered the number of deaths from AIDS (outcome)? If you clean up a Superfund site (output), how do you clearly demonstrate that there has been a corresponding improvement in the health of people in the area (outcome)? How do you figure out the starting points from which to set goals and judge improvement? How do you account for external factors, like the economy or improvements in medical care, that may hamper or foster your efforts?

GPRA was envisioned as a long-term, iterative process through which agencies could move toward effective evaluation that could be useful in the budget process. In his typical style, President Bush has effectively fast-tracked GPRA to focus on an immediate “fix,” or, in his budget document’s words, “to give true effect to the spirit as well as the letter of [GPRA].” This should be cause for concern.

What is PART and how is it scored?

The PART is the questionnaire being used to achieve the fifth initiative of the President’s Management Initiative, Budget and Performance Integration.

The PART has four sections:

  • Program Purpose and Design (20% weight) to assess whether the program design makes sense and the purpose is clear.
     
  • Strategic Planning (10% weight) to assess whether the agency sets valid annual and long-term goals for the program.
     
  • Program Management (20% weight) to rate agency management of the program, including financial oversight and program improvement efforts.
     
  • Program Results (50% weight) to rate program performance on goals reviewed in the strategic planning section and through other evaluations.

Each section has a series of questions, tailored to the type of program, requiring “yes” or “no” answers, except that the “Results” section is “refined” to allow answers of “yes,” “no,” “small extent,” and “large extent.” The President’s Management Initiative Scorecard uses the same red, yellow, and green scoring for each of the five initiatives, including the “Budget and Performance Integration” category.
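
For illustration, the sketch below shows how a weighted PART score could be rolled up from section answers. The section weights are those cited above; the per-answer point values, the partial credit for “small extent,” and the sample program are assumptions made for this sketch, and the mapping from a numeric score to a rating such as “effective” or “adequate” is OMB’s, not reproduced here.

    # Illustrative roll-up of a weighted PART score. The section weights are those
    # cited in the text; the per-answer values and the sample answers below are
    # assumptions for illustration, not OMB's published scoring rules.

    SECTION_WEIGHTS = {
        "Program Purpose and Design": 0.20,
        "Strategic Planning": 0.10,
        "Program Management": 0.20,
        "Program Results": 0.50,
    }

    # Assumed answer values: yes/no questions in the first three sections; the
    # Results section also allows partial credit for "small extent"/"large extent".
    ANSWER_VALUES = {"yes": 1.0, "large extent": 1.0, "small extent": 0.33, "no": 0.0}

    def part_score(answers_by_section):
        """answers_by_section maps a section name to a list of answer strings."""
        total = 0.0
        for section, weight in SECTION_WEIGHTS.items():
            answers = answers_by_section.get(section, [])
            if not answers:
                continue
            section_fraction = sum(ANSWER_VALUES[a] for a in answers) / len(answers)
            total += weight * section_fraction
        return round(100 * total, 1)  # overall score on a 0-100 scale

    # Hypothetical program: strong purpose and management, weak results.
    example = {
        "Program Purpose and Design": ["yes", "yes", "yes", "yes"],
        "Strategic Planning": ["yes", "no", "yes"],
        "Program Management": ["yes", "yes", "no", "yes"],
        "Program Results": ["small extent", "no", "no", "large extent"],
    }
    print(part_score(example))  # 58.3 under these assumed values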

What are the results of the first PART evaluations? Over half of the programs evaluated by the PART were “unable to demonstrate results.” The following excerpt captures the “unable to demonstrate results” category well:
 

 Despite enormous federal investments over the years, virtually none of the programs intended to reduce drug abuse are able to demonstrate results. Such a finding could reflect true failure or simply the difficulty of measurement, so further analysis is in order. (pg 51, Budget volume)

Or, this statement could be a hint that religious-based drug treatment programs, in line with the President’s charitable choice initiatives, will become the privileged beneficiaries of government grants intended to reduce drug abuse.

[Pie chart: distribution of PART ratings among the programs evaluated. Source: "Rating the Performance of Federal Programs," President's FY 2004 Budget]

44.5% of the evaluated programs were rated “adequate,” “moderately effective,” or “effective,” and 51% were rated “results not demonstrated” (see pie chart above). Unsurprisingly, grant programs received lower than average ratings, reflecting the inherent difficulty of evaluating the performance of a wide variety of grantees in a standardized, federally run fashion.
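
These percentages can be cross-checked against the program counts cited in the budget discussion: 234 programs were reviewed, eleven of them rated “ineffective” (see below). A quick arithmetic check, in which the derived category counts are rounded inferences rather than published figures:

    # Cross-check of the rating shares against the cited program counts
    # (the derived counts are rounded inferences, not published figures).
    total_reviewed = 234
    positive_share = 0.445         # "adequate", "moderately effective", or "effective"
    not_demonstrated_share = 0.51  # "results not demonstrated"
    ineffective_count = 11

    positive_count = round(total_reviewed * positive_share)                  # about 104
    not_demonstrated_count = round(total_reviewed * not_demonstrated_share)  # about 119

    # 104 + 119 + 11 = 234, so the shares and counts are consistent.
    print(positive_count + not_demonstrated_count + ineffective_count)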

The Budget volume, Performance and Management Assessments, lays out for each program that was evaluated:

 

  • The PART rating (effective, moderately effective, adequate, ineffective, or results not demonstrated).
     
  • The type of program (competitive grant, block/formula grant, regulatory, capital assets, credit, direct, and research and development).
     
  • A program summary, a PART assessment summary, and the Administration’s recommendations.
     
  • A chart showing the levels achieved for purpose, planning, management, and results/accountability.
     
  • A chart showing long-term and annual measures, targets and accomplishments.

  • The program funding levels for 2002, estimated 2003, and proposed 2004.

All the programs that were evaluated are indexed in the back of the volume by Department along with their rating.

The Office of Management and Budget (OMB) used the evaluations to make budget recommendations for FY 2004. Eleven programs were judged flat-out ineffective. While some of these did not receive proposed cuts, others did. For example:

 

  • Even Start (Department of Education block grants for family literacy programs, including early childhood education, adult education, and parenting education). The Administration requested only enough funds to continue awards to current grantees and redirected the remainder of the funds to the “Early Reading First” program. FY 2002 budget was $250 million; recommended FY 2004 budget is $175 million.
     
  • Safe and Drug Free Schools State Grants (Department of Education block grants for programs to reduce youth crime and drug abuse). Although the assessment cites a RAND study finding that the grant funds are spread too thinly to support quality interventions, the Administration recommended cutting funding from $472 million in FY 2002 to $422 million in FY 2004, with future funding tied to results.
     
  • Vocational Education State Grants (Department of Education block grants to support state-sponsored vocational education programs). The Administration recommended that “grantee funding will be contingent on a rigorous assessment that student outcomes are being achieved”; that grantees be able to “focus” funds according to the needs of students in a particular locality; and that states be allowed to redirect funds to Title I of the Elementary and Secondary Education Act. The recommended cut in funding is from $1.18 billion in FY 2002 to $1.0 billion in FY 2004.
     
  • Project-Based Rental Assistance (Department of Housing and Urban Development capital asset program to fund landlords who rent affordable apartments to low-income families). The Administration recommends no expansion of this program; the funding increase in FY 2004 is included only because more properties are coming up for renewal of their assistance contracts.
     
  • Juvenile Accountability Block Grants (Department of Justice block grant program to provide states with funds to support improvements in state and local juvenile justice systems). The Administration recommends no funding for this program in 2004.
     
  • Earned Income Tax Credit (EITC) Compliance (Department of Treasury direct federal program to reduce erroneous EITC payments). Recommended budget increase from $146 million in FY 2002 to $251 million in FY 2004 to require “high-risk” EITC claimants to pre-certify that the children claimed on their returns are actually qualifying children under the EITC. “High-risk” filers will be identified through databases related to child custody and will include taxpayers with characteristics such as being relatives other than parents who claim a child for EITC purposes. The IRS is also to delay refunds on returns deemed high risk while agents take action to resolve the cases. (Would that corporate tax evaders were subject to similar actions!)
     

However, a program did not have to be rated ineffective to receive recommended funding cuts. The Community Oriented Policing Services (COPS) program for hiring and redeploying police officers was not rated “ineffective,” but “unable to demonstrate results.” Nevertheless, because its impact on crime is “inconclusive,” it is being phased out, with only a small amount of funding slated to accomplish its other goals of reducing crime and increasing trust in police. Funding in FY 2002 was $684 million; recommended funding in FY 2004 is $164 million.
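
To put the cited funding changes in perspective, here is a short sketch computing the percentage change for each program whose FY 2002 and proposed FY 2004 levels are given above; the dollar figures come from the budget documents, and the script is only illustrative arithmetic.

    # Percentage change in funding, FY 2002 actual vs. proposed FY 2004,
    # in millions of dollars, for the programs with both figures cited above.
    programs = {
        "Even Start": (250, 175),
        "Safe and Drug Free Schools State Grants": (472, 422),
        "Vocational Education State Grants": (1180, 1000),
        "EITC Compliance": (146, 251),
        "COPS": (684, 164),
    }

    for name, (fy2002, fy2004) in programs.items():
        change = 100 * (fy2004 - fy2002) / fy2002
        print(f"{name}: ${fy2002}M -> ${fy2004}M ({change:+.0f}%)")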

What does all this mean? The PART is a tool that will be used to “objectively” justify budget cuts or increases. Given the current Administration’s devolution, deregulation, and defunding triad, it will likely be used to reduce government. For more information about the PART, see OMB Watch’s previous Watcher article, our comments to OMB, or the OMB website. The President has encouraged public comments addressing the PART’s “limitations and shortcomings” in such areas as increasing consistency, defining “adequate” performance measures, minimizing subjectivity, measuring progress toward results, institutionalizing program ratings, assessing overall context, and increasing the use of rating information. Comments may be emailed to performance@omb.eop.gov.
