From: Howard Rodstein, Crime Victims United

Date: November 27, 2006

Subject: Washington State Institute Paper on Evidence-Based Practices

 

Below please find my thoughts on the Washington State Institute for Public Policy paper on evidence-based practices as they relate to prison construction, criminal justice costs, and crime rates.

 

Howard Rodstein

Crime Victims United

 

======================================================================

 

The 2005 Washington State legislature asked the Washington State Institute for Public Policy (WSIPP) to report on the potential of "evidence-based options" to reduce the need for prison beds, save money for taxpayers, and reduce the crime rate. In October 2006, WSIPP published its response, entitled "Evidence-Based Public Policy Options To Reduce Future Prison Construction, Criminal Justice Costs, and Crime Rates". The paper was authored by Steve Aos, Marna Miller, and Elizabeth Drake and is available at http://www.wsipp.wa.gov/pub.asp?docid=06-10-1201.

 

The paper concludes that moderate to aggressive use of evidence-based programs should produce significant cost savings for taxpayers and for individual victims, along with a slight decrease in the crime rate. WSIPP estimates that Washington currently spends $41 million per year on evidence-based programs. Assuming an increase in funding to $63 million per year in 2006 dollars, the paper projects taxpayer savings of up to $2 billion from 2008 to 2030. By 2030, the incarceration rate would be about the same as today, the crime rate would be slightly lower, and three fewer prisons would be required relative to the current forecast.

 

To reach these conclusions, the authors examined evaluations of 571 evidence-based juvenile and adult treatment and prevention programs from the United States and other countries. They determined each program's cost and the financial benefits, if any, to taxpayers and to victims resulting from reduced recidivism and prevented crime, and then calculated the net benefit of each program.

 

Although the authors were also charged with investigating sentencing alternatives and the use of risk factors in sentencing, the paper does not address those topics; rather, it assumes that sentencing practices will remain unchanged. All of the projected benefits of evidence-based programs come from preventing crime.

 

The paper considered evaluations based on control groups and "rigorous comparison groups". It found that "some evidence-based programs can reduce crime, but others cannot." For example, for adult offenders, 25 cognitive behavioral therapy programs showed an average recidivism rate of 59 percent compared to 63 percent for the comparison groups. (This decrease of 4 percentage points is reported as a 6.3 percent reduction because 4 is 6.3 percent of 63.) On the other hand, the average change in recidivism for 22 adult boot camp programs was zero percent.
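
 

To make the arithmetic behind that parenthetical explicit, here is a minimal sketch in Python. The only inputs are the 59 and 63 percent recidivism rates quoted above; the variable names are mine, not WSIPP's.

    # Percentage-point drop vs. relative reduction, using the adult
    # cognitive behavioral therapy figures quoted above.
    comparison_rate = 0.63   # recidivism in the comparison groups
    treatment_rate = 0.59    # recidivism in the treated groups

    point_drop = comparison_rate - treatment_rate        # 0.04, i.e. 4 percentage points
    relative_reduction = point_drop / comparison_rate    # about 0.063, i.e. 6.3 percent

    print(f"{point_drop * 100:.0f} point drop, {relative_reduction:.1%} relative reduction")

The reductions quoted in the rest of this summary appear to follow the same convention: percent change relative to the comparison group's rate, not raw percentage points.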

 

In the juvenile realm, 21 restorative justice programs for low-risk offenders showed an average recidivism reduction of 8.7 percent, while 10 "scared straight" programs showed an average recidivism increase of 6.8 percent.

 

Some programs showed dramatic decreases in recidivism but were supported by few evaluations. For example, the "Nurse Family Partnership-Mothers" program showed a 56.2 percent reduction in recidivism, but just one program evaluation was available.

 

Unlike some proponents of treatment, who portray it as a panacea, the paper cautions that "it is one thing to model these results carefully on a computer, it is quite another to find a way to make them actually happen in the real world . . . Safeguarding the state's investment in evidence-based programs requires ongoing efforts to assess program delivery and, when necessary, taking the required steps to make corrective changes . . . As in every enterprise, quality control matters."

 

The paper touches on the crime-reduction power of incarceration: "We found that a 10 percent increase (or decrease) in the incarceration rate leads to a statistically significant 3.3 percent decrease (or increase) in crime rates." Oregon's incarceration rate rose from 206 per 100,000 residents in 1995 to 531 in 2005, which amounts to roughly ten successive 10 percent increases. Using the WSIPP formula, that would generate about a 33 percent decrease in crime rates. What Oregon actually saw over this period was a 45 percent decrease in the violent crime rate and a 16 percent decrease in the property crime rate. The difference may relate to the fact that violent criminals account for most of Oregon's increase in incarceration rate.
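
 

That back-of-envelope application of the elasticity can be checked with the short sketch below. This is my own illustration, not WSIPP's model; in particular, whether the 3.3 percent steps simply add or compound at each step is an assumption the paper does not spell out, so the sketch computes both.

    import math

    # Oregon incarceration rate per 100,000 residents (figures cited above).
    rate_1995, rate_2005 = 206, 531

    # Number of successive 10 percent increases needed to grow from 206 to 531.
    steps = math.log(rate_2005 / rate_1995) / math.log(1.10)   # about 9.9, i.e. roughly ten

    # WSIPP elasticity: each 10 percent rise in incarceration -> 3.3 percent drop in crime.
    added = steps * 0.033                  # about 33 percent if the steps simply add
    compounded = 1 - (1 - 0.033) ** steps  # about 28 percent if each step compounds

    print(f"steps: {steps:.1f}")
    print(f"predicted crime decrease: {added:.0%} (added) or {compounded:.0%} (compounded)")

Either way, the prediction falls between the 16 percent decrease in property crime and the 45 percent decrease in violent crime that Oregon actually experienced.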

 

In recent years, corrections spending in Oregon has been a frequent subject of debate. On one hand are groups like Crime Victims United, which say that increased spending has paid off generously because far fewer people have become victims. On the other hand are groups which say that increased spending is a waste and has not provided greater safety. They argue for major reductions in sentences, which would result in significant savings in prison costs but, if we can believe the WSIPP formula, would also result in significant increases in crime rates. The only way to reduce expenditures while also reducing crime rates is to prevent crime. Crime prevention has been the holy grail for decades, but the reality has often fallen short of the promise.

 

Now the WSIPP paper holds out the prospect that we can prevent crime while reducing future spending increases through the use of evidence-based programs. This is something we can all celebrate, if it is true. So the question becomes: how much faith should we place in this projection?

 

WSIPP has studied the impact of treatment and crime prevention programs for many years and has considerable expertise in this area. Its authors surveyed a wide range of treatment and prevention programs and set an appropriately high bar for inclusion of program evaluations in their study. The caveats they include in their paper reassure us that they understand the difference between theory and reality and the need for ongoing evaluation of programs. All of this lends credence to their conclusions.

 

Their conclusions are based on a large number of program evaluations. Can these evaluations be trusted? There are many potential sources of systematic error. Evaluations are usually performed by the people who designed and ran the programs, or by academics who are predisposed to believe that programs are effective. It is well known that program evaluations showing good results are more likely to be published than those showing poor results. Selection bias, the result of a comparison group that is not truly comparable to the treatment group, is common but not always evident from the program evaluations. The flawed practice of ignoring program dropouts in evaluations is also known to occur. WSIPP has attempted to control for these problems, but it is hard to know how successful it has been. As the paper notes, a goal for the future is to perform "a formal risk analysis to test the degree to which the model's findings are sensitive to key data inputs."

 

Some of the reported savings strain credulity. One program shows a savings of $77,798 per program participant, and many show savings of $10,000 per participant or more. With such savings, one would expect government budgets to be generating hefty surpluses.

 

Anyone who has relied on a weather forecast to plan their weekend can appreciate the difficulty of forecasting outcomes 24 years in the future.

 

It would be naive to accept the paper's findings as gospel and unwise to dismiss them out of hand. We should maintain an open-minded, healthy skepticism. This means proceeding cautiously and gaining more experience through which we will become better qualified to judge. This process can be no better than the information we gather to guide us. As the paper says: "We recommend that the legislature initiate an effort to evaluate the outcomes of key programs in Washington. If the evaluations are conducted with rigorous and independent research designs, then policymakers in Washington will be able to ascertain whether taxpayers are receiving positive rates of return on their dollars."

 

The key phrase here is "if the evaluations are conducted with rigorous and independent research designs". Those who design and run the programs and their evaluations should aim for a level of rigor sufficient to convince an open-minded skeptic or disabuse an open-minded believer. Without rigorous and credible evaluation, we will find ourselves 24 years down the road knowing nothing more than we do today.

 

======================================================================

 

Here are some questions that came to mind as I read this paper.

 

* How can one distinguish a credible evaluation from one that should not be trusted?

 

* Do you accept as rigorous those program evaluations that ignore dropouts?

 

* How can one compare the reality of a program to the theory?

 

* People are understandably skeptical about pharmaceutical studies conducted by drug companies. Should they also be skeptical of treatment program studies conducted by treatment proponents?

 

* Random selection is considered essential in pharmaceutical studies. Should it also be considered essential in treatment studies?

 

* Is there a pro-treatment bias in academia?

 

* Is prevention more cost-effective than treatment or vice versa?

 

* In Oregon, we had a 44 percent decrease in reported violent crime from 1995 through 2002, the largest decrease in the nation. Can you estimate the financial benefit to taxpayers and victims of this decrease?

 

* Oregon's incarceration rate increased 158 percent from 1995 through 2005. This amounts to ten 10 percent increases. According to your findings, this should yield a 33 percent decrease in the crime rate. Is this a valid application of your findings?

 

* How can you eliminate self-selection bias in the Nurse Family Partnership-Mothers program?

 

* The Nurse Family Partnership-Mothers program lists only one evaluation. Why is it not listed as having "too few evaluations"?

 

* What conclusions should one draw from the same program showing vastly different results in two similar cities? For example, the Breaking the Cycle drug court program showed a 35 percent decrease in rearrests in Birmingham and a 7 percent increase in rearrests in Jacksonville (Source: GAO-05-219).

 

* If we adopt an evidence-based program, do we still need to rigorously evaluate our implementation of it?

 

* Studies use various definitions of recidivism over various amounts of time. How do you account for this disparity?

 

* The measure of recidivism mentioned in the paper is "another felony or misdemeanor conviction after a 13-year follow-up". Most studies report 1-year or 3-year results. Where do you get information on a 13-year follow-up?

 

* How can you estimate cost benefits based on recidivism rates that do not take into account the number of new crimes committed by a participant or prevented by a program?

 

* The paper does not distinguish between violent and non-violent crime. If violent crime is more costly, doesn't this factor need to be considered when comparing program effectiveness?

 

* Under Senate Bill 267, Oregon is increasing the use of evidence-based programs at a rapid rate. Should we therefore expect significant reductions in crime rates in the near or intermediate term? Should this be factored into prison population forecasts?

 

* A preliminary study of drug treatment programs in Oregon prisons showed no statistically significant benefits. How should we interpret this?

 

* The paper found that a moderate-to-aggressive portfolio of evidence-based programs would cause a slight decrease in crime rates. If we spent the same amount of money on incarceration instead of treatment, how would that affect crime rates?