Having just published two Straight Talk reports on the pattern of disappointing effects for most rigorously evaluated programs, we focus this report on a compelling positive example of demonstrated effectiveness. The example we showcase is the Knowledge is Power Program (KIPP) network of charter schools. Our full review of the evidence on KIPP is linked here on our Social Programs That Work website; the following are brief highlights.
- Program: A nonprofit network of 209 college-preparatory, public charter schools that serve a predominantly low-income, minority population of students from pre-K through high school.
- Evaluation Methods: Two well-conducted randomized controlled trials—RCTs—that respectively evaluated the effectiveness of KIPP elementary schools (some of which offer pre-K) and KIPP middle schools as implemented on a sizable scale. Schools in these studies were located in nine states and the District of Columbia.
- Key Findings: Both types of schools produced sizable, statistically significant effects on reading and math achievement—increases of between 5 and 10 percentile points (compared to the control group)—as measured two to three years after random assignment.
- Other: This evidence of effectiveness applies to KIPP pre-K/elementary schools and middle schools (KIPP high schools have not yet been evaluated in an RCT). A limitation of the findings is that they apply to the subset of KIPP schools that are oversubscribed, and may not generalize to KIPP schools that are not oversubscribed.
For purposes of disclosure, we note that the Laura and John Arnold Foundation (LJAF) and members of its board have provided funding support for KIPP schools, and LJAF funded a supplementary analysis of the pre-K/elementary school RCT (as described in the full review). However, our team at the Foundation—Evidence-Based Policy—was not involved in those efforts and conducted this evidence review independently.
In addition to summarizing the evidence findings, we offer four brief observations on their implications for policy and research, as follows.
First, these findings provide a convincing answer to the nihilist, described in an earlier Straight Talk report, who says that positive RCT findings have only marginal policy value because the world is complex and ever-changing, and what works in one instance tells us little about what will work in a different time or place.
The KIPP findings show that, in fact, some positive RCT findings do successfully replicate—in this case, across two well-conducted RCTs of KIPP as implemented in multiple U.S. jurisdictions, with two student samples of different ages (pre-K/elementary school and middle school). Such evidence provides strong confidence that if new jurisdictions were to implement KIPP in a similar population and similar settings—adhering, of course, to the program’s key features—they would likely see similar, meaningful gains in student achievement. We say this because the successful replication (i) essentially rules out the possibility that each RCT’s positive finding was a statistical fluke (since the chance of two such flukes is extremely small); and (ii) shows that KIPP’s effects can generalize across different settings and age groups.
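The "two flukes" point above is simple arithmetic: for independent studies, the probabilities of chance false positives multiply. The sketch below illustrates this using a conventional 0.05 significance threshold as an assumed single-study false-positive rate; the actual p-values in the KIPP RCTs may be smaller still, which would make the joint probability even lower.

```python
# Illustrative arithmetic only: how likely is it that two independent,
# well-conducted RCTs would BOTH show a positive effect purely by chance?
# The 0.05 threshold is an assumption for illustration (the conventional
# significance level), not a figure taken from the KIPP studies.

alpha = 0.05                 # assumed chance-finding rate for one RCT
both_flukes = alpha * alpha  # independent studies: probabilities multiply

print(both_flukes)  # ~0.0025, i.e., roughly a 1-in-400 chance
```

In other words, even under this conservative assumption, the odds that both KIPP RCTs were statistical flukes are on the order of 1 in 400, which is why replication across independent studies so sharply strengthens confidence in a finding.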
Second, the KIPP findings are an important success story for the tiered-evidence grantmaking approach of the U.S. Department of Education’s Investing in Innovation (i3) and Education Innovation and Research (EIR) initiatives. As described in our last Straight Talk report, such tiered-evidence initiatives award their largest grants (for “scale-up” or “expansion”) to programs that have credible prior evidence of effectiveness. In this case, KIPP won a $50 million scale-up grant from i3 in 2010 based on highly promising prior evidence, including results of a randomized evaluation of a KIPP middle school in Lynn, Massachusetts. The government thus made a major investment in KIPP, but also prudently required additional, independent RCT evaluations as a condition of the grant. The results of those RCT evaluations, summarized above, show that the investment paid off.
Third, the KIPP findings illustrate a general pattern in social program evaluations (which we’ve also noted in previous Straight Talk reports): Effectiveness often depends more on the specific program model that is used (KIPP) than on the general program approach (charter schools). Whereas the evidence on KIPP charter schools is positive and compelling, the evidence on charter schools as a general approach is not especially promising. In 2010, the Institute of Education Sciences (IES) published results of a large, IES-commissioned RCT of 36 oversubscribed charter middle schools across the United States that used a heterogeneous array of program models. The study found that the charter schools’ average effect on reading and math achievement was near zero over a two-year follow-up period. More specifically, some charter schools outperformed the regular public schools attended by control group students, whereas other charter schools underperformed the regular public schools; but, on average, neither type of school had an advantage.
Fourth, as a final, cautionary observation: while we can be confident that KIPP increases student achievement, we don’t yet know for certain that such gains will lead to improved longer-term life outcomes (e.g., college attendance and completion, workforce earnings, job satisfaction). Studies have found that school achievement is correlated with such longer-term outcomes, but the relationship is not necessarily causal, as it is possible that other factors—such as individual motivation or family support—may be driving both high achievement and better long-term earnings and other outcomes. Thus, as a next step in the research, we believe it would be valuable to conduct longer-term follow-up of the two KIPP RCTs to see if KIPP’s positive effects on student achievement are indeed a harbinger of better life outcomes for the highly disadvantaged children that KIPP serves. But, in the meantime, the KIPP findings provide strong reason for optimism.