- Since 2015, Arnold Ventures’ Evidence-Based Policy initiative has funded more than 75 randomized controlled trials (RCTs), totaling about $50 million, to evaluate a wide range of social programs. Well-conducted RCTs are widely considered the strongest method of evaluating program effectiveness.
- Today we begin sharing the studies’ findings, through one-page summaries posted on the RCT Findings page of the Arnold Ventures website as the study results become available. The first four are posted, and we will notify Straight Talk subscribers of additions to the site via periodic email updates.
- Each summary aims to provide readers with a concise overview of the study and its principal findings on important outcomes, written in plain language, with careful attention to neither overstating nor understating the results. Each summary also links to the researchers’ full study report.
- Through these summaries, we hope to address common shortcomings in the program evaluation literature, such as: (i) public reporting of only studies that show positive effects (we summarize all studies we have funded regardless of the findings); (ii) selective reporting of only positive effects in each study (we summarize the effects on all pre-registered primary outcomes); and (iii) the omission of key study limitations that may reduce confidence in the results (we note any such limitations).
- We welcome any comments or suggestions on our reporting process (via email).
Over the past five years, Arnold Ventures’ Evidence-Based Policy team has funded more than 75 RCTs, totaling approximately $50 million, to evaluate a wide range of social programs and practices (“interventions”). We focus on RCTs because, when well-conducted, they are widely considered the most scientifically rigorous method of evaluating intervention effectiveness. Our main goal is to grow the number of social interventions with credible evidence of sizable, sustained effects on important life outcomes, so that they can be expanded to improve people’s lives on a larger scale.
Today, we begin sharing the findings of these studies, through one-page summaries of the RCTs that have reported interim or final results. We will post all summaries on the RCT Findings page of the Arnold Ventures website as the study findings become available, and will notify Straight Talk newsletter subscribers of additions to the site via periodic email updates. The first four summaries have been posted, and here are the direct links:
- Nurse-Family Partnership Home Visitation Program: Long-term findings on maternal and child mortality in three randomized controlled trials. Read More
- Project QUEST Occupational Training for Low-Income Individuals: Long-term (nine-year) earnings impacts in the QUEST randomized controlled trial. Read More
- KIPP Charter Middle Schools: Long-term findings on college enrollment and persistence in a large randomized controlled trial. Read More
- Federal Workplace Health and Safety Inspections: A randomized controlled trial of workplace health and safety inspections administered by the federal Occupational Safety and Health Administration (OSHA). Read More
Consistent with our overall approach to evidence reporting, these summaries aim to provide readers with a concise overview of the study and its principal findings, written in plain language, with careful attention to neither overstating nor understating the findings. Each summary includes:
- A description of the intervention that was evaluated.
- An overview of the study design.
- The impacts found on the study’s pre-registered primary outcomes. Consistent with best evaluation practice, we request that researchers whom we fund—as a condition of our grant award—pre-register at the study’s inception the primary outcomes and analyses that will be used to assess the intervention’s effectiveness, and adhere to these outcomes/analyses over the course of the study. We encourage researchers to pre-register, as primary outcomes, one or a few outcomes that are of self-evident policy importance (e.g., in the four RCTs linked above: mortality, workforce earnings, college enrollment and persistence, and serious workplace injuries, respectively). Each of our summaries reports the intervention’s impact on all of the study’s primary outcomes.
- An overview of the study’s quality, including any limitations that could reduce confidence in the accuracy of the results.
- A link to the researchers’ full study report, which provides greater detail on the intervention, study methods, and findings, including any findings based on secondary or exploratory (as opposed to primary) outcomes and analyses.
Through these summaries, we hope to address well-known and common shortcomings in the reporting of evaluation results, such as:
- Publication bias—that is, the public reporting of only the studies that show positive intervention effects, as opposed to studies that show no effects or adverse effects, causing an overall positive bias in the evaluation literature. We seek to address this problem by posting summaries of all RCTs we fund, regardless of the findings.
- Selective outcome reporting—that is, the common tendency of study abstracts or reports to highlight the study’s positive impact findings, while (i) not mentioning disappointing or countervailing impacts that the study found; or (ii) failing to note that the positive impacts were based on post-hoc (as opposed to pre-registered) analyses, or were just a few of many impacts that the study estimated. Item (ii) can easily lead to “false-positive” findings that occur by chance due to the study’s measurement of numerous outcomes using multiple analysis methods. As noted above, we seek to address these problems by having researchers pre-register their primary outcomes and analyses, and by including the results of all such outcomes/analyses in our summaries.
- The omission or downplaying of important study limitations. It is not uncommon for study abstracts or summaries to omit or downplay key limitations in the study’s design or execution that could have led to an inaccurate estimate of the intervention’s impact, such as high rates of sample attrition, violations of random assignment, or sizable differences in characteristics between the intervention and control groups at the start of the study. Each of our summaries notes any such limitations.
- The presentation of the size of the intervention’s effects in terms that only a researcher can understand, such as standard deviation units or odds ratios. Our summaries seek to present the size of effects in easily understood terms, such as gain in annual earnings (for a job training program), increase in high school graduation rates (for an education program), or reduction in arrest rates (for a crime prevention program).
- The omission of information about the duration of the study. It is remarkable how many study abstracts do not report the time period over which the intervention’s impact was measured, so that readers cannot readily gauge (without reading the full study report) whether it was a short-term impact (e.g., three months) that could easily fade over time, or a longer-term impact (e.g., three years) that may represent an enduring improvement in participants’ lives. Each of our summaries reports the time period over which the study’s impacts were measured.
We welcome any comments or suggestions on our reporting process (via email), as it is still in an early phase and we hope to make it as useful as possible to the policy and research community. (We also encourage readers interested in potentially applying for funding to conduct an RCT to read our RCT Opportunity and Moving the Needle funding announcements.)
 Institute of Education Sciences and National Science Foundation, Common Guidelines for Education Research and Development, August 2013, linked here.
 National Research Council and Institute of Medicine, Preventing Mental, Emotional, and Behavioral Disorders Among Young People: Progress and Possibilities, Mary Ellen O’Connell, Thomas Boat, and Kenneth E. Warner, editors (Washington, DC: National Academies Press, 2009), recommendation 12-4, p. 371, linked here.
 U.S. Preventive Services Task Force, “Current Methods of the U.S. Preventive Services Task Force: A Review of the Process,” American Journal of Preventive Medicine, vol. 20, no. 3 (supplement), April 2001, pp. 21-35.
 The Food and Drug Administration’s standard for assessing the effectiveness of pharmaceutical drugs and medical devices, at 21 C.F.R. §314.126, linked here.
 Every Student Succeeds Act, Section 8002 definition of “evidence-based,” Public Law 114-95, December 10, 2015.
 We will summarize findings of all RCTs that have produced final results. We will also summarize interim findings in cases where the research team has publicly reported such findings.
 In addition, we request that researchers whom we fund—as a condition of our grant award—post their final RCT evaluation report on the Open Science Framework within one year of the study’s final data collection.