
About Straight Talk on Evidence

[T]he idea that we all hope you have learned in studying science in school … is a kind of scientific integrity …. For example, if you’re doing an experiment, you should report everything that you think might make it invalid – not only what you think is right about it …. [T]he idea is to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another.

– Nobel Prize-winning physicist Richard Feynman, Caltech commencement address, 1974

Problem

Exaggerated claims of effectiveness are pervasive in reported social program evaluation findings, sometimes leading to the expansion of ineffective programs at the expense of programs with meaningful effects on people’s lives.

Researchers and program providers often face strong incentives to report evaluation findings in the most favorable possible light. For program providers, positive findings may be essential to future funding and to the continuation of their program. For academic researchers, positive findings greatly increase the chances of publication in a top academic journal, which, in turn, is a central factor in faculty tenure decisions. By contrast, there is often no incentive—or even a strong disincentive—to report findings of no or adverse program effects, or to highlight study limitations that weaken the credibility of the results. Doing so may jeopardize program or research funding, likelihood of publication, and/or career advancement. Program providers and researchers may also feel strongly invested in a program’s success, and unconsciously interpret the study results in a way that confirms their hopes. Thus, for professional and other reasons, they can find it very difficult to report evaluation findings in a way that is consistent with principles of scientific integrity.

These factors fundamentally distort the reporting of evaluation findings, leading to frequently overstated claims of effectiveness. Such distortion has been well documented in the reporting of medical research.[i] In social policy, Arnold Ventures’ Evidence-Based Policy team monitors and reviews program evaluations on an ongoing basis. We have found the problem to be at least as great in social policy as it is in medicine, often:

  • affecting publications in leading peer-reviewed scientific journals;
  • causing programs to be erroneously labeled as “evidence based” by government and nonprofit certifying organizations; and
  • distorting the communication of information about “what works” to policymakers, the press, and the public.

In some notable cases, this has led to the expansion of programs that are likely to be ineffective, while diverting attention and funding from other programs that do have credible evidence of important impacts on people’s lives (examples [1], [2]).


Goal of Straight Talk

We seek to distinguish credible findings of program effectiveness from the many findings that merely claim to be credible, through an easy-to-read, no-spin digest of recent program evaluation findings.

We report primarily on randomized controlled trial (RCT) evaluations, given their unique ability to generate strong evidence about program impact. But even an RCT design provides no guarantee that the study was well implemented or that its results, as reported, accurately represent the true findings. We aim to explain in plain language why certain findings are credible or not, and whether they are presented accurately or exaggerated, so that readers can understand the findings themselves. Our target audience includes policy and program officials, researchers, journalists, and philanthropic funders.


Process

We systematically monitor the evaluation literature, and select studies to report based on such factors as their policy importance and level of press or policy attention. Our evidence reports are authored by Arnold Ventures’ Evidence-Based Policy team (or, occasionally, by guest researchers identified in the byline) and are independently reviewed by outside experts. Once we have developed a draft Straight Talk report, we share it with the study’s lead author, and invite him or her to provide a written response to the report prior to its public dissemination. The author’s response, if provided, is disseminated with our report and, if needed, our brief rejoinder to the response.

As a philanthropic organization, we receive no financial benefit from any program, enabling us to serve as an impartial reviewer of the evidence. In the few instances where the organization has helped fund an evaluation study that we are reporting on, we disclose our funding in the evidence report.


About Arnold Ventures

Arnold Ventures’ mission is to improve lives by investing in evidence-based solutions that maximize opportunity and minimize injustice. We work to develop and support initiatives that encourage governments and nonprofit organizations to help build the evidence base for social interventions and to consider reliable evidence as one of the primary factors in their decisions.

Arnold Ventures’ Evidence-Based Policy team, which leads the philanthropy’s work on Straight Talk, comprises the former leadership of the Coalition for Evidence-Based Policy, a nonprofit, nonpartisan organization that, from 2001 to 2015, played a key role in the launch of the evidence-based policy movement.

If you have questions or suggestions regarding this site, please contact Leya Mohsin of Arnold Ventures’ Evidence-Based Policy team.

You can view our privacy policy on the Arnold Ventures website.


References:

[i] Examples include: F. E. Vera-Badillo, R. Shapiro, A. Ocana, E. Amir, and I. F. Tannock, “Bias in reporting of end points of efficacy and toxicity in randomized, clinical trials for women with breast cancer,” Annals of Oncology, vol. 24, no. 5, 2013, pp. 1238-1244. Nasim A. Khan, Manisha Singh, Horace J. Spencer, and Karina D. Torralba, “Randomized controlled trials of rheumatoid arthritis registered at ClinicalTrials.gov: what gets published and when,” Arthritis & Rheumatism, vol. 66, no. 10, October 2014, pp. 2664-2674. John P. A. Ioannidis, “Why Most Published Research Findings Are False,” PLoS Medicine, vol. 2, no. 8, August 2005, p. e124.