Strategy Associates | The Obvious Expert

Fact or Fiction: When “Truths” Are Not True

In a Nutshell

Each scientific field must adopt its own methods of ensuring accuracy: the science behind the science. But ultimately, this self-reflection is a key part of the scientific process itself and its attempts at self-renewal. Science has proved itself to be an incredibly powerful method, and yet there is always room for further advancement.

There’s never an end point. We’re always groping toward the next big thing or breakthrough. Sometimes science does disappear down the wrong path for a bit before it corrects itself. For our colleague Brian Nosek, who led the re-testing of 100 psychology papers, the current focus on reproducibility is simply part of the scientific process.

“Science isn’t about truth and falsity, it’s about reducing uncertainty.” Really this whole project is science on science: Researchers doing what science is supposed to do, which is be skeptical of our own process, procedure, methods, and look for ways to improve.

A sign of strength

The idea that papers are publishing false results might sound alarming, but the recent crisis doesn’t mean that the entire scientific method is wrong. In fact, science’s focus on its own errors is a sign that researchers are on exactly the right path. Ivan Oransky, co-founder of the blog Retraction Watch, which tracks retractions printed in journals, tells Quartz that ultimately, the alarm will lead to increased rigor.

“There’s going to be some short-term and maybe mid-term pain as all of this shakes out, but that’s how you move forward,” he says. “It’s like therapy—if you never get angry in therapy, you’re probably not pushing hard enough. If you never find mistakes, or failures to reproduce in your field, you’re probably not asking the right questions.” For psychologists, who have seen so many results crumble in such a short space of time, the replication crisis could be disheartening. But it also presents a chance to be at the forefront of developing new policies.

Ioannidis tells Quartz that he views the most recent psychology reproducibility failures as a positive. “It shows how much effort and attention has gone towards improving the accuracy of the knowledge produced,” he says. “Psychology is a discipline that has always been very strong methodologically and was at the forefront at describing various biases and better methods. Now they are again taking the lead in improving their replication record.” For example, there’s already widespread discussion within psychology about pre-registering trials (which would prevent researchers from shifting their methods so as to capture more eye-catching results), making data and scientific methods more open, making sample sizes larger and more representative, and promoting collaboration.

Dorothy Bishop, a professor of developmental neuropsychology at Oxford University, tells Quartz that several funding bodies and journals seem to be receptive to these ideas and that, once one or two adopt such policies, she expects them to spread rapidly.

In 2005, John Ioannidis, a professor of medicine at Stanford University, published a paper, “Why most published research findings are false,” mathematically showing that a huge number of published papers must be incorrect. He also looked at a number of well-regarded medical research findings, and found that, of 34 that had been retested, 41% had been contradicted or found to be significantly exaggerated.  Since then, researchers in several scientific areas have consistently struggled to reproduce major results of prominent studies. By some estimates, at least 51%—and as much as 89%—of published papers are based on studies and experiments showing results that cannot be reproduced.

Researchers have recreated prominent studies from several scientific fields and come up with wildly different results. And psychology has become something of a poster child for the “reproducibility crisis” since Brian Nosek, a psychology professor at the University of Virginia, coordinated a Reproducibility Initiative project to repeat 100 psychological experiments, and could only successfully replicate 40%. Now, an attempt to replicate another key psychological concept (ego depletion: the idea that willpower is finite and can be worn down with overuse) has come up short. Martin Hagger, psychology professor at Curtin University in Australia, led researchers from 24 labs in trying to recreate a key effect, but found nothing. Their findings are due to be published in Perspectives on Psychological Science in the coming weeks.

Frank Voehl

Frank Voehl is the President and CEO of Strategy Associates and an Author and Series Editor of more than forty books covering the subjects of quality, innovation, change and business-cycle management. He is a former Chief Operating Officer and founding General Manager of FPL's Qualtec Quality Services, a Grand Master Black Belt in Lean Six Sigma, and a counselor/advisor to business and industry since 1985, in both the public and private sectors. His academic background is in industrial engineering, math, philosophy, and law. He received his undergraduate degree from St. John's University, and did some graduate and theological studies. He is currently enrolled in the FSU/JMI Entrepreneurial Development Studies Program, and is a Senior Mentor with Take Stock in Children.