This is the continuation of issues researchers are having replicating studies. In a significant number of cases, the results simply cannot be replicated.
Problem 1: Replication attempts are uncommon. Researchers Brian D. Earp and Jim A. C. Everett enumerated five reasons why:
“Independent, direct replications of others’ findings can be time-consuming for the replicating researcher”
“[Replications] are likely to take energy and resources directly away from other projects that reflect one’s own original thinking”
“[Replications] are generally harder to publish (in large part because they are viewed as being unoriginal)”
“Even if [replications] are published, they are likely to be seen as ‘bricklaying’ exercises, rather than as major contributions to the field”
“[Replications] bring less recognition and reward, and even basic career security, to their authors”
For these reasons the authors contend that psychology is facing a disciplinary social dilemma, where the interests of the discipline are at odds with the interests of the individual researcher. (Frontiers in Psychology/Wikipedia)
Problem 2: Results can’t be replicated in many cases. As revealed in a special study in 2015, 270 researchers were able to replicate the results of fewer than half of the studies they attempted, results that had been published in three leading psychology journals.
Yet, we’re diagnosing, treating, prescribing, and even changing social and government policy across wide swathes of our culture based on some of these findings.
Like many in the fields of mental health and addiction, I rely quite a bit on “evidence-based” treatment protocols to help patients. As the insurance bureaucracy grows ever more complicated, as each iteration of the DSM eliminates some mental illnesses, reclassifies others, and identifies new ones, as patients present with multiple diagnoses, clinicians must adapt to new requirements sometimes based on research that, quite frankly, doesn’t measure up. To do this with integrity clearly requires personal grit.
In 2009 in Therapy Soup, we wrote that ethical clinicians must keep educating themselves, continuing to take classes and/or attentively reading up on the field. But what do we do if the education itself is flawed?
I’m not sure there is a clear answer, but I do believe that as we try to keep up with the research and the resultant insights and protocols, we must assess ourselves. We must recommit to acting professionally with deep integrity and compassion. And, when our years of experience seem to indicate something quite different from the research, we might have to take the chance to help a patient in a way that’s worked for us before, no matter what the science says.