The replication crisis has been an elephant in the room for social scientists for the past several years, with many influential, bedrock studies appearing to be less rigorous than initially accepted. By some estimates, as many as two of every three studies could not be replicated.
While the social sciences are only adjacent to medicine, the implications of the latest research may help all fields improve best practices in designing and executing clinical studies.
A study published Aug. 27 in Nature Human Behaviour showed that scientists are skilled at detecting questionable or unreliable results. Corresponding author Brian Nosek of the University of Virginia in Charlottesville, Virginia, and colleagues tested 21 studies from Science and Nature, two highly regarded journals. Most were psychological studies with student subjects.
Experimenters were able to reproduce the results of 13 of the 21 studies, a higher replication rate than in previous research.
"A substantial portion of the literature is reproducible," Nosek said in an interview with NPR. "We are getting evidence that someone can independently replicate [these findings]. And there is a surprising number [of studies] that fail to replicate."
Nosek et al. also examined whether scientists could predict which experiments would fail to replicate. Roughly 200 experts placed bets on which studies would hold up under closer scrutiny. The experts proved capable of guessing which experiments would and wouldn’t be replicated.
“The prediction market beliefs and the survey beliefs are highly correlated and both are highly correlated with a successful replication,” the authors wrote. “[T]hat is, in the aggregate, peers were very effective at predicting future replication success.”
But, according to NPR, such predictions might not be possible in medical research. Jonathan Kimmelman of McGill University in Montreal, Canada, noted that such forecasting in medicine often fails.
"That's probably not a skill that's widespread in medicine," he told NPR. The social scientists may have deep skills in analyzing data and statistics, but such expertise doesn’t translate to medical testing