Karl Popper’s contributions to science are many, but one of the most important is the requirement of falsifiability. He asked scientists and researchers to try, time and again, to prove that we are wrong. But Wittelston (2016) argues this is not happening: the scientific community is plagued by at least five related biases that frustrate the falsification of study results. He identifies them as verification bias, novelty bias, normal science bias, evidence bias and market bias.
On verification bias, he writes that researchers are obsessively focused on the verification principle, that is, on trying to prove they are right by generating positive results. This often involves HARKing (hypothesizing after the results are known), which is thoroughly anti-Popperian, even unethical. He also faults the many journal editors who reject a paper, or advise changes, when the data do not support the hypothesis.
The demand for novelty by journals, he posits, is a second problem. This insistence on groundbreaking research ignores the fact that most research is incremental; genuinely groundbreaking work is rare. He argues there are three good reasons to encourage ground-laying, incremental research: first, we do not know a priori which research will turn out to be cutting-edge; second, ground-laying contributions produce the building blocks on which cutting-edge work is built; and third, replication studies, both failed and successful, play an essential role.
He also identifies a normal science bias, whereby results that seriously challenge the prevailing paradigm are not welcomed as a step toward further progress but are instead put aside as the scholar’s mistakes. While he admits that normal-science practices are not entirely dysfunctional, they end up restricting out-of-the-box thinking.
If only studies with positive outcomes are published, then we can be certain that false positives will get published. This he names the evidence bias. To counter it, he suggests, for example, a journal of replications in which even failed studies would be published.
The desire for impact among top journals leads to the last bias: market bias. The commercial drive associated with impact means no journal editor wants to publish papers where the data do not support the theory, or where replication studies fail to reproduce earlier reported results.
He proposes seven solutions to this problem, which I will quote here:
- editorial policies might dispose of their current overly dominant pro-novelty and pro-positives biases, and explicitly encourage the publication of replication studies, including failed and unsuccessful ones that report null and negative findings;
- an option is to stimulate pre-reviewing/pre-publishing of a study’s theory and design;
- open access publication by funding agencies and research institutes of all work produced prior to journal submission could provide access to studies not published in journals;
- all raw data, protocols and data analysis code of accepted journal articles should be made available to the journal (which may collaborate with an established archive consortium) in order to make the execution of independent replication studies a much easier endeavor;
- a tradition of meta-analyses that correct for publication bias has to be established, similar to that in medicine;
- reporting significance only is inadequate, as the p-statistic is anything but uncontroversial. Additionally, therefore, I would support Hubbard and Armstrong’s (1997, p. 337) earlier plea for “reporting effect sizes and confidence intervals […] If statistical tests are used, power tests should accompany them”;
- journals may appoint a replication section editor.
What do you fellows think? I am looking at you, Mike and Neil.
It seems like the core problem is having gatekeepers in this digital age.
With the rise of preprint servers, I’m wondering how long traditional journals are going to last, or at least how long they will dominate the scientific conversation as much as they have historically. Increasingly the official publication seems anti-climactic, happening long after most of the scientific discussion, which now takes place online (typically on Twitter or, these days, Mastodon) and seems to preempt the official peer review process.
That said, I’m not sure how pervasive preprint culture is across all fields. It certainly seems prevalent in the sciences, although I think there are still a few prestigious journals that won’t publish anything that has been pre-released.
Unless the Twitter discussion is conducted by a community of practitioners, I don’t think it can replace the role of peer reviewers. But I do think there is a need to rethink some of the concerns raised in the article.
It tends to include the practitioners, with the rest of us able to watch and participate. And I get the impression there’s a lot of discussion happening between the practitioners via direct messaging, email, etc.
Maybe I will return to Twitter at some future date. Or to Mastodon, once I know what to do with it.
Here’s Dr Richard Horton, editor of The Lancet, with his thoughts on bad science: up to 50% of published papers may be worthless.
Click to access PIIS0140-6736%2815%2960696-1.pdf
The Lancet has also recently been accused of poor science by Professor Norman Fenton. And the general public, not versed in the scientific method, is not in a position to judge the veracity of researchers’ ‘proofs’. Anything can be foisted on us, and usually is.
This is good, Tish. The article I referenced describes much of the work in psychology, medicine and other sciences as shoddy. I think there really should be a change in how we do science.
It has a lot to do with creating new products/techniques/inventions and the patent system, and thus MONEY in royalties. It seems researchers can take out patents even though they may be working in publicly funded institutions, and so they have a vested interest in ‘proving’ their particular brand of science. Once something is taken up as a commercial product/intervention, the first big hit is what counts. Even better if you can get the mainstream media to do the marketing. We are now in the era of fake science, and most of us don’t know how to spot it. It’s not helped either by the computer-generated ‘scientific’ papers flooding the internet.
Money kills good research. And you are right, most of us cannot tell bad research from good, and if the gatekeepers/reviewers are not doing a good job of it, then we might as well just close shop.
I’m also wondering how much independent research actually happens these days. Most university research requires big funding, often from big donors like Gates. There must surely be a tendency, if only a subliminal one, to find the solutions the funder is looking for. If whole departments are funded this way, then you can see how it is that independently minded scientists who challenge protocols and findings are sacked and vilified. This particularly happens in medical research and climate science, where the science is claimed to be ‘settled’, a term which is actually anti-science, and more akin to religion or dogma than to the scientific method.
In most of our universities, most research is self-funded, especially graduate research. About other research I am not so sure. But you make a good point: when you are funded by industry, you have to manage a balancing act between the requirements of your sponsor and those of the academy, or you will fail both sides.
Settled science is a misnomer, because all answers can be improved upon.
The never-ending fight of the overinflated egos at all those research facilities over funding and accolades at all costs deprives integrity of its moral value. Theories are hotly contested until one with a triumphant ego takes on a victorious position. You mentioned facts? That’s where the manipulations are most prevalent, using obfuscation as a nebuliser.
We need to be better at doing science.