
In Nautilus on Wednesday, Haixin Dang and Liam Kofi Bright published a “general audience” version of their fascinating recent paper on the philosophy of academic publishing. The paper’s title, “Scientific Conclusions Need Not Be Accurate, Justified, or Believed by their Authors,” helpfully gets right to the point. It’s a compelling argument, and one that feels philosophically sound to me, especially in the example the pair cites:
For scientists to collectively inquire effectively, they need to communicate interesting ideas to each other that are worth pursuing. Consider Avi Loeb, a theoretical astrophysicist who proposed the provocative hypothesis, not without some supporting data, that ‘Oumuamua wasn’t a comet but an alien light-sail. He, presumably, knew that more data would need to be gathered, and a more thorough study would need to be conducted, before the hypothesis could be justifiably believed.
Nonetheless, perhaps it was appropriate for Loeb to publish his data and his hypothesis. He himself might even be agnostic toward the truth of the hypothesis. He likely knew that most of his colleagues would dispute his interpretation of the data, and with good cause. In spite of all this, it was still valuable for him to publicly communicate the possibility of a new hypothesis, because it can—and maybe actually did—spur more research into, and garner attention for, astronomy. Publishing those findings was not about communicating the truth but about saying that there is something exciting and interesting that requires further inquiry.
But there’s a problem with this more permissive theory of publishing, which Dang and Kofi Bright readily acknowledge. As science becomes more visible to lay audiences via breathless news reporting and preprint servers, misinterpreting conjecture and exploration as fact becomes more likely. When those “facts” are reversed by subsequent studies or publicly knocked down by well-meaning scientists, it leads to an erosion of trust in science.
Dang and Kofi Bright read the problem as a mismatch between how scientists write and how the rest of us read:
Why might it be worth worrying about how and when scientists decide to share their work? The threat of misinformation spreading, you might say. But that isn’t all. As it stands, there is a mismatch between the rules scientists write by and those that laymen read by. And given that non-scientists are daily called on to make important decisions on the basis of scientific results, the potential for miscommunication makes possible momentous mistakes.
There’s some truth to that view, to be sure. Except, as University of Texas psychology professor Tal Yarkoni notes on Twitter, that interpretation doesn’t match how scientists actually publish:
Why don’t authors behave this way? Later in the Twitter thread, Trinity College psychology post-doc Richard Lombard Vance puts it this way:
I’m willing to be a bit more charitable than Lombard Vance. In an ideal world, scientists ought to be able to publish conclusions that acknowledge considerable uncertainty and nuance, both narratively and statistically. And they ought to be rewarded for doing so.
In the real world, researchers can’t write like that and expect to put food on the table. Academic research is an infamously competitive, underpaid, and unfair industry. Grant money and jobs go to the people who get the most attention, via citations and media mentions. To maximize their chances of attention, and thus of making a living, scientists play fast and loose with statistics and overstate results within the bounds of acceptable behavior. It’s only human.
Thus the mismatch isn’t, as Dang and Kofi Bright contend, between how scientists write and how the rest of us read. The mismatch is between how scientists ought to write and how they must write to keep food on the table.
How do we resolve that mismatch? Honestly, it’s a little frustrating to write this again here, because it’s become a cliché among folks who spend their time thinking about academic infrastructure. But here we go again: the only way is to address the root of the problem and change how researchers are evaluated and compensated, so that we incentivize the kind of responsible (and, frankly, more interesting!) research we’d all like to see.
Doing so will require a fundamental shift in the business models of research, publishing, and impact evaluation. There are lots of organizations, including my own, doing their part to initiate these shifts. To get there, though, we’ll need well-meaning folks like Dang and Kofi Bright to dig a little deeper into their analyses of the industry to help bring others along.