Last week, Science Twitter was roiled by claims that “disruptive science” was on the wane and that this might be reversed by “reading widely”, taking “year-long sabbaticals” and “focussing less on quantity … and more on … quality”. It blew up, which is probably not surprising given that it first pandered to our collective angst and then suggested some highly congenial remedies.

The Nature paper that kicked off this storm in our social media teacup is profusely illustrated with graphs and charts. The data is not uninteresting and does suggest that something about the practice of science has changed over the course of the last eight or nine decades. The problem is that it could also be Exhibit A in a demonstration of how data science can generate buzz while remaining largely disconnected from reality.

“Disruption” is a useful framework for discussing technological innovation (digital cameras render film obsolete; Netflix kills your neighbourhood video store; streaming music replaces CDs), but it is less clear to me that it can be applied directly to high-value science. “What is good?” is perhaps the oldest question in the book, but the paper seems to skate past it.

The problem (at least as I see it) is that many, if not most, scientific breakthroughs [1] extend the frontiers of knowledge rather than demolishing their forebears [2]. Even the biggest “paradigm shifts” often left their predecessors largely intact. Einstein arguably “disrupted” Newton, but while film cameras and vinyl records are now the preserve of hipsters and purists, Newtonian physics is still at the heart of the field – as anyone who has taken first-year physics or built a bridge that stood up can attest.

Similarly, quantum mechanics shattered the then-prevailing clockwork conception of the cosmos, but its technical content was effectively a greenfield development: at a detailed level there was nothing for quantum mechanics to replace. Even so, by the end of the 1920s quantum mechanics had given us the tools to explain almost everything that happens inside an atom.

Consequently, as I see it, neither relativity nor quantum mechanics really fits a conventional understanding of “disruption”, even though between them they created one of the biggest revolutions ever seen in science. That should be a problem if you are using “disruption” as a template for identifying interesting and important science.

Rather than making a qualitative assessment, the authors deploy a metric to measure disruption based on citation counts [3] – a widely cited paper whose own bibliographic antecedents then become less prominent is judged to be “disruptive” [4]. This leads to plots like the one below, which focuses on Nobel-winning papers and three “prestige” journals (Figure 5 from the paper).
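For concreteness, here is a minimal sketch of how a disruption score of this kind can be computed. It follows the usual “CD index” recipe – compare later papers that cite the focal work but ignore its references with those that cite both – but the function, names and toy data below are my own illustration, and the paper’s actual implementation may differ in its windowing and weighting.

```python
# Toy sketch of a citation-based "disruption" (CD) index.
# A later paper can cite the focal work, the focal work's own references
# ("antecedents"), or both; the index compares those three counts.

FOCAL = "focal"  # sentinel representing a citation to the focal paper


def cd_index(focal_refs, citing_papers):
    """
    focal_refs:    set of works cited by the focal paper
    citing_papers: iterable of sets, one per later paper, each containing
                   FOCAL and/or identifiers of the focal paper's references
    Returns a score in [-1, 1]: +1 if later work cites the focal paper while
    ignoring its antecedents (maximally "disruptive"), -1 if later work
    always cites the focal paper together with its antecedents.
    """
    n_focal_only = n_both = n_refs_only = 0
    for refs in citing_papers:
        cites_focal = FOCAL in refs
        cites_refs = bool(refs & focal_refs)
        if cites_focal and not cites_refs:
            n_focal_only += 1
        elif cites_focal and cites_refs:
            n_both += 1
        elif cites_refs:
            n_refs_only += 1
    total = n_focal_only + n_both + n_refs_only
    return (n_focal_only - n_both) / total if total else 0.0


# Example: most later papers cite the focal work without its antecedents.
antecedents = {"A", "B"}
later_papers = [{FOCAL}, {FOCAL, "C"}, {FOCAL, "A"}, {"B"}]
print(cd_index(antecedents, later_papers))  # (2 - 1) / 4 = 0.25
```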

If we take this study at its word, “disruption” has largely flatlined for the last fifty years. But one of the specific papers they identify – Riess et al.’s co-discovery of “dark energy” (or, more properly, observations suggesting that the rate at which the universe expands is picking up speed) – is not rated as “disruptive”, despite being the biggest upheaval in our understanding of the cosmos in a couple of generations.

Conversely, the discovery of the DNA double helix is measured to be “disruptive” — and it is certainly a watershed in our understanding of the chemistry of life. The authors explain that it displaced an earlier “triple helix” model proposed by Linus Pauling – but Pauling’s scenario was less than a year old at that point, so it was hardly an established incumbent knocked off its perch by an unexpected upstart. In fact, Watson and Crick’s 1953 discovery paper has only six references, and only one of those was published prior to 1952. Dirac’s 1928 paper scores well and likewise has only a handful of references, most of which were similarly just a year or so old at the time of publication. However, the “disruption metric” looks for changes in citation patterns five years either side of publication. Consequently, even though there is no way their metric can produce meaningful data for these papers (given its reliance on a five-year before-and-after comparison of citation counts), the authors single them out for special attention rather than filtering them, and papers like them, from their dataset.
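To make the point concrete, consider the degenerate case in the toy implementation above (my illustration, not the authors’ data): a focal paper with essentially no earlier references cannot share any antecedents with the papers that cite it, so the score is pinned at its maximum by construction, regardless of how “disruptive” the work actually was.

```python
# Degenerate case: a focal paper with (effectively) no prior references.
# No later paper can cite "the focal paper and its antecedents", so the
# index is forced to +1 by construction, whatever the science was like.
no_antecedents = set()
later_papers = [{FOCAL}, {FOCAL, "X"}, {FOCAL, "Y"}]
print(cd_index(no_antecedents, later_papers))  # (3 - 0) / 3 = 1.0
```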

What this suggests to me is that the output of this algorithm has not been sniff-tested with sufficient rigour. So on top of adopting a model of progress without really asking whether it captures the essence of “breakthrough” science, the authors often appear to have reverse-engineered explanations to justify the numerical values their metric yields.

The concern that science is increasingly driven by “bean counting” and a publish-or-perish mentality that is at odds with genuine progress is widespread, and my own view (like that of most scientists, I would guess) is that there is truth to it. There is certainly a lot of frog-boiling in academia: it is indeed a challenge for working scientists to get long periods in which to reflect and explore, and junior scientists are locked into a furiously competitive job market that offers little security to its participants.

Ironically, though, one key contributor to the pressure-cooker in which we find ourselves is Nature itself, the journal that published this paper. And Nature not only published it but hyped it in a news article – an incestuous coupling between peer-reviewed content and “news” that can make the careers of those fortunate enough to participate in it. It is widely argued that this practice makes Nature itself a contributor to any decline in scientific quality that may be taking place, by nudging authors to hype their work in ways not fully justified by their actual results. But “turning off the hype machine” is not one of the proposed solutions to our problems — and a cynic might suggest that this is because it would also disable the money spigot that generates many millions of dollars a year for Nature’s very definitely for-profit owners.

To some extent this is just me being cranky, since I spent part of last week at a slow simmer every time I saw this work flash by on a screen. But it matters, because this sort of analysis can find its way into debates about how to “fix” the supposed problems of science. And there certainly are many ways in which we could make science better. But before we prescribe, we would be wise to accurately determine the symptoms of the illness. Coming up with numerical metrics to measure quality and impact in science is enormously tempting, since it converts an otherwise laborious and qualitative process into something that is both quantitative and automated [5] — but it is also very difficult, and it hasn’t happened here.

Ironically, the authors of this work are a professor in a management school, his PhD student and a sociologist, all of whom claim expertise in “innovation” and “entrepreneurship”. Physicists are often seen as more willing than most to have opinions on matters outside our professional domain, and we are increasingly likely to be rebuked for failing to “stay in our lane”. But that advice cuts both ways: if you want to have opinions on science, maybe you should work with people who have real expertise in the fields you hope to assess?


[1] I am going to focus on physics, since that is what I know best – but the pattern is claimed to be largely field-independent.

[2] There are exceptions. The heliocentric solar system supplanted the geocentric view, and “caloric fluid” is no longer seen as a useful description of heat, but the norm for physics (and much of 20th century chemistry and biology, so far as I can see) is to “amend and extend”. There are often competing explanations for a phenomenon – e.g. Big Bang cosmology v. Steady State – only one of which can “win”, but these more closely resemble rivalries like the contest between Betamax and VHS than “disruption”.

[3] They also make an argument that the language we use to talk about scientific results has changed over time, but most of the story has been based on their “disruption” metric.

[4] It had been used previously on patent applications (which must list “prior art”) by one of the authors, where it may actually make more sense.

[5] See also my views on the h-index.

Banner image: https://memory-alpha.fandom.com/wiki/Disruptor