Featured in an Editorial in the journal Nature in June 2014.
Measures of research impact are improving, but universities should be wary of their limits.
With the FIFA World Cup well under way in Brazil (and certain teams already on their way home) there is much analysis of what went wrong for some and what is going right for others. In a parallel effort, marketing departments across the globe are engaged in a final push to link the events on the field with their brands and products.
The result can be a curious, even surreal, blend. Witness the announcement by analytics company Thomson Reuters, for instance, that it was kicking off its own World Cup — of research performance. In the first round, the firm announced that England could have reversed its disappointing loss against Italy had it been playing on the basis of research citation impact. By the same comparison, Australia would have defeated Chile — but the Netherlands would still have crushed Spain. Four more rounds of elimination will pit countries against each other in terms of their proportion of international collaborations, highly cited papers and relative world impact. Who can wait for the United States–Switzerland semi-final?
This gimmicky tie-in illustrates a trend that deserves attention: the retooling of the bibliometrics industry. Thomson Reuters was promoting the updated range of bibliometric indicators in its research analytics service, InCites. And it is not the only company to have refreshed its bibliometrics offerings for 2014.
Since January, Elsevier has been promoting the next generation of its SciVal product, and on 12 June, Altmetric (supported by Macmillan Science and Education, which owns Nature Publishing Group, the publishers of Nature) launched its own commercial offering for research institutions: a tool to track the online impact of faculty members’ academic papers.
One intriguing metric, launched in April for the Lens database run by the non-profit organization Cambia in Canberra, lets researchers examine free of charge how many patents have cited their papers, although currently only the life sciences are covered.
Products offered by commercial analytics firms are worth watching because they increasingly shape how research institutions track and assess their scholars. The latest products mean that it is now easier than ever to calculate a dizzying range of metrics for any group of papers, including aggregations of papers at the level of the individual or the faculty, as well as the country.
On the plus side, this new generation of products is more sophisticated and takes into account the insights and criticisms of bibliometrics experts. The tools now tend to focus on individual research papers as a core unit of output, rather than the journal in which a paper is published. And they increasingly recognize that it is only meaningful to compare metrics in context: for instance, normalizing for performance relative to papers of a similar age, research field and publication venue.
But on the minus side, it is still easy to misuse these offerings. The latest breed of bibliometrics is useful as a marketing tool for individuals, as a way to spot unnoticed pockets of excellence and, yes, even to rank countries or institutions in a research World Cup. But there is a risk that universities are buying increasingly sophisticated products to track their performance without really understanding the limitations of such metrics. On page 470, Jonathan Adams reviews Beyond Bibliometrics, a book that outlines the history and future direction of attempts to measure scholarly impact. The editors, Blaise Cronin and Cassidy Sugimoto, urge caution. Adams asks: “Even after decades of use, do we really understand what citation data are and what we do with them? Do those who use bibliometrics have clear criteria for how they employ and interpret them?”
As this journal has noted before (see www.nature.com/metrics), many researchers feel under stress from their attempts to maximize the metrics by which their academic success is judged. It has never been more important to demand clarity and transparency from research managers on exactly which metrics they are using to evaluate scholars, and why. With this next generation of commercial tools, it does not seem so fanciful to picture a senior manager somewhere in your research institution engaged in a kind of perpetual departmental World Cup, scrolling through screens of bar charts and playing ‘fantasy faculty’ with the careers of researchers.