A cancer of quantification is spreading
Last week, an impressive array of scientific organisations aligned themselves with a statement, the San Francisco Declaration on Research Assessment, which seeks to address the growing use of the journal impact factor (JIF) as a surrogate measure of research quality.
The European Molecular Biology Laboratory, the London-based Wellcome Trust and the European Mathematical Society are among the prestigious signatories to the declaration, which calls on universities, research agencies and others to desist from using JIFs, or other journal-based metrics, in assessing individuals’ research performance.
JIFs are compiled by the publisher Thomson Reuters and reflect the mean number of citations received in a given year by the papers a journal published over the previous two years. The impact factor was originally designed as an aid to librarians deciding which journal subscriptions to buy; it was never meant to be a sound measure of the quality of an individual research paper.
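In outline, and as a sketch rather than Thomson Reuters' exact accounting (which also turns on what counts as a "citable item"), the impact factor of a journal for a year $y$ is:

\[
\mathrm{JIF}_y \;=\; \frac{\text{citations received in year } y \text{ by items the journal published in years } y-1 \text{ and } y-2}{\text{number of citable items the journal published in years } y-1 \text{ and } y-2}.
\]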
The weaknesses of the impact factor as a quality metric are manifest. Average citation rates vary sharply by discipline, so no journal in mathematics or ecology, for example, will ever have a high impact factor. Because the JIF is a mean rather than a median, one or two highly cited papers can distort a journal's score. Most perniciously of all, perhaps, the thirst for citations herds researchers into already well-populated fields and actively discourages the exploration of original territory.
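A minimal worked example, with made-up numbers, shows how the mean inflates the figure. Suppose a journal's two-year window contains 100 papers, of which 98 receive one citation each and two receive 200 each. Then

\[
\text{mean} \;=\; \frac{98 \times 1 + 2 \times 200}{100} \;=\; \frac{498}{100} \;\approx\; 5, \qquad \text{median} \;=\; 1.
\]

An impact factor of roughly five says almost nothing about the citations a typical paper in that journal can expect.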
As so often happens, however, the mere availability of the metric has enabled it to take on a life of its own. Many researchers list the JIF of each journal in the publication lists of their CVs. Some institutions demand that papers be submitted only to journals with a JIF of more than, say, five.
The dynamic at play here, unfortunately, is that once something, anything, is measured, people who ought to know better modify their behaviour to conform to the metric. The phenomenon is now widespread in global research assessment: various specious university ranking systems are open to the same criticisms that the San Francisco declaration levels at impact factors.
Perhaps the greatest risk is in emerging economies, where government officials, impatient for progress and bamboozled by the complexities of research assessment, are taking refuge in quantitative metrics.
Strong, well-established research systems are better placed to implement assessment sensibly, tempering the available numbers, such as JIFs, with expert judgement. The European Commission's research directorate has so far, to its credit, resisted quantitative quality control of research programmes.
With funding tight and politicians eager for results, however, there is a clear and present danger that greater reliance on quantitative metrics will exert ever-stronger pressure on researchers to shape their work around whatever is being counted.
The San Francisco declaration is, at least, an indication that a number of researchers and research institutions are ready to fight back against this trend. Its publication is an opportunity for university departments, as well as research agencies, to take stock and refrain from over-reliance on quantitative metrics in assessing research quality.