Relative Citation Ratio is here. Goodbye, Journal Impact Factor?

The journal impact factor has fallen out of favor with many scientists, and with the National Institutes of Health, as an indicator of scientific influence. The JIF measures the average number of citations that articles in a given journal receive over a two-year period. It is therefore an indicator of influence at the journal level and was never intended to measure the influence of individual papers.
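To make that definition concrete, here is a worked example with invented numbers (they are purely hypothetical, not drawn from any real journal):

    JIF(2015) = [citations received in 2015 by items published in 2013-2014]
                / [citable items published in 2013-2014]
              = 1,000 / 250
              = 4.0

Note that every article in the journal inherits the same JIF, regardless of whether it was cited 100 times or never, which is exactly the article-level blind spot the RCR is meant to address.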

Researchers at the National Institutes of Health have developed a new metric, the Relative Citation Ratio, for measuring the influence of a scientific paper. The RCR, described in a paper posted on the preprint server bioRxiv, divides the rate at which an individual article is cited by the citation rate expected for its field, where the field is defined by an article-specific co-citation network. Because the comparison is built around the article itself, the RCR is an indicator of influence at the paper level: it reflects how often the article is cited relative to how often articles in its own field are cited.
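Since the bioRxiv paper spells out the calculation, here is a minimal sketch of the idea in Python. It is a simplification, not the authors' implementation: the function names are mine, the field citation rate is approximated as a plain mean of the journal citation rates found in the co-citation network, and the paper's final step, which rescales that rate against a benchmark set of NIH R01-funded papers so that an RCR of 1.0 matches the typical NIH-funded paper in the field, is omitted.

    from statistics import mean

    def article_citation_rate(total_citations: int, years_since_publication: float) -> float:
        """Citations per year accrued by a single article (the ACR)."""
        return total_citations / years_since_publication

    def field_citation_rate(co_cited_journal_rates: list[float]) -> float:
        """Expected citations per year for the article's field (the FCR).

        The field is defined by the article's co-citation network: the set
        of papers cited alongside it in later reference lists. Here the FCR
        is approximated as the mean journal citation rate of the journals
        in which those co-cited papers appeared.
        """
        return mean(co_cited_journal_rates)

    def relative_citation_ratio(acr: float, fcr: float) -> float:
        """RCR = article citation rate / field citation rate (unbenchmarked)."""
        return acr / fcr

    # Hypothetical numbers: an article cited 30 times over 5 years, in a field
    # whose co-cited journals average 2.4 citations per year.
    acr = article_citation_rate(30, 5.0)          # 6.0 citations per year
    fcr = field_citation_rate([2.0, 2.5, 2.7])    # 2.4 citations per year
    print(relative_citation_ratio(acr, fcr))      # 2.5: cited well above its field norm

Because the denominator is rebuilt from each article's own co-citation network, two articles in the same journal can have very different RCRs, which is what distinguishes this metric from the JIF.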

The NIH has been trying to develop a reasonable metric of scientific influence, and the RCR comes close to hitting the mark. The NIH-defined requirements for a good metric include:

  • The indicator must reflect the influence of an individual article, not the journal
  • The indicator should be normalizable to the field in which the article was written
  • The metric should be scalable, so that adjusting for the size of a field does not introduce bias into the measurement
  • The metric should be benchmarked against peer performance so that it can be used for comparison, and its conclusions should correlate with expert opinion
  • The indicator should be freely accessible and calculated in a transparent way

The researchers responsible for developing the RCR claim it meets all of the NIH’s criteria.

Stefano Bertuzzi, executive director of the American Society for Cell Biology and coordinator of the San Francisco Declaration on Research Assessment, which called for improvements in scientific research evaluation, wrote that “[RCR] will gain currency contributing to a new and better understanding of impact in science. The RCR provides us a new sophisticated analytical tool, which I hope will put another nail into the coffin of the phony metric of the journal impact factor.”

But some feel that the RCR falls short of fully satisfying the NIH’s own criteria for a good metric of scientific influence. Ludo Waltman, a bibliometrics expert at Leiden University in the Netherlands, pointed out that the researchers did not compare the RCR to other existing metrics and said that, compared with those metrics, he believed the RCR would be less powerful. Waltman also wrote that the calculation of the RCR is not transparent enough, calling the algorithm “fairly complex.”

George Santangelo, director of the NIH Office of Portfolio Analysis and a co-author of the RCR paper, said that he views the RCR as a starting point and that he would be happy to see other researchers improve the algorithm for use in other fields; as written, the metric applies only to biomedical research. He said he does not “suggest [the RCR] is the final answer to measuring. This is just a tool. No one metric is ever going to be used in isolation by the NIH.”

The development of the RCR is an important step toward metrics that more fairly judge the influence of individual articles on the scientific community.
