Validating research performance metrics against peer rankings

Ethics in Science and Environmental Politics, 8, 103–107 (2008).


A rich and diverse set of potential bibliometric and scientometric predictors of research performance quality and importance is emerging today—from the classic metrics (publication counts, journal impact factors and individual article/author citation counts) to promising new online metrics such as download counts, hub/authority scores and growth/decay chronometrics. In and of themselves, however, metrics are circular: they need to be jointly tested and validated against what it is that they purport to measure and predict, with each metric weighted according to its contribution to their joint predictive power. The natural criterion against which to validate metrics is expert evaluation by peers; a unique opportunity to do this is offered by the 2008 UK Research Assessment Exercise, in which a full spectrum of metrics can be jointly tested, field by field, against peer rankings.
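The joint validation the abstract describes can be pictured as a multiple regression: peer rankings as the criterion, the candidate metrics as predictors, and each metric's fitted coefficient as its weight in the joint prediction. The sketch below illustrates this with simulated data — the metric names, sample size, and "true" weights are all invented for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of papers in one field

# Simulated per-paper metrics (columns): citation count, download count,
# journal impact factor -- standardized, purely illustrative values.
metrics = rng.normal(size=(n, 3))

# Simulated peer ranking: here it depends mostly on the first two metrics.
true_weights = np.array([0.6, 0.3, 0.05])
peer_rank = metrics @ true_weights + rng.normal(scale=0.1, size=n)

# Fit all metrics jointly (ordinary least squares with an intercept);
# each coefficient estimates that metric's contribution to the joint
# prediction of peer rankings.
X = np.column_stack([metrics, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, peer_rank, rcond=None)
weights = coef[:3]

# Joint predictive power: squared correlation between predicted and
# observed peer rankings.
pred = X @ coef
r2 = np.corrcoef(pred, peer_rank)[0, 1] ** 2
print(weights.round(2), round(r2, 3))
```

In a real validation exercise, field-by-field as the abstract proposes, `peer_rank` would come from the peer panels and the metric columns from the bibliometric databases; metrics whose fitted weights are negligible contribute little beyond the others.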


Ratings & reviews

  • Rated between 90% and 100% (1 rating, 1 review)


  • Peer-provided ratings, using the rating scale developed by Epistemio (Florian, 2015), are a suitable reference for validating bibliometric indicators.


    • Florian, R. V. (2015). A new scale for rating scientific publications. In Proceedings of ISSI 2015: 15th International Society of Scientometrics and Informetrics Conference (pp. 419–420). Istanbul, Turkey: Boğaziçi University.