There is no perfect metric. There is no number or score which fully encapsulates the value, impact, or importance of a piece of research.
While this statement might appear obvious, research evaluation and measurement are a fact of life for the scientific research community. The administrative work of faculty recruitment, promotion, and tenure is coupled with activity reporting and institutional benchmarking, and these measures are increasingly central to a Research Office's existence. The two metrics most commonly used to evaluate research output focus on funding and publications.
Funding is central to many disciplines: it is a fairly simple measure – do you have it, and how much? How do you compare with your peers? What percentage of your salary have you managed to cover through grants? If you have lab space at your institution, what is your grant-generated-dollars-per-square-foot-of-lab-space ratio?
But publications present more nuanced challenges. Historically, achievement has been associated with volume (how much you are publishing), prestige (which journals published your work), and citations (who is referencing your work). All three measures come with cautionary tales. While the volume of publications is important, factors such as the area of science and the use of co-authorship and collaboration should also be considered. Even so, counting publications is relatively straightforward, much like counting funding. Quality measures such as "where you are publishing" and "who is citing your work", however, present unique challenges.