Metrics
Metrics form part of an evolving and increasingly digital research environment in which data and analysis play a growing role. However, the description, production, and use of these metrics remain experimental and open to misunderstanding, and they can drive negative effects and behaviours as well as positive ones.
Metrics fall into two broad areas: traditional metrics, or bibliometrics, which are largely based on citations; and alternative metrics, which are largely based on the attention an output receives.
Responsible metrics
Responsible metrics can be defined by the following key principles (outlined in The Metric Tide):
- Robustness – basing metrics on the best possible data in terms of accuracy and scope
- Humility – recognising that quantitative evaluation should support, but not supplant, qualitative, expert assessment
- Transparency – keeping data collection and analytical processes open, so that those being evaluated can test and verify the results
- Diversity – accounting for variation by research field, and using a range of indicators to reflect and support a plurality of research and researcher career paths across the system
- Reflexivity – recognising and anticipating the systemic and potential effects of indicators, and updating them in response
Some common metrics to consider are listed below; a sketch of how two of them can be computed follows the list:
- Citations per publication: the average number of citations received per publication
- Collaboration impact: the average number of citations received by publications with international, national, or institutional co-authorship
- Field-weighted citation impact: the ratio of citations received to the expected world average for the subject field, publication type, and publication year
- Outputs in the top citation percentiles: publications whose citation counts place them in a given top percentile (for example, the top 10%) of comparable outputs
- Outputs in the top journal percentiles: publications appearing in journals that rank in a given top percentile on a journal citation metric
- Scholarly output: the total number of publications
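To make the arithmetic behind two of these metrics concrete, here is a minimal Python sketch. The publication records, baseline values, and function names are illustrative assumptions; in practice, citation counts and world-average baselines are supplied by a bibliometric database provider.

```python
from statistics import mean

# Hypothetical publication records (assumed data, for illustration only).
publications = [
    {"title": "Paper A", "citations": 12, "field": "chemistry", "year": 2021},
    {"title": "Paper B", "citations": 3,  "field": "chemistry", "year": 2022},
    {"title": "Paper C", "citations": 30, "field": "physics",   "year": 2021},
]

# Assumed world-average citations per (field, year); real baselines also
# account for publication type and come from the database provider.
world_average = {
    ("chemistry", 2021): 8.0,
    ("chemistry", 2022): 2.5,
    ("physics", 2021): 15.0,
}

def citations_per_publication(pubs):
    """Average number of citations received per publication."""
    return mean(p["citations"] for p in pubs)

def field_weighted_citation_impact(pubs, baselines):
    """Mean ratio of actual to expected citations; 1.0 equals the world average."""
    ratios = [p["citations"] / baselines[(p["field"], p["year"])] for p in pubs]
    return mean(ratios)

print(f"Citations per publication: {citations_per_publication(publications):.2f}")
print(f"FWCI: {field_weighted_citation_impact(publications, world_average):.2f}")
```

In this sketch, a field-weighted citation impact of 1.0 means a set of publications is cited exactly as often as the world average for comparable outputs; values above 1.0 indicate above-average citation impact.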