TY - UNPB
T1 - Between certainty and comprehensiveness in evaluating the societal impact of humanities research
AU - Benneworth, Paul Stephen
PY - 2015
Y1 - 2015
N2 - Research evaluation is a tool that can be used for many different purposes, with each kind trading off comparing and understanding activities against treating evaluation subjects fairly. Evaluation problems can emerge when an approach that provides one kind of fairness is used for purposes demanding an alternative perspective on fairness. These problems of course afflict all kinds of evaluation, not just research, but over the last decade there has been increasing awareness that they are prevalent in research evaluation systems that seek to make judgements within national research systems. Clearly, problems may emerge when attempting to use these very limited indicators to measure and reward university research impact in a systemic way. This paper therefore asks: how can evaluation of research impact at the systems level deal with the problem of the very different mechanisms by which different kinds of research produce their impact? We explore this question via a case study of the Netherlands, where policy-maker-driven attempts to capture impact within the research evaluation system awoke fears amongst the humanities research community that they would not be treated fairly. On this basis, the paper argues that more reflection is demanded of scholars on what kinds of research impact matter in their field, and how the messiness of impact generation legitimates a multi-disciplinary, judgement- and discretion-based system that ultimately values activities and outcomes which lie beyond the pale of their own scholarly norms.
KW - METIS-320808
KW - IR-97205
DO - 10.3990/4.2589-9716.2015.02
M3 - Working paper
T3 - CHEPS working paper series
BT - Between certainty and comprehensiveness in evaluating the societal impact of humanities research
ER -