Between certainty and comprehensiveness in evaluating the societal impact of humanities research

Research output: Working paper › Professional


Abstract

Research evaluation is a tool that can serve many different purposes, and each purpose involves a trade-off between comparing activities, understanding them, and treating evaluation subjects fairly. Problems can emerge when an approach designed to deliver one kind of fairness is used for purposes that demand a different perspective on fairness. These problems of course afflict all kinds of evaluation, not just research evaluation, but over the last decade there has been growing awareness that they are prevalent in systems that seek to make judgements within national research systems. There are clear risks that problems will emerge when very limited indicators are used to measure and reward university research impact in a systemic way. This paper therefore asks: how can evaluation of research impact at the systems level deal with the very different mechanisms by which different kinds of research produce their impact? We explore this question via a case study of the Netherlands, where policy-maker-driven attempts to capture impact within the research evaluation system awoke fears amongst the humanities research community that they would not be treated fairly. On this basis, the paper argues that scholars should reflect more on which kinds of research impact matter in their field, and on how the messiness of impact generation legitimates a multi-disciplinary, judgement- and discretion-based system that ultimately values activities and outcomes lying beyond the pale of their own scholarly norms.
Original language: English
Pages: -
Number of pages: 30
DOIs: 10.3990/4.2589-9716.2015.02
Publication status: Published - 2015

Publication series

Name: CHEPS working paper series
No.: 02
Volume: 2015
ISSN (Electronic): 2589-9716


Keywords

  • METIS-320808
  • IR-97205

Cite this

@techreport{fbcaee95c29d439b9552ebf1eda7068d,
  title = "Between certainty and comprehensiveness in evaluating the societal impact of humanities research",
  keywords = "METIS-320808, IR-97205",
  author = "Benneworth, {Paul Stephen}",
  year = "2015",
  doi = "10.3990/4.2589-9716.2015.02",
  language = "English",
  series = "CHEPS working paper series",
  number = "02",
  pages = "--",
  type = "WorkingPaper",
}
