Understanding how and why practitioners evaluate SDI performance

Kate Trinka Lance, Yola Georgiadou, Arnold Bregt

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Practitioners around the world are building frameworks for spatial data interoperability and cross-agency coordination, referred to as spatial data infrastructure (SDI). In this study, we attempt to understand how and why SDI practitioners ‘on the ground’ are evaluating their ‘own’ efforts in developing such frameworks. For this purpose, we mobilize concepts from ‘control’ evaluation, as well as from public sector evaluation research, because ‘control’ evaluation appears to be the approach most favored by SDI practitioners, and SDI evaluation is unfolding within public sector settings. ‘Control’ evaluation emphasizes operations, supports rationalistic investment decisions and efficiency analysis, and is typically based on measures such as ratios, percentages, and indexes; evaluators act as auditors, controlling, ranking or assessing success.
We examine and classify several recent examples of SDI ‘control’ evaluation by using the concepts of ‘timing’, ‘perspective’, ‘formal demand’, ‘use’, and ‘input specificity’. Our study reveals that the most comprehensive practices have resulted when ‘control’ evaluations have been in compliance with a demand from an executive agency, such as a central budget agency, and when there has been specificity of inputs. We anticipate that these dimensions are key to the institutionalization of SDI evaluation and point to the need for further research to understand how such evaluation practices emerge.
Original language: English
Pages (from-to): 65-104
Journal: International Journal of Spatial Data Infrastructures Research
Volume: 1
Publication status: Published - 2006
