Materializing Interpretability: Exploring Meaning in Algorithmic Systems

Jesse Josua Benjamin, Claudia Müller-Birn

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

2 Citations (Scopus)

Abstract

Interpretability has become a key objective in the research, development and implementation of machine learning algorithms. However, existing notions of interpretability may not align with how meaning emerges in algorithmic systems that employ ML algorithms. In this provocation, we suggest that hermeneutic analysis can be used to probe assumptions in interpretability. First, we propose three levels of interpretability that may be analyzed: formality, achievability, and linearity. Second, we discuss how the three levels have surfaced in prior work, in which we conducted an explicitation interview with a developer to understand decision-making in an algorithmic system implementation. Third, we suggest that design practice may be needed to move beyond analytic deconstruction, and showcase two design projects that exemplify possible strategies. In concluding, we suggest how the proposed approach may be taken up in future work and point to research avenues.
Original language: English
Title of host publication: DIS '19 Companion
Editors: Steve Harrison, Shaowen Bardzell
Publisher: ACM SigCHI
Pages: 123-127
Number of pages: 5
ISBN (Print): 978-1-4503-6270-2
DOIs
Publication status: Published - 2019
Externally published: Yes
Event: 2019 Designing Interactive Systems Conference, DIS 2019 - San Diego, United States
Duration: 23 Jun 2019 - 28 Jun 2019

Conference

Conference: 2019 Designing Interactive Systems Conference, DIS 2019
Abbreviated title: DIS
Country: United States
City: San Diego
Period: 23/06/19 - 28/06/19

Keywords

  • Interpretability
  • Hermeneutics
  • Design
  • Materiality
  • Explainable AI