Reflecting on Algorithmic Bias with Design Fiction: the MiniCoDe Workshops

T. Turchi, A. Malizia, S. Borsci

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

In an increasingly complex everyday life, algorithms, often learned from data (i.e., via machine learning, ML), are used to make or assist with operational decisions. However, developers and designers are usually not entirely aware of how to reflect on social justice while designing ML algorithms and applications. Algorithmic social justice, i.e., designing algorithms that embed fairness, transparency, and accountability, aims to help expose, counterbalance, and remedy bias and exclusion in future ML-based decision-making applications. How might we entice people to engage in more reflective practices that examine the ethical consequences of ML algorithmic bias in society? We developed and tested a design-fiction-driven methodology that enables multidisciplinary teams to perform intense, workshop-like gatherings in which potential ethical issues can surface and bias can be mitigated through a series of guided steps. With this contribution, we present an original and innovative use of design fiction as a method to reduce algorithmic bias in co-design activities.

Original language: English
Pages (from-to): 40-50
Number of pages: 11
Journal: IEEE Intelligent Systems
Volume: 39
Issue number: 2
Early online date: 11 Jan 2024
DOIs
Publication status: Published - 1 Mar 2024

Keywords

  • Conferences
  • Decision making
  • Ethics
  • Intelligent systems
  • Machine learning
  • Machine learning algorithms
  • Standards
