The Struggle for AI’s Recognition: Understanding the Normative Implications of Gender Bias in AI with Honneth’s Theory of Recognition

Rosalie Waelen*, Michał Wieczorek

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

AI systems have often been found to contain gender biases. As a result of these biases, AI routinely fails to adequately recognize the needs, rights, and accomplishments of women. In this article, we use Axel Honneth’s theory of recognition to argue that AI’s gender biases are an ethical problem not only because they can lead to discrimination, but also because they resemble forms of misrecognition that can harm women’s self-development and self-worth. Furthermore, we argue that Honneth’s theory of recognition offers a fruitful framework for improving our understanding of the psychological and normative implications of gender bias in modern technologies. Finally, our Honnethian analysis of gender bias in AI shows that the goal of responsible AI requires us to address these issues not only through technical interventions, but also through a change in how we grant and deny recognition to one another.
Original language: English
Article number: 53
Journal: Philosophy & Technology
Volume: 35
DOIs
Publication status: Published - 3 Jun 2022

Keywords

  • Artificial intelligence
  • gender bias
  • ethics
  • struggle for recognition
  • Axel Honneth
