Health Care, Capabilities and AI Assistive Technologies

Mark Coeckelbergh

Research output: Contribution to journal › Article › Academic › peer-review

50 Citations (Scopus)
569 Downloads (Pure)

Abstract

Scenarios involving the introduction of artificially intelligent (AI) assistive technologies in health care practices raise several ethical issues. In this paper, I discuss four objections to introducing AI assistive technologies in health care practices as replacements of human care. I analyse them as demands for felt care, good care, private care, and real care. I argue that although these objections cannot stand as good reasons for a general and a priori rejection of AI assistive technologies as such or as replacements of human care, they demand that we clarify what is at stake, develop more comprehensive criteria for good care, and rethink existing practices of care. In response to these challenges, I propose a (modified) capabilities approach to care and emphasize the inherent social dimension of care. I also discuss the demand for real care by introducing the ‘Care Experience Machine’ thought experiment. I conclude that if we set the standards of care too high when evaluating the introduction of AI assistive technologies in health care, we have to reject many of our existing, low-tech health care practices.
Original language: Undefined
Pages (from-to): -
Journal: Ethical Theory and Moral Practice
Volume: july 2009
Issue number: 2
DOIs
Publication status: Published - 2009

Keywords

  • METIS-259495
  • IR-76109
