Artificial companions: empathy and vulnerability mirroring in human-robot relations

Mark Coeckelbergh

Research output: Contribution to journal › Article › Academic › peer-review



Under what conditions can robots become companions, and what ethical issues might arise in human-robot companionship relations? I argue that the possibility and future of robots as companions depend (among other things) on the robot's capacity to be a recipient of human empathy, and that one necessary condition for this is that the robot mirrors human vulnerabilities. For the purpose of these arguments, I make a distinction between empathy-as-cognition and empathy-as-feeling, connecting the latter to the moral sentiment tradition and its concept of "fellow feeling." Furthermore, I sympathise with the intuition that vulnerability mirroring raises the ethical issue of deception. However, given the importance of appearance in social relations, problems with the concept of deception, and contemporary technologies that question the artificial-natural distinction, we cannot easily justify the underlying assumptions of the deception objection. If we want to hold on to them, we need convincing answers to these problems.
Original language: English
Article number: 2
Journal: Studies in Ethics, Law, and Technology
Issue number: 3
Publication status: Published - 2010


Keywords:
  • Robots
  • Artificial companions
  • Ethics
  • Empathy
  • Vulnerability

