Just research into killer robots

Patrick Taylor Smith*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

1 Citation (Scopus)

Abstract

This paper argues that it is permissible for computer scientists and engineers—working with advanced militaries that are making good-faith efforts to follow the laws of war—to engage in the research and development of lethal autonomous weapons systems (LAWS). Research and development into a new weapons system is permissible if and only if the new weapons system can plausibly generate a superior risk profile for all morally relevant classes and is not intrinsically wrong. The paper then suggests that these conditions are satisfied by at least some potential LAWS development programs. More specifically, since LAWS will lead to greater force protection, warfighters are free to become more risk-acceptant in protecting civilian lives and property. Further, various malicious motivations that lead to war crimes will not apply to LAWS, or will apply to no greater extent than with human warfighters. Finally, intrinsic objections—such as the claims that LAWS violate human dignity or that they create ‘responsibility gaps’—are rejected on the grounds that they rely upon implausibly idealized and atomized understandings of human decision-making in combat.

Original language: English
Journal: Ethics and Information Technology
Publication status: E-pub ahead of print / First online - 23 Jul 2018

Keywords

  • Lethal autonomous weapon systems
  • Military ethics
  • Ethics and information technology
