Just research into killer robots

Patrick Taylor Smith*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

7 Citations (Scopus)
86 Downloads (Pure)


This paper argues that it is permissible for computer scientists and engineers—working with advanced militaries that are making good faith efforts to follow the laws of war—to engage in the research and development of lethal autonomous weapons systems (LAWS). Research and development into a new weapons system is permissible if and only if the new weapons system can plausibly generate a superior risk profile for all morally relevant classes and it is not intrinsically wrong. The paper then suggests that these conditions are satisfied by at least some potential LAWS development programs. More specifically, since LAWS will lead to greater force protection, warfighters are free to become more risk-acceptant in protecting civilian lives and property. Further, various malicious motivations that lead to war crimes will not apply to LAWS or will apply to no greater extent than with human warfighters. Finally, intrinsic objections—such as the claims that LAWS violate human dignity or that they create ‘responsibility gaps’—are rejected on the basis that they rely upon implausibly idealized and atomized understandings of human decision-making in combat.

Original language: English
Pages (from-to): 281–293
Journal: Ethics and Information Technology
Early online date: 23 Jul 2018
Publication status: Published - Dec 2019


  • Lethal autonomous weapon systems
  • Military ethics
  • Ethics and information technology
