This paper argues that it is permissible for computer scientists and engineers—working with advanced militaries that are making good-faith efforts to follow the laws of war—to engage in the research and development of lethal autonomous weapons systems (LAWS). Research and development into a new weapons system is permissible if and only if the system can plausibly generate a superior risk profile for all morally relevant classes and is not intrinsically wrong. The paper then argues that these conditions are satisfied by at least some potential LAWS development programs. More specifically, because LAWS will provide greater force protection, warfighters are freed to become more risk-acceptant in protecting civilian lives and property. Further, various malicious motivations that lead to war crimes will either not apply to LAWS or will apply to no greater extent than with human warfighters. Finally, intrinsic objections—such as the claims that LAWS violate human dignity or that they create ‘responsibility gaps’—are rejected on the grounds that they rely upon implausibly idealized and atomized understandings of human decision-making in combat.
- Lethal autonomous weapon systems
- Military ethics
- Ethics and information technology