The Moral Permissibility of Automated Responses During Cyberwarfare
Authors: David Danks, Joseph H. Danks
Affiliation: 1. Department of Philosophy, Carnegie Mellon University, Pittsburgh, PA, United States; 2. Institute for Human & Machine Cognition, Pensacola, FL, United States (ddanks@cmu.edu); 3. Center for Advanced Study of Language, University of Maryland, College Park, MD, United States
Abstract: Automated responses are an inevitable aspect of cyberwarfare, but there has not been a systematic treatment of the conditions in which they are morally permissible. We argue that there are three substantial barriers to the moral permissibility of an automated response: the attribution, chain reaction, and projection bias problems. Moreover, these three challenges together provide a set of operational tests that can be used to assess the moral permissibility of a particular automated response in a specific situation. Defensive automated responses will almost always pass all three challenges, while offensive automated responses typically face a substantial positive burden in order to overcome the chain reaction and projection bias challenges. Perhaps the most interesting cases arise in the middle ground between cyber-offense and cyber-defense, such as automated cyber-exploitation responses. In those situations, much depends on the finer details of the response, the context, and the adversary. Importantly, however, the operationalizations of the three challenges provide a clear guide for decision-makers to assess the moral permissibility of automated responses that could potentially be implemented.
Keywords: Automated responses; cyberwarfare; cyber-defense; cyber-exploitation; attribution problem; projection bias; chain reactions; moral permissibility