Can the use of autonomous weapon systems absolve our responsibility for killing in war?
London
I was fortunate to meet Tom Simpson at Oxford, a philosopher who served as an officer in the Royal Marines between his degrees (from Bachelor's to PhD). I had read a paper he co-authored with V. Müller, 'Just War and Robots' Killing' (Philosophical Quarterly, 2016, 66: 302-322); as before, all papers are linked below. We met a couple of times to discuss his position, and I later wrote an essay with the title—Can the use of autonomous weapon systems absolve our responsibility for killing in war? I've finished here for the term and am looking forward to more reading and learning. Future modules include Ethics of Conflict in Trinity. But back to today.
We're seeing more autonomous weapon systems (AWS), including drones and robots, being used in war; Tom told me of reports from Ukraine and Israel. From Allen and Baker (2023, p. 179), we learn that the first recorded strike of an AWS against a human occurred in March 2020 in Libya. I started my essay with Allen and Baker's (2023, p. 172) assertion that the ethics of war are not fixed; instead, they are an "ongoing negotiation" between knowing we will always go to war and working out how to minimise the killing. I then added that some argue, on utilitarian grounds (the greatest good for the greatest number), that there is an easy calculation to be made: if one side can afford to build and use robots and drones while the other can only afford to use humans, then fewer humans are harmed overall, and that is better. The ethics of this position could be debated at length. The main question I wanted to tackle, however, was whether by using robots we can absolve our responsibility.
This debate seems to have been sparked by an influential paper by Sparrow, simply called 'Killer Robots' (2007). Using the hypothetical situation of an AWS that has committed a war crime, he argues that we cannot assign responsibility to any party. Not the manufacturer, since they can argue it was not meant to behave that way; they were simply tasked with building a weapon capable of autonomy. Not the military commander who assigned it to its mission, since the weapon's autonomy means the commander cannot predict exactly what it will do; and if the technology progresses so that an AWS is given instructions by another smart system, then the chain of responsibility to the commander has been broken. Finally, Sparrow turns to the robot itself and finds no meaningful way, as humans would understand it, that it could hold responsibility or be punished. There exists, for Sparrow, a responsibility gap. If we cannot blame anyone or anything, is it then a blameless war?
Simpson and Müller argue the military commander is responsible. They first show there are acts of God for which people are blameless, using the example of a bridge collapsing after rainfall not seen in three hundred years; then, after acknowledging there is a gap, they conclude there needs to be someone responsible, and in the use of AWS that is the military commander. Another paper, from Himmelreich (2019), also holds the military commander responsible, going further than Müller and Simpson by grounding responsibility in regulation: the AWS was used in a war for the purposes of killing, and there was a "probabilistic outcome" for what the AWS did. Given that military commanders are generally held responsible for their troops and their actions (for this point he uses the United States Army's regulations), it must be the commander who is answerable. Whatever the AWS does, even committing a war crime, was within the parameters of possibility; the robot was set off to kill on a battlefield, after all. Something I did not explore in the essay, as I was sticking to the question, was the possible implications of this: would there be more commanders refusing to act?
One paper I disagreed with was Zając (2020), which argues that taking a robot out of service after it commits a war crime is, in effect, punishing it. It is difficult to imagine the family of an innocent civilian killed by a robot considering this fair.
All of the literature quoted above made a similar argument: if a robot eventually becomes totally autonomous in the way we understand ourselves to be, then it would be as we are, would have to be afforded the same rights, and we would lose all the benefits of using a machine against man, since it would have to be protected too. But this is where we turned to whether we would believe that. Symons and Abumusab (2024) posited that technology-driven systems using artificial intelligence do not act 'for a reason' as humans do and so cannot be regarded as agents in the way we understand ourselves to be. Further, Anderson and Waxman (2012, p. 42) highlighted a moral objection: no matter how good a machine becomes, it cannot be as true a moral agent as a human, which means a human must always be part of the decision-making process in warfare. I used these positions to conclude that we do not see robots as ourselves, nor as capable of responsibility as we are. Responsibility is a state we find essential, and in practising war we need someone to hold it; it cannot be the robot nor the manufacturer, so it has to be the military commander.
If you are interested in reading further, all the links are below. Thank you for the thoughtful replies to last week's post, Can a Jew be ethical? Please do email any thoughts you may have on AWS, and I'll see you back here soon,
Adnan
References
Allen, J. and Baker, D. (2023) 'Can the Robots Save the City?', in Stanar, D. and Tonn, K. (eds) The Ethics of Urban Warfare. The Netherlands: Brill Nijhoff, pp. 172-186.
Anderson, K. and Waxman, M. (2012) 'Law and Ethics for Robot Soldiers' [Online]. Columbia Law School. Available at: https://scholarship.law.columbia.edu/faculty_scholarship/1742/. Accessed 25 November 2024.
Himmelreich, J. (2019) 'Responsibility for Killer Robots' [Online]. Ethical Theory and Moral Practice, 22, pp. 731-747. Available at: https://doi.org/10.1007/s10677-019-10007-9. Accessed 24 November 2024.
Müller, V.C. and Simpson, T.W. (2016) 'Just War and Robots' Killing' [Online]. The Philosophical Quarterly, 66(263), pp. 302-322. Available at: https://www-jstor-org.ezproxy-prd.bodleian.ox.ac.uk/stable/24672810. Accessed 4 November 2024.
Sparrow, R. (2007) 'Killer Robots' [Online]. Journal of Applied Philosophy, 24(1), pp. 62-77. Available at: http://www.jstor.org/stable/24355087. Accessed 23 November 2024.
Symons, J. and Abumusab, S. (2024) 'Social Agency for Artifacts: Chatbots and the Ethics of Artificial Intelligence' [Online]. Digital Society, 3, article number 2. Available at: https://link-springer-com.ezproxy-prd.bodleian.ox.ac.uk/article/10.1007/s44206-023-00086-8. Accessed 24 November 2024.
Zając, M. (2020) 'Punishing Robots – Way Out of Sparrow's Responsibility Attribution Problem' [Online]. Journal of Military Ethics, 19(4), pp. 285-291. Available at: https://doi.org/10.1080/15027570.2020.1865455. Accessed 23 November 2024.