Philosopher A.C. Grayling has just published an opinion piece on the New Scientist site arguing that we should regulate armed military robots before they are responsible for what would, presumably, otherwise be classified as war crimes.
As we reported in 2007, a military robot has already malfunctioned and ended up killing nine people with gunfire.
Grayling notes that military robots are already deployed on ‘active duty’ and that we need to reckon with the consequences of an increasingly mechanised military that relies on artificial intelligence technology to engage its firepower.
Robot sentries patrol the borders of South Korea and Israel. Remote-controlled aircraft mount missile attacks on enemy positions. Other military robots are already in service, and not just for defusing bombs or detecting landmines: a coming generation of autonomous combat robots capable of deep penetration into enemy territory raises questions about whether they will be able to discriminate between soldiers and innocent civilians…
In the next decades, completely autonomous robots might be involved in many military, policing, transport and even caring roles. What if they malfunction? What if a programming glitch makes them kill, electrocute, demolish, drown and explode, or fail at the crucial moment? Whose insurance will pay for damage to furniture, other traffic or the baby, when things go wrong? The software company, the manufacturer, the owner?
Most thinking about the implications of robotics tends to take sci-fi forms: robots enslave humankind, or beautifully sculpted humanoid machines have sex with their owners and then post-coitally tidy the room and make coffee. But the real concern lies in the areas to which the money already flows: the military and the police.