A war of algorithms

The New Atlantis magazine has a fantastic article on the increasing use of robots and artificial intelligence systems in warfare and how they bring the fog of war to the murky area of military ethics and international law.

This comes as The New York Times has just run a report on a recent closed meeting where some of the world’s top artificial intelligence researchers gathered to discuss what limits should be placed on the development of autonomous AI systems.

The NYT article frames the issue as a worry over whether machines will ‘outsmart’ humans, but the real issue is whether machines will outdumb us: it is the combination of the responsibilities assigned to them and their limitations that poses the greatest threat.

One particular difficulty is the unpredictability of AI systems. For example, while we can define the mathematical algorithms for simple artificial neural networks, exactly how a trained network represents the knowledge it has learnt can be a mystery.

If you examine the ‘weights’ of connections across different instances of the same network after being trained, you can find differences in how they’re distributed even though they seem to be completing the task in the same way.
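To make this concrete, here is a minimal sketch of my own (in Python with numpy, not code from either article): it trains two copies of the same tiny network on the XOR problem from different random starting points. Both end up answering correctly, but their learned weights look quite different.

```python
import numpy as np

def train_xor_net(seed, hidden=4, lr=1.0, epochs=10000):
    """Train a tiny two-layer sigmoid network on XOR and return its learned weights."""
    rng = np.random.default_rng(seed)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])

    # Random starting point -- the only thing that differs between the two runs.
    W1, b1 = rng.normal(size=(2, hidden)), np.zeros(hidden)
    W2, b2 = rng.normal(size=(hidden, 1)), np.zeros(1)

    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(epochs):
        # Forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: plain gradient descent on squared error
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

    return W1, out

W1_a, out_a = train_xor_net(seed=0)
W1_b, out_b = train_xor_net(seed=1)

print("Net A answers:", out_a.round(2).ravel())  # both nets get XOR right...
print("Net B answers:", out_b.round(2).ravel())
print("Net A weights:\n", W1_a.round(2))         # ...yet the learned weights
print("Net B weights:\n", W1_b.round(2))         # are distributed quite differently
```

The same behaviour, reached by different internal routes: inspecting the weights tells you little about how either network will respond to an input it has never seen.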

In other words, simply because we have built and trained something, it does not follow that we can fully control its actions or understand its responses in all situations.

In light of this, it is now worryingly common for militaries to publicly deploy or request armed autonomous weapons systems based, at least partly, on similar technologies.

Only recently this has included Israel, South Korea, the US, Australia and South Africa – the latter of which suffered the deaths of nine soldiers when a robot cannon was affected by a software error.

Of course, the use of technology to assist medical decision-making and safety control is also a key issue, but it is the military use of robots which is currently causing the most concern.

And it is exactly this topic that military researcher Peter W. Singer tackles in his engaging article for The New Atlantis magazine.

He traces the history of robot weapons systems, including the little-known deployment of unmanned weapons systems in World War Two and Vietnam, and gives some excellent coverage of the latest in war zone robots and how they are being deployed in current conflicts.

Interestingly, the article claims that remotely-controlled drone missions now outnumber manned aircraft missions in the US military, with battles increasingly being fought through pixelated screens and image processing algorithms.

Singer makes the point that the rules of war become murky when the fighting is carried out by software. Copyright lawyer Lawrence Lessig has highlighted how social and legal rules are becoming effectively implemented as software (‘Code is Law’), but the same point can be extended to armed conflict if the Geneva Conventions are being entrusted to algorithms.

The New Atlantis article is taken from a new book by Singer called Wired for War and if you’d like more on the ethics of AI systems the Association for the Advancement of Artificial Intelligence has a fantastic and very complete reading list covering all the major issues.

Correction: I originally thought the author was the philosopher Peter Singer and linked to his Wikipedia entry. It turns out it is Peter W. Singer the defence and foreign policy expert. The link has now been fixed!

Link to excellent Peter W. Singer article in The New Atlantis.
Link to NYT piece on AI limits conference.
Link to AAAI reading list on ethics and AI.
