The open letter signed by more than 12,000 prominent people calling for a ban on artificially intelligent killer robots, along with the associated push for a UN ban on the same, is misguided and perhaps even reckless.
Wait, misguided? Reckless? Let me offer some context. I am a robotics researcher and have spent much of my career reading and writing about military robots, fuelling the very scare campaign that I now vehemently oppose.
I was even one of the hundreds of people who, in the early days of the debate, gave their support to the International Committee for Robot Arms Control (ICRAC) and the Campaign to Stop Killer Robots.
But I’ve changed my mind.
Why the radical change in opinion? In short, I came to realise the following.
The human connection
The signatories are just scaremongers who are trying to ban autonomous weapons that “select and engage targets without human intervention”, which they say will be coming to a battlefield near you within “years, not decades”.
But, when you think about it critically, no robot can really kill without human intervention. Yes, robots are probably already capable of killing people using sophisticated mechanisms that resemble those used by humans, meaning that humans don’t necessarily need to oversee a lethal system while it is in use. But that doesn’t mean that there is no human in the loop.
We can model the brain, human learning and decision making to the point that these systems seem capable of generating creative solutions to killing people, but humans are very much involved in this process.
Indeed, it would be preposterous to overlook the role of programmers, cognitive scientists, engineers and others involved in building these autonomous systems. And even if we did, what of the commander, military force and government that made the decision to use the system? Should we overlook them, too?
We already have automatic killing machines
We already have weapons of the kind for which a ban is sought.
The Australian Navy, for instance, has successfully deployed highly automated weapons in the form of close-in weapons systems (CIWS) for many years. These systems are essentially guns that can fire thousands of rounds of ammunition per minute, either autonomously via a computer-controlled system or under manual control, and are designed to provide surface vessels with a last line of defence against anti-ship missiles.
The Phalanx is just one of several close-in weapon systems used by the Australian Navy.
When engaged autonomously, CIWSs carry out functions that would otherwise be performed by other systems and people, including search, detection, threat assessment, acquisition, targeting and target destruction.
Following the signatories’ logic, such systems would fall squarely under the definition provided in the open letter. Yet you don’t hear anyone objecting to them. Why? Because they’re employed far out at sea, and only against objects approaching in a hostile fashion, usually descending rapidly towards the ship.
That is, they’re employed only in environments and contexts in which the risk of killing an innocent civilian is virtually nil, and far lower than in regular combat.
So why can’t we focus on existing laws, which stipulate that such weapons be used only in the narrowest and most particular of circumstances?
The real fear is of non-existent thinking robots
It seems that the real worry that has motivated many of the 12,000-plus individuals to sign the anti-killer-robot petition is not about machines that select and engage targets without human intervention, but rather the development of sentient robots.
Given the advances in technology over the past century, it is tempting to fear thinking robots. We did leap from the first powered flight to space flight in less than 70 years, so why couldn’t we, given a bit more time, create a truly intelligent robot (or at least one autonomous enough that no human could be held responsible, yet not autonomous enough to hold the robot itself responsible)?
There are a number of good reasons why this will never happen. One explanation might be that we have a soul that simply can’t be replicated by a machine. While this tends to be the favourite of the spiritually inclined, there are also more naturalistic explanations. For instance, there are logical arguments suggesting that certain brain processes are not computational or algorithmic in nature, and thus cannot be truly replicated.
Any system we can conceive of today – whether or not it is capable of learning or of highly complex operation – is the product of programming and artificial intelligence programs that trace back to its programmers and system designers, and we will never have genuinely thinking robots. Once people understand this, it should become clear that the argument for a total ban on killer robots rests on shaky ground.
Who plays by the rules?
UN bans are also virtually useless. Just ask anyone who’s lost a leg to a recently laid anti-personnel mine. The sad fact of the matter is that “bad guys” don’t play by the rules.
Now that you understand why I changed my mind, I invite the signatories to the killer robot petition to note these points, reconsider their position and join me on the “dark side” in arguing for more effective and practical regulation of what are really just highly automated systems.
Jai Galliott is Research Fellow in Indo-Pacific Defence at UNSW Australia.
This article was originally published on The Conversation.