In an article in yesterday’s Washington Post, Peter Finn examined the possibilities, for better and worse, of fully automated drones participating in future conflicts. As a bonus, he even makes reference to The Terminator.
Currently, there are a few automated systems already at work on the battlefield: robotic sentries, antimissile batteries, and surveillance drones that fly along a programmed route. Most drones, though, are controlled remotely by human operators. These operators determine whether and how a drone will fire upon a target. Human operators are bound by international humanitarian law, which requires them to act with discrimination and proportionality. In ridiculously oversimplified terms, this means they can only fire on bad guys, not good guys, and only with as much force as necessary; they shouldn’t fire on civilians or after the enemy has surrendered.
As humans remove themselves from decisions traditionally made on the battlefront, the potential for deadly error increases. The fact that some drone operators work out of bases in the U.S. and can return home between shifts has led some to claim that such personnel will treat war like a video game, becoming desensitized and less able to discriminate between civilians and combatants. Removing humans completely is likely to create a number of additional risks.
Militaries run the risk of hacked or buggy software, malfunctions (the article notes that “In South Africa in 2007, a semiautonomous cannon fatally shot nine friendly soldiers”), or even atrocities. A small group, the International Committee for Robot Arms Control (ICRAC), has already formed and is seeking to ban autonomous lethal machines before they become commonplace on the battlefield.
DARPA is already experimenting with autonomous vehicles, using a series of races. In 2004, the “winning” team drove just over seven miles of a 150-mile desert course. In 2005, five vehicles completed the course (but the winning time was just under seven hours, a pace of about 20 mph). In 2007, an urban course was set up, and the winning team managed a blistering 14 mph while obeying all traffic rules. I’m downplaying the results; these races are amazing feats of robotics. However, those challenges pale in comparison to building a machine that has to “return fire with proportionality, minimize collateral damage, recognize surrender, and, in the case of uncertainty, maneuver to reassess or wait for a human assessment.” Facial recognition software must also improve dramatically. In my home state of Virginia, the DMV uses software that can be confused by smiling, and I’ve already talked about using dazzle camouflage to defeat facial recognition software.
Some degree of automation is probably inevitable. Such machines will depend on specialized hardware and software, each susceptible to a number of vulnerabilities. At what point will the benefits of using such machines outweigh the risks? This is an easier calculation when it comes to using microdrones to inspect buildings or tunnels, or machines to remove wounded personnel from danger. It’s a far more difficult calculation when it means letting the machine decide whether to fire a missile into a building or fire upon a group of armed young men. This is a tricky subject. If you’d like to learn more, I would recommend Wired for War by P.W. Singer, perhaps the go-to guy on this subject.
What if fully automated drones begin to operate on the battlefield? What would early-generation systems look like? A combination of flying and driving drones, each equipped with cameras and weapons? What if their communications were cut off, disrupted, or replaced with false signals? How would enemies learn to dupe the machines’ software and protocols? Could drones be captured, hacked, and turned into robotic “double agents” or remote-controlled IEDs? What if a reliance on automated drones made war seem less violent, making small skirmishes more common? What would counter-drone drones look like?
What would later-generation drones look like? How would the drones adapt, in both hardware and software? What if such drones proved themselves on the battlefield and started being employed by law enforcement organizations (just as military gear has filtered down to local SWAT teams)? Would civilians have to start studying the methods of former enemies?
What do you think of this topic? What do you see as the future of automated killing machines?