The ethics of using ‘bomb robots’ as killing machines

A state police swat team member readies a robot on May 10, 2013 in Trenton, New Jersey.
Image: AP Photo/The Trentonian, Scott Ketterer

When Dallas police officers told their chief, David Brown, that they planned to attach an explosive to a remote-controlled robot and kill the man who they believed had fatally shot five officers on Thursday night, he gave his approval, and told them not to “bring the building down.”

That’s how the chief described the situation at a press conference on Monday. The goal is always to “make sure we do everything we can to go home to our loved ones,” he said.

The plan worked, marking what is believed to be the first time a United States police department killed a person by attaching an explosive to a robot.

Dallas Police Chief David Brown.

Image: AP Photo/Eric Gay

Much of the resulting conversation has centered on the robot, which, as many have reported, amounted to little more than a remote-controlled car with an arm and a bomb, not something sophisticated, such as a drone flying above the clouds.

But the legal questions surrounding the situation have much more to do with the use of the bomb than the robot itself.

Police officers are permitted to use lethal force when someone is imminently threatening their lives or the lives of civilians, or if someone is imminently threatening great bodily harm.

The robot used was a REMOTEC Andros Mark V-A1, Dallas police tweeted.

Image: NORTHROP GRUMMAN

In Dallas, the robot was employed to reduce imminent danger to police officers, Brown said.

Officers had been negotiating with the shooter, 25-year-old Army veteran Micah Johnson, who was holed up in a Dallas community college building, for two hours before discussions deteriorated. By sending in an improvised bomb strapped to a robot, they believed they were reducing the threat of imminent danger to police officers and the public.

In a press conference early Friday morning, Chief Brown said Johnson told negotiators that “the end is coming, and he’s going to hurt and kill more of us,” meaning law enforcement, “and that there are bombs all over the place, in this garage and downtown.”

Brown later told CNN police felt they had no other options. “I began to feel that it was only at a split second he would charge us and take out many more before we would kill him,” he added.

Dallas Mayor Mike Rawlings supported the police department’s decision. “So it was very important that we realize that he may not be bluffing,” he told CBS’ Face the Nation. “So we ask him, ‘Do you want to come out safely or do you want to stay there and we’re going to take you down?’ And he chose the latter.”

The definition of “imminent,” however, changes depending on the situation and perspective, meaning it’s difficult to establish after the fact that the danger in a particular situation was or was not imminent.

An FBI evidence response team works the crime scene in Dallas on July 10, 2016.

Image: AP Photo/Gerald Herbert

What’s more concerning to experts in robotics, police reform and civil liberties, though, is how police determine imminent danger in a situation such as this.

“Criminal suspects (again, presumed innocent until proven guilty) are not enemy combatants, and police officers are not judge, jury, nor executioners,” Patrick Lin, a philosophy professor at California Polytechnic State University and the head of the school’s Ethics and Emerging Sciences Group, wrote in a blog post about the use of the explosive in Dallas.

“Sometimes, though, police officers may have to use enough force against a suspect to kill him or her, to protect innocent people in imminent danger. That’s regrettable but, again, is not, or should not be, an easy choice,” Lin wrote.

By taking out the suspect using novel means, the Dallas police entered uncharted legal territory, experts told Mashable.

We don’t know what does or should go into that choice, because there are no overarching guidelines for when it becomes permissible to strap an explosive to a remote-controlled device and kill someone.


Ryan Calo, a faculty co-director at the University of Washington’s Tech Policy Lab who spent 2000 to 2003 investigating claims of misconduct at the New York Police Department, says he’s concerned police were allowed to purchase a robot without a defined use for the machine.

“What are the guidelines around this?” he asked. “When you put these kind of tools into the hands of police, you need to be judicious.”

With so few standards in place, police actions may end up shaping policy rather than the other way around.

Ian Kerr, a law and philosophy professor at the University of Ottawa, is concerned that “emerging technologies” could change “perceptions of what is necessary when it comes to the projection of lethal force.”

And he’s not alone.

The American Civil Liberties Union (ACLU) was hesitant to comment on whether the use of deadly force in Dallas was necessary. But the organization did raise concerns about the potential growth of the practice.

“… Because ground robots may allow deadly force to be applied more safely and easily, they raise the danger that they will be overused,” ACLU Senior Policy Analyst and Editor Jay Stanley said in a statement provided to Mashable. “When things get easier to do, they tend to be done too much.”

That idea may have precedent in the way the U.S. military and intelligence agencies use drones for targeted killing operations. Drones allow U.S. forces to project lethal force onto a battlefield without risking the lives of any service members, not unlike the way the so-called “bomb robot” allowed the police to use lethal force without exposing officers to more danger.

Experts have other concerns, too. Would it be more ethical, for example, to arm robots with less-lethal weapons, such as Tasers or pepper spray, in order to incapacitate a suspect, or does that approach raise its own ethical problems? Will suspects now be wary of police when officers use a robot to send in food and water during a negotiation, as has been done in the past?

Without guidelines for how and when to use robots armed with explosives, police may also find themselves facing unforeseen obstacles, such as the potential for the devices to be hacked.

“These robots are not designed to be weapons platforms,” Bill Smart, a professor at Oregon State University who researches robotics and machine learning, told Mashable. “They’re controlled over wireless and that wireless could be hacked.”

Smart also worries the Dallas Police Department’s actions could encourage copycats outside of law enforcement to test out their own remote-controlled bombs. “Any kid could build something largely similar to this in a weekend,” he said.

With no clear guidelines, police officers, legal experts and others will likely be left with many more questions about the ethical use of this robotic killing machine.

Chief Brown, however, did not second-guess the decision, which he was convinced saved officers’ lives: “I approved it and I’ll do it again if presented with the same circumstances,” he told CNN on Monday.
