A human will always decide when a robot kills you (December 13 2012)

Spencer Ackerman, the American national security reporter and blogger, has published an article on Wired titled ‘Pentagon: A Human Will Always Decide When a Robot Kills You’. In the article Ackerman states:

“The Pentagon wants to make perfectly clear that every time one of its flying robots releases its lethal payload, it’s the result of a decision made by an accountable human being in a lawful chain of command. Human rights groups and nervous citizens fear that technological advances in autonomy will slowly lead to the day when robots make that critical decision for themselves. But according to a new policy directive issued by a top Pentagon official, there shall be no SkyNet, thank you very much.

…the Pentagon wants to make sure that there isn’t a circumstance when one of the military’s many Predators, Reapers, drone-like missiles or other deadly robots effectively automatizes the decision to harm a human being. The hardware and software controlling a deadly robot needs to come equipped with “safeties, anti-tamper mechanisms, and information assurance.” The design has got to have proper “human-machine interfaces and controls.” And, above all, it has to operate “consistent with commander and operator intentions and, if unable to do so, terminate engagements or seek additional human operator input before continuing the engagement.” If not, the Pentagon isn’t going to buy it or use it.

…Human Rights Watch…among the most influential non-governmental institutions in the world, issued a report warning that new developments in drone autonomy represented the demise of established “legal and non-legal checks on the killing of civilians.” Its solution: prohibit the “development, production, and use of fully autonomous weapons through an international legally binding instrument.””


Inspired by Wired ow.ly/fS3oZ; image source: Wikipedia ow.ly/fS3eB