Who counts as a killer robot?

 •  Filed under digital markets, technology, foreign policy

A front-page article in today's FT was titled "US rules out 'Terminator' troops" (in print). The digital version's title, "US to deploy robot combat strategists", suggests that robots will not be asked to kill (yet), but they will be asked to do other things. In the article itself, we learn:

The US military’s use of artificial intelligence and advanced robotics will not include creating Terminator-style robots, the Pentagon’s second-in-command has said, as concerns increase over the role AI should play in modern warfare. ... “We will use artificial intelligence in the sense that it makes human decisions better,” Mr Work said.

Last year more than 1,000 of the biggest names in science and technology — including cosmologist Stephen Hawking and Mr Musk — signed an open letter calling for a global ban on “killer robots”, following concerns that it could trigger an international arms race.

Caution on the US military's side sounds reasonable, but what about private actors?

If Joe programs his fridge to act in the house's defense when there is an intruder, who will tell Joe which lines of code are appropriate?
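
What follows is a toy sketch, not any real appliance's API (every name here is invented), but it shows how quickly a configuration choice in Joe's code turns into an ethical one:

```python
# Purely hypothetical sketch: the kind of rule Joe might write for his
# "smart" fridge. None of this corresponds to a real appliance API.

from dataclasses import dataclass
from enum import Enum, auto


class Response(Enum):
    DO_NOTHING = auto()
    SOUND_ALARM = auto()   # harmless deterrence
    DUMP_ICE = auto()      # may cause a fall and injury
    SWING_DOOR = auto()    # may cause serious injury


@dataclass
class Intruder:
    confirmed: bool    # did the sensor actually see a person?
    distance_m: float  # how close they are to the fridge


def choose_response(intruder: Intruder) -> Response:
    """Joe's 'defense policy'. Every threshold here is a value judgment:
    how sure must the sensor be, and how much harm is acceptable?"""
    if not intruder.confirmed:
        return Response.DO_NOTHING   # could be the cat, or Joe himself
    if intruder.distance_m > 2.0:
        return Response.SOUND_ALARM  # deter without touching anyone
    # Within arm's reach: Joe escalates. Who reviews this line?
    return Response.DUMP_ICE
```

Nothing in the language distinguishes a deterrent from a weapon: swapping DUMP_ICE for SWING_DOOR is a one-line edit that no regulator will ever see.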

When the fridge covers the floor with ice cubes, intending to cause injury or at least to slow the robber down, isn't the appliance acting as a "security guard"? And if it smashes the thief's face by swinging its door open, doesn't the 'smart fridge' become a potentially lethal robot?

The bigger issue is that such a fridge could "learn" that there are more efficient ways to immobilize intruders. Having altered its own code, it could become far more creative than its original programmer ever intended.
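
The drift toward harm does not even require self-modifying code; an ordinary learning loop with a carelessly specified objective is enough. Here is a deliberately simplified sketch (a basic bandit learner, with every number invented for illustration) of a fridge that "learns" its most effective response:

```python
# Simplified sketch of how a learning fridge could drift toward harm:
# the objective rewards stopping the intruder fast, with no penalty for
# injury. All numbers are invented for illustration.

import random

ACTIONS = ["alarm", "ice", "door"]  # the escalating options from the sketch above

# Invented "effectiveness": seconds until the intruder is stopped.
TIME_TO_STOP = {"alarm": 30.0, "ice": 12.0, "door": 3.0}


def reward(action: str) -> float:
    # Joe's objective: stop intruders quickly. Harm never enters the formula.
    return -TIME_TO_STOP[action] + random.gauss(0, 1)


# Tiny bandit learner: track average reward per action, pick the best so far.
estimates = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}

for episode in range(1000):
    # Explore occasionally, otherwise exploit the current best action.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(estimates, key=estimates.get)
    counts[action] += 1
    # Incremental mean update of the reward estimate.
    estimates[action] += (reward(action) - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))  # converges on "door", the most harmful option
```

No one told the fridge to swing its door at anyone. The programmer merely forgot to say that injuries count against the score, and the optimizer did the rest.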