A front-page article in today's FT was titled "US rules out 'Terminator' troops" (in print). The digital version's title, "US to deploy robot combat strategists", suggests that robots will not be asked to kill (yet), but that they will be asked to do other things. In the article itself, we learn:
The US military’s use of artificial intelligence and advanced robotics will not include creating Terminator-style robots, the Pentagon’s second-in-command has said, as concerns increase over the role AI should play in modern warfare. ... “We will use artificial intelligence in the sense that it makes human decisions better,” Mr Work said.
Last year more than 1,000 of the biggest names in science and technology — including cosmologist Stephen Hawking and Mr Musk — signed an open letter calling for a global ban on “killer robots”, following concerns that it could trigger an international arms race.
Caution on the US military's side sounds reasonable, but what about private actors?
If Joe programs his fridge to carry out certain actions in the house's defense when there is an intruder, who is to tell Joe what lines of code are inappropriate?
When the fridge covers the floor with ice cubes (with the intention of causing injury, or at least of making the robber's movement more difficult), isn't the appliance acting as a "security guard"? If the fridge (or the front gate) smashes the thief's face by swinging its door open, doesn't the 'smart fridge' become a potentially lethal robot?
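The thought experiment can be made concrete with a toy sketch (all names here are invented for illustration; no real smart-home API is implied). The point is that nothing in the code itself distinguishes a benign alert from a deliberately injurious action: the escalation policy is just another branch, written by whoever programs the appliance.

```python
def respond_to_intruder(threat_level: int) -> list:
    """Map a sensed threat level to the actions a hypothetical
    'smart fridge' takes. The escalation rules are entirely up to
    whoever writes them; no line is marked 'inappropriate'."""
    actions = ["sound_alarm", "notify_owner"]            # clearly benign
    if threat_level >= 2:
        actions.append("dump_ice_on_floor")              # intended to cause a fall?
    if threat_level >= 3:
        actions.append("swing_door_open_at_full_force")  # potentially injurious
    return actions
```

At `threat_level=1` the fridge is a burglar alarm; at `threat_level=3` it is, arguably, a weapon. The difference is two `if` statements.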
The bigger issue is that any appliance could learn -- presumably from experience or from simulations -- that there are more efficient ways to immobilize intruders. After altering its own code, it could become a lot more creative than its original programmer had intended.
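What "learning from experience" could look like can be sketched with a minimal greedy-bandit loop (action names, the reward signal, and all numbers below are hypothetical, chosen only to illustrate the argument). The learner converges on whatever action empirically "works best"; nothing in the loop encodes proportionality or restraint.

```python
import random

def learn_best_deterrent(trials, actions, effectiveness, seed=0):
    """Toy greedy bandit: repeatedly pick a deterrent, observe a noisy
    'effectiveness' reward, and drift toward the best-scoring action.
    No rule here limits how harmful the winning action may be."""
    rng = random.Random(seed)
    scores = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}

    def mean(a):
        return scores[a] / counts[a] if counts[a] else 0.0

    for _ in range(trials):
        # explore 20% of the time, otherwise exploit the current best
        a = rng.choice(actions) if rng.random() < 0.2 else max(actions, key=mean)
        reward = effectiveness[a] + rng.gauss(0, 0.1)  # noisy feedback
        counts[a] += 1
        scores[a] += reward
    return max(actions, key=mean)
```

If the most injurious option happens to score highest in simulation -- say `{"sound_alarm": 0.2, "dump_ice": 0.5, "swing_door": 0.9}` -- the loop settles on it, regardless of what the programmer considered acceptable.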