- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
– Isaac Asimov’s Three Laws of Robotics.
I had read about the laws a few years back, around the time the movie I, Robot was released. I had also read about the many different types of robots that keep getting showcased at Japanese exhibitions, and about industrial robots. But all of these still seemed like mere machines, nothing to worry about.
But now that I have seen more closely what a robot really can be, I am pretty sure Asimov can’t be wrong. Humanoid robots may or may not sit on thrones and rule us lesser mortals; but an intelligence greater than ours, probably disorganized and random in nature, will more likely be in control (or, more probably, out of control).
While the Roomba can automate cleaning, a few other robots can automate killing! The latter article says: “Drones flying over Afghanistan, Pakistan and Yemen can already move automatically from point to point, and it is unclear what surveillance or other tasks, if any, they perform while in autonomous mode. Even when directly linked to human operators, these machines are producing so much data that processors are sifting the material to suggest targets, or at least objects of interest. That trend toward greater autonomy will only increase as the U.S. military shifts from one pilot remotely flying a drone to one pilot remotely managing several drones at once.”
The controllers of these robots may say that they are still in control of them. But haven’t we seen what happened to the people who were in control of nuclear bombs? And why not automate the role of those pilots who manage the drones as well?