A Quote by Isaac Asimov

The Three Laws of Robotics: 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. The Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. [The Second Law of Robotics]
A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
In the smart home of the future, there should be a robot designed to talk to you. With enough display technology, connectivity, and voice recognition, this human-interface robot or head-of-household robot will serve as a portal to the digital domain. It becomes your interface to your robot-enabled home.
With regard to robots, in the early days of robots people said, 'Oh, let's build a robot' and what's the first thought? You make a robot look like a human and do human things. That's so 1950s. We are so past that.
As a human, you don't have to be too conscious of your movement. I think it's tougher playing a robot than a human, and even tougher playing a robot who begins showing traces of being a human.
What did everyone think robot vacuuming was going to be? Well, they think Rosie the Robot from 'The Jetsons,' a human robot that pushed a vacuum. That was never going to happen.
For now, we assume that self-evolving robots will learn to mimic human traits, including, eventually, humor. And so, I can't wait to hear the first joke that one robot tells to another robot.
So we and our elaborately evolving computers may meet each other halfway. Someday a human being, named perhaps Fred White, may shoot a robot named Pete Something-or-other, which has come out of a General Electric factory, and to his surprise see it weep and bleed. And the dying robot may shoot back and, to its surprise, see a wisp of gray smoke arise from the electric pump that it supposed was Mr. White's beating heart. It would be rather a great moment of truth for both of them.
And finally remember that nothing harms him who is really a citizen, which does not harm the state; nor yet does anything harm the state which does not harm law [order]; and of these things which are called misfortunes not one harms law. What then does not harm law does not harm either state or citizen.
Some people think that, inevitably, every robot that does any task is a bad thing for the human race, because it could be taking a job away. But that isn't necessarily true. You can also think of the robot as making a person more productive and enabling people to do things that are currently economically infeasible. But a person plus a robot or a fleet of robots could do things that would be really useful.
A computer shall not harm your work or, through inaction, allow your work to come to harm.
And once an intelligent robot exists, it is only a small step to a robot species - to an intelligent robot that can make evolved copies of itself.
The world is against individuality. It is against your being just your natural self. It wants you just to be a robot, and because you have agreed to be a robot you are in trouble. You are not a robot.
A robot-arm in a factory doesn't decide minute by minute whether to rivet or revolt - it just does the job it has literally been trained to do. It's if and when we build a conscious robot that we may have to worry.
Zeroth law: You must play the game. First law: You can't win. Second law: You can't break even. Third law: You can't quit the game.