02 Mar, 2016 11:24 AM
Isaac Asimov, the famed science fiction author, did more than write novels. He is also credited with formulating the Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
3. A robot must protect its own existence as long as such protection does not conflict with the first or second laws.
According to Asimov, laws like these would need to govern the use of robots to keep both them and humanity safe. A major fear is that artificially intelligent robots could eventually pose a threat to humans, either by actively seeking to harm them or by failing to act in a manner that would preserve human life. Because humans are beginning to give robots control over essential infrastructure, the latter is an especially big concern.
Recently, an employee at a Volkswagen plant in Germany was crushed when he became trapped by a robot arm. The machine was only doing what it was programmed to do and wasn't able to deviate from its programming even when a human's life was in danger. To make robots safer for humans, robotics researchers at Tufts University are working on developing artificial intelligence that can deviate from its programming when circumstances warrant it. The technology is still primitive, but it's an important step if artificially intelligent robots are to coexist with humans someday.
How it works
Researchers at Tufts University’s Human-Robot Interaction lab designed a robot that recognizes when it is allowed to disobey orders for good reason. For example, when facing a ledge, the robot will refuse to walk forward even when ordered to. Not only will the robot refuse, but it is programmed to state the reason—that it would fall if it obeyed. To understand how the robot is able to do this, we first have to understand the concept of “felicity conditions.” Felicity conditions capture the distinction between merely understanding the command being given and understanding the implications of following that command. To design a robot that could refuse certain orders, the researchers programmed the robot to work through five logical steps when given a command:
1. Do I know how to do X?
2. Am I physically able to do X now? Am I normally physically able to do X?
3. Am I able to do X right now?
4. Am I obligated based on my social role to do X?
5. Does it violate any normative principle to do X?
This five-step logical process enables the robot to determine whether a command would cause harm to itself or to a human before following an order. The researchers recently presented their work at the AI for Human-Robot Interaction Symposium in Washington, D.C.
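The five steps above can be sketched as a chain of predicate checks that either passes a command through to execution or produces a spoken refusal. This is a minimal illustration, not the Tufts lab's actual implementation; the class, attribute names, and world state (such as `facing_ledge`) are hypothetical, chosen to mirror the ledge example.

```python
# A hypothetical sketch of the five felicity-condition checks.
# Not the researchers' actual code: the Robot class, its attributes,
# and the refusal messages are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class Robot:
    known_actions: set = field(default_factory=lambda: {"walk_forward"})
    battery_ok: bool = True
    facing_ledge: bool = False  # hypothetical world state for the ledge example

    def knows_how(self, action):        # 1. Do I know how to do X?
        return action in self.known_actions

    def physically_able(self, action):  # 2. Am I (normally) physically able to do X?
        return self.battery_ok

    def able_right_now(self, action):   # 3. Am I able to do X right now?
        return True

    def obligated(self, speaker):       # 4. Does my social role oblige me to obey?
        return speaker == "operator"

    def permissible(self, action):      # 5. Does doing X violate a normative principle?
        # Walking forward off a ledge would harm the robot, so it must refuse.
        return not (action == "walk_forward" and self.facing_ledge)

    def execute(self, action, speaker):
        # Each check pairs a condition with the reason stated on refusal.
        checks = [
            (self.knows_how(action),       "I don't know how to do that."),
            (self.physically_able(action), "I am not physically able to do that."),
            (self.able_right_now(action),  "I can't do that right now."),
            (self.obligated(speaker),      "You are not authorized to ask that."),
            (self.permissible(action),     "I would fall if I did that."),
        ]
        for passed, reason in checks:
            if not passed:
                return f"Refused: {reason}"
        return f"Executing {action}."


robot = Robot(facing_ledge=True)
print(robot.execute("walk_forward", "operator"))
# -> Refused: I would fall if I did that.
```

Ordering the checks this way means the robot always states the first condition that fails, so a refusal doubles as an explanation—the behavior described in the ledge demonstration.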
Artificial Intelligence News brought to you by artificialbrilliance.com