Laws of robotics

Laws of Robotics are a set of laws, rules, or principles intended as a fundamental framework to underpin the behavior of robots designed to have a degree of autonomy. Robots of this degree of complexity do not yet exist, but they have been widely anticipated in science fiction and film, and they are a topic of active research and development in the fields of robotics and artificial intelligence.

The best-known laws are those written by Isaac Asimov in the 1940s, or variants based upon them, but other sets of laws have been proposed by researchers in the decades since.

Isaac Asimov's "Three Laws of Robotics"

[Cover of I, Robot, illustrating the story "Runaround", the first to list all Three Laws of Robotics.]

The best-known set of laws is Isaac Asimov's "Three Laws of Robotics". These were introduced in his 1942 short story "Runaround", although they were foreshadowed in a few earlier stories. The Three Laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Near the end of his book Foundation and Earth, Asimov introduced a zeroth law:

  0. A robot may not injure humanity, or, by inaction, allow humanity to come to harm.
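
Taken together, the laws form a strict precedence hierarchy: whenever two laws conflict, the lower-numbered law prevails, which is why a robot must refuse an order (Second Law) rather than harm a human (First Law). The following Python sketch is purely illustrative of that ordering; the action names and boolean flags (harms_humanity, harms_human, disobeys_order, endangers_self) are hypothetical stand-ins for judgments that, as the article notes, no existing robot can make.

    # Illustrative sketch only: encodes the precedence ordering of the laws,
    # not any real or proposed robot control system. Each flag is a
    # hypothetical stand-in for a judgment only fictional robots can make.
    def choose(actions):
        # Lexicographic comparison: a Zeroth Law violation outweighs any
        # First Law violation, which outweighs any Second Law violation,
        # which outweighs any Third Law violation (False sorts before True).
        return min(actions, key=lambda a: (
            a["harms_humanity"],   # Zeroth Law
            a["harms_human"],      # First Law
            a["disobeys_order"],   # Second Law
            a["endangers_self"],   # Third Law
        ))

    candidates = [
        {"name": "comply with order", "harms_humanity": False,
         "harms_human": True, "disobeys_order": False, "endangers_self": False},
        {"name": "refuse order", "harms_humanity": False,
         "harms_human": False, "disobeys_order": True, "endangers_self": False},
    ]
    print(choose(candidates)["name"])  # prints "refuse order"

The example reproduces the Second Law's escape clause: refusing the order wins because complying would violate the higher-priority First Law.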

Adaptations and extensions based upon this framework exist. As of 2011, the laws remained a "fictional device".[1]

EPSRC / AHRC principles of robotics

In 2011, the Engineering and Physical Sciences Research Council (EPSRC) and the Arts and Humanities Research Council (AHRC) of the United Kingdom jointly published a set of five ethical "principles for designers, builders and users of robots" in the real world, along with seven "high-level messages" intended to be conveyed, based on a September 2010 research workshop:[2][3][1]

  1. Robots should not be designed solely or primarily to kill or harm humans.
  2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
  3. Robots should be designed in ways that assure their safety and security.
  4. Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.
  5. It should always be possible to find out who is legally responsible for a robot.

The messages intended to be conveyed were:

  1. We believe robots have the potential to provide immense positive impact to society. We want to encourage responsible robot research.
  2. Bad practice hurts us all.
  3. Addressing obvious public concerns will help us all make progress.
  4. It is important to demonstrate that we, as roboticists, are committed to the best possible standards of practice.
  5. To understand the context and consequences of our research, we should work with experts from other disciplines, including social sciences, law, philosophy and the arts.
  6. We should consider the ethics of transparency: are there limits to what should be openly available?
  7. When we see erroneous accounts in the press, we commit to take the time to contact the reporting journalists.

Judicial development

A comprehensive terminological codification for the legal assessment of technological developments in the robotics industry has already begun, mainly in Asian countries.[4] This progress represents a contemporary reinterpretation of the law (and ethics) in the field of robotics, an interpretation that requires rethinking traditional legal frameworks. Chief among these are questions of legal liability in civil and criminal law.

Satya Nadella's laws

In June 2016, Satya Nadella, then CEO of Microsoft Corporation, gave an interview to Slate magazine in which he roughly sketched six rules for artificial intelligence, to be observed by its designers:[5][6]

  1. "A.I. must be designed to assist humanity" meaning human autonomy needs to be respected.
  2. "A.I. must be transparent" meaning that humans should know and be able to understand how they work.
  3. "A.I. must maximize efficiencies without destroying the dignity of people".
  4. "A.I. must be designed for intelligent privacy" meaning that it earns trust through guarding their information.
  5. "A.I. must have algorithmic accountability so that humans can undo unintended harm".
  6. "A.I. must guard against bias" so that they must not discriminate people.

Tilden's "Laws of Robotics"

Mark W. Tilden proposed three guiding principles for robots, which pertain not to humans or humanity but to the robots themselves:

  1. A robot must protect its existence at all costs.
  2. A robot must obtain and maintain access to its own power source.
  3. A robot must continually search for better power sources.

References

  1. Stewart, Jon (2011-10-03). "Ready for the robot revolution?". BBC News. Retrieved 2011-10-03.
  2. "Principles of robotics: Regulating Robots in the Real World". Engineering and Physical Sciences Research Council. Retrieved 2011-10-03.
  3. Winfield, Alan. "Five roboethical principles – for humans". New Scientist. Retrieved 2011-10-03.
  4. "Robot age poses ethical dilemma". BBC News (bbc.co.uk).
  5. Nadella, Satya (2016-06-28). "The Partnership of the Future". Slate. ISSN 1091-2339. Retrieved 2016-06-30.
  6. Vincent, James (2016-06-29). "Satya Nadella's rules for AI are more boring (and relevant) than Asimov's Three Laws". The Verge. Vox Media. Retrieved 2016-06-30.