Eliezer Yudkowsky

Born September 11, 1979
Nationality American
Spouse Brienne Yudkowsky (m. 2013)[1]

Eliezer Shlomo Yudkowsky (born September 11, 1979) is an American artificial intelligence researcher known for popularizing the idea of friendly artificial intelligence.[2][3] He is a co-founder and research fellow at the Machine Intelligence Research Institute, a private research nonprofit based in Berkeley, California.[4]

Work in artificial intelligence safety

Goal learning and incentives in software systems

Yudkowsky's views on the safety challenges posed by future generations of AI systems are discussed in the standard undergraduate textbook in AI, Stuart Russell and Peter Norvig's Artificial Intelligence: A Modern Approach. Noting the difficulty of formally specifying general-purpose goals by hand, Russell and Norvig cite Yudkowsky's proposal that autonomous and adaptive systems be designed to learn correct behavior over time:

Yudkowsky (2008)[5] goes into more detail about how to design a Friendly AI. He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the challenge is one of mechanism design – to design a mechanism for evolving AI under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes.[2]

Citing Steve Omohundro's idea of instrumental convergence, Russell and Norvig caution that autonomous decision-making systems with poorly designed goals would have default incentives to treat humans adversarially, or as dispensable resources, unless specifically designed to counter such incentives: "even if you only want your program to play chess or prove theorems, if you give it the capability to learn and alter itself, you need safeguards".[2][6]

In response to the instrumental convergence concern, Yudkowsky and other MIRI researchers have recommended work on specifying software agents that converge on safe default behaviors even when their goals are misspecified.[7] The Future of Life Institute (FLI) summarizes this research program in the research priorities document accompanying its Open Letter on Artificial Intelligence:

If an AI system is selecting the actions that best allow it to complete a given task, then avoiding conditions that prevent the system from continuing to pursue the task is a natural subgoal (and conversely, seeking unconstrained situations is sometimes a useful heuristic). This could become problematic, however, if we wish to repurpose the system, to deactivate it, or to significantly alter its decision-making process; such a system would rationally avoid these changes. Systems that do not exhibit these behaviors have been termed corrigible systems, and both theoretical and practical work in this area appears tractable and useful. For example, it may be possible to design utility functions or decision processes so that a system will not try to avoid being shut down or repurposed, and theoretical frameworks could be developed to better understand the space of potential systems that avoid undesirable behaviors.[8]

Yudkowsky argues that as AI systems become increasingly intelligent, new formal tools will be needed in order to avert default incentives for harmful behavior, as well as to inductively teach correct behavior.[7][9] These lines of research are discussed in MIRI's 2015 technical agenda.[10]
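
As a minimal illustration of the shutdown-avoidance incentive described above, the following toy calculation (a hypothetical sketch with made-up payoffs, not a construction from the cited corrigibility paper) compares a naive expected-reward maximizer with an agent whose utility is adjusted, in the spirit of the "utility indifference" idea, so that tampering with its off-switch gains it nothing:

    TASK_REWARD = 10.0   # reward for completing the task (illustrative number)
    P_SHUTDOWN = 0.5     # chance the operators press a working off-switch (illustrative)

    def naive_utility(disable_switch: bool) -> float:
        """Expected task reward for an agent that only values task completion."""
        p_keep_running = 1.0 if disable_switch else 1.0 - P_SHUTDOWN
        return p_keep_running * TASK_REWARD

    def corrected_utility(disable_switch: bool) -> float:
        """Toy 'indifference'-style correction: compensate the agent for reward it
        would forgo by leaving the off-switch intact, so tampering gains nothing."""
        compensation = 0.0 if disable_switch else P_SHUTDOWN * TASK_REWARD
        return naive_utility(disable_switch) + compensation

    for name, utility in [("naive", naive_utility), ("corrected", corrected_utility)]:
        choice = max([False, True], key=utility)   # a tie resolves to False (leave the switch alone)
        print(f"{name} agent disables its off-switch: {choice}")
    # naive agent disables its off-switch: True
    # corrected agent disables its off-switch: False

Under the correction the two options tie, so the agent has nothing to gain from interfering with shutdown; the cited work studies how to obtain such guarantees in a principled way.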

System reliability and transparency

Yudkowsky studies decision theories that achieve better outcomes than causal decision theory in Newcomblike problems.[11] This includes decision procedures that allow agents to cooperate with equivalent reasoners in the one-shot prisoner's dilemma.[12] Yudkowsky has also written on theoretical prerequisites for self-verifying software.[13][9]
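
A drastically simplified illustration of such cooperation is a program that cooperates only when its opponent's source code is an exact copy of its own; the Löb's-theorem-based construction in the cited paper is more general, and this syntactic check is only a hypothetical stand-in:

    import inspect

    def clique_bot(opponent_source: str) -> str:
        """Cooperate only if the opponent is an exact textual copy of this program."""
        my_source = inspect.getsource(clique_bot)
        return "C" if opponent_source == my_source else "D"

    def defect_bot(opponent_source: str) -> str:
        """Unconditional defector."""
        return "D"

    clique_src = inspect.getsource(clique_bot)
    defect_src = inspect.getsource(defect_bot)

    # Two copies of clique_bot recognize each other and cooperate ...
    print(clique_bot(clique_src), clique_bot(clique_src))   # C C
    # ... while still defecting against an unconditional defector.
    print(clique_bot(defect_src), defect_bot(clique_src))   # D D

Exact textual equality is brittle (a trivially reworded copy would be treated as a stranger), which is why the cited work replaces it with proof-based reasoning about the opponent's behavior.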

Yudkowsky argues that it is important for advanced AI systems to be cleanly designed and transparent to human inspection, both to ensure stable behavior and to allow greater human oversight and analysis.[9] Citing papers on this topic by Yudkowsky and other MIRI researchers, the FLI research priorities document states that work on defining correct reasoning in embodied and logically non-omniscient agents would be valuable for the design, use, and oversight of AI agents.[8][14]

Capabilities forecasting

In their discussion of Omohundro and Yudkowsky's work, Russell and Norvig cite I. J. Good's 1965 prediction that when computer systems begin to outperform humans in software engineering tasks, this may result in a feedback loop of increasingly capable AI systems. This raises the possibility that AI's impact could increase very quickly after it reaches a certain level of capability.[2]

In the intelligence explosion scenario inspired by Good's hypothetical, recursively self-improving AI systems quickly transition from subhuman general intelligence to superintelligence.[9] Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies sketches out Good's argument in greater detail, while making a broader case for expecting AI systems to eventually outperform humans across the board. Bostrom cites writing by Yudkowsky on inductive value learning and on the risk of anthropomorphizing advanced AI systems, e.g.: "AI might make an apparently sharp jump in intelligence purely as the result of anthropomorphism, the human tendency to think of 'village idiot' and 'Einstein' as the extreme ends of the intelligence scale, instead of nearly indistinguishable points on the scale of minds-in-general."[5][15]

The Open Philanthropy Project, an offshoot of the charity evaluator GiveWell, credits Yudkowsky and Bostrom with several (paraphrased) arguments for expecting future AI advances to have a large societal impact:[16]

Over a relatively short geological timescale, humans have come to have enormous impacts on the biosphere, often leaving the welfare of other species dependent on the objectives and decisions of humans. It seems plausible that the intellectual advantages humans have over other animals have been crucial in allowing humans to build up the scientific and technological capabilities that have made this possible. If advanced artificial intelligence agents become significantly more powerful than humans, it seems possible that they could become the dominant force in the biosphere, leaving humans' welfare dependent on their objectives and decisions. As with the interaction between humans and other species in the natural environment, these problems could be the result of competition for resources rather than malice.

In comparison with other evolutionary changes, there was relatively little time between our hominid ancestors and the evolution of humans. There was therefore relatively little time for evolutionary pressure to lead to improvements in human intelligence relative to the intelligence of our hominid ancestors, suggesting that the increases in intelligence may be small on some absolute scale. [...T]his makes it seem plausible that creating intelligent agents that are more intelligent than humans could have dramatic real-world consequences even if the difference in intelligence is small in an absolute sense.[14]

Russell and Norvig raise the objection that there are known limits to intelligent problem-solving from computational complexity theory; if there are strong limits on how efficiently algorithms can solve various computer science tasks, then intelligence explosion may not be possible.[2] Yudkowsky has debated the likelihood of intelligence explosion with economist Robin Hanson, who argues that AI progress is likely to accelerate over time, but is not likely to be localized or discontinuous.[17]

In a 2013 report, Intelligence Explosion Microeconomics, Yudkowsky suggests that a more formalized microeconomic model of cognition and self-improvement could yield better predictions about whether an intelligence explosion is feasible. He is skeptical, however, about our ability to predict when AI algorithms are likely to exceed humans in general intelligence, even with more formal models to sharpen such timelines.[16]
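
One way to see why the shape of "returns on cognitive reinvestment" matters is a toy recurrence, offered here only as a hypothetical illustration rather than a model from the report, in which each generation of a system reinvests its capability C into self-improvement, C <- C + k * C**r:

    def trajectory(r, k=0.1, c0=1.0, steps=30, cap=1e12):
        """Iterate C <- C + k * C**r, stopping early once capability exceeds `cap`."""
        c, history = c0, [c0]
        for _ in range(steps):
            c = c + k * c ** r
            history.append(c)
            if c > cap:
                break
        return history

    for r in (0.5, 1.0, 1.5):   # assumed returns exponents, purely illustrative
        hist = trajectory(r)
        print(f"r = {r}: C is about {hist[-1]:.3g} after {len(hist) - 1} steps")
    # Diminishing returns (r = 0.5) give slow growth, r = 1.0 gives steady exponential
    # growth, and increasing returns (r = 1.5) blow past the cap within about 30 steps.

Whether real-world returns on cognitive reinvestment look more like the diminishing or the increasing case is the kind of empirical question the report argues should be modeled explicitly.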

Rationality writing

Between 2006 and 2009, Yudkowsky and Hanson were the principal contributors to Overcoming Bias,[18] a cognitive and social science blog sponsored by the Future of Humanity Institute of Oxford University. In February 2009, Yudkowsky founded LessWrong,[19] a "community blog devoted to refining the art of human rationality".[20] Overcoming Bias has since functioned as Hanson's personal blog. LessWrong has been covered in depth in Business Insider,[21] and core concepts from LessWrong have been referenced in columns in The Guardian.[22][23]

Yudkowsky has also written several works of fiction.[24] His fan fiction story, Harry Potter and the Methods of Rationality, uses plot elements from J.K. Rowling's Harry Potter series to illustrate topics in science.[20][25][26][27][28][29][30] The New Yorker describes Harry Potter and the Methods of Rationality as a retelling of Rowling's original "in an attempt to explain Harry's wizardry through the scientific method".[31]

In 2015, the Machine Intelligence Research Institute released over 300 of Yudkowsky's blog posts as six books, collected in a single ebook titled Rationality: From AI to Zombies.[32]

Yudkowsky identifies as an atheist,[33] and a small-l libertarian.[34]

References

  1. Yudkowsky, Eliezer. "Eliezer S. Yudkowsky". yudkowsky.net. Retrieved October 7, 2015.
  2. Russell, Stuart; Norvig, Peter (2009). Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN 978-0-13-604259-4.
  3. Leighton, Jonathan (2011). The Battle for Compassion: Ethics in an Apathetic Universe. Algora. ISBN 978-0-87586-870-7.
  4. Kurzweil, Ray (2005). The Singularity Is Near. New York City: Viking Penguin. ISBN 0-670-03384-7.
  5. Yudkowsky, Eliezer (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (PDF). In Bostrom, Nick; Ćirković, Milan. Global Catastrophic Risks. Oxford University Press. ISBN 978-0199606504.
  6. Omohundro, Steve (2008). "The Basic AI Drives" (PDF). Proceedings of the First AGI Conference. IOS Press.
  7. Soares, Nate; Fallenstein, Benja; Yudkowsky, Eliezer (2015). "Corrigibility". AAAI Workshops: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, January 25–26, 2015. AAAI Publications.
  8. Future of Life Institute (2015). Research priorities for robust and beneficial artificial intelligence (PDF) (Report). Retrieved October 12, 2015.
  9. Yudkowsky, Eliezer (2013). "Five theses, two lemmas, and a couple of strategic implications". MIRI Blog. Retrieved October 12, 2015.
  10. Soares, Nate; Fallenstein, Benja (2015). "Aligning Superintelligence with Human Interests: A Technical Research Agenda" (PDF). In Miller, James; Yampolskiy, Roman; Armstrong, Stuart; et al. The Technological Singularity: Managing the Journey. Springer.
  11. Soares, Nate; Fallenstein, Benja (2015). "Toward Idealized Decision Theory". arXiv:1507.01986 [cs.AI].
  12. LaVictoire, Patrick; Fallenstein, Benja; Yudkowsky, Eliezer; Bárász, Mihály; Christiano, Paul; Herreshoff, Marcello (2014). "Program Equilibrium in the Prisoner's Dilemma via Löb's Theorem". Multiagent Interaction without Prior Coordination: Papers from the AAAI-14 Workshop. AAAI Publications.
  13. Fallenstein, Benja; Soares, Nate (2015). Vingean Reflection: Reliable Reasoning for Self-Improving Agents (PDF) (Technical report). Machine Intelligence Research Institute. 2015-2.
  14. GiveWell (2015). Potential risks from advanced artificial intelligence (Report). Retrieved October 12, 2015.
  15. Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. ISBN 0199678111.
  16. Yudkowsky, Eliezer (2013). Intelligence Explosion Microeconomics (PDF) (Technical report). Machine Intelligence Research Institute. 2013-1.
  17. Hanson, Robin; Yudkowsky, Eliezer (2013). The Hanson-Yudkowsky AI Foom Debate. Machine Intelligence Research Institute.
  18. "Overcoming Bias: About". Robin Hanson. Retrieved February 1, 2012.
  19. "Where did Less Wrong come from? (LessWrong FAQ)". Retrieved September 11, 2014.
  20. Miller, James (2012). Singularity Rising. ISBN 978-1936661657.
  21. Miller, James (July 28, 2011). "You Can Learn How To Become More Rational". Business Insider. Retrieved March 25, 2014.
  22. Burkeman, Oliver (July 8, 2011). "This column will change your life: Feel the ugh and do it anyway. Can the psychological flinch mechanism be beaten?". The Guardian. Retrieved March 25, 2014.
  23. Burkeman, Oliver (March 9, 2012). "This column will change your life: asked a tricky question? Answer an easier one. We all do it, all the time. So how can we get rid of this eccentricity?". The Guardian. Retrieved March 25, 2014.
  24. Yudkowsky, Eliezer. "Fiction". yudkowsky.net. Retrieved September 14, 2015.
  25. Brin, David (June 21, 2010). "CONTRARY BRIN: A secret of college life... plus controversies and science!". Davidbrin.blogspot.com. Retrieved August 31, 2012; Snyder, Daniel. "'Harry Potter' and the Key to Immortality". The Atlantic.
  26. "Rachel Aaron interview (April 2012)". Fantasybookreview.co.uk. April 2, 2012. Retrieved August 31, 2012.
  27. "Civilian Reader: An Interview with Rachel Aaron". Civilian-reader.blogspot.com. May 4, 2011. Retrieved August 31, 2012.
  28. Hanson, Robin (October 31, 2010). "Hyper-Rational Harry". Overcoming Bias. Retrieved August 31, 2012.
  29. Swartz, Aaron. "The 2011 Review of Books (Aaron Swartz's Raw Thought)". archive.org. Archived from the original on March 16, 2013. Retrieved October 4, 2013.
  30. "Harry Potter and the Methods of Rationality". fanfiction.net. February 28, 2010. Retrieved December 29, 2014.
  31. Packer, George (2011). "No Death, No Taxes: The Libertarian Futurism of a Silicon Valley Billionaire". The New Yorker: 54. Retrieved October 12, 2015.
  32. Rationality: From AI to Zombies. Machine Intelligence Research Institute. March 12, 2015.
  33. Yudkowsky, Eliezer. "The Correct Contrarian Cluster". LessWrong. http://lesswrong.com/lw/1kh/the_correct_contrarian_cluster/
  34. Yudkowsky, Eliezer (September 7, 2011). "Is That Your True Rejection?". Cato Unbound. http://www.cato-unbound.org/2011/09/07/eliezer-yudkowsky/true-rejection
