Singleton (global governance)

In futurology, a singleton is a hypothetical world order in which there is a single decision-making agency at the highest level, capable of exerting effective control over its domain and of permanently preventing both internal and external threats to its supremacy. The term was first defined by Nick Bostrom.[1][2][3][4][5][6][7][8]

An artificial general intelligence that has undergone an intelligence explosion could form a singleton, as could a world government armed with mind control and social surveillance technologies. A singleton need not directly micromanage everything in its domain; it could allow diverse forms of organization within itself, albeit ones guaranteed to function within strict parameters. A singleton need not support a civilization, and could in fact obliterate it upon coming to power.

A singleton has both potential risks and potential benefits. Notably, a suitable singleton could solve world coordination problems that no other arrangement could, opening up otherwise unavailable developmental trajectories for civilization. For example, Ben Goertzel, an AGI researcher, suggests that humans may decide to create an "AI Nanny" with "mildly superhuman intelligence and surveillance powers", to protect the human race from existential risks like nanotechnology and to delay the development of other (unfriendly) artificial intelligences until and unless the safety issues are solved.[9] Furthermore, Bostrom suggests that a singleton could hold Darwinian evolutionary pressures in check, preventing agents interested only in reproduction from coming to dominate.[10]

Yet Bostrom also regards the possibility of a stable, repressive, totalitarian global regime as a serious existential risk.[11] The very stability of a singleton makes the installation of a bad singleton especially catastrophic, since the consequences can never be undone. Bryan Caplan writes that "perhaps an eternity of totalitarianism would be worse than extinction".[12]

Similarly, Hans Morgenthau stressed that the mechanical development of weapons, transportation, and communication makes "the conquest of the world technically possible, and they make it technically possible to keep the world in that conquered state". The lack of such technology explains why the great empires of antiquity, though vast, failed to complete the universal conquest of their world and perpetuate it. Now, however, this is possible: technology undoes both geographic and climatic barriers. "Today no technological obstacle stands in the way of a world-wide empire", as "modern technology makes it possible to extend the control of mind and action to every corner of the globe regardless of geography and season."[13] Morgenthau continued on technological progress:

It has also given total war that terrifying, world-embracing impetus which seems to be satisfied with nothing less than world dominion… The machine age begets its own triumphs, each forward step calling forth two or more on the road of technological progress. It also begets its own victories, military and political; for with the ability to conquer the world and keep it conquered, it creates the will to conquer it.[14]

References

  1. Nick Bostrom (2006). "What is a Singleton?". Linguistic and Philosophical Investigations 5(2): 48-54.
  2. Dvorsky, George. "7 Totally Unexpected Outcomes That Could Follow the Singularity". io9. Retrieved 3 February 2016.
  3. Miller, James D. (6 September 2011). "The Singleton Solution". hplusmagazine.com. Retrieved 3 February 2016.
  4. Thiel, Thomas (21 December 2014). "Die Superintelligenz ist gar nicht super". Frankfurter Allgemeine Zeitung. Retrieved 3 February 2016.
  5. Barrat, James. Our Final Invention: Artificial Intelligence and the End of the Human Era. ISBN 978-0312622374. Retrieved 3 February 2016.
  6. Häggström, Olle. Here Be Dragons: Science, Technology and the Future of Humanity. ISBN 9780198723547. Retrieved 3 February 2016.
  7. O'Mathúna, Dónal. Nanoethics: Big Ethical Issues with Small Technology. p. 185. ISBN 9781847063953. Retrieved 3 February 2016.
  8. Könneker, Carsten (19 November 2015). "Fukushima der künstlichen Intelligenz". Spektrum. Retrieved 3 February 2016.
  9. Goertzel, Ben. "Should Humanity Build a Global AI Nanny to Delay the Singularity Until It's Better Understood?". Journal of Consciousness Studies 19.1-2 (2012): 1-2.
  10. Nick Bostrom (2004). "The Future of Human Evolution". Death and Anti-Death: Two Hundred Years After Kant, Fifty Years After Turing, ed. Charles Tandy (Ria University Press: Palo Alto, California): 339-371.
  11. Nick Bostrom (2002). "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards". Journal of Evolution and Technology 9(1).
  12. Bryan Caplan (2008). "The totalitarian threat". Global Catastrophic Risks, eds. Bostrom & Ćirković (Oxford University Press): 504-519. ISBN 9780198570509.
  13. Hans Morgenthau (1967). Politics Among Nations: The Struggle for Power and Peace, 4th edition. New York: Alfred A. Knopf. pp. 358-365.
  14. Morgenthau, Politics Among Nations, pp. 369-370.