Singularity Hypotheses: A Scientific and Philosophical Assessment

Author: Amnon H. Eden, James H. Moor, Johnny H. Soraker, and Eric Steinhart
Country: United States
Language: English
Publisher: Springer
Publication date: April 3, 2013
Media type: Print (hardback)
Pages: 441
ISBN: 978-3642325595

Singularity Hypotheses: A Scientific and Philosophical Assessment is a 2013 book edited by Amnon H. Eden, James H. Moor, Johnny H. Soraker, and Eric Steinhart. It focuses on conjectures about the intelligence explosion, transhumanism, and whole brain emulation.

The book features essays and commentary on the technological singularity. One version of the hypothesis explored in the book states that humans will one day make an intelligent agent that will enter runaway self-improvement cycles, with each new version appearing more rapidly, causing an intelligence explosion and resulting in an intelligence that will far surpass us.[1]

The book's contributing authors include computational biologist Dennis Bray, artificial intelligence researcher Ben Goertzel, neuroscientist Randal A. Koene, and philosophers Diane Proudfoot and David Pearce, among others.[1]

Related Research Articles

Falsifiability: Property of a statement that can be logically contradicted

Falsifiability is a deductive standard of evaluation of scientific theories and hypotheses, introduced by the philosopher of science Karl Popper in his book The Logic of Scientific Discovery (1934). A theory or hypothesis is falsifiable if it can be logically contradicted by an empirical test.

In philosophy, Occam's razor is the problem-solving principle that recommends searching for explanations constructed with the smallest possible set of elements. It is also known as the principle of parsimony or the law of parsimony. Attributed to William of Ockham, a 14th-century English philosopher and theologian, it is frequently cited as Entia non sunt multiplicanda praeter necessitatem, which translates as "Entities must not be multiplied beyond necessity", although Occam never used these exact words. Popularly, the principle is sometimes paraphrased as "The simplest explanation is usually the best one."

The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model, an upgradable intelligent agent will eventually enter a positive feedback loop of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which ultimately results in a powerful superintelligence that qualitatively far surpasses all human intelligence.

Eliezer Yudkowsky: American AI researcher and writer (born 1979)

Eliezer S. Yudkowsky is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion influenced philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies.

The Logic of Scientific Discovery: 1959 book by Karl Popper

The Logic of Scientific Discovery is a 1959 book about the philosophy of science by the philosopher Karl Popper. Popper rewrote his book in English from the 1934 German original, titled Logik der Forschung. Zur Erkenntnistheorie der modernen Naturwissenschaft, which literally translates as "Logic of Research: On the Epistemology of Modern Natural Science".

Mind uploading: Hypothetical process of digitally emulating a brain

Mind uploading is a speculative process of whole brain emulation in which a brain scan is used to completely emulate the mental state of the individual in a digital computer. The computer would then run a simulation of the brain's information processing, such that it would respond in essentially the same way as the original brain and experience having a sentient conscious mind.

Scientific evidence is evidence that serves to either support or counter a scientific theory or hypothesis, although scientists also use evidence in other ways, such as when applying theories to practical problems. Such evidence is expected to be empirical evidence and interpretable in accordance with the scientific method. Standards for scientific evidence vary according to the field of inquiry, but the strength of scientific evidence is generally based on the results of statistical analysis and the strength of scientific controls.

The Omega Point is a theorized future event in which the entirety of the universe spirals toward a final point of unification. The term was invented by the French Jesuit Catholic priest Pierre Teilhard de Chardin (1881–1955). Teilhard argued that the Omega Point resembles the Christian Logos, namely Christ, who draws all things into himself, who in the words of the Nicene Creed, is "God from God", "Light from Light", "True God from True God", and "through him all things were made". In the Book of Revelation, Christ describes himself three times as "the Alpha and the Omega, the beginning and the end". Several decades after Teilhard's death, the idea of the Omega Point was expanded upon in the writings of John David Garcia (1971), Paolo Soleri (1981), Frank Tipler (1994), and David Deutsch (1997).

David Pearce (philosopher): British transhumanist

David Pearce is a British transhumanist philosopher. He is the co-founder of the World Transhumanist Association, currently rebranded and incorporated as Humanity+. Pearce approaches ethical issues from a lexical negative utilitarian perspective.

I. J. Good: British statistician and cryptographer (1916–2009)

Irving John Good was a British mathematician who worked as a cryptologist at Bletchley Park with Alan Turing. After the Second World War, Good continued to work with Turing on the design of computers and Bayesian statistics at the University of Manchester. Good moved to the United States where he was a professor at Virginia Tech.

Singularitarianism is a movement defined by the belief that a technological singularity—the creation of superintelligence—will likely happen in the medium future, and that deliberate action ought to be taken to ensure that the singularity benefits humans.

Intelligence has been defined in many ways: the capacity for abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving. It can be described as the ability to perceive or infer information; and to retain it as knowledge to be applied to adaptive behaviors within an environment or context.

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

AI takeover: Hypothetical outcome of artificial intelligence

An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species. Stories of AI takeovers remain popular throughout science fiction, and recent advancements in AI have made the threat more plausible. Possible scenarios include replacement of the entire human workforce due to automation, takeover by an artificial superintelligence (ASI), and the notion of a robot uprising. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

Testability is a primary aspect of science and the scientific method. There are two components to testability:

  1. Falsifiability or defeasibility, which means that counterexamples to the hypothesis are logically possible.
  2. The practical feasibility of observing a reproducible series of such counterexamples if they do exist.

Simulation hypothesis: Hypothesis that reality could be a computer simulation

The simulation hypothesis proposes that what humans experience as the world is actually a simulated reality, such as a computer simulation in which humans themselves are constructs. There has been much debate over this topic, ranging from philosophical discourse to practical applications in computing.

Henry Cosad Harpending was an American anthropologist, population geneticist, and writer. He was a distinguished professor at the University of Utah, and formerly taught at Penn State and the University of New Mexico. He was a member of the National Academy of Sciences. He is known for the book The 10,000 Year Explosion, which he co-authored with Gregory Cochran.

Newtonianism: Philosophical principle of applying Newton's methods in a variety of fields

Newtonianism is a philosophical and scientific doctrine inspired by the beliefs and methods of natural philosopher Isaac Newton. While Newton's influential contributions were primarily in physics and mathematics, his broad conception of the universe as being governed by rational and understandable laws laid the foundation for many strands of Enlightenment thought. Newtonianism became an influential intellectual program that applied Newton's principles in many avenues of inquiry, laying the groundwork for modern science, in addition to influencing philosophy, political thought and theology.

In the field of artificial intelligence (AI) design, AI capability control proposals, also referred to as AI confinement, aim to increase our ability to monitor and control the behavior of AI systems, including proposed artificial general intelligences (AGIs), in order to reduce the danger they might pose if misaligned. However, capability control becomes less effective as agents become more intelligent and their ability to exploit flaws in human control systems increases, potentially resulting in an existential risk from AGI. Therefore, the Oxford philosopher Nick Bostrom and others recommend capability control methods only as a supplement to alignment methods.

Ravi Gomatam

Ravi Veeraraghavan Gomatam is the director of Bhaktivedanta Institute and the newly formed Institute of Semantic Information Sciences and Technology, Mumbai. He teaches graduate-level courses at these institutes. He was an adjunct professor at Birla Institute of Technology & Science (BITS), Pilani, Rajasthan, India (1993–2015).

References