| Ben Goertzel | |
|---|---|
| Born | 8 December 1966 |
| Occupation(s) | CEO and founder of SingularityNET |
Ben Goertzel is a computer scientist, artificial intelligence researcher, and businessman. He helped popularize the term "artificial general intelligence".[1][non-primary source needed]
Three of Goertzel's Jewish great-grandparents emigrated to New York from Lithuania and Poland.[2] Goertzel's father is Ted Goertzel, a former professor of sociology at Rutgers University.[3] Goertzel left high school after the tenth grade to attend Bard College at Simon's Rock, where he earned a bachelor's degree in Quantitative Studies.[4] In 1990, at age 23, he received a PhD in mathematics from Temple University under the supervision of Avi Lin.[5]
Goertzel is the founder and CEO of SingularityNET, a project founded to distribute artificial intelligence data via blockchains.[6] He is a leading developer of the OpenCog framework for artificial general intelligence.[7][non-primary source needed]
Goertzel was the Chief Scientist of Hanson Robotics, the company that created the Sophia robot.[8] As of 2018, Sophia's architecture included scripting software, a chat system, and OpenCog, an AI system designed for general reasoning.[9] Experts in the field have treated the project largely as a PR stunt, calling Hanson's claim that Sophia was "basically alive" "grossly misleading" because the project does not involve AI technology,[10] while Meta's chief AI scientist called the project "complete bullshit".[11]
In May 2007, Goertzel gave a Google tech talk about his approach to creating artificial general intelligence.[12] He defines intelligence as the ability to detect patterns in the world and in the agent itself, measurable in terms of the emergent behavior of "achieving complex goals in complex environments".[13] A "baby-like" artificial intelligence is initialized, then trained as an agent in a simulated or virtual world such as Second Life[14] to produce a more powerful intelligence.[15] Knowledge is represented in a network whose nodes and links carry probabilistic truth values as well as "attention values", the attention values resembling the weights in a neural network. Several algorithms operate on this network, the central one being a combination of a probabilistic inference engine and a custom version of evolutionary programming.[16]
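The knowledge representation described above can be illustrated with a minimal sketch: atoms (nodes and links) carrying a probabilistic truth value and an attention value, with processing biased toward high-attention atoms. The class and field names below are illustrative only, not OpenCog's actual API.

```python
# Minimal sketch of a hypergraph-style knowledge store in which each
# atom carries a probabilistic truth value ("strength") and an
# attention value. Names are illustrative, not OpenCog's real API.
from dataclasses import dataclass


@dataclass
class Atom:
    name: str
    strength: float = 1.0   # probabilistic truth value in [0, 1]
    attention: float = 0.0  # importance, akin to a neural-net weight


@dataclass
class Link(Atom):
    targets: tuple = ()     # atoms this link connects


class AtomSpace:
    def __init__(self):
        self.atoms = {}

    def add(self, atom):
        self.atoms[atom.name] = atom
        return atom

    def most_important(self, n=1):
        # Algorithms would preferentially process high-attention atoms.
        return sorted(self.atoms.values(),
                      key=lambda a: a.attention, reverse=True)[:n]


space = AtomSpace()
cat = space.add(Atom("cat", strength=1.0, attention=0.8))
animal = space.add(Atom("animal", strength=1.0, attention=0.2))
space.add(Link("cat->animal", strength=0.95, attention=0.5,
               targets=(cat, animal)))
print(space.most_important(1)[0].name)  # prints: cat
```

In this toy version, the attention values steer which atoms get processed first, while the truth values feed probabilistic inference, mirroring the division of labor the talk describes.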
The 2012 documentary The Singularity by independent filmmaker Doug Wolens discussed Goertzel's views on AGI. [17] [18]
In 2023, Goertzel postulated that artificial intelligence could replace up to 80 percent of human jobs in the coming years "without having an AGI, by my guess. Not with ChatGPT exactly as a product. But with systems of that nature".[citation needed] At the Web Summit 2023 in Rio de Janeiro, Goertzel spoke out against efforts to curb AI research and said that AGI is only a few years away. He believes AGI will be a net positive for humanity, helping to address societal problems such as climate change.[19][20]
The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence that would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence.
Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensure it is adequately constrained.
Singularitarianism is a movement defined by the belief that a technological singularity—the creation of superintelligence—will likely happen in the medium future, and that deliberate action ought to be taken to ensure that the singularity benefits humans.
Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. AGI is considered one of the definitions of strong AI.
A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.
The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.
An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species, which relies on human intelligence. Stories of AI takeovers have been popular throughout science fiction, but recent advancements have made the threat more real. Possible scenarios include replacement of the entire human workforce due to automation, takeover by a superintelligent AI (ASI), and the notion of a robot uprising. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.
Bill Hibbard is a scientist at the University of Wisconsin–Madison Space Science and Engineering Center working on visualization and machine intelligence. He is principal author of the Vis5D, Cave5D, and VisAD open-source visualization systems. Vis5D was the first system to produce fully interactive animated 3D displays of time-dynamic volumetric data sets and the first open-source 3D visualization system.
An artificial brain is software and hardware with cognitive abilities similar to those of the animal or human brain.
The following outline is provided as an overview of and topical guide to artificial intelligence:
A probabilistic logic network (PLN) is a conceptual, mathematical and computational approach to uncertain inference. It was inspired by logic programming, and it uses probabilities in place of crisp (true/false) truth values and fractional uncertainty in place of crisp known/unknown values. In order to carry out effective reasoning in real-world circumstances, artificial intelligence software must handle uncertainty. Previous approaches to uncertain inference lack the breadth of scope required to provide an integrated treatment of the disparate forms of cognitively critical uncertainty as they manifest themselves within the various forms of pragmatic inference. Going beyond prior probabilistic approaches to uncertain inference, PLN encompasses uncertain logic with such ideas as induction, abduction, analogy, fuzziness and speculation, and reasoning about time and causality.
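The flavor of uncertain inference PLN performs can be sketched with a simplified deduction rule: given uncertain relations A→B and B→C plus the base rates of B and C, estimate A→C under an independence assumption. The formula and the (strength, confidence) pairing below are a textbook-style simplification for illustration, not PLN's exact rule set.

```python
# Simplified deduction over uncertain truth values (strength, confidence).
# Given A->B and B->C plus term probabilities s_b and s_c, estimate A->C
# under an independence assumption. Illustrative only; PLN's actual rules
# are more elaborate.
def deduce(s_ab, s_bc, s_b, s_c, c_ab=0.9, c_bc=0.9):
    if s_b >= 1.0:
        raise ValueError("s_b must be < 1 for this formula")
    # If A leads to B, follow B->C; otherwise estimate how often C holds
    # among non-B cases, assuming independence.
    s_ac = s_ab * s_bc + (1 - s_ab) * (s_c - s_b * s_bc) / (1 - s_b)
    c_ac = c_ab * c_bc  # confidence decays along the inference chain
    return s_ac, c_ac


# "Most cats are mammals" and "most mammals are warm-blooded" yield a
# strong but slightly less confident "cats are warm-blooded".
s, c = deduce(s_ab=0.9, s_bc=0.9, s_b=0.3, s_c=0.4)
print(round(s, 3), round(c, 3))  # prints: 0.829 0.81
```

The key point the sketch captures is that conclusions carry graded truth rather than true/false values, and that certainty erodes as inferences are chained—the behavior PLN is designed to manage systematically.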
OpenCog is a project that aims to build an open source artificial intelligence framework. OpenCog Prime is an architecture for robot and virtual embodied cognition that defines a set of interacting components designed to give rise to human-equivalent artificial general intelligence (AGI) as an emergent phenomenon of the whole system. OpenCog Prime's design is primarily the work of Ben Goertzel while the OpenCog framework is intended as a generic framework for broad-based AGI research. Research utilizing OpenCog has been published in journals and presented at conferences and workshops including the annual Conference on Artificial General Intelligence. OpenCog is released under the terms of the GNU Affero General Public License.
In futurology, a singleton is a hypothetical world order in which there is a single decision-making agency at the highest level, capable of exerting effective control over its domain, and permanently preventing both internal and external threats to its supremacy. The term was first defined by Nick Bostrom.
The Conference on Artificial General Intelligence is a meeting of researchers in the field of artificial general intelligence organized by the AGI Society and steered by Marcus Hutter and Ben Goertzel. It has been held annually since 2008. The conference was initiated by the 2006 Bethesda Artificial General Intelligence Workshop and has been hosted at the University of Memphis; Arlington, Virginia; Lugano, Switzerland; Google headquarters in Mountain View, California; the University of Oxford, United Kingdom; Peking University in Beijing, China; and Quebec City, Canada. The AGI-23 conference was held in Stockholm, Sweden.
The Singularity is a 2012 documentary film about the technological singularity, produced and directed by Doug Wolens. The film has been called "a large-scale achievement in its documentation of futurist and counter-futurist ideas".
Existential risk from AI refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.
Shane Legg is a machine learning researcher and entrepreneur. With Demis Hassabis and Mustafa Suleyman, he cofounded DeepMind Technologies, and works there as the chief AGI scientist. He is also known for his academic work on artificial general intelligence, including his thesis supervised by Marcus Hutter.
Murray Patrick Shanahan is a professor of Cognitive Robotics at Imperial College London, in the Department of Computing, and a senior scientist at DeepMind. He researches artificial intelligence, robotics, and cognitive science.
Sophia is a female social humanoid robot developed in 2016 by the Hong Kong–based company Hanson Robotics. Sophia was activated on February 14, 2016, and made her first public appearance in mid-March 2016 at South by Southwest (SXSW) in Austin, Texas, United States. Sophia was marketed as a "social robot" who can mimic social behavior and induce feelings of love in humans.
Some scholars believe that advances in artificial intelligence, or AI, will eventually lead to a post-scarcity, post-work economy in which intelligent machines can outperform humans in almost every, if not every, domain. The questions of what such a world might look like, and whether specific scenarios constitute utopias or dystopias, are the subject of active debate.