Ben Goertzel

Born: 8 December 1966
Occupation: CEO and founder of SingularityNET

Ben Goertzel (born 8 December 1966) is a computer scientist, artificial intelligence researcher, and businessman. He helped popularize the term "artificial general intelligence". [1][non-primary source needed]

Early life and education

Three of Goertzel's Jewish great-grandparents emigrated to New York from Lithuania and Poland. [2] His father is Ted Goertzel, a former professor of sociology at Rutgers University. [3] Goertzel left high school after the tenth grade to attend Bard College at Simon's Rock, where he earned a bachelor's degree in Quantitative Studies. [4] In 1990, at age 23, he received a PhD in mathematics from Temple University under the supervision of Avi Lin. [5]

Career

Sophia the Robot, Chief Humanoid, Hanson Robotics & SingularityNET, and Ben Goertzel, Chief Scientist, Hanson Robotics & SingularityNET, at a press conference during the opening day of Web Summit 2017 at Altice Arena in Lisbon, 7 November 2017.

Goertzel is the founder and CEO of SingularityNET, a project created to distribute artificial intelligence data via blockchains. [6] He is a leading developer of the OpenCog framework for artificial general intelligence. [7][non-primary source needed]

Sophia the Robot

Goertzel was the Chief Scientist of Hanson Robotics, the company that created the Sophia robot. [8] As of 2018, Sophia's architecture includes scripting software, a chat system, and OpenCog, an AI system designed for general reasoning. [9] Experts in the field have largely treated the project as a PR stunt, saying that Hanson's claims that Sophia was "basically alive" were "grossly misleading" because the project does not involve AI technology. [10] Meta's chief AI scientist, Yann LeCun, called the project "complete bullshit". [11]

Views on AI

Ben Goertzel at Brain Bar

In May 2007, Goertzel spoke at a Google tech talk about his approach to creating artificial general intelligence. [12] He defines intelligence as the ability to detect patterns in the world and in the agent itself, measurable in terms of emergent behavior of "achieving complex goals in complex environments". [13] A "baby-like" artificial intelligence is initialized, then trained as an agent in a simulated or virtual world such as Second Life [14] to produce a more powerful intelligence. [15] Knowledge is represented in a network whose nodes and links carry probabilistic truth values as well as "attention values", with the attention values resembling the weights in a neural network. Several algorithms operate on this network, the central one being a combination of a probabilistic inference engine and a custom version of evolutionary programming. [16]
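
The knowledge store he describes is essentially a weighted hypergraph of "atoms" that carry both probabilistic truth values and attention values. The following is a minimal, illustrative sketch of such a structure in Python; the names (Atom, TruthValue, attention) are simplified stand-ins and not OpenCog's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class TruthValue:
    strength: float      # probability-like estimate that the atom holds
    confidence: float    # amount of evidence behind that estimate

@dataclass
class Atom:
    name: str
    tv: TruthValue
    attention: float = 0.0                           # "attention value", akin to a neural-net weight
    outgoing: list = field(default_factory=list)     # links reference other atoms

# A tiny knowledge network: a link asserting that "cat" inherits from "animal".
cat = Atom("cat", TruthValue(0.99, 0.9))
animal = Atom("animal", TruthValue(0.99, 0.9))
link = Atom("Inheritance(cat, animal)", TruthValue(0.95, 0.8),
            attention=0.7, outgoing=[cat, animal])

# Processes would preferentially act on atoms whose attention is currently high.
focus = [a for a in (cat, animal, link) if a.attention > 0.5]
print([a.name for a in focus])   # ['Inheritance(cat, animal)']
```

In this picture, the inference and learning algorithms operate preferentially on the atoms whose attention values are currently highest, much as activation concentrates on strongly weighted units in a neural network.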

The 2012 documentary The Singularity by independent filmmaker Doug Wolens discussed Goertzel's views on AGI. [17] [18]

In 2023, Goertzel postulated that artificial intelligence could replace up to 80 percent of human jobs in the coming years "without having an AGI, by my guess. Not with ChatGPT exactly as a product. But with systems of that nature".[citation needed] At Web Summit 2023 in Rio de Janeiro, Goertzel spoke out against efforts to curb AI research and said that AGI is only a few years away. He believes AGI will be a net positive for humanity, helping to address societal problems such as climate change. [19] [20]

Bibliography

Related Research Articles

The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences for human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of self-improvement cycles, with each successive and more intelligent generation appearing more and more rapidly, causing a rapid increase ("explosion") in intelligence which would ultimately result in a powerful superintelligence, qualitatively far surpassing all human intelligence.
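
As a toy numerical illustration of this feedback loop (not taken from Good's paper), suppose each generation of AI is a fixed factor more intelligent than the last and designs its successor in time inversely proportional to its own intelligence; the gaps between generations then shrink geometrically:

```python
# Toy model only: intelligence doubles each generation, and a generation with
# intelligence I takes 1/I "years" to design its successor.
intelligence, elapsed = 1.0, 0.0
for generation in range(1, 11):
    elapsed += 1.0 / intelligence        # smarter systems improve themselves faster
    intelligence *= 2.0                  # assumed per-generation improvement factor
    print(f"gen {generation:2d}: intelligence {intelligence:7.1f} at year {elapsed:.3f}")
# Elapsed time converges toward 2.0 even as intelligence grows without bound,
# capturing the idea of successive generations appearing more and more rapidly.
```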

Friendly artificial intelligence is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity, or at least align with human interests or contribute to fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensure that it is adequately constrained.

Singularitarianism is a movement defined by the belief that a technological singularity—the creation of superintelligence—will likely happen in the medium future, and that deliberate action ought to be taken to ensure that the singularity benefits humans.

Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. AGI is considered one of the definitions of strong AI.

A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI's work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

AI takeover: Hypothetical outcome of artificial intelligence

An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species, which relies on human intelligence. Stories of AI takeovers have been popular throughout science fiction, but recent advancements have made the threat more real. Possible scenarios include replacement of the entire human workforce due to automation, takeover by a superintelligent AI (ASI), and the notion of a robot uprising. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

Bill Hibbard is a scientist at the University of Wisconsin–Madison Space Science and Engineering Center working on visualization and machine intelligence. He is principal author of the Vis5D, Cave5D, and VisAD open-source visualization systems. Vis5D was the first system to produce fully interactive animated 3D displays of time-dynamic volumetric data sets and the first open-source 3D visualization system.

An artificial brain is software and hardware with cognitive abilities similar to those of the animal or human brain.

The following outline is provided as an overview of and topical guide to artificial intelligence.

A probabilistic logic network (PLN) is a conceptual, mathematical and computational approach to uncertain inference. It was inspired by logic programming and it uses probabilities in place of crisp (true/false) truth values, and fractional uncertainty in place of crisp known/unknown values. In order to carry out effective reasoning in real-world circumstances, artificial intelligence software handles uncertainty. Previous approaches to uncertain inference do not have the breadth of scope required to provide an integrated treatment of the disparate forms of cognitively critical uncertainty as they manifest themselves within the various forms of pragmatic inference. Going beyond prior probabilistic approaches to uncertain inference, PLN encompasses uncertain logic with such ideas as induction, abduction, analogy, fuzziness and speculation, and reasoning about time and causality.
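
For example, one commonly cited form of PLN's deduction rule estimates, under an independence assumption, the strength of an inheritance relation A→C from the strengths of A→B and B→C together with the term probabilities of B and C. The sketch below is illustrative only and omits PLN's handling of confidence values:

```python
def pln_deduction_strength(s_ab: float, s_bc: float, s_b: float, s_c: float) -> float:
    """Illustrative PLN-style deduction under an independence assumption:
    estimate P(C|A) from P(B|A), P(C|B) and the term probabilities P(B), P(C)."""
    if s_b >= 1.0:                 # degenerate case: B is certain
        return s_c
    return s_ab * s_bc + (1 - s_ab) * (s_c - s_b * s_bc) / (1 - s_b)

# "Most cats are mammals" and "most mammals are warm-blooded" yield a
# probabilistic estimate for "cats are warm-blooded".
print(round(pln_deduction_strength(s_ab=0.95, s_bc=0.98, s_b=0.4, s_c=0.45), 3))  # ≈ 0.936
```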

OpenCog: Project for an open source artificial intelligence framework

OpenCog is a project that aims to build an open source artificial intelligence framework. OpenCog Prime is an architecture for robot and virtual embodied cognition that defines a set of interacting components designed to give rise to human-equivalent artificial general intelligence (AGI) as an emergent phenomenon of the whole system. OpenCog Prime's design is primarily the work of Ben Goertzel while the OpenCog framework is intended as a generic framework for broad-based AGI research. Research utilizing OpenCog has been published in journals and presented at conferences and workshops including the annual Conference on Artificial General Intelligence. OpenCog is released under the terms of the GNU Affero General Public License.

In futurology, a singleton is a hypothetical world order in which there is a single decision-making agency at the highest level, capable of exerting effective control over its domain, and permanently preventing both internal and external threats to its supremacy. The term was first defined by Nick Bostrom.

Conference on Artificial General Intelligence: Annual meeting of researchers of Artificial General Intelligence

The Conference on Artificial General Intelligence is a meeting of researchers in the field of artificial general intelligence organized by the AGI Society and steered by Marcus Hutter and Ben Goertzel. It has been held annually since 2008. The conference was initiated by the 2006 Bethesda Artificial General Intelligence Workshop and has been hosted at the University of Memphis; Arlington, Virginia; Lugano, Switzerland; Google headquarters in Mountain View, California; the University of Oxford, United Kingdom; Peking University in Beijing, China; and Quebec City, Canada. The AGI-23 conference was held in Stockholm, Sweden.

The Singularity (2012 film)

The Singularity is a 2012 documentary film about the technological singularity, produced and directed by Doug Wolens. The film has been called "a large-scale achievement in its documentation of futurist and counter-futurist ideas".

Existential risk from AI refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.

Shane Legg is a machine learning researcher and entrepreneur. With Demis Hassabis and Mustafa Suleyman, he cofounded DeepMind Technologies, and works there as the chief AGI scientist. He is also known for his academic work on artificial general intelligence, including his thesis supervised by Marcus Hutter.

Murray Patrick Shanahan is a professor of Cognitive Robotics at Imperial College London, in the Department of Computing, and a senior scientist at DeepMind. He researches artificial intelligence, robotics, and cognitive science.

Sophia (robot): Social humanoid robot

Sophia is a female social humanoid robot developed in 2016 by the Hong Kong–based company Hanson Robotics. Sophia was activated on February 14, 2016, and made her first public appearance in mid-March 2016 at South by Southwest (SXSW) in Austin, Texas, United States. Sophia was marketed as a "social robot" who can mimic social behavior and induce feelings of love in humans.

AI aftermath scenarios: Overview of AI's possible effects on the human state

Some scholars believe that advances in artificial intelligence, or AI, will eventually lead to a semi-apocalyptic post-scarcity and post-work economy where intelligent machines can outperform humans in almost every, if not every, domain. The questions of what such a world might look like, and whether specific scenarios constitute utopias or dystopias, are the subject of active debate.

References

  1. "Who coined the term "AGI"? » goertzel.org". Archived from the original on 28 December 2018. Retrieved 28 December 2018., via Life 3.0: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'
  2. Ben Goertzel: Artificial General Intelligence | AI Podcast #103 with Lex Fridman, YouTube, 22 June 2020
  3. Paulos, John Allen (5 November 1995). "Pauling's Prizes". The New York Times. Retrieved 23 September 2024.
  4. Goertzel, Benjamin (1985). Nonclassical Arithmetics and Calculi. Simon's Rock of Bard College.
  5. Ben Goertzel at the Mathematics Genealogy Project
  6. Popper, Nathaniel (20 October 2018). "How the Blockchain Could Break Big Tech's Hold on A.I." The New York Times. Retrieved 28 May 2020.
  7. "Background Publications - OpenCog". wiki.opencog.org. Retrieved 22 April 2022.
  8. Vincent, James (10 November 2017). "Sophia the robot's co-creator says the bot may not be true AI, but it is a work of art". The Verge. Retrieved 26 November 2023.
  9. Urbi, Jaden; Sigalos, MacKenzie (5 June 2018). "The complicated truth about Sophia the robot — an almost human robot or a PR stunt". CNBC. Archived from the original on 12 May 2020. Retrieved 17 May 2020.
  10. Vincent, James (10 November 2017). "Sophia the robot's co-creator says the bot may not be true AI, but it is a work of art". The Verge. Retrieved 16 January 2024.
  11. Ghosh, Shona (4 January 2018). "Facebook's AI boss described Sophia the robot as 'complete b------t' and 'Wizard-of-Oz AI'". Business Insider. Retrieved 16 January 2024.
  12. Goertzel, Ben (30 May 2007). "Artificial General Intelligence: Now Is the Time". GoogleTalks Archive. Retrieved 31 December 2021 via YouTube.
  13. Roberts, Jacob (2016). "Thinking Machines: The Search for Artificial Intelligence". Distillations. 2 (2): 14–23. Archived from the original on 19 August 2018. Retrieved 22 March 2018.
  14. "Online worlds to be AI incubators", BBC News, 13 September 2007
  15. "Virtual worlds making artificial intelligence apps 'smarter'" Archived 21 October 2007 at the Wayback Machine , Computerworld , 13 September 2007
  16. "Patterns, Hypergraphs and Embodied General Intelligence", Ben Goertzel, WCCI Panel Discussion: "A Roadmap to Human-Level Intelligence" [ permanent dead link ], July 2006
  17. "The Singularity: A Documentary by Doug Wolens". Ieet.org. Archived from the original on 21 October 2013. Retrieved 22 October 2013.
  18. "Pondering Our Cyborg Future in a Documentary About the Singularity – Kasia Cieplak-Mayr von Baldegg". The Atlantic. 8 January 2013. Archived from the original on 21 October 2013. Retrieved 22 October 2013.
  19. "AI could probably make 80% of jobs obsolete: AI guru Ben Goertzel's revelation". businesstoday.in. 11 May 2023. Retrieved 26 November 2023.
  20. "A scientist says the Singularity will happen by 2031". popularmechanics.com. 9 November 2023. Retrieved 26 November 2023.