Subject-matter expert Turing test

A subject-matter expert Turing test is a variation of the Turing test in which a computer system attempts to replicate an expert in a given field, such as chemistry or marketing. It is also known as a Feigenbaum test[1] and was proposed by Edward Feigenbaum in a 2003 paper.[2]
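
To make the setup concrete, here is a minimal sketch of how such a domain-restricted test could be scored. The machine, human_expert, and judge callables are hypothetical placeholders for illustration, not part of Feigenbaum's proposal.

```python
import random

def feigenbaum_test(machine, human_expert, judge, questions):
    """Fraction of rounds in which a domain-expert judge mistakes the
    machine's answer for the one written by a human specialist."""
    fooled = 0
    for question in questions:
        answers = [("machine", machine(question)),
                   ("human", human_expert(question))]
        random.shuffle(answers)  # present the two answers unlabeled
        # The judge returns the index (0 or 1) of the answer believed human.
        picked = judge(question, [text for _, text in answers])
        if answers[picked][0] == "machine":
            fooled += 1
    return fooled / len(questions)
```

Under this scoring, a machine could be said to pass when the fooling rate approaches 0.5, meaning the specialist judges can do no better than chance.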

The concept is also described by Ray Kurzweil in his 2005 book The Singularity Is Near. Kurzweil argues that machines that pass this test are an inevitable consequence of Moore's Law.[3]
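
As a back-of-the-envelope illustration of the argument, exponential hardware growth closes seemingly enormous capability gaps in a modest number of doublings. The two-year doubling period and millionfold target below are illustrative assumptions, not Kurzweil's exact figures.

```python
import math

def doublings_needed(factor):
    """Number of capacity doublings needed to grow by `factor`."""
    return math.ceil(math.log2(factor))

# A millionfold increase in compute takes only 20 doublings; at one
# doubling every two years, that is roughly 40 years.
print(doublings_needed(1_000_000))  # -> 20
```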

Notes

  1. McCorduck (2004, pp. 503–505)
  2. Feigenbaum 2003
  3. Kurzweil 2005

Related Research Articles

Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals. Leading AI textbooks define the field as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term "artificial intelligence" is often used to describe machines that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving".
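
As a toy illustration of the agent definition, the two-cell "vacuum world" reflex agent familiar from AI textbooks perceives its location and whether it is dirty, then acts accordingly (the world model and action names here are simplified for illustration):

```python
class ReflexVacuumAgent:
    """Toy agent for a two-cell world: suck if the cell is dirty,
    otherwise move to the other cell."""

    def act(self, percept):
        location, dirty = percept  # e.g. ("A", True)
        if dirty:
            return "Suck"
        return "Right" if location == "A" else "Left"

agent = ReflexVacuumAgent()
print(agent.act(("A", True)))   # -> Suck
print(agent.act(("B", False)))  # -> Left
```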

Alan Turing: English mathematician and computer scientist

Alan Mathison Turing was an English mathematician, computer scientist, logician, cryptanalyst, philosopher, and theoretical biologist. Turing was highly influential in the development of theoretical computer science, providing a formalisation of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer. Turing is widely considered to be the father of theoretical computer science and artificial intelligence. Despite these accomplishments, he was never fully recognised in his home country during his lifetime due to the prevalence of homophobia at the time and because much of his work was covered by the Official Secrets Act.

The Chinese room argument holds that a digital computer executing a program cannot be shown to have a "mind", "understanding" or "consciousness", regardless of how intelligently or human-like the program may make the computer behave. The argument was first presented by philosopher John Searle in his paper, "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. It has been widely discussed in the years since. The centerpiece of the argument is a thought experiment known as the Chinese room.

Ray Kurzweil: American author, scientist, inventor, and futurist

Raymond Kurzweil is an American inventor and futurist. He is involved in fields such as optical character recognition (OCR), text-to-speech synthesis, speech recognition technology, and electronic keyboard instruments. He has written books on health, artificial intelligence (AI), transhumanism, the technological singularity, and futurism. Kurzweil is a public advocate for the futurist and transhumanist movements and gives public talks to share his optimistic outlook on life extension technologies and the future of nanotechnology, robotics, and biotechnology.

The technological singularity—also, simply, the singularity—is a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. According to the most popular version of the singularity hypothesis, called intelligence explosion, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.
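
A toy numerical model can illustrate the runaway dynamic: if each generation improves itself in proportion to its own capability, the growth multiplier itself keeps growing, so progress is super-exponential. All parameters below are arbitrary illustrative assumptions.

```python
def intelligence_explosion(level=1.0, gain=0.1, generations=20):
    """Iterate level <- level * (1 + gain * level): each generation's
    improvement factor grows with its current capability."""
    history = [level]
    for _ in range(generations):
        level *= 1 + gain * level
        history.append(level)
    return history

run = intelligence_explosion()
print(run[:3])   # -> approx [1.0, 1.1, 1.221] -- a tame start
print(run[-1])   # astronomically large after only 20 generations
```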

Edward Feigenbaum: American computer scientist

Edward Albert "Ed" Feigenbaum is a computer scientist working in the field of artificial intelligence, and joint winner of the 1994 ACM Turing Award. He is often called the "father of expert systems."

Neat and scruffy are labels for two different types of artificial intelligence (AI) research. Neats hold that solutions should be elegant, clear, and provably correct. Scruffies believe that intelligence is too complicated to be solved by the sorts of homogeneous systems that such neat requirements usually mandate.

Artificial general intelligence (AGI) is the hypothetical intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI, full AI, or general intelligent action. Some academic sources reserve the term "strong AI" for machines that can experience consciousness. Today's AI is speculated to be decades away from AGI.

The Singularity Is Near: 2005 book by Raymond Kurzweil

The Singularity Is Near: When Humans Transcend Biology is a 2005 non-fiction book about artificial intelligence and the future of humanity by inventor and futurist Ray Kurzweil.

The Age of Intelligent Machines: Non-fiction book by Raymond Kurzweil

The Age of Intelligent Machines is a non-fiction book about artificial intelligence by inventor and futurist Ray Kurzweil. This was his first book and the Association of American Publishers named it the Most Outstanding Computer Science Book of 1990. It was reviewed in The New York Times and The Christian Science Monitor. The format is a combination of monograph and anthology with contributed essays by artificial intelligence experts such as Daniel Dennett, Douglas Hofstadter, and Marvin Minsky.

The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

Artificial intelligence has close connections with philosophy because the two fields share many concepts, including intelligence, action, consciousness, epistemology, and even free will. Furthermore, the technology is concerned with the creation of artificial animals or artificial people, so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence. Some scholars argue that the AI community's dismissal of philosophy is detrimental.

In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research. The term was coined by analogy to the idea of a nuclear winter. The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or decades later.

Hubert Dreyfus's views on artificial intelligence

Hubert Dreyfus has been a critic of artificial intelligence research since the 1960s. In a series of papers and books, including Alchemy and AI (1965), What Computers Can't Do and Mind over Machine (1986), he presented a pessimistic assessment of AI's progress and a critique of the philosophical foundations of the field. Dreyfus' objections are discussed in most introductions to the philosophy of artificial intelligence, including Russell & Norvig (2003), the standard AI textbook, and in Fearn (2007), a survey of contemporary philosophy.

This is a timeline of artificial intelligence.

Progress in artificial intelligence: emergence and growth of artificial intelligence applications

Artificial intelligence applications have been used in a wide range of fields including medical diagnosis, stock trading, robot control, law, scientific discovery and toys. However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore." "Many thousands of AI applications are deeply embedded in the infrastructure of every industry." In the late 1990s and early 21st century, AI technology became widely used as elements of larger systems, but the field is rarely credited for these successes.

Turing test: Test of a machine's ability to exhibit intelligent behaviour equivalent to that of a human

The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test results do not depend on the machine's ability to give correct answers to questions, only how closely its answers resemble those a human would give.
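
A minimal sketch of the pass criterion, assuming the evaluator makes one machine-or-human identification per trial; the interrogate callable, which stands in for a full text-only conversation, is a hypothetical placeholder.

```python
import random

def run_imitation_game(interrogate, trials=100):
    """Randomly assign the machine to channel 0 or 1 each trial, let the
    evaluator converse over text and guess the machine's channel, and
    report the evaluator's identification accuracy."""
    correct = 0
    for _ in range(trials):
        machine_channel = random.randint(0, 1)
        guess = interrogate(machine_channel)  # evaluator's guess after chatting
        correct += guess == machine_channel
    return correct / trials

# The machine is said to pass when accuracy stays near chance (0.5):
# the evaluator cannot reliably tell the machine from the human.
```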

The United States government's Strategic Computing Initiative funded research into advanced computer hardware and artificial intelligence from 1983 to 1993. The initiative was designed to support various projects that were required to develop machine intelligence in a prescribed ten-year time frame, from chip design and manufacture, computer architecture to artificial intelligence software. The Department of Defense spent a total of $1 billion on the project.

Artificial intelligence researchers have created many tools to solve the most difficult problems in computer science. Many of their inventions have been adopted by mainstream computer science and are no longer considered a part of AI. According to Russell & Norvig, all of the following were originally developed in AI laboratories: time sharing, interactive interpreters, graphical user interfaces and the computer mouse, rapid application development environments, the linked list data structure, automatic storage management, symbolic programming, functional programming, dynamic programming and object-oriented programming.

References

Feigenbaum, Edward A. (2003). "Some challenges and grand challenges for computational intelligence". Journal of the ACM. 50 (1): 32–40.
Kurzweil, Ray (2005). The Singularity Is Near: When Humans Transcend Biology. New York: Viking.
McCorduck, Pamela (2004). Machines Who Think (2nd ed.). Natick, MA: A. K. Peters.