The Stochastic Neural Analog Reinforcement Calculator (SNARC) was a neural-net machine designed by Marvin Lee Minsky.[1][2] Prompted by a letter from Minsky, George Armitage Miller gathered the funding for the project from the Air Force Office of Scientific Research in the summer of 1951, with the work to be carried out by Minsky, who was then a graduate student in mathematics at Princeton University. Dean S. Edmonds,[3] a physics graduate student at Princeton at the time, volunteered that he was good with electronics, so Minsky brought him onto the project.[4]
During his undergraduate years, Minsky had been inspired by the 1943 Warren McCulloch and Walter Pitts paper on artificial neurons, and decided to build such a machine. The learning was Skinnerian reinforcement learning, and Minsky talked with B. F. Skinner extensively during the development of the machine. They tested the machine on a copy of Shannon's maze and found that it could learn to solve it. Unlike Shannon's maze, this machine did not control a physical robot; instead, it simulated rats running in a maze. The simulation was displayed as an "arrangement of lights", and the circuit was reinforced each time a simulated rat reached the goal.
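The reinforcement scheme can be illustrated with a minimal Python sketch. The 3×3 grid, step limit, and reinforcement increment below are hypothetical stand-ins chosen for brevity, not parameters of the actual machine:

```python
import random

# Hypothetical sketch of SNARC-style reinforcement: each (cell, move)
# pair holds a propagation probability, and every move along a
# successful run is nudged upward, like the machine's reward circuit.

GOAL = (2, 2)
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

# Probability "knobs", one per (cell, move) pair, all starting at 0.5.
knobs = {((x, y), m): 0.5 for x in range(3) for y in range(3) for m in MOVES}

def run_trial():
    """One simulated rat run; returns the (cell, move) pairs it used."""
    cell, path = (0, 0), []
    for _ in range(50):                          # give up after 50 steps
        move = random.choices(MOVES, weights=[knobs[(cell, m)] for m in MOVES])[0]
        nxt = (cell[0] + move[0], cell[1] + move[1])
        if 0 <= nxt[0] < 3 and 0 <= nxt[1] < 3:  # stay inside the 3x3 maze
            path.append((cell, move))
            cell = nxt
        if cell == GOAL:
            return path
    return None

for _ in range(200):
    path = run_trial()
    if path:                     # "reward button": reinforce the run's moves
        for step in path:
            knobs[step] = min(1.0, knobs[step] + 0.05)
```

Over repeated trials, moves that lie on rewarded runs accumulate higher probabilities, so later simulated rats tend to retrace earlier successful paths, loosely mirroring the behaviour Minsky described.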
The machine surprised its creators. "The rats actually interacted with one another. If one of them found a good path, the others would tend to follow it." [4]
The machine itself was a randomly connected network of approximately 40 Hebb synapses. Each synapse held a memory: the probability that a signal arriving at its input would be passed on to its output. A knob ranging from 0 to 1 displayed this propagation probability. When a signal did get through, a capacitor remembered the event and engaged an electromagnetic clutch. The operator could then press a button to reward the machine, activating a motor taken from a surplus Minneapolis-Honeywell C-1 gyroscopic autopilot from a B-24 bomber.[5] The motor turned a chain running to all 40 synapses, advancing the knob of each synapse whose clutch was engaged. Because a capacitor could "remember" only for a limited time, the chain updated only the probabilities of the most recently active synapses.
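A rough software analogue of a single synapse follows; the knob step, capacitor decay rate, and clutch threshold are illustrative values, not documented measurements:

```python
import random

# Hypothetical model of one SNARC synapse: a probability knob, a
# capacitor that "remembers" recent firing, and a clutch the reward
# chain checks. All constants are illustrative.

class Synapse:
    def __init__(self):
        self.probability = 0.5   # knob position, from 0 to 1
        self.charge = 0.0        # capacitor voltage standing in for memory

    def propagate(self, signal):
        """Pass an input signal on with the knob's probability."""
        fired = signal and random.random() < self.probability
        if fired:
            self.charge = 1.0    # capacitor records the recent firing
        return fired

    def decay(self):
        """The capacitor leaks, so only recent firings survive to reward time."""
        self.charge *= 0.8

def reward(synapses, step=0.05):
    """The motor-driven chain: advance the knob wherever the clutch is engaged."""
    for s in synapses:
        if s.charge > 0.3:       # clutch engaged only while charge remains
            s.probability = min(1.0, s.probability + step)

network = [Synapse() for _ in range(40)]
for s in network:
    s.propagate(random.random() < 0.5)   # random input activity
reward(network)                          # operator presses the reward button
```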
Each neuron contained six vacuum tubes and a motor. The entire machine was "the size of a grand piano" and contained 300 vacuum tubes. The tubes failed regularly, but the machine continued to work despite the failures.[4]
This machine is considered one of the first pioneering attempts at artificial intelligence. Minsky went on to become a founding member of MIT's Project MAC, which split into the MIT Laboratory for Computer Science and the MIT Artificial Intelligence Laboratory; the two later merged to form today's MIT Computer Science and Artificial Intelligence Laboratory. In 1985, Minsky became a founding member of the MIT Media Laboratory.
According to Minsky, he loaned the machine to students at Dartmouth, and it was subsequently lost, except for a single neuron.[6] A photograph of this last surviving neuron shows six vacuum tubes, one of which is a Sylvania JAN-CHS-6H6GT/G/VT-90A.
Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs.
Marvin Lee Minsky was an American cognitive and computer scientist concerned largely with research of artificial intelligence (AI). He co-founded the Massachusetts Institute of Technology's AI laboratory and wrote several texts concerning AI and philosophy.
Mind uploading is a speculative process of whole brain emulation in which a brain scan is used to completely emulate the mental state of the individual in a digital computer. The computer would then run a simulation of the brain's information processing, such that it would respond in essentially the same way as the original brain and experience having a sentient conscious mind.
In artificial intelligence, symbolic artificial intelligence is the term for the collection of all methods in artificial intelligence research that are based on high-level symbolic (human-readable) representations of problems, logic and search. Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems, symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems. The Symbolic AI paradigm led to seminal ideas in search, symbolic programming languages, agents, multi-agent systems, the semantic web, and the strengths and limitations of formal knowledge and reasoning systems.
Bio-inspired computing, short for biologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates to connectionism, social behavior, and emergence. Within computer science, bio-inspired computing relates to artificial intelligence and machine learning. Bio-inspired computing is a major subset of natural computation.
In the history of artificial intelligence, neat and scruffy are two contrasting approaches to artificial intelligence (AI) research. The distinction was made in the 1970s and was a subject of discussion until the mid-1980s.
Computer Science and Artificial Intelligence Laboratory (CSAIL) is a research institute at the Massachusetts Institute of Technology (MIT) formed by the 2003 merger of the Laboratory for Computer Science (LCS) and the Artificial Intelligence Laboratory. Housed within the Ray and Maria Stata Center, CSAIL is the largest on-campus laboratory as measured by research scope and membership. It is part of the Schwarzman College of Computing but is also overseen by the MIT Vice President of Research.
Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human capabilities across a wide range of cognitive tasks. This is in contrast to narrow AI, which is designed for specific tasks. AGI is considered one of various definitions of strong AI.
A physical symbol system takes physical patterns (symbols), combines them into structures (expressions), and manipulates them to produce new expressions.
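As a toy illustration in Python (not Newell and Simon's original formulation), symbols can be modeled as strings, expressions as nested tuples, and manipulation as a rewrite rule that produces new expressions:

```python
# Toy physical symbol system: combine symbols into expressions and
# manipulate expressions to produce new ones. The double-negation
# rewrite rule is an arbitrary example.

def combine(*parts):
    """Combine symbols or expressions into a structure (an expression)."""
    return tuple(parts)

def manipulate(expr):
    """Produce a new expression by rewriting ('NOT', ('NOT', x)) to x."""
    if isinstance(expr, tuple):
        if (len(expr) == 2 and expr[0] == "NOT"
                and isinstance(expr[1], tuple) and expr[1][:1] == ("NOT",)):
            return manipulate(expr[1][1])
        return tuple(manipulate(part) for part in expr)
    return expr

expression = combine("NOT", combine("NOT", "p"))
print(manipulate(expression))   # -> 'p'
```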
The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The seeds of modern AI were planted by philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.
The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, the technology is concerned with the creation of artificial animals or artificial people, so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence.
In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research. The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or even decades later.
The following outline is provided as an overview of and topical guide to artificial intelligence:
Hubert Dreyfus was a critic of artificial intelligence research. In a series of papers and books, including Alchemy and AI (1965), What Computers Can't Do (1972) and Mind over Machine (1986), he presented a pessimistic assessment of AI's progress and a critique of the philosophical foundations of the field. Dreyfus' objections are discussed in most introductions to the philosophy of artificial intelligence, including Russell & Norvig (2021), a standard AI textbook, and in Fearn (2007), a survey of contemporary philosophy.
Hierarchical temporal memory (HTM) is a biologically constrained machine intelligence technology developed by Numenta. Originally described in the 2004 book On Intelligence by Jeff Hawkins with Sandra Blakeslee, HTM is primarily used today for anomaly detection in streaming data. The technology is based on neuroscience and the physiology and interaction of pyramidal neurons in the neocortex of the mammalian brain.
This is a timeline of artificial intelligence, sometimes alternatively called synthetic intelligence.
Neurorobotics is the combined study of neuroscience, robotics, and artificial intelligence. It is the science and technology of embodied autonomous neural systems. Neural systems include brain-inspired algorithms, computational models of biological neural networks, and actual biological systems. Such neural systems can be embodied in machines with mechanical or any other form of physical actuation. This includes robots and prosthetic or wearable systems, but also, at smaller scales, micro-machines and, at larger scales, furniture and infrastructure.
Nathaniel Rochester was the chief architect of the IBM 701, the first mass-produced scientific computer, and of the prototype of its first commercial version, the IBM 702. He wrote the first assembler and participated in the founding of the field of artificial intelligence.
Perceptrons: An Introduction to Computational Geometry is a book written by Marvin Minsky and Seymour Papert and published in 1969. An edition with handwritten corrections and additions was released in the early 1970s. An expanded edition was published in 1988 (ISBN 9780262631112), after the revival of neural networks, containing a chapter countering the criticisms made of the book in the 1980s.
An artificial neural network's learning rule or learning process is a method, mathematical logic, or algorithm that improves the network's performance and/or training time. Usually, the rule is applied repeatedly over the network, updating its weights and bias levels as the network is simulated in a specific data environment. A learning rule may accept the existing conditions of the network and compare its expected and actual results to produce new and improved values for the weights and biases. Depending on the complexity of the model being simulated, the learning rule can be as simple as an XOR gate or mean squared error, or as complex as the result of a system of differential equations.
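A minimal concrete instance of such a rule is the classic perceptron update, sketched here in Python; the AND training set, learning rate, and epoch count are arbitrary illustrative choices:

```python
# Perceptron learning rule: compare expected and actual output, then
# adjust the weights and the bias by the error. Here it learns logical AND.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                    # apply the rule repeatedly
    for (x1, x2), expected in data:
        actual = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = expected - actual      # expected result vs. actual result
        w[0] += lr * error * x1        # new, improved weight values
        w[1] += lr * error * x2
        b += lr * error                # bias level updated the same way

print(w, b)   # converged weights and bias separating AND's outputs
```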