Frank Rosenblatt

From Wikipedia, The Free Encyclopedia
Frank Rosenblatt
Born: July 11, 1928
Died: July 11, 1971 (aged 43)
Known for: Perceptron
Academic background
Alma mater: Cornell University
Thesis: The k-Coefficient: Design and Trial Application of a New Technique for Multivariate Analysis (1956)
Influences: Walter Pitts, Warren Sturgis McCulloch, Donald O. Hebb, Friedrich Hayek, Karl Lashley

Frank Rosenblatt (July 11, 1928 – July 11, 1971) was an American psychologist notable in the field of artificial intelligence. He is sometimes called the father of deep learning[1] for his pioneering work on artificial neural networks.

Life and career

Rosenblatt was born into a Jewish family in New Rochelle, New York, as the son of Dr. Frank and Katherine Rosenblatt.[2]

After graduating from The Bronx High School of Science in 1946, he attended Cornell University, where he obtained his A.B. in 1950 and his Ph.D. in 1956.[3]

For his PhD thesis he built a custom-made computer, the Electronic Profile Analyzing Computer (EPAC), to perform multidimensional analysis for psychometrics, and used it between 1951 and 1953 to analyze the data collected for the thesis. The data came from a paid, 600-item survey of more than 200 Cornell undergraduates. The total computational cost was 2.5 million arithmetic operations, necessitating the use of an IBM CPC as well.[4] EPAC was said to perform 15 minutes' worth of data processing in just 2 seconds.[5]:32

He subsequently moved to Cornell Aeronautical Laboratory in Buffalo, New York, where he was successively a research psychologist, senior psychologist, and head of the cognitive systems section. It was there that he also conducted the early work on perceptrons, which culminated in the development and hardware construction in 1960 of the Mark I Perceptron,[2] essentially the first computer that could learn new skills by trial and error, using a type of neural network that simulates human thought processes.

Rosenblatt's research interests were exceptionally broad. In 1959 he went to Cornell's Ithaca campus as director of the Cognitive Systems Research Program and lecturer in the Psychology Department. In 1966 he joined the Section of Neurobiology and Behavior within the newly formed Division of Biological Sciences, as associate professor.[2] Also in 1966, he became fascinated with the transfer of learned behavior from trained to naive rats by the injection of brain extracts, a subject on which he would publish extensively in later years.[3]

In 1970 he became field representative for the Graduate Field of Neurobiology and Behavior, and in 1971 he shared the acting chairmanship of the Section of Neurobiology and Behavior. Frank Rosenblatt died in July 1971, on his 43rd birthday, in a boating accident in Chesapeake Bay.[3] He was eulogized on the floor of the House of Representatives by, among others, former Senator Eugene McCarthy.[4]

Academic interests

Perceptron

Rosenblatt is best known for the Perceptron, an electronic device which was constructed in accordance with biological principles and showed an ability to learn. Rosenblatt's perceptrons were initially simulated on an IBM 704 computer at Cornell Aeronautical Laboratory in 1957.[6] When a triangle was held before the perceptron's eye, it would pick up the image and convey it along a random succession of lines to the response units, where the image was registered.[7]

He developed and extended this approach in numerous papers and a book called Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, published by Spartan Books in 1962.[8] He received international recognition for the Perceptron. The New York Times billed it as a revolution, with the headline "New Navy Device Learns By Doing",[9] and The New Yorker similarly admired the technological advancement.[7]

[Figure: An elementary Rosenblatt perceptron. A-units are linear threshold elements with fixed input weights; the R-unit is also a linear threshold element, but one that can learn according to Rosenblatt's learning rule. Redrawn from Rosenblatt's original book.]

Rosenblatt proved four main theorems. The first states that an elementary perceptron can solve any classification problem if there are no discrepancies in the training set (and there are sufficiently many independent A-elements). The fourth states that the learning algorithm converges whenever the given realisation of the elementary perceptron can solve the problem.
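
To make this concrete, here is a minimal Python sketch of an elementary perceptron: a layer of A-units with fixed random weights feeding a single trainable R-unit. All names and the toy task are invented for this illustration, and identity units are included alongside the random A-units purely so the chosen target is guaranteed to be linearly separable at the A-layer; under that assumption the fourth theorem guarantees the error-correction procedure converges.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_random = 4, 16

# Fixed, random retina-to-A-unit connections (never trained), as in the
# elementary perceptron.  Identity units are added for this demo only,
# so the toy target below is certainly separable at the A-layer.
A_w = rng.choice([-1, 0, 1], size=(n_random, n_inputs))
A_theta = rng.integers(0, 2, size=n_random)

def a_layer(x):
    """A-units: linear threshold elements with fixed weights."""
    random_units = (A_w @ x > A_theta).astype(int)
    return np.concatenate([x, random_units])

def train_r_unit(X, y, epochs=200, eta=1.0):
    """R-unit trained with the error-correction (perceptron) rule."""
    w, b = np.zeros(n_inputs + n_random), 0.0
    for _ in range(epochs):
        mistakes = 0
        for x, t in zip(X, y):
            a = a_layer(x)
            out = int(w @ a + b > 0)
            if out != t:                 # update only on misclassification
                w += eta * (t - out) * a
                b += eta * (t - out)
                mistakes += 1
        if mistakes == 0:                # training set fully separated
            return w, b, True
    return w, b, False

# Toy task: classify 4-bit patterns by their first bit (linearly separable).
X = np.array([[int(c) for c in f"{i:04b}"] for i in range(16)])
y = X[:, 0]
w, b, converged = train_r_unit(X, y)
print("converged:", converged)           # True: the convergence theorem in action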

Research on comparable devices was also being conducted at other places, such as SRI, and many researchers had high expectations of what such devices could do. The initial excitement faded, however, when Marvin Minsky and Seymour Papert published the book Perceptrons in 1969. Minsky and Papert considered elementary perceptrons with restrictions on the neural inputs: a bounded number of connections, or a relatively small diameter of the A-units' receptive fields. They proved that under these constraints an elementary perceptron cannot solve some problems, such as the connectivity of input images or the parity of the pixels in them. Thus, Rosenblatt proved the universality of unrestricted elementary perceptrons, whereas Minsky and Papert demonstrated that the abilities of restricted perceptrons are limited. These results are not contradictory, but the Minsky and Papert book was widely (and wrongly) cited as a proof of strong limitations of perceptrons. (For a detailed elementary discussion of Rosenblatt's first theorem and its relation to Minsky and Papert's work, see a recent note.[10])
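
Parity is the easiest such problem to see: on two pixels it is the XOR function. The brute-force check below is an illustrative sketch (not from the book): it searches a grid of weights and biases and finds no single linear threshold unit that reproduces XOR; in fact, no such unit exists for any weights.

```python
from itertools import product

# Two-pixel parity is XOR; no single linear threshold unit computes it.
patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 0]

grid = [v / 2 for v in range(-8, 9)]      # weights and bias in {-4.0, ..., 4.0}
found = any(
    all(int(w1 * x1 + w2 * x2 + b > 0) == t
        for (x1, x2), t in zip(patterns, targets))
    for w1, w2, b in product(grid, repeat=3)
)
print("XOR linearly separable on this grid:", found)   # False
```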

After research on neural networks returned to the mainstream in the 1980s, new researchers began to study Rosenblatt's work again. Some researchers interpret this new wave of neural network research as contradicting hypotheses presented in the book Perceptrons and as confirming Rosenblatt's expectations.

The Mark I Perceptron, which is generally recognized as a forerunner to artificial intelligence, currently resides in the Smithsonian Institution in Washington D.C.[3] The Mark I was able to learn, recognize letters, and solve quite complex problems.

Principles of Neurodynamics (1962)

The neuron model employed is a direct descendant of that originally proposed by McCulloch and Pitts. The basic philosophical approach has been heavily influenced by the theories of Hebb and Hayek and the experimental findings of Lashley. The probabilistic approach is shared with theorists such as Ashby, Uttley, Minsky, MacKay, and von Neumann.

Frank Rosenblatt, Principles of Neurodynamics, p. 5

Rosenblatt's book Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, published by Spartan Books in 1962, summarized his work on perceptrons at the time.[11] The book had previously been issued as unclassified report No. 1196-G-8 on March 15, 1961, through the Defense Technical Information Center.[12]

The book is divided into four parts. The first gives an historical review of alternative approaches to brain modeling, the physiological and psychological considerations, and the basic definitions and concepts of the perceptron approach. The second covers three-layer series-coupled perceptrons: the mathematical underpinnings, performance results in psychological experiments, and a variety of perceptron variations. The third covers multi-layer and cross-coupled perceptrons, and the fourth back-coupled perceptrons and problems for future study.

Rosenblatt used the book to teach an interdisciplinary course entitled "Theory of Brain Mechanisms" that drew students from Cornell's Engineering and Liberal Arts colleges.

Rat brain experiments

In the late 1960s, inspired by James V. McConnell's experiments with memory transfer in planarians, Rosenblatt began experiments within the Cornell Department of Entomology on the transfer of learned behavior via rat brain extracts. Rats were taught discrimination tasks, such as Y-maze and two-lever Skinner-box tasks. Their brains were then extracted, and the extracts and their antibodies were injected into untrained rats, which were subsequently tested on the discrimination tasks to determine whether there was behavior transfer from the trained to the untrained rats.[13] Rosenblatt spent his last several years on this problem and showed convincingly that the initial reports of large effects were wrong and that any memory transfer was at most very small.[3]

Other interests

Astronomy

Rosenblatt also had a serious research interest in astronomy and proposed a new technique to detect the presence of stellar satellites.[14] He built an observatory on a hilltop behind his house in Brooktondale, about 6 miles east of Ithaca. When construction of the observatory was completed, Rosenblatt began an intensive study of SETI (the Search for Extraterrestrial Intelligence).[3] He also studied photometry and developed a technique for "detecting low-level laser signals against a relatively intense background of non-coherent light".[13]

Politics

Rosenblatt was very active in liberal politics. He worked in the Eugene McCarthy primary campaigns for president in New Hampshire and California in 1968 and in a series of Vietnam protest activities in Washington.[15]

IEEE Frank Rosenblatt Award

The Institute of Electrical and Electronics Engineers (IEEE), the world's largest professional association dedicated to advancing technological innovation and excellence for the benefit of humanity, annually presents the IEEE Frank Rosenblatt Award.

Related Research Articles

Marvin Minsky: American cognitive scientist (1927–2016)

Marvin Lee Minsky was an American cognitive and computer scientist concerned largely with research of artificial intelligence (AI). He co-founded the Massachusetts Institute of Technology's AI laboratory and wrote several texts concerning AI and philosophy.

Neural network (machine learning): Computational model used in machine learning, based on connected, hierarchical functions

In machine learning, a neural network is a model inspired by the structure and function of biological neural networks in animal brains.

Seymour Papert: MIT mathematician, computer scientist, and educator

Seymour Aubrey Papert was a South African-born American mathematician, computer scientist, and educator, who spent most of his career teaching and researching at MIT. He was one of the pioneers of artificial intelligence, and of the constructionist movement in education. He was co-inventor, with Wally Feurzeig and Cynthia Solomon, of the Logo programming language.

In machine learning, the perceptron is an algorithm for supervised learning of binary classifiers. A binary classifier is a function which can decide whether or not an input, represented by a vector of numbers, belongs to some specific class. It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector.
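
In symbols, the decision rule described above can be written as follows (standard notation, not specific to this article):

```latex
f(\mathbf{x}) =
  \begin{cases}
    1 & \text{if } \mathbf{w} \cdot \mathbf{x} + b > 0,\\
    0 & \text{otherwise,}
  \end{cases}
```

where \mathbf{w} is the weight vector, \mathbf{x} the feature vector, and b a bias (threshold) term.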

Connectionism: Cognitive science approach

Connectionism is an approach to the study of human mental processes and cognition that utilizes mathematical models known as connectionist networks or artificial neural networks.

Bio-inspired computing, short for biologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates to connectionism, social behavior, and emergence. Within computer science, bio-inspired computing relates to artificial intelligence and machine learning. Bio-inspired computing is a major subset of natural computation.

A K-line, or Knowledge-line, is a mental agent which represents an association of a group of other mental agents found active when a subject solves a certain problem or formulates a new idea. These were first described in Marvin Minsky's essay K-lines: A Theory of Memory, published in 1980 in the journal Cognitive Science:

When you "get an idea," or "solve a problem" ... you create what we shall call a K-line. ... When that K-line is later "activated", it reactivates ... mental agencies, creating a partial mental state "resembling the original."

"Whenever you 'get a good idea', solve a problem, or have a memorable experience, you activate a K-line to 'represent' it. A K-line is a wirelike structure that attaches itself to whichever mental agents are active when you solve a problem or have a good idea.

When you activate that K-line later, the agents attached to it are aroused, putting you into a 'mental state' much like the one you were in when you solved that problem or got that idea. This should make it relatively easy for you to solve new, similar problems!"

Feedforward neural network: Type of artificial neural network

A feedforward neural network (FNN) is one of the two broad types of artificial neural network, characterized by direction of the flow of information between its layers. Its flow is uni-directional, meaning that the information in the model flows in only one direction—forward—from the input nodes, through the hidden nodes and to the output nodes, without any cycles or loops. Modern feedforward networks are trained using backpropagation, and are colloquially referred to as "vanilla" neural networks.

Neural network (biology): Structure in nervous systems

A neural network, also called a neuronal network, is an interconnected population of neurons. Biological neural networks are studied to understand the organization and functioning of nervous systems.

In deep learning, a multilayer perceptron (MLP) is a name for a modern feedforward neural network consisting of fully connected neurons with nonlinear activation functions, organized in layers, notable for being able to distinguish data that is not linearly separable.
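
As a minimal sketch of why the hidden layer matters (the weights below are set by hand for illustration; this is not a training procedure), two hidden threshold units computing OR and AND allow an output unit to realize XOR, a function no single-layer perceptron can compute:

```python
import numpy as np

step = lambda z: (z > 0).astype(int)          # threshold activation

# Hand-set weights: hidden units compute OR and AND of the inputs;
# the output unit fires for "OR but not AND", i.e. XOR.
W_h = np.array([[1.0, 1.0],                   # OR unit
                [1.0, 1.0]])                  # AND unit
b_h = np.array([-0.5, -1.5])
w_o, b_o = np.array([1.0, -2.0]), -0.5

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    h = step(W_h @ np.array(x) + b_h)         # hidden layer
    y = int(w_o @ h + b_o > 0)                # output unit
    print(x, "->", y)                         # prints the XOR truth table
```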

Stephen Grossberg: American scientist (born 1939)

Stephen Grossberg is a cognitive scientist, theoretical and computational psychologist, neuroscientist, mathematician, biomedical engineer, and neuromorphic technologist. He is the Wang Professor of Cognitive and Neural Systems and a Professor Emeritus of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering at Boston University.

History of artificial intelligence

The history of artificial intelligence (AI) began in antiquity, with myths, stories, and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The study of logic and formal reasoning from antiquity to the present led directly to the invention of the programmable digital computer in the 1940s, a machine based on abstract mathematical reasoning. This device and the ideas behind it inspired scientists to begin discussing the possibility of building an electronic brain.

In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research. The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or even decades later.

Perceptrons (book): Book by Marvin Minsky and Seymour Papert

Perceptrons: An Introduction to Computational Geometry is a book written by Marvin Minsky and Seymour Papert and published in 1969. An edition with handwritten corrections and additions was released in the early 1970s. An expanded edition was further published in 1988 (ISBN 9780262631112) after the revival of neural networks, containing a chapter dedicated to counter the criticisms made of it in the 1980s.

The IEEE Frank Rosenblatt Award is a Technical Field Award established by the Institute of Electrical and Electronics Engineers Board of Directors in 2004. This award is presented for outstanding contributions to the advancement of the design, practice, techniques, or theory in biologically and linguistically motivated computational paradigms, including neural networks, connectionist systems, evolutionary computation, fuzzy systems, and hybrid intelligent systems in which these paradigms are contained.

An artificial neural network's learning rule or learning process is a method, mathematical logic or algorithm which improves the network's performance and/or training time. Usually, this rule is applied repeatedly over the network. It is done by updating the weight and bias levels of a network when it is simulated in a specific data environment. A learning rule may accept existing conditions of the network, and will compare the expected result and actual result of the network to give new and improved values for the weights and biases. Depending on the complexity of the model being simulated, the learning rule of the network can be as simple as an XOR gate or mean squared error, or as complex as the result of a system of differential equations.
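
For the perceptron case discussed elsewhere in this article, the error-correction update takes the standard form:

```latex
\mathbf{w} \leftarrow \mathbf{w} + \eta\,(t - y)\,\mathbf{x},
\qquad
b \leftarrow b + \eta\,(t - y)
```

where t is the target output, y the actual output, \eta the learning rate, and \mathbf{x} the input vector; the weights change only when expected and actual output disagree.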

Cynthia Solomon: Computer scientist

Cynthia Solomon is an American computer scientist known for her work in popularizing computer science for students. She is an innovator in the fields of computer science and educational computing. While working as a researcher at Massachusetts Institute of Technology (MIT), Solomon took it upon herself to understand and program in the programming language Lisp. As she began learning this language, she realized the need for a programming language that was more accessible and understandable for children.

Artificial neural networks (ANNs) are models created using machine learning to perform a number of tasks. Their creation was inspired by biological neural circuitry. While some of the computational implementations of ANNs relate to earlier discoveries in mathematics, the first implementation of ANNs was by psychologist Frank Rosenblatt, who developed the perceptron. Little research was conducted on ANNs in the 1970s and 1980s, with the AAAI calling this period an "AI winter".

A neural network is a group of interconnected units called neurons that send signals to one another. Neurons can be either biological cells or mathematical models. While individual neurons are simple, many of them together in a network can perform complex tasks. There are two main types of neural network: biological neural networks and artificial neural networks.

References

  1. Tappert, Charles C. (2019). "Who is the Father of Deep Learning?". 2019 International Conference on Computational Science and Computational Intelligence (CSCI). IEEE. pp. 343–348. doi:10.1109/CSCI49370.2019.00067. ISBN 978-1-7281-5584-5. S2CID 216043128. Retrieved 31 May 2021.
  2. Carey, Hugh L. (1971). "Tribute to Dr. Frank Rosenblatt" (PDF). Congressional Record: Proceedings and Debates of the 92d Congress, First Session. US Government Printing Office. pp. 1–7. Archived from the original (PDF) on 26 February 2014. Retrieved 24 Dec 2021.
  3. Emlen, Stephen T.; Howland, Howard C.; O'Brien, Richard D. "Frank Rosenblatt, July 11, 1928 – July 11, 1971" (PDF). Cornell University. Retrieved 24 Dec 2021.
  4. Penn, Jonathan (2021-01-11). Inventing Intelligence: On the History of Complex Information Processing and Artificial Intelligence in the United States in the Mid-Twentieth Century (Thesis). doi:10.17863/cam.63087.
  5. "Editor Miscellany". American Scientist 42, no. 1 (January 1954): 32.
  6. "Hyping Artificial Intelligence, Yet Again". newyorker.com. 31 December 2013.
  7. Mason, Harding; Stewart, D.; Gill, Brendan (28 November 1958). "Rival". The New Yorker.
  8. First issued as a military report on 15 March 1961, as Report No. 1196-G-8.
  9. "New Navy Device Learns By Doing". The New York Times. 8 July 1958.
  10. Kirdin A, Sidorov S, Zolotykh N (2022). "Rosenblatt's First Theorem and Frugality of Deep Learning". Entropy. 24 (11): 1635. doi:10.3390/e24111635. PMC 9689667. PMID 36359726.
  11. Rosenblatt, F. (1962). Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Washington, DC: Spartan Books.
  12. Defense Technical Information Center (1961-03-15). DTIC AD0256582: Principles of Neurodynamics. Perceptrons and the Theory of Brain Mechanisms.
  13. Rosenblatt, Frank; Cornell University, Ithaca, NY (1971). Cognitive Systems Research Program. Technical Report 72, Cornell University.
  14. "Frank Rosenblatt, July 11, 1928 – July 11, 1971" (PDF). dspace.library.cornell.edu.
  15. "Frank Rosenblatt, July 11, 1928 – July 11, 1971" (PDF). dspace.library.cornell.edu.