Gail Carpenter

Gail Alexandra Carpenter
Born: 1948 (age 75–76), New York City, New York, USA
Citizenship: American
Alma mater: University of Wisconsin–Madison; University of Colorado Boulder
Known for: Adaptive resonance theory (ART), neural network models and applications
Spouse: Stephen Grossberg (m. 1979)
Children: 1
Awards: IEEE Neural Networks Pioneer Award (2008)
Scientific career
Fields: Mathematics, Neuroscience
Institutions: Boston University, Northeastern University, MIT
Thesis: Traveling wave solutions of nerve impulse equations
Academic advisor: Charles C. Conley

Gail Alexandra Carpenter (born 1948) is an American cognitive scientist, neuroscientist and mathematician. She is Professor Emerita of Mathematics and Statistics at Boston University. [1] She was previously a Professor of Cognitive and Neural Systems at Boston University, and director of the Department of Cognitive and Neural Systems (CNS) Technology Lab there. [2]


Early life

Gail Carpenter is the only daughter of Chadwick Hunter "Chad" Carpenter (1920-1996) and Ruth M. (née Stevenson) Carpenter (1920-2010). She has four brothers. [3] [4]

Carpenter attended the International School of Geneva (1961–1966), then the University of Colorado Boulder, earning a B.A. in mathematics in 1970 (summa cum laude). She went on to earn a Ph.D. in mathematics at the University of Wisconsin–Madison, and taught at MIT and Northeastern University before moving to Boston University. [5]

Carpenter married Stephen Grossberg on June 16, 1979, in Boston University Castle in Boston, Massachusetts. [6]

Adaptive resonance theory

Carpenter's neural modeling efforts were already evident in her 1974 mathematics Ph.D. thesis, Traveling wave solutions of nerve impulse equations, completed at the University of Wisconsin Department of Mathematics under Charles C. Conley. She refined and expanded this work in a series of papers through the mid-to-late 1970s, defining generalized Hodgkin–Huxley models, using dynamical systems techniques to analyze their solutions, and characterizing the qualitative properties of the burst suppression patterns that a typical neuron may propagate, while investigating normal and abnormal signal patterns in nerve cells. [5]
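As an illustrative sketch of the kind of excitable-membrane dynamics such models describe, the following Python snippet integrates the FitzHugh–Nagumo equations, a standard two-variable simplification of the Hodgkin–Huxley model. The parameter values are conventional textbook choices, not taken from Carpenter's work:

```python
import numpy as np

def fitzhugh_nagumo(I_ext=0.5, dt=0.01, steps=30000, a=0.7, b=0.8, tau=12.5):
    """Forward-Euler integration of the FitzHugh-Nagumo equations:
    dv/dt = v - v^3/3 - w + I_ext   (fast membrane potential)
    dw/dt = (v + a - b*w) / tau     (slow recovery variable)"""
    v, w = -1.0, -0.5
    trace = np.empty(steps)
    for t in range(steps):
        # Update both variables from the current state (explicit Euler step).
        v, w = v + dt * (v - v**3 / 3 - w + I_ext), w + dt * (v + a - b * w) / tau
        trace[t] = v
    return trace

trace = fitzhugh_nagumo()
# With this constant drive, the membrane variable traces repeated
# spike-like excursions, the temporal analogue of a train of impulses.
```

With these parameters the system's fixed point lies on the unstable middle branch of the cubic nullcline, so the model settles into a relaxation oscillation: the qualitative, phase-plane style of analysis applied to such equations.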

Figure: Distributed ART model (dART). Gail A. Carpenter, 1996. [7]

Adaptive resonance theory (ART) is a theory developed by Stephen Grossberg and Gail Carpenter on aspects of how the brain processes information. It describes a number of neural network models which use supervised and unsupervised learning methods, and address problems such as pattern recognition and prediction. [8]

The primary intuition behind the ART model is that object identification and recognition generally occur as a result of the interaction of 'top-down' observer expectations with 'bottom-up' sensory information. The model postulates that 'top-down' expectations take the form of a memory template or prototype that is then compared with the actual features of an object as detected by the senses. This comparison gives rise to a measure of category belongingness. As long as this difference between sensation and expectation does not exceed a set threshold called the 'vigilance parameter', the sensed object will be considered a member of the expected class. The system thus offers a solution to the 'plasticity/stability' problem, i.e. the problem of acquiring new knowledge without disrupting existing knowledge that is also called incremental learning. [8]
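The matching cycle described above can be sketched in a few lines of Python. This is a deliberately simplified, unsupervised ART-1-style procedure (first-fit category search rather than the full choice function); the function name and parameter values are illustrative:

```python
import numpy as np

def art1_classify(patterns, vigilance=0.6):
    """Cluster binary vectors with a simplified ART-1-style procedure.

    Each category stores a binary prototype (the 'top-down expectation').
    An input resonates with the first category whose match score
        |input AND prototype| / |input|
    reaches the vigilance threshold; learning then intersects the
    prototype with the input (fast learning). Otherwise a mismatch
    reset creates a new category."""
    prototypes, labels = [], []
    for p in patterns:
        x = np.asarray(p, dtype=bool)
        for j, w in enumerate(prototypes):
            match = np.logical_and(x, w).sum() / max(x.sum(), 1)
            if match >= vigilance:              # resonance: accept and learn
                prototypes[j] = np.logical_and(x, w)
                labels.append(j)
                break
        else:                                   # mismatch on every category
            prototypes.append(x)
            labels.append(len(prototypes) - 1)
    return labels, prototypes

patterns = [[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]]
labels, protos = art1_classify(patterns, vigilance=0.6)
# The two overlapping patterns share category 0; the third starts category 1.
```

Raising the vigilance parameter makes the match test stricter, producing many narrow categories; lowering it yields fewer, broader ones. New inputs that fail to match simply open new categories, so existing prototypes are never overwritten wholesale, which is the stability half of the plasticity/stability trade-off.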

Academic acknowledgements

Per Boston University, where Carpenter is Professor Emerita of Mathematics and Statistics, she was the first woman to receive the Institute of Electrical and Electronics Engineers (IEEE) Neural Networks Pioneer Award, in 2008. She has been elected to successive three-year terms on the Board of Governors of the International Neural Network Society (INNS) [9] since its founding in 1987, and received the INNS Gabor Award in 1999. She has also served as an elected member of the Council of the American Mathematical Society, and is a charter member of the Association for Women in Mathematics. [10]

Related Research Articles

Neural network (machine learning)

Neural networks are a class of machine learning models built on principles of neuronal organization found in the biological neural networks that constitute animal brains.

Cognitive neuroscience

Cognitive neuroscience is the scientific field that is concerned with the study of the biological processes and aspects that underlie cognition, with a specific focus on the neural connections in the brain which are involved in mental processes. It addresses the questions of how cognitive activities are affected or controlled by neural circuits in the brain. Cognitive neuroscience is a branch of both neuroscience and psychology, overlapping with disciplines such as behavioral neuroscience, cognitive psychology, physiological psychology and affective neuroscience. Cognitive neuroscience relies upon theories in cognitive science coupled with evidence from neurobiology, and computational modeling.

Unsupervised learning is a machine learning method in which, in contrast to supervised learning, algorithms learn patterns exclusively from unlabeled data. The hope is that, through mimicry (an important mode of learning in people), the machine is forced to build a concise representation of its world and can then generate imaginative content from it.

Jürgen Schmidhuber

Jürgen Schmidhuber is a German computer scientist noted for his work in the field of artificial intelligence, specifically artificial neural networks. He is a scientific director of the Dalle Molle Institute for Artificial Intelligence Research in Switzerland. He is also director of the Artificial Intelligence Initiative and professor of the Computer Science program in the Computer, Electrical, and Mathematical Sciences and Engineering (CEMSE) division at the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia.

Bart Andrew Kosko is a writer and professor of electrical engineering and law at the University of Southern California (USC). He is a researcher and popularizer of fuzzy logic, neural networks, and noise, and the author of several trade books and textbooks on these and related subjects of machine intelligence. He was awarded the 2022 Donald O. Hebb Award for neural learning by the International Neural Network Society.

Winner-take-all is a computational principle applied in computational models of neural networks by which neurons compete with each other for activation. In the classical form, only the neuron with the highest activation stays active while all other neurons shut down; other variations allow more than one neuron to remain active, for example soft winner-take-all, in which a power function is applied to the neurons' activations.
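A minimal Python sketch of the hard and soft variants described above (the function names and the particular power value are illustrative):

```python
import numpy as np

def hard_wta(activations):
    """Classical winner-take-all: only the most active unit stays on."""
    a = np.asarray(activations, dtype=float)
    out = np.zeros_like(a)
    out[a.argmax()] = a.max()       # every other unit is shut down
    return out

def soft_wta(activations, p=3.0):
    """Soft winner-take-all: raising activations to a power sharpens the
    competition while letting several units remain partially active."""
    a = np.asarray(activations, dtype=float) ** p
    return a / a.sum()              # normalize so the outputs sum to 1
```

For the input [0.2, 0.9, 0.5], hard_wta keeps only the middle unit active, while soft_wta merely sharpens the contrast between units.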

Neural gas is an artificial neural network, inspired by the self-organizing map and introduced in 1991 by Thomas Martinetz and Klaus Schulten. The neural gas is a simple algorithm for finding optimal data representations based on feature vectors. The algorithm was coined "neural gas" because of the dynamics of the feature vectors during the adaptation process, which distribute themselves like a gas within the data space. It is applied where data compression or vector quantization is an issue, for example speech recognition, image processing or pattern recognition. As a robustly converging alternative to k-means clustering, it is also used for cluster analysis.

Stephen Grossberg

Stephen Grossberg is a cognitive scientist, theoretical and computational psychologist, neuroscientist, mathematician, biomedical engineer, and neuromorphic technologist. He is the Wang Professor of Cognitive and Neural Systems and a Professor Emeritus of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering at Boston University.


Spiking neural network

Spiking neural networks (SNNs) are artificial neural networks (ANN) that more closely mimic natural neural networks. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not transmit information at each propagation cycle, but rather transmit information only when a membrane potential—an intrinsic quality of the neuron related to its membrane electrical charge—reaches a specific value, called the threshold. When the membrane potential reaches the threshold, the neuron fires, and generates a signal that travels to other neurons which, in turn, increase or decrease their potentials in response to this signal. A neuron model that fires at the moment of threshold crossing is also called a spiking neuron model.
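The threshold-and-reset behaviour described above can be illustrated with a leaky integrate-and-fire neuron, one of the simplest spiking neuron models (parameter values here are illustrative defaults, not drawn from any particular study):

```python
def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward
    rest, integrates input current, and emits a spike when it crosses the
    threshold, after which it resets."""
    v = v_rest
    spike_times = []
    for i, current in enumerate(input_current):
        v += dt * (-(v - v_rest) + current) / tau
        if v >= v_thresh:
            spike_times.append(i)   # record the spike
            v = v_rest              # reset the membrane potential
    return spike_times
```

A constant suprathreshold current yields a regular spike train; subthreshold input produces no spikes at all, illustrating that information is transmitted only at threshold crossings.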

Leonid Perlovsky

Leonid Perlovsky is an Affiliated Research Professor at Northeastern University. His research involves cognitive algorithms and modeling of evolution of languages and cultures.

Yann LeCun

Yann André LeCun is a Turing Award winning French-American computer scientist working primarily in the fields of machine learning, computer vision, mobile robotics and computational neuroscience. He is the Silver Professor of the Courant Institute of Mathematical Sciences at New York University and Vice-President, Chief AI Scientist at Meta.

Kunihiko Fukushima is a Japanese computer scientist, most noted for his work on artificial neural networks and deep learning. He is currently working part-time as a senior research scientist at the Fuzzy Logic Systems Institute in Fukuoka, Japan.

Deep learning

Deep learning is a subset of machine learning methods based on artificial neural networks (ANNs) with representation learning. The adjective "deep" refers to the use of multiple layers in the network. Methods used can be supervised, semi-supervised or unsupervised.

Motor babbling is a process of repeatedly performing a random motor command for a short duration. It is similar to the vocal babbling of infants, in which the brain learns the relation between vocal muscle activities and the resulting sounds. The general motor-control system has been found to explore itself in a similar way already in the womb, in animals. Originally, the random spasms and convulsions of the embryo were seen as non-functional consequences of growth; later it was realized that the motor system is calibrating its sensorimotor system before birth. After birth, motor babbling in primates continues in random grasping movements toward visual targets, training the hand–eye coordination system. These insights have been used since the early 1990s in models of biological movement control and in robotics, where motor babbling is a system of robot learning whereby a robotic system can autonomously develop an internal model of its self-body and its environment. Early work by Kuperstein (1991) used a robot randomly positioning a stick in its workspace while being observed by two cameras, with a neural network associating poses of the stick with joint angles of the arm. This type of research has led to the research field of developmental robotics.

DeepDream

DeepDream is a computer vision program created by Google engineer Alexander Mordvintsev that uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, thus creating a dream-like appearance reminiscent of a psychedelic experience in the deliberately overprocessed images.

Fusion adaptive resonance theory (fusion ART) generalizes the original Adaptive Resonance Theory self-organizing neural network models to learn recognition categories across multiple pattern channels. A separate stream of work on fusion ARTMAP extends fuzzy ARTMAP, which consists of two fuzzy ART modules connected by an inter-ART map field, to an architecture consisting of multiple ART modules.

In computer science, incremental learning is a method of machine learning in which input data is continuously used to extend the existing model's knowledge, i.e. to further train the model. It is a dynamic technique of supervised or unsupervised learning that can be applied when training data becomes available gradually over time or when its size exceeds system memory limits. Algorithms that facilitate this are known as incremental machine learning algorithms.
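As a toy Python illustration of the idea, a trivial "model" (here just a running mean) can be updated one sample at a time without ever storing past data. This is purely illustrative, not any specific published algorithm:

```python
def incremental_mean(stream):
    """Update a running mean one sample at a time, never storing
    past data; this is the core idea of incremental learning."""
    mean, n = 0.0, 0
    for x in stream:
        n += 1
        mean += (x - mean) / n      # fold the new sample into the model
    return mean
```

The same update can be applied to data arriving over days or to a dataset too large for memory, since each sample is seen once and then discarded.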

Thomas Martinetz is a German physicist and neuro-informatician.

Soft computing is an umbrella term for types of algorithms that produce approximate solutions to problems that are too complex or imprecise to solve exactly. Traditional hard-computing algorithms, by contrast, rely heavily on concrete data and exact mathematical models. The term was coined in the late 20th century, a period in which revolutionary research in three fields greatly shaped the area: fuzzy logic, a computational paradigm that handles uncertainty in data by using degrees of truth rather than the rigid 0s and 1s of binary logic; neural networks, computational models influenced by the functioning of the human brain; and evolutionary computation, a family of algorithms that mimic natural processes such as evolution and natural selection.

References

  1. Google Scholar
  2. https://mailman.srv.cs.cmu.edu/pipermail/connectionists/1989-December/012034.html | Wang Institute Conference, 1990 bio: "GAIL CARPENTER is Professor of Mathematics and CNS; Co-Director of the CNS Graduate Program; 1989 Vice President of the International Neural Network Society (INNS); Organization Chairman of the 1988 INNS annual meeting; Session Chairman at the 1989 and 1990 IEEE/INNS International Joint Conference on Neural Networks (IJCNN); one of four technical consultants to the national DARPA Neural Network Study; editor of the journals Neural Networks, Neural Computation, and Neural Network Review; and a member of the scientific advisory board of HNC. A leading neural architect, Carpenter is especially well-known for her seminal work on developing the adaptive resonance theory architectures (ART 1, ART 2, ART 3) for adaptive pattern recognition."
  3. https://www.ancestry.com/discoveryui-content/view/653020371:61843 | Obituary for Ruth S. Carpenter | Accessed 18 January 2022 | See also: April 29, 1947 marriage at: https://www.ancestry.com/discoveryui-content/view/2268275:61406 [user-generated source]
  4. https://www.ancestry.com/discoveryui-content/view/613063443:61843 | Ancestry.com result for Chadwick Hunter Carpenter obituary | Accessed 18 January 2022
  5. https://techlab.bu.edu/members/gail/ | CNS Technology Website | Accessed 18 January 2022
  6. Newspapers.com | The Jackson Hole Guide, 21 June 1979, Jackson, Wyoming, USA. URL: https://www.newspapers.com/image/317801978/?article=793bcdc8-8859-426f-a1ad-06890f4299c4&focus=0.036960505,0.094845355,0.2794417,0.50327265&xid=3398
  7. The archival journal article describing the Distributed ART (dART) model: Carpenter, G.A. (1997). Distributed learning, recognition, and prediction by ART and ARTMAP neural networks. Neural Networks, 10(8), 1473–1494. See Figure 1b.
  8. Carpenter, G.A., Grossberg, S., & Reynolds, J.H. (1991). ARTMAP: Supervised real-time learning and classification of nonstationary data by a self-organizing neural network (archived 2006-05-19 at the Wayback Machine). Neural Networks, 4, 565–588.
  9. https://www.inns.org/ | International Neural Network Society (INNS) | Accessed 18 January 2022
  10. "Gail Carpenter | PR Social".

Other sources

Carpenter is also cited in the following book: American Men & Women of Science: A biographical directory of today's leaders in physical, biological and related sciences. 23rd edition. Eight volumes. Detroit: Thomson Gale, 2006. (AmMWSc 23)