Stephen Grossberg

Stephen Grossberg (born December 31, 1939, in New York City) is a cognitive scientist, theoretical and computational psychologist, neuroscientist, mathematician, biomedical engineer, and neuromorphic technologist. He is the Wang Professor of Cognitive and Neural Systems and a Professor Emeritus of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering at Boston University. [1]

Career

Early life and education

Grossberg first lived in Woodside, Queens, in New York City. His father died from Hodgkin's lymphoma when he was one year old. His mother remarried when he was five years old. He then moved with his mother, stepfather, and older brother, Mitchell, to Jackson Heights, Queens. [2] He attended Stuyvesant High School in lower Manhattan after passing its competitive entrance exam. He graduated first in his class from Stuyvesant in 1957. [2]

He began undergraduate studies at Dartmouth College in 1957, where he first conceived of the paradigm of using nonlinear differential equations to describe neural networks that model brain dynamics, as well as the basic equations that many scientists use for this purpose today.[citation needed] He then continued to study both psychology and neuroscience. [3] He received a B.A. in 1961 from Dartmouth as its first joint major in mathematics and psychology.

Grossberg then went to Stanford University, where he received an M.S. in mathematics in 1964, and transferred to The Rockefeller Institute for Medical Research (now The Rockefeller University) in Manhattan. In his first year at Rockefeller, he wrote a 500-page monograph, The Theory of Embedding Fields with Applications to Psychology and Neurophysiology, summarizing his discoveries to that time. Grossberg received a PhD in mathematics from Rockefeller in 1967 for a thesis that proved the first global content addressable memory theorems about the neural learning models that he had discovered at Dartmouth. His PhD thesis advisor was Gian-Carlo Rota.

Entering academia

Grossberg was hired in 1967 as an assistant professor of applied mathematics at MIT following strong recommendations from Mark Kac and Rota. In 1969, Grossberg was promoted to associate professor after publishing a stream of conceptual and mathematical results about many aspects of neural networks, including a series of foundational articles in the Proceedings of the National Academy of Sciences between 1967 and 1971.

Grossberg was hired as a full professor at Boston University in 1975, where he is still on the faculty today. While at Boston University, he founded the Department of Cognitive and Neural Systems, several interdisciplinary research centers, and various international institutions.

Research

Grossberg is a pioneer of the fields of computational neuroscience, connectionist cognitive science, and neuromorphic technology. His work focuses on the design principles and mechanisms that enable the behavior of individuals, or machines, to adapt autonomously in real time to unexpected environmental challenges. This research has included neural models of vision and image processing; object, scene, and event learning, pattern recognition, and search; audition, speech, and language; cognitive information processing and planning; reinforcement learning and cognitive-emotional interactions; autonomous navigation; adaptive sensory-motor control and robotics; self-organizing neurodynamics; and mental disorders. Grossberg also collaborates with experimentalists to design experiments that test theoretical predictions and fill in conceptually important gaps in the experimental literature, carries out analyses of the mathematical dynamics of neural systems, and transfers biological neural models to applications in engineering and technology. He has published 18 books or journal special issues and over 560 research articles, and holds 7 patents.

Grossberg has studied how brains give rise to minds since he took the introductory psychology course as a freshman at Dartmouth College in 1957. At that time, Grossberg introduced the paradigm of using nonlinear systems of differential equations to show how brain mechanisms can give rise to behavioral functions. [4] This paradigm is helping to solve the classical mind/body problem, and is the basic mathematical formalism that is used in biological neural network research today. In particular, in 1957–1958, Grossberg discovered widely used equations for (1) short-term memory (STM), or neuronal activation (often called the Additive and Shunting models, or the Hopfield model after John Hopfield's 1984 application of the Additive model equation); (2) medium-term memory (MTM), or activity-dependent habituation (often called habituative transmitter gates, or depressing synapses after Larry Abbott's 1997 introduction of this term); and (3) long-term memory (LTM), or neuronal learning (often called gated steepest descent learning). One variant of these learning equations, called Instar Learning, was introduced by Grossberg in 1976 into Adaptive Resonance Theory and Self-Organizing Maps for the learning of adaptive filters in these models. This learning equation was also used by Kohonen in his applications of Self-Organizing Maps starting in 1984. Another variant of these learning equations, called Outstar Learning, was used by Grossberg starting in 1967 for spatial pattern learning. Outstar and Instar learning were combined by Grossberg in 1976 in a three-layer network for the learning of multi-dimensional maps from any m-dimensional input space to any n-dimensional output space. This application was called Counter-propagation by Hecht-Nielsen in 1987.
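In a common textbook notation (the symbols here are assumed for illustration, not drawn from this article), the three memory laws take the following form, where x_i is the activity of cell population i and z_ij is an adaptive weight:

```latex
% Additive model (STM): activity decays while summing gated signals and input
\frac{dx_i}{dt} = -A_i x_i + \sum_j f_j(x_j)\, z_{ji} + I_i

% Shunting model (STM): excitatory input E_i and inhibitory input F_i
% gate activity between an upper bound B_i and a lower bound -C_i
\frac{dx_i}{dt} = -A_i x_i + (B_i - x_i)\, E_i - (x_i + C_i)\, F_i

% Gated steepest descent (LTM), instar form: learning is gated by the
% sampling signal f(x_j) and tracks the presynaptic activity x_i
\frac{dz_{ij}}{dt} = f(x_j)\left[ -z_{ij} + x_i \right]
```

Outstar learning swaps the roles of gate and tracked signal: the source cell's signal f(x_i) gates learning while the weight tracks the sampled activity x_j.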

In the 1960s and 1970s, building on his 1967 Rockefeller PhD thesis, Grossberg generalized the Additive and Shunting models to a class of dynamical systems that included these models as well as non-neural biological models, and proved content addressable memory theorems for this more general class. As part of this analysis, he introduced a Liapunov functional method to help classify the limiting and oscillatory dynamics of competitive systems by keeping track of which population is winning through time. This Liapunov method led him and Michael Cohen to discover in 1981, and publish in 1982 and 1983, a Liapunov function that they used to prove that global limits exist in a class of dynamical systems with symmetric interaction coefficients, a class that includes the Additive and Shunting models. This model is often called the Cohen-Grossberg model and Liapunov function. [5] John Hopfield published the special case of the Cohen-Grossberg Liapunov function for the Additive model in 1984. In 1987, Bart Kosko adapted the Cohen-Grossberg model and Liapunov function, which proved global convergence of STM, to define an Adaptive Bidirectional Associative Memory that combines STM and LTM and also globally converges to a limit.
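A sketch of the result in standard notation (symbols assumed here for illustration): the Cohen-Grossberg theorem covers systems of the form below with symmetric coefficients c_jk = c_kj, nonnegative amplification functions a_i, and monotone nondecreasing signal functions d_k:

```latex
% Cohen-Grossberg system
\frac{dx_i}{dt} = a_i(x_i)\left[ b_i(x_i) - \sum_k c_{ik}\, d_k(x_k) \right]

% Liapunov function: V is nonincreasing along trajectories, which is the
% key step in proving convergence of every trajectory to an equilibrium set
V = -\sum_i \int_0^{x_i} b_i(\xi)\, d_i'(\xi)\, d\xi
    + \frac{1}{2} \sum_{j,k} c_{jk}\, d_j(x_j)\, d_k(x_k)
```

Choosing a_i(x_i) = 1, b_i(x_i) = -A_i x_i + I_i, and d_k(x_k) = f_k(x_k) recovers the Additive model as a special case, with c_ik playing the role of the interaction weights.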

Grossberg has introduced, and developed with his colleagues, fundamental concepts, mechanisms, models, and architectures across a wide spectrum of topics about brain and behavior. He has collaborated with over 100 PhD students and postdoctoral fellows. [6]

These models have provided unified and principled explanations of psychological and neurobiological data about processes including auditory and visual perception, attention, consciousness, cognition, cognitive-emotional interactions, and action in both typical, or normal, individuals and clinical patients. This work models how particular brain breakdowns or lesions cause behavioral symptoms of mental disorders such as Alzheimer's disease, autism, amnesia, PTSD, ADHD, visual and auditory agnosia and neglect, and slow-wave sleep.

The models have also been applied in many large-scale applications to engineering, technology, and AI. Taken together, they provide a blueprint for designing autonomous adaptive intelligent algorithms, agents, and mobile robots.

These results have been combined in a self-contained and non-technical exposition in a conversational style in Grossberg's 2021 publication Conscious Mind, Resonant Brain: How Each Brain Makes a Mind. This book won the 2022 PROSE book award in Neuroscience of the Association of American Publishers.

Models that Grossberg introduced and helped to develop include:

Career and infrastructure development

Given that there was little or no infrastructure to support the fields that he and other modeling pioneers were advancing, Grossberg founded several institutions aimed at providing interdisciplinary training, research, and publication outlets in the fields of computational neuroscience, connectionist cognitive science, and neuromorphic technology. In 1981, he founded the Center for Adaptive Systems at Boston University and remains its director. In 1991, he founded the Department of Cognitive and Neural Systems at Boston University and served as its chairman until 2007. In 2004, he founded the NSF Center of Excellence for Learning in Education, Science, and Technology (CELEST) [7] and served as its director until 2009. [8]

All of these institutions were aimed at answering two related questions: i) How does the brain control behavior? ii) How can technology emulate biological intelligence?

In 1987, Grossberg founded and was first President of the International Neural Network Society (INNS), which grew to 3700 members from 49 states of the United States and 38 countries during the fourteen months of his presidency. The formation of INNS soon led to the formation of the European Neural Network Society (ENNS) and the Japanese Neural Network Society (JNNS). Grossberg also founded the INNS official journal, [9] and was its Editor-in-Chief from 1987 to 2010. [10] Neural Networks is also the archival journal of ENNS and JNNS.

Grossberg's lecture series at MIT Lincoln Laboratory triggered the national DARPA Neural Network Study in 1987–88, which led to heightened government interest in neural network research. He was General Chairman of the first IEEE International Conference on Neural Networks (ICNN) in 1987 and played a key role in organizing the first INNS annual meeting in 1988; the fusion of these meetings in 1989 created the International Joint Conference on Neural Networks (IJCNN), which remains the largest annual meeting devoted to neural network research. Grossberg also organized and chaired the annual International Conference on Cognitive and Neural Systems (ICCNS) from 1997 to 2013, as well as many other conferences in the neural networks field. [11]

Grossberg has served on the editorial boards of 30 journals, including Journal of Cognitive Neuroscience, Behavioral and Brain Sciences, Cognitive Brain Research, Cognitive Science, Neural Computation, IEEE Transactions on Neural Networks, IEEE Expert, and the International Journal of Humanoid Robotics.

Awards

Awards granted to Grossberg:


Memberships:

ART theory

With Gail Carpenter, Grossberg developed the adaptive resonance theory (ART). ART is a cognitive and neural theory of how the brain can quickly learn, and stably remember and recognize, objects and events in a changing world. ART proposed a solution to the stability-plasticity dilemma; namely, how a brain or machine can learn quickly about new objects and events without just as quickly being forced to forget previously learned, but still useful, memories.

ART predicts how learned top-down expectations focus attention on expected combinations of features, leading to a synchronous resonance that can drive fast learning. ART also predicts how large enough mismatches between bottom-up feature patterns and top-down expectations can drive a memory search, or hypothesis testing, for recognition categories with which to better learn to classify the world. ART thus defines a type of self-organizing production system.

ART was practically demonstrated through the ART family of classifiers (e.g., ART 1, ART 2, ART 2A, ART 3, ARTMAP, fuzzy ARTMAP, ART eMAP, distributed ARTMAP), developed with Gail Carpenter, which has been used in large-scale applications in engineering and technology where fast, yet stable, incrementally learned classification and prediction are needed.
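The search cycle described above, choice of a candidate category, a vigilance-gated match test, and either resonance with fast learning or reset and further search, can be sketched in a few dozen lines. The following minimal ART 1-style classifier for binary inputs is illustrative only; the class name, parameter names, and defaults are this sketch's assumptions, not Carpenter and Grossberg's published code:

```python
import numpy as np

class ART1:
    """Minimal ART 1 sketch: binary inputs, fast learning, vigilance-gated search."""

    def __init__(self, vigilance=0.75, choice=0.001):
        self.rho = vigilance   # vigilance: minimum acceptable match ratio
        self.beta = choice     # small choice parameter; breaks ties toward larger prototypes
        self.weights = []      # one binary prototype vector per committed category

    def train(self, pattern):
        """Present one binary pattern; return the index of the category that learns it."""
        pattern = np.asarray(pattern, dtype=float)
        # Bottom-up choice: rank committed categories by T_j = |I ^ w_j| / (beta + |w_j|)
        scores = [np.minimum(pattern, w).sum() / (self.beta + w.sum())
                  for w in self.weights]
        # Memory search: test categories in order of choice score against vigilance
        for j in np.argsort(scores)[::-1]:
            overlap = np.minimum(pattern, self.weights[j])  # componentwise AND
            if overlap.sum() / pattern.sum() >= self.rho:   # match (vigilance) test
                self.weights[j] = overlap                   # fast learning: w <- I ^ w
                return j
        # Every committed category was reset: recruit an uncommitted node
        self.weights.append(pattern.copy())
        return len(self.weights) - 1
```

Raising the vigilance parameter toward 1 forces finer, more specific categories; lowering it yields coarser ones. This single parameter is how ART-family classifiers trade off generalization against discrimination without destabilizing earlier learning.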

Grossberg has predicted that "all conscious states are resonant states". He has shown how ART ensures learning without catastrophic forgetting via attentional matching between bottom-up feature patterns and learned top-down expectations, which leads to a resonant state that persists long enough to drive learning between attended critical feature patterns and active recognition categories. Such a resonant state can also lead to conscious awareness when it includes feature-selective cells that represent qualia. In this way, Grossberg has used ART to explain many mind and brain data about how humans consciously see, hear, feel, and know things about their unique changing worlds, while using these conscious representations to plan and act to realize valued goals.

New computational paradigms

Grossberg has introduced and led the development of two computational paradigms that are relevant to biological intelligence and its applications:

Complementary Computing

What is the nature of brain specialization? Many scientists have proposed that our brains possess independent modules, as in a digital computer. The brain's organization into distinct anatomical areas and processing streams shows that brain processing is indeed specialized. However, independent modules should be able to fully compute their particular processes on their own. Much behavioral data argue against this possibility.

Complementary Computing (Grossberg, 2000, [14] 2012 [15] ) concerns the discovery that pairs of parallel cortical processing streams compute complementary properties in the brain. Each stream has complementary computational strengths and weaknesses, much as in physical principles like the Heisenberg Uncertainty Principle. Each cortical stream can also possess multiple processing stages. These stages realize a hierarchical resolution of uncertainty. "Uncertainty" here means that computing one set of properties at a given stage prevents computation of a complementary set of properties at that stage.

Complementary Computing proposes that the computational unit of brain processing that has behavioral significance consists of parallel interactions between complementary cortical processing streams with multiple processing stages to compute complete information about a particular type of biological intelligence.

Laminar Computing

The cerebral cortex, the seat of higher intelligence in all modalities, is organized into layered circuits (often six main layers) that undergo characteristic bottom-up, top-down, and horizontal interactions. How do specializations of this shared laminar design embody different types of biological intelligence, including vision, speech and language, and cognition? Laminar Computing proposes how this can happen (Grossberg, 1999, [16] 2012 [15] ).

Laminar Computing explains how the laminar design of neocortex may realize the best properties of feedforward and feedback processing, digital and analog processing, and bottom-up data-driven processing and top-down attentive hypothesis-driven processing. Embodying such designs into VLSI chips promises to enable the development of increasingly general-purpose adaptive autonomous algorithms for multiple applications.

See also

Related Research Articles

Cognitive science

Cognitive science is the interdisciplinary, scientific study of the mind and its processes. It examines the nature, the tasks, and the functions of cognition. Mental faculties of concern to cognitive scientists include language, perception, memory, attention, reasoning, and emotion; to understand these faculties, cognitive scientists borrow from fields such as linguistics, psychology, artificial intelligence, philosophy, neuroscience, and anthropology. The typical analysis of cognitive science spans many levels of organization, from learning and decision to logic and planning; from neural circuitry to modular brain organization. One of the fundamental concepts of cognitive science is that "thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures."

Cognitive neuroscience

Cognitive neuroscience is the scientific field that is concerned with the study of the biological processes and aspects that underlie cognition, with a specific focus on the neural connections in the brain which are involved in mental processes. It addresses the questions of how cognitive activities are affected or controlled by neural circuits in the brain. Cognitive neuroscience is a branch of both neuroscience and psychology, overlapping with disciplines such as behavioral neuroscience, cognitive psychology, physiological psychology and affective neuroscience. Cognitive neuroscience relies upon theories in cognitive science coupled with evidence from neurobiology, and computational modeling.

Computational neuroscience is a branch of neuroscience which employs mathematics, computer science, theoretical analysis and abstractions of the brain to understand the principles that govern the development, structure, physiology and cognitive abilities of the nervous system.

Bio-inspired computing, short for biologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates to connectionism, social behavior, and emergence. Within computer science, bio-inspired computing relates to artificial intelligence and machine learning. Bio-inspired computing is a major subset of natural computation.

Neuromorphic computing is an approach to computing that is inspired by the structure and function of the human brain. A neuromorphic computer/chip is any device that uses physical artificial neurons to do computations. In recent times, the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems. Recent advances have even discovered ways to mimic the human nervous system through liquid solutions of chemical systems.

Terrence Joseph Sejnowski is the Francis Crick Professor at the Salk Institute for Biological Studies where he directs the Computational Neurobiology Laboratory and is the director of the Crick-Jacobs center for theoretical and computational biology. He has performed pioneering research in neural networks and computational neuroscience.

A cognitive architecture refers to both a theory about the structure of the human mind and to a computational instantiation of such a theory used in the fields of artificial intelligence (AI) and computational cognitive science. These formalized models can be used to further refine comprehensive theories of cognition and serve as the frameworks for useful artificial intelligence programs. Successful cognitive architectures include ACT-R and SOAR. The research on cognitive architectures as software instantiation of cognitive theories was initiated by Allen Newell in 1990.

Neural network (biology)

A neural network, also called a neuronal network, is an interconnected population of neurons. Biological neural networks are studied to understand the organization and functioning of nervous systems.

Neurophilosophy or the philosophy of neuroscience is the interdisciplinary study of neuroscience and philosophy that explores the relevance of neuroscientific studies to the arguments traditionally categorized as philosophy of mind. The philosophy of neuroscience attempts to clarify neuroscientific methods and results using the conceptual rigor and methods of philosophy of science.

Adaptive resonance theory (ART) is a theory developed by Stephen Grossberg and Gail Carpenter on aspects of how the brain processes information. It describes a number of artificial neural network models which use supervised and unsupervised learning methods, and address problems such as pattern recognition and prediction.

Neuroinformatics is the emergent field that combines informatics and neuroscience. Neuroinformatics is concerned with neuroscience data and with information processing by artificial neural networks. There are three main directions in which neuroinformatics is applied:

Neural computation is the information processing performed by networks of neurons. Neural computation is affiliated with the philosophical tradition known as the computational theory of mind, also referred to as computationalism, which advances the thesis that neural computation explains cognition. The first to propose an account of neural activity as computational were Warren McCulloch and Walter Pitts in their seminal 1943 paper, A Logical Calculus of the Ideas Immanent in Nervous Activity.

Echo state network

An echo state network (ESN) is a type of reservoir computer that uses a recurrent neural network with a sparsely connected hidden layer. The connectivity and weights of hidden neurons are fixed and randomly assigned. The weights of output neurons can be learned so that the network can produce or reproduce specific temporal patterns. The main interest of this network is that although its behavior is non-linear, the only weights modified during training are for the synapses that connect the hidden neurons to output neurons. Thus, the error function is quadratic with respect to the parameter vector, and minimizing it reduces to solving a linear system.

Spiking neural network

Spiking neural networks (SNNs) are artificial neural networks (ANN) that more closely mimic natural neural networks. These models leverage timing of discrete spikes as the main information carrier.

Gail Alexandra Carpenter is an American cognitive scientist, neuroscientist, and mathematician. She is now Professor Emerita of Mathematics and Statistics at Boston University. She was also a Professor of Cognitive and Neural Systems at Boston University and the director of the Department of Cognitive and Neural Systems (CNS) Technology Lab at Boston University.

Massimiliano Versace

Massimiliano Versace is the co-founder and CEO of Neurala Inc., a Boston-based company building artificial intelligence that emulates brain function in software, used to automate visual inspection in manufacturing. He is also the founding director of the Boston University Neuromorphics Lab. A Fulbright scholar, Versace holds two PhDs: one in Experimental Psychology from the University of Trieste, Italy, and one in Cognitive and Neural Systems from Boston University, USA. He obtained his BSc from the University of Trieste.

A Bayesian Confidence Propagation Neural Network (BCPNN) is an artificial neural network inspired by Bayes' theorem, which regards neural computation and processing as probabilistic inference. Neural unit activations represent probability ("confidence") in the presence of input features or categories, synaptic weights are based on estimated correlations and the spread of activation corresponds to calculating posterior probabilities. It was originally proposed by Anders Lansner and Örjan Ekeberg at KTH Royal Institute of Technology. This probabilistic neural network model can also be run in generative mode to produce spontaneous activations and temporal sequences.

Fusion adaptive resonance theory (fusion ART) is a generalization of the self-organizing neural networks known as the original Adaptive Resonance Theory models for learning recognition categories across multiple pattern channels. A separate stream of work on fusion ARTMAP extends fuzzy ARTMAP from an architecture of two fuzzy ART modules connected by an inter-ART map field to one consisting of multiple ART modules.

In computer science, incremental learning is a method of machine learning in which input data is continuously used to extend the existing model's knowledge, i.e., to further train the model. It represents a dynamic technique of supervised and unsupervised learning that can be applied when training data becomes available gradually over time or when its size exceeds system memory limits. Algorithms that can facilitate incremental learning are known as incremental machine learning algorithms.

Amir Hussain (cognitive scientist)

Amir Hussain is a cognitive scientist and the director of the Cognitive Big Data and Cybersecurity (CogBID) Research Lab at Edinburgh Napier University, where he is a professor of computing science. He is founding Editor-in-Chief of Springer Nature's Cognitive Computation journal and the new Big Data Analytics journal. He is founding Editor-in-Chief of two Springer book series, Socio-Affective Computing and Cognitive Computation Trends, and also serves on the editorial boards of a number of other leading journals, including as Associate Editor for the IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Systems, Man, and Cybernetics: Systems, and the IEEE Computational Intelligence Magazine.

References

  1. Faculty page at Boston University Archived 8 May 2012 at the Wayback Machine
  2. "Neuroscientist Steve Grossberg, Recipient of the Lifetime Achievement Award of the Society of Experimental Psychologists | The Brink". Boston University. Retrieved 13 December 2019.
  3. Grossberg Interests
  4. Towards building a neural networks community
  5. Cohen-Grossberg theorem
  6. Grossberg's PhD students and postdocs
  7. CELEST at Boston University
  8. "$36.5 Million for Three Centers to Explore How Humans, Animals, and Machines Learn", National Science Foundation, cited at Newswise, September 30, 2004
  9. Neural Networks journal Archived 22 June 2006 at the Wayback Machine
  10. "Elsevier Announces New Co-Editor-In-Chief for Neural Networks", Elsevier, December 23, 2010
  11. Grossberg conferences
  12. "SEP Lifetime Achievement Award". Archived from the original on 12 May 2015. Retrieved 19 June 2015.
  13. SEP Lifetime Achievement Award Acceptance Speech
  14. The complementary brain: Unifying brain dynamics and modularity.
  15. Adaptive Resonance Theory: How a brain learns to consciously attend, learn, and recognize a changing world.
  16. How does the cerebral cortex work? Learning, attention and grouping by the laminar circuits of visual cortex.