Kanaka Rajan | |
---|---|
Born | India |
Nationality | American |
Alma mater | Anna University; Brandeis University; Columbia University; Icahn School of Medicine at Mount Sinai |
Known for | Recurrent Neural Network (RNN) models of the brain |
Scientific career | |
Fields | Computational and Theoretical Neuroscience |
Institutions | Harvard University |
Kanaka Rajan is a computational neuroscientist in the Department of Neurobiology at Harvard Medical School and founding faculty in the Kempner Institute for the Study of Natural and Artificial Intelligence [1] at Harvard University. [2] Rajan trained in engineering, biophysics, and neuroscience, and has pioneered methods and models for understanding how the brain processes sensory information. Her research seeks to understand how important cognitive functions — such as learning, remembering, and deciding — emerge from the cooperative activity of multi-scale neural processes, and how those processes are affected by neuropsychiatric disease states. The resulting integrative theories about the brain bridge neurobiology and artificial intelligence.
Rajan was born and raised in India. She completed a Bachelor of Technology (B.Tech.) degree at the Center for Biotechnology at Anna University in Tamil Nadu, India, in 2000, majoring in Industrial Biotechnology and graduating with distinction. [3] [4]
In 2002, Rajan began graduate study in neuroscience at Brandeis University, where she completed experimental rotations with Eve Marder and Gina G. Turrigiano before joining Larry Abbott's laboratory and earning her master's degree (MA). [3] In 2005, when Abbott moved from Brandeis to Columbia University, she transferred to Columbia's Ph.D. program in Neuroscience and began her doctoral work with him at the Center for Theoretical Neuroscience. [5]
In her graduate work, Rajan used mathematical modelling to address neurobiological questions. [6] The main component of her thesis was a theory of how the brain interprets subtle sensory cues within the context of its internal experiential and motivational state to extract unambiguous representations of the external world. [7] This work centered on the mathematical analysis of neural networks containing both excitatory and inhibitory neuron types, modelling neurons and their synaptic connections. She showed that increasing the widths of the distributions of excitatory and inhibitory synaptic strengths dramatically changes the eigenvalue distribution of the connectivity matrix; [8] in such models, once eigenvalues cross the unit circle the network transitions from quiescent to spontaneously active dynamics. In a biological context, these findings suggest that having a variety of cell types with different synaptic strength distributions shapes network dynamics, and that measured synaptic strength distributions can be used to probe the character of those dynamics. [8] Electrophysiology and imaging studies in many brain regions have since validated the predictions of this phase-transition hypothesis.
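The flavour of this analysis is easy to reproduce numerically. The sketch below (an illustration, not Rajan's original code) builds a random connectivity matrix whose excitatory and inhibitory columns have Gaussian-distributed strengths, and shows how widening the strength distribution inflates the spectral radius; all parameter values are arbitrary choices for the demonstration.

```python
import numpy as np

def ei_connectivity(n, frac_exc=0.8, mu_e=1.0, mu_i=-4.0, sigma=0.5, seed=0):
    """Random synaptic matrix with excitatory and inhibitory columns.

    Column j holds the outgoing weights of neuron j: positive-mean for
    excitatory neurons, negative-mean for inhibitory ones. The means are
    chosen so the average input to each neuron balances to zero, and the
    1/sqrt(n) scaling keeps the eigenvalue spectrum O(1) as n grows.
    """
    rng = np.random.default_rng(seed)
    n_e = int(frac_exc * n)
    means = np.concatenate([np.full(n_e, mu_e), np.full(n - n_e, mu_i)])
    return rng.normal(means[None, :], sigma, size=(n, n)) / np.sqrt(n)

# Widening the synaptic-strength distribution spreads the eigenvalues;
# the dynamics change qualitatively once they reach the unit circle.
for sigma in (0.5, 1.0, 2.0):
    eig = np.linalg.eigvals(ei_connectivity(1000, sigma=sigma))
    print(f"sigma = {sigma:.1f}  ->  spectral radius ~ {np.abs(eig).max():.2f}")
```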
This work employed methods from random matrix theory [8] and statistical mechanics. [9] Rajan's early work [10] with Abbott and Haim Sompolinsky helped integrate physics methodology into mainstream neuroscience research, first by generating experimentally testable predictions and subsequently by establishing these tools as standard components of the neural data modelling toolkit. Rajan completed her Ph.D. in 2009. [3]
From 2010 to 2018, Rajan worked as a postdoctoral research fellow at Princeton University with theoretical biophysicist William Bialek and neuroscientist David W. Tank. [11] At Princeton, she and her colleagues developed and employed a broad set of tools from physics, engineering, and computer science to build new conceptual frameworks for describing the relationship between cognitive processes and biophysics across many scales of biological organization. [12]
In her postdoctoral work with Bialek, Rajan explored a new method for modelling the neural phenomenon of feature selectivity. [13] Feature selectivity is the idea that neurons are tuned to respond to specific, discrete components of incoming sensory information, which are later merged to generate an overall percept of the sensory landscape. [13] To understand how the brain might receive complex inputs yet detect individual features, Rajan treated the problem as one of dimensionality reduction rather than taking the typical linear-model approach. [13] Using quadratic forms as stimulus features, she showed that the maximally informative variables can be found without prior assumptions about their characteristics, allowing unbiased estimates of neurons' receptive fields. [13]
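As a rough illustration of this idea (a toy reconstruction, not the published method, which optimizes general quadratic "stimulus energies"), the sketch below fits a rank-one quadratic feature to simulated spikes by maximizing a histogram estimate of the mutual information between feature value and spiking. The model neuron, bin count, and optimizer are all invented for the demonstration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Toy data: a model neuron that spikes when the "energy" along a hidden
# direction v_true exceeds a threshold -- a purely quadratic dependence
# that a linear receptive-field estimate would miss.
dim, n = 8, 20000
v_true = rng.normal(size=dim); v_true /= np.linalg.norm(v_true)
stim = rng.normal(size=(n, dim))
spikes = ((stim @ v_true) ** 2 > 1.5).astype(int)

def neg_info(v, bins=20):
    """Negative histogram estimate of I(feature; spike) in bits,
    with the feature taken as the quadratic form f = (v . s)^2."""
    f = (stim @ v) ** 2
    edges = np.quantile(f, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, f) - 1, 0, bins - 1)
    info, p_spike = 0.0, spikes.mean()
    for b in range(bins):
        in_bin = idx == b
        p_b = in_bin.mean()
        if p_b == 0:
            continue
        p_s_b = spikes[in_bin].mean()
        for p, p0 in ((p_s_b, p_spike), (1 - p_s_b, 1 - p_spike)):
            if p > 0:
                info += p_b * p * np.log2(p / p0)
    return -info

res = minimize(neg_info, rng.normal(size=dim), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6})
v_hat = res.x / np.linalg.norm(res.x)
print("overlap with true direction:", abs(v_hat @ v_true))
```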
Rajan then worked with David Tank to show that sequential activation of neurons, a common feature of working memory and decision making, can emerge from neural network models initialized with random connectivity. [14] The resulting process, termed "Partial In-Network Training" (PINning), serves both as a model and as a means of matching real neural data recorded from the posterior parietal cortex during behavior. [14] Rather than relying on feedforward connections, the neural sequences in their model propagate through the network via recurrent synaptic interactions, guided by external inputs. [14] Their modelling highlighted the potential for learning to arise from highly unstructured network architectures. [14] This work, published in Neuron, uncovered how sensitivity to natural stimuli arises in neurons, how this selectivity influences sensorimotor learning, and how the neural sequences observed in different brain regions can arise from minimally plastic, largely disordered circuits. [14]
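A stripped-down sketch of this idea is shown below: only a small, random subset of recurrent weights is trained so that unit activity matches a target sequence of Gaussian bumps. For simplicity it substitutes a plain delta rule for the recursive least-squares update used in the published PINning work, and the network size, plasticity fraction, and learning rate are illustrative guesses.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, tau, dt = 200, 400, 10.0, 1.0
g = 1.5                                    # gain of the untrained random network

# Random recurrent weights; only a sparse, fixed subset of synapses is plastic.
J = rng.normal(0.0, g / np.sqrt(N), (N, N))
plastic = rng.random((N, N)) < 0.10        # ~10% of entries are trainable

# Target activity: Gaussian bumps tiling the trial, one bump per neuron,
# mimicking a neural sequence.
t_axis = np.arange(T)
peaks = np.linspace(0.0, T, N)
targets = np.exp(-0.5 * ((t_axis[None, :] - peaks[:, None]) / 20.0) ** 2)

eta = 5e-3                                 # learning rate (arbitrary)
for epoch in range(20):
    x = rng.normal(0.0, 0.5, N)            # random initial state
    mse = 0.0
    for t in range(T):
        r = np.tanh(x)
        err = r - targets[:, t]
        mse += np.mean(err ** 2) / T
        J -= eta * plastic * np.outer(err, r)   # update plastic synapses only
        x += dt / tau * (-x + J @ r)            # recurrent rate dynamics
    print(f"epoch {epoch:2d}  mean-squared error {mse:.4f}")
```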
In June 2018, Rajan became an assistant professor in the Department of Neuroscience and the Friedman Brain Institute at the Icahn School of Medicine at Mount Sinai. As the Principal Investigator of the Rajan Lab for Brain Research and AI in NY (BRAINY), [15] her work focuses on integrative theories to describe how behavior emerges from the cooperative activity of multi-scale neural processes. To gain insight into fundamental brain processes such as learning, memory, multitasking, or reasoning, Rajan develops theories based on neural network architectures inspired by biology as well as mathematical and computational frameworks that are often used to extract information from neural and behavioral data. [16] These theories use neural network models flexible enough to accommodate various levels of biological detail at the neuronal, synaptic, and circuit levels.
She uses a cross-disciplinary approach to reveal how neural circuits learn and execute functions ranging from working memory to decision making, reasoning, and intuition. [17] Her models are based on experimental data (e.g., calcium imaging, electrophysiology, and behavior experiments) and on new and existing mathematical and computational frameworks derived from machine learning and statistical physics. [16] Rajan continues to apply recurrent neural network modelling to behavioral and neural data. In collaboration with Karl Deisseroth and his team at Stanford University, [18] such models revealed that circuit interactions within the lateral habenula, a brain structure implicated in aversion, encode features of experience that guide the behavioral transition from active to passive coping; the work was published in Cell. [19] [20]
In 2019, Rajan was one of twelve investigators to receive funding from the National Science Foundation (NSF) [21] through its participation in the White House's Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. The same year, she was also awarded an NIH BRAIN Initiative grant (R01) for Theories, Models, and Methods for Analysis of Complex Data from the Brain. [22] Starting in 2020, Rajan became co-lead of the Computational Neuroscience Working Group, [23] part of the National Institutes of Health's Interagency Modeling and Analysis Group (IMAG). [24]
In 2022, Rajan was promoted to Associate Professor [25] with tenure in the Department of Neuroscience and the Friedman Brain Institute at the Icahn School of Medicine at Mount Sinai.
In 2023, Rajan joined the Department of Neurobiology at Harvard Medical School as a Member of the Faculty, and the Kempner Institute for the Study of Natural and Artificial Intelligence as founding faculty. [2]
Neuroscience is the scientific study of the nervous system, its functions, and its disorders. It is a multidisciplinary science that combines physiology, anatomy, molecular biology, developmental biology, cytology, psychology, physics, computer science, chemistry, medicine, statistics, and mathematical modeling to understand the fundamental and emergent properties of neurons, glia and neural circuits. The understanding of the biological basis of learning, memory, behavior, perception, and consciousness has been described by Eric Kandel as the "epic challenge" of the biological sciences.
Computational neuroscience is a branch of neuroscience which employs mathematics, computer science, theoretical analysis and abstractions of the brain to understand the principles that govern the development, structure, physiology and cognitive abilities of the nervous system.
Neural oscillations, or brainwaves, are rhythmic or repetitive patterns of neural activity in the central nervous system. Neural tissue can generate oscillatory activity in many ways, driven either by mechanisms within individual neurons or by interactions between neurons. In individual neurons, oscillations can appear either as oscillations in membrane potential or as rhythmic patterns of action potentials, which then produce oscillatory activation of post-synaptic neurons. At the level of neural ensembles, synchronized activity of large numbers of neurons can give rise to macroscopic oscillations, which can be observed in an electroencephalogram. Oscillatory activity in groups of neurons generally arises from feedback connections between the neurons that result in the synchronization of their firing patterns. The interaction between neurons can give rise to oscillations at a different frequency than the firing frequency of individual neurons. A well-known example of macroscopic neural oscillations is alpha activity.
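How synchronization among many units produces a macroscopic rhythm can be illustrated with the Kuramoto model, a standard abstraction in which each neuron is reduced to a phase oscillator. This is a generic toy model, not tied to any particular study, and all parameters below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, dt, steps = 500, 2.0, 0.01, 5000

omega = rng.normal(1.0, 0.1, N)          # intrinsic frequencies of the units
theta = rng.uniform(0, 2 * np.pi, N)     # random initial phases

for step in range(steps):
    # Kuramoto coupling: each phase is pulled toward the population mean
    # phase; the coherence r measures the macroscopic oscillation amplitude.
    mean_field = np.exp(1j * theta).mean()
    r, psi = np.abs(mean_field), np.angle(mean_field)
    theta += dt * (omega + K * r * np.sin(psi - theta))
    if step % 1000 == 0:
        print(f"t = {step * dt:6.1f}  coherence r = {r:.3f}")
```

With coupling K well above the critical value for this frequency spread, the coherence grows toward one: individually noisy oscillators lock together, and the population produces a rhythm visible at the macroscopic scale, as in an electroencephalogram.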
Sensitization is a non-associative learning process in which repeated administration of a stimulus results in the progressive amplification of a response. Sensitization often is characterized by an enhancement of response to a whole class of stimuli in addition to the one that is repeated. For example, repetition of a painful stimulus may make one more responsive to a loud noise.
Neural coding is a neuroscience field concerned with characterising the hypothetical relationship between the stimulus and the neuronal responses, and the relationship among the electrical activities of the neurons in the ensemble. Based on the theory that sensory and other information is represented in the brain by networks of neurons, it is believed that neurons can encode both digital and analog information.
Spiking neural networks (SNNs) are artificial neural networks (ANN) that more closely mimic natural neural networks. These models leverage timing of discrete spikes as the main information carrier.
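A minimal example of the kind of spiking unit such networks are built from is the leaky integrate-and-fire neuron, sketched below with arbitrary textbook-style parameters: the membrane potential integrates its input, leaks toward rest, and communicates only through the discrete times at which it crosses threshold.

```python
import numpy as np

dt, tau_m = 0.1, 10.0                    # time step and membrane constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0
R = 10.0                                 # membrane resistance (MOhm)

v, spike_times = v_rest, []
t_axis = np.arange(0, 200, dt)
current = np.where((t_axis > 50) & (t_axis < 150), 2.0, 0.0)  # step input (nA)

for t, I in zip(t_axis, current):
    # Leaky integration of the input current.
    v += dt / tau_m * (-(v - v_rest) + R * I)
    if v >= v_thresh:                    # threshold crossing emits a spike...
        spike_times.append(t)
        v = v_reset                      # ...and the potential resets

print("spike times (ms):", np.round(spike_times, 1))
```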
Synaptic noise refers to the constant bombardment of synaptic activity in neurons. This occurs in the background of a cell when potentials are produced without the nerve stimulation of an action potential, and are due to the inherently random nature of synapses. These random potentials have similar time courses as excitatory postsynaptic potentials (EPSPs) and inhibitory postsynaptic potentials (IPSPs), yet they lead to variable neuronal responses. The variability is due to differences in the discharge times of action potentials.
In the field of computational neuroscience, brain simulation is the concept of creating a functioning computer model of a brain or part of a brain. Brain simulation projects intend to contribute to a complete understanding of the brain, and eventually also assist the process of treating and diagnosing brain diseases. Simulations utilize mathematical models of biological neurons, such as the Hodgkin–Huxley model, to simulate the behavior of neurons, or other cells within the brain.
A Bayesian Confidence Propagation Neural Network (BCPNN) is an artificial neural network inspired by Bayes' theorem, which regards neural computation and processing as probabilistic inference. Neural unit activations represent probability ("confidence") in the presence of input features or categories, synaptic weights are based on estimated correlations and the spread of activation corresponds to calculating posterior probabilities. It was originally proposed by Anders Lansner and Örjan Ekeberg at KTH Royal Institute of Technology. This probabilistic neural network model can also be run in generative mode to produce spontaneous activations and temporal sequences.
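The core of this weight rule can be written down compactly: a weight is the log-ratio of the observed co-activation probability of two units to the probability expected under independence, and the bias is the log prior. The sketch below estimates these quantities from toy binary patterns; it illustrates the principle rather than reproducing the KTH implementation, and the data and smoothing constant are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_samples, eps = 16, 5000, 1e-4

# Toy binary activity patterns with one perfectly correlated pair of units.
data = (rng.random((n_samples, n_units)) < 0.2).astype(float)
data[:, 1] = data[:, 0]            # unit 1 copies unit 0

# BCPNN-style statistics: unit and pairwise co-activation probabilities
# (eps regularizes empty counts).
p_i = data.mean(axis=0) + eps
p_ij = (data.T @ data) / n_samples + eps

# Weight = log-odds of co-activation vs. independence; bias = log prior.
w = np.log(p_ij / np.outer(p_i, p_i))
bias = np.log(p_i)

print("w[0,1] (correlated pair):  ", w[0, 1])   # strongly positive
print("w[0,2] (independent pair): ", w[0, 2])   # near zero
```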
The network of the human nervous system is composed of nodes that are connected by links. The connectivity may be viewed anatomically, functionally, or electrophysiologically. These are presented in several Wikipedia articles that include Connectionism, Biological neural network, Artificial neural network, Computational neuroscience, as well as in several books by Ascoli, G. A. (2002), Sterratt, D., Graham, B., Gillies, A., & Willshaw, D. (2011), Gerstner, W., & Kistler, W. (2002), and David Rumelhart, McClelland, J. L., and PDP Research Group (1986) among others. The focus of this article is a comprehensive view of modeling a neural network. Once an approach based on the perspective and connectivity is chosen, the models are developed at microscopic, mesoscopic, or macroscopic (system) levels. Computational modeling refers to models that are developed using computing tools.
An autapse is a chemical or electrical synapse from a neuron onto itself. It can also be described as a synapse formed by the axon of a neuron on its own dendrites, in vivo or in vitro.
Phase resetting in neurons is a behavior observed in different biological oscillators and plays a role in creating neural synchronization as well as different processes within the body. Phase resetting occurs when a stimulus perturbs the phase within an oscillatory cycle, shifting the oscillator's dynamical behavior and changing its period. The periods of these oscillations vary with the biological system: (1) neural responses can change within milliseconds to quickly relay information; (2) cardiac and respiratory rhythms that fluctuate throughout the day operate on timescales of seconds; (3) circadian rhythms vary over a series of days; (4) rhythms such as hibernation have periods measured in years. This activity pattern is seen in various neural circuits throughout the body, both in single-neuron models and within clusters of neurons. Many of these models utilize phase response (resetting) curves, in which the oscillation of a neuron is perturbed and the effect of the perturbation on the phase cycle is measured.
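Measuring such a phase response curve is straightforward in simulation. The sketch below uses a leaky integrate-and-fire neuron driven into regular firing as the oscillator (a generic choice; the kick size and all constants are arbitrary) and records how a brief depolarizing perturbation delivered at different phases advances the next spike.

```python
import numpy as np

dt, tau, v_th, v_reset, drive = 0.01, 10.0, 1.0, 0.0, 1.5

def time_to_spike(kick_step=None, kick=0.0, max_steps=20000):
    """Integrate the LIF oscillator from reset until it next spikes,
    optionally delivering a brief perturbation at one time step."""
    v = v_reset
    for step in range(max_steps):
        if step == kick_step:
            v += kick                    # brief depolarizing kick
        v += dt / tau * (-v + drive)     # leaky integration toward 'drive'
        if v >= v_th:
            return (step + 1) * dt
    return float("inf")

period = time_to_spike()                 # unperturbed cycle length
for phase in (0.1, 0.3, 0.5, 0.7, 0.9):
    t_spike = time_to_spike(kick_step=int(phase * period / dt), kick=0.1)
    shift = (period - t_spike) / period  # positive = phase advance
    print(f"phase {phase:.1f}: phase shift {shift:+.3f}")
```

Plotting the shift against the perturbation phase yields the phase response curve, which characterizes how the oscillator will synchronize with rhythmic input.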
Laurence Frederick Abbott is an American theoretical neuroscientist, who is currently the William Bloor Professor of Theoretical Neuroscience at Columbia University, where he helped create the Center for Theoretical Neuroscience. He is widely regarded as one of the leaders of theoretical neuroscience, and is coauthor, with Peter Dayan, of the first comprehensive textbook on theoretical neuroscience, considered the standard text for students and researchers entering the field. He helped invent the dynamic clamp method alongside Eve Marder.
Rosemary C. Bagot is a Canadian neuroscientist who researches the mechanisms of altered brain function in depression. She is an assistant professor in behavioral neuroscience in the Department of Psychology at McGill University in Montreal, Canada. Her focus in behavioral neuroscience is on understanding the mechanisms of altered brain circuit function in depression. Employing a multidisciplinary approach, Bagot investigates why only some people who experience stress become depressed.
Claudia Clopath is a Professor of Computational Neuroscience at Imperial College London and research leader at the Sainsbury Wellcome Centre for Neural Circuits and Behaviour. She develops mathematical models to predict synaptic plasticity for both medical applications and the design of human-like machines.
Ilana B. Witten is an American neuroscientist and professor of psychology and neuroscience at Princeton University. Witten studies the mesolimbic pathway, with a focus on the striatal neural circuit mechanisms driving reward learning and decision making.
Ila Fiete is an Indian–American physicist and computational neuroscientist as well as a Professor in the Department of Brain and Cognitive Sciences within the McGovern Institute for Brain Research at the Massachusetts Institute of Technology. Fiete builds theoretical models and analyses neural data to uncover how neural circuits perform computations and how the brain represents and manipulates information involved in memory and reasoning.
Jessica Cardin is an American neuroscientist who is an associate professor of neuroscience at Yale University School of Medicine. Cardin's lab studies local circuits within the primary visual cortex to understand how cellular and synaptic interactions flexibly adapt to different behavioral states and contexts to give rise to visual perceptions and drive motivated behaviors. Cardin's lab applies their knowledge of adaptive cortical circuit regulation to probe how circuit dysfunction manifests in disease models.
Eberhard Erich Fetz is an American neuroscientist, academic and researcher. He is a Professor of Physiology and Biophysics and DXARTS at the University of Washington.
Cyriel Marie Antoine Pennartz is a Dutch neuroscientist serving as professor and head of the Department of Cognitive and Systems Neuroscience at the University of Amsterdam, the Netherlands. He is known for his research on memory, motivation, circadian rhythms, perception and consciousness. Pennartz’ work uses a multidisciplinary combination of techniques to understand the relationships between distributed neural activity and cognition, including in vivo electrophysiology and optical imaging, animal behavior and computational modelling.