Kanaka Rajan

Born: India
Nationality: American
Alma mater: Anna University, Brandeis University, Columbia University, Icahn School of Medicine at Mount Sinai
Known for: Recurrent neural network (RNN) models of the brain

Scientific career
Fields: Computational and theoretical neuroscience
Institutions: Harvard University

Kanaka Rajan is a computational neuroscientist in the Department of Neurobiology at Harvard Medical School and a founding faculty member of the Kempner Institute for the Study of Natural and Artificial Intelligence [1] at Harvard University. [2] Rajan trained in engineering, biophysics, and neuroscience, and has pioneered novel methods and models to understand how the brain processes sensory information. Her research seeks to understand how important cognitive functions, such as learning, remembering, and deciding, emerge from the cooperative activity of multi-scale neural processes, and how those processes are affected by various neuropsychiatric disease states. The resulting integrative theories about the brain bridge neurobiology and artificial intelligence.


Early life and education

Rajan was born and raised in India. She completed a Bachelor of Technology (B.Tech.) at the Center for Biotechnology at Anna University in Tamil Nadu, India, in 2000, majoring in Industrial Biotechnology and graduating with distinction. [3] [4]

In 2002, Rajan began graduate study in neuroscience at Brandeis University, where she completed experimental rotations with Eve Marder and Gina G. Turrigiano before joining Larry Abbott's laboratory and earning her master's degree (MA). [3] In 2005, when Abbott moved from Brandeis to Columbia University, she transferred to Columbia's Ph.D. program in neuroscience and began her doctoral work with him at the Center for Theoretical Neuroscience. [5]

Doctoral research

In her graduate work, Rajan used mathematical modelling to address neurobiological questions. [6] The main component of her thesis was a theory of how the brain interprets subtle sensory cues within the context of its internal experiential and motivational state to extract unambiguous representations of the external world. [7] This line of work centered on the mathematical analysis of neural networks containing distinct excitatory and inhibitory neuron types and their synaptic connections. She showed that increasing the widths of the excitatory and inhibitory synaptic-strength distributions dramatically changes the eigenvalue spectrum of the network's connectivity matrix. [8] In a biological context, these findings suggest that having a variety of cell types with different synaptic-strength distributions shapes network dynamics, and that measuring those distributions can reveal the character of the dynamics. [8] Electrophysiology and imaging studies in many brain regions have since validated the predictions of this phase-transition hypothesis.
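The object at the center of this analysis is the eigenvalue spectrum of a random synaptic connectivity matrix whose excitatory and inhibitory columns are drawn from distributions with different means and widths. The Python sketch below illustrates the idea numerically; the network size, the synaptic-strength parameters, and the zero-row-sum balance step are illustrative assumptions rather than values taken from the published analysis.

```python
import numpy as np

# A minimal numerical sketch, not the published calculation: the parameter values
# and the zero-row-sum balance condition below are illustrative assumptions.
rng = np.random.default_rng(0)

N = 1000          # number of neurons
f_exc = 0.8       # fraction of excitatory neurons
sigma_E = 1.0     # width of the excitatory synaptic-strength distribution
sigma_I = 2.0     # width of the inhibitory synaptic-strength distribution

n_exc = int(f_exc * N)
# Each column is one presynaptic neuron: excitatory columns have a positive mean,
# inhibitory columns a negative mean, with widths sigma_E and sigma_I.
J = np.empty((N, N))
J[:, :n_exc] = rng.normal(loc=1.0, scale=sigma_E, size=(N, n_exc))
J[:, n_exc:] = rng.normal(loc=-4.0, scale=sigma_I, size=(N, N - n_exc))

# Subtract each row's mean to enforce an approximate excitation-inhibition balance,
# which removes the outlier eigenvalues produced by the nonzero column means.
J -= J.mean(axis=1, keepdims=True)
J /= np.sqrt(N)   # scale so the bulk of the spectrum stays O(1)

eigvals = np.linalg.eigvals(J)
print("spectral radius:", np.abs(eigvals).max())
# Widening sigma_E or sigma_I enlarges the eigenvalue cloud; once eigenvalues cross
# unit modulus, a linear rate network with this connectivity becomes unstable,
# marking the kind of dynamical transition discussed above.
```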

This work drew on methods from random matrix theory [8] and statistical mechanics. [9] Rajan's early, influential work [10] with Abbott and Haim Sompolinsky helped integrate physics methodology into mainstream neuroscience research, initially by generating experimentally verifiable predictions and subsequently by establishing these tools as a standard part of the data-modelling toolkit. Rajan completed her Ph.D. in 2009. [3]

Postdoctoral research

From 2010 to 2018, Rajan worked as a postdoctoral research fellow at Princeton University with theoretical biophysicist William Bialek and neuroscientist David W. Tank. [11] At Princeton, she and her colleagues developed and employed a broad set of tools from physics, engineering, and computer science to build new conceptual frameworks for describing the relationship between cognitive processes and biophysics across many scales of biological organization. [12]

Modelling feature selectivity

In her postdoctoral work with Bialek, Rajan explored a new method for modelling the neural phenomenon of feature selectivity. [13] Feature selectivity is the idea that neurons are tuned to respond to specific, discrete components of incoming sensory information, and that these individual components are later merged to generate an overall percept of the sensory landscape. [13] To understand how the brain might receive complex inputs yet detect individual features, Rajan treated the problem as one of dimensionality reduction rather than applying the typical linear-model approach. [13] Using quadratic forms as features of a stimulus, she showed that the maximally informative variables can be found without prior assumptions about their characteristics. [13] This approach allows for unbiased estimates of neural receptive fields. [13]
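Conceptually, the method asks which quadratic "stimulus energy" carries the most information about whether a neuron spikes. The Python sketch below illustrates only the information measure on synthetic data, not the full optimization described in the paper; the stimulus dimensionality, the hypothetical spiking rule, and the binning scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def mutual_info_bits(feature, spikes, n_bins=20):
    """Mutual information (bits) between a scalar feature and a binary spike
    indicator, estimated by discretizing the feature into equal-count bins."""
    edges = np.quantile(feature, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, feature, side="right") - 1, 0, n_bins - 1)
    p_spike = spikes.mean()
    info = 0.0
    for b in range(n_bins):
        mask = bins == b
        if not mask.any():
            continue
        p_b = mask.mean()
        p_spike_given_b = spikes[mask].mean()
        for p_cond, p_marg in ((p_spike_given_b, p_spike),
                               (1.0 - p_spike_given_b, 1.0 - p_spike)):
            if p_cond > 0.0:
                info += p_b * p_cond * np.log2(p_cond / p_marg)
    return info

# Hypothetical toy data: Gaussian stimuli whose spiking probability depends on a
# quadratic "stimulus energy" s^T Q s rather than on any single linear projection.
D, T = 10, 50_000
stimuli = rng.normal(size=(T, D))
Q_true = np.diag(np.r_[np.ones(2), np.zeros(D - 2)])        # energy confined to 2 dims
energy = np.einsum("td,de,te->t", stimuli, Q_true, stimuli)
spike_prob = 1.0 / (1.0 + np.exp(-(energy - 3.0)))          # hypothetical spiking rule
spikes = (rng.random(T) < spike_prob).astype(float)

# A random symmetric quadratic feature for comparison.
Q_rand = rng.normal(size=(D, D))
Q_rand = (Q_rand + Q_rand.T) / 2.0
energy_rand = np.einsum("td,de,te->t", stimuli, Q_rand, stimuli)

print("info, true quadratic feature:  ", mutual_info_bits(energy, spikes))
print("info, random quadratic feature:", mutual_info_bits(energy_rand, spikes))
# Searching over Q for the feature that maximizes this information, rather than just
# comparing two candidates, is the optimization underlying the published approach.
```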

Recurrent neural network modelling

Rajan then worked with David Tank to show that sequential activation of neurons, a common feature of working memory and decision making, can arise from neural network models with initially random connectivity. [14] The training procedure, termed "Partial In-Network Training" (PINning), serves both as a model and as a means of matching real neural data recorded from the posterior parietal cortex during behavior. [14] Rather than relying on feedforward connections, the neural sequences in their model propagate through recurrent synaptic interactions and are guided by external inputs. [14] Their modelling highlighted that learning can arise from highly unstructured network architectures. [14] This work, published in Neuron, showed how sensitivity to natural stimuli arises in neurons, how this selectivity influences sensorimotor learning, and how the neural sequences observed in different brain regions can emerge from minimally plastic, largely disordered circuits. [14]
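A minimal sketch of the partial in-network training idea is given below: starting from random recurrent connectivity, only a small fraction of the synapses is modified until the units activate in sequence. The network size, the Gaussian-bump targets, and the simple delta rule used here are illustrative assumptions; the published method instead trains the plastic synapses with recursive least squares.

```python
import numpy as np

# Illustrative sketch of partially training a random recurrent rate network to
# produce a neural sequence; parameters and the delta rule are assumptions, not
# the procedure from the paper.
rng = np.random.default_rng(2)

N, T, dt, tau = 200, 300, 1.0, 10.0
g = 1.5                                    # gain of the initial random connectivity
J = g * rng.normal(size=(N, N)) / np.sqrt(N)
plastic = rng.random((N, N)) < 0.1         # only ~10% of synapses are modifiable

# Target: each neuron's rate follows a Gaussian bump centered at a different time,
# so the population sweeps through a stereotyped sequence.
centers = np.linspace(20, T - 20, N)
times = np.arange(T)
targets = 0.8 * np.exp(-0.5 * ((times[None, :] - centers[:, None]) / 10.0) ** 2)

eta = 0.02
for epoch in range(30):
    x = rng.normal(scale=0.1, size=N)          # small random initial condition
    for t in range(T):
        r = np.tanh(x)                         # firing rates
        err = r - targets[:, t]
        J -= eta * plastic * np.outer(err, r)  # update only the plastic synapses
        x += (dt / tau) * (-x + J @ r)

# After training, running the network from a small random initial state should
# produce sequential activation of the units that approximates the target bumps.
```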

Career and research

In June 2018, Rajan became an assistant professor in the Department of Neuroscience and the Friedman Brain Institute at the Icahn School of Medicine at Mount Sinai. As principal investigator of the Rajan Lab for Brain Research and AI in NY (BRAINY), [15] she works on integrative theories describing how behavior emerges from the cooperative activity of multi-scale neural processes. To gain insight into fundamental brain processes such as learning, memory, multitasking, and reasoning, Rajan develops theories based on neural network architectures inspired by biology, together with mathematical and computational frameworks commonly used to extract information from neural and behavioral data. [16] These theories use neural network models flexible enough to accommodate various levels of biological detail at the neuronal, synaptic, and circuit levels.

She uses a cross-disciplinary approach to study how neural circuits learn and execute functions ranging from working memory to decision making, reasoning, and intuition. [17] Her models are grounded in experimental data (e.g., calcium imaging, electrophysiology, and behavioral experiments) and in new and existing mathematical and computational frameworks drawn from machine learning and statistical physics. [16] Rajan continues to apply recurrent neural network modelling to behavioral and neural data. In collaboration with Karl Deisseroth and his team at Stanford University, [18] such models revealed that circuit interactions within the lateral habenula, a brain structure implicated in aversion, encode features of experience that guide the behavioral transition from active to passive coping, work published in Cell. [19] [20]

In 2019, Rajan was one of twelve investigators to receive funding from the National Science Foundation (NSF) [21] through its participation in the White House's Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. The same year, she was also awarded an NIH BRAIN Initiative grant (R01) for Theories, Models, and Methods for Analysis of Complex Data from the Brain. [22] Starting in 2020, Rajan became co-lead of the Computational Neuroscience Working Group, [23] part of the National Institutes of Health's Interagency Modeling and Analysis Group (IMAG). [24]

In 2022, Rajan was promoted to Associate Professor [25] with tenure in the Department of Neuroscience and the Friedman Brain Institute at the Icahn School of Medicine at Mount Sinai.

In 2023, Rajan joined the Department of Neurobiology at Harvard Medical School as a Member of the Faculty, and the Kempner Institute for the Study of Natural and Artificial Intelligence as a founding faculty member. [2]

Awards and honors

Select publications

Related Research Articles

<span class="mw-page-title-main">Neuroscience</span> Scientific study of the nervous system

Neuroscience is the scientific study of the nervous system, its functions, and its disorders. It is a multidisciplinary science that combines physiology, anatomy, molecular biology, developmental biology, cytology, psychology, physics, computer science, chemistry, medicine, statistics, and mathematical modeling to understand the fundamental and emergent properties of neurons, glia and neural circuits. The understanding of the biological basis of learning, memory, behavior, perception, and consciousness has been described by Eric Kandel as the "epic challenge" of the biological sciences.

Computational neuroscience is a branch of neuroscience which employs mathematics, computer science, theoretical analysis and abstractions of the brain to understand the principles that govern the development, structure, physiology and cognitive abilities of the nervous system.

<span class="mw-page-title-main">Neural oscillation</span> Brainwaves, repetitive patterns of neural activity in the central nervous system

Neural oscillations, or brainwaves, are rhythmic or repetitive patterns of neural activity in the central nervous system. Neural tissue can generate oscillatory activity in many ways, driven either by mechanisms within individual neurons or by interactions between neurons. In individual neurons, oscillations can appear either as oscillations in membrane potential or as rhythmic patterns of action potentials, which then produce oscillatory activation of post-synaptic neurons. At the level of neural ensembles, synchronized activity of large numbers of neurons can give rise to macroscopic oscillations, which can be observed in an electroencephalogram. Oscillatory activity in groups of neurons generally arises from feedback connections between the neurons that result in the synchronization of their firing patterns. The interaction between neurons can give rise to oscillations at a different frequency than the firing frequency of individual neurons. A well-known example of macroscopic neural oscillations is alpha activity.

Sensitization is a non-associative learning process in which repeated administration of a stimulus results in the progressive amplification of a response. Sensitization often is characterized by an enhancement of response to a whole class of stimuli in addition to the one that is repeated. For example, repetition of a painful stimulus may make one more responsive to a loud noise.

Neural coding is a neuroscience field concerned with characterising the hypothetical relationship between the stimulus and the neuronal responses, and the relationship among the electrical activities of the neurons in the ensemble. Based on the theory that sensory and other information is represented in the brain by networks of neurons, it is believed that neurons can encode both digital and analog information.

<span class="mw-page-title-main">Spiking neural network</span> Artificial neural network that mimics neurons

Spiking neural networks (SNNs) are artificial neural networks (ANN) that more closely mimic natural neural networks. These models leverage timing of discrete spikes as the main information carrier.

Synaptic noise refers to the constant bombardment of synaptic activity in neurons. This occurs in the background of a cell when potentials are produced without the nerve stimulation of an action potential, and are due to the inherently random nature of synapses. These random potentials have similar time courses as excitatory postsynaptic potentials (EPSPs) and inhibitory postsynaptic potentials (IPSPs), yet they lead to variable neuronal responses. The variability is due to differences in the discharge times of action potentials.

In the field of computational neuroscience, brain simulation is the concept of creating a functioning computer model of a brain or part of a brain. Brain simulation projects intend to contribute to a complete understanding of the brain, and eventually also assist the process of treating and diagnosing brain diseases. Simulations utilize mathematical models of biological neurons, such as the Hodgkin-Huxley model, to simulate the behavior of neurons, or other cells within the brain.

A Bayesian Confidence Propagation Neural Network (BCPNN) is an artificial neural network inspired by Bayes' theorem, which regards neural computation and processing as probabilistic inference. Neural unit activations represent probability ("confidence") in the presence of input features or categories, synaptic weights are based on estimated correlations and the spread of activation corresponds to calculating posterior probabilities. It was originally proposed by Anders Lansner and Örjan Ekeberg at KTH Royal Institute of Technology. This probabilistic neural network model can also be run in generative mode to produce spontaneous activations and temporal sequences.

The network of the human nervous system is composed of nodes that are connected by links. The connectivity may be viewed anatomically, functionally, or electrophysiologically. These are presented in several Wikipedia articles that include Connectionism, Biological neural network, Artificial neural network, Computational neuroscience, as well as in several books by Ascoli, G. A. (2002), Sterratt, D., Graham, B., Gillies, A., & Willshaw, D. (2011), Gerstner, W., & Kistler, W. (2002), and David Rumelhart, McClelland, J. L., and PDP Research Group (1986) among others. The focus of this article is a comprehensive view of modeling a neural network. Once an approach based on the perspective and connectivity is chosen, the models are developed at microscopic, mesoscopic, or macroscopic (system) levels. Computational modeling refers to models that are developed using computing tools.

An autapse is a chemical or electrical synapse from a neuron onto itself. It can also be described as a synapse formed by the axon of a neuron on its own dendrites, in vivo or in vitro.

<span class="mw-page-title-main">Phase resetting in neurons</span> Behavior observed in neurons

Phase resetting in neurons is a behavior observed in different biological oscillators and plays a role in creating neural synchronization as well as different processes within the body. Phase resetting in neurons occurs when the dynamical behavior of an oscillation is shifted: a stimulus perturbs the phase within an oscillatory cycle and a change in period results. The periods of these oscillations vary with the biological system; for example, (1) neural responses can change within milliseconds to quickly relay information; (2) cardiac and respiratory rhythms that change throughout the day vary over seconds; (3) circadian rhythms vary over a series of days; and (4) rhythms such as hibernation have periods measured in years. This activity pattern is a phenomenon seen in various neural circuits throughout the body, in single-neuron models as well as in clusters of neurons. Many of these models utilize phase response (resetting) curves, in which the oscillation of a neuron is perturbed and the effect of the perturbation on the neuron's phase cycle is measured.

Laurence Frederick Abbott is an American theoretical neuroscientist, who is currently the William Bloor Professor of Theoretical Neuroscience at Columbia University, where he helped create the Center for Theoretical Neuroscience. He is widely regarded as one of the leaders of theoretical neuroscience, and is coauthor, along with Peter Dayan, of the first comprehensive textbook on theoretical neuroscience, which is considered to be the standard text for students and researchers entering theoretical neuroscience. He helped invent the dynamic clamp method alongside Eve Marder.

Rosemary C. Bagot is a Canadian neuroscientist who researches the mechanisms of altered brain function in depression. She is an assistant professor in behavioral neuroscience in the Department of Psychology at McGill University in Montreal, Canada. Her focus in behavioral neuroscience is on understanding the mechanisms of altered brain circuit function in depression. Employing a multidisciplinary approach, Bagot investigates why only some people who experience stress become depressed.

Claudia Clopath is a Professor of Computational Neuroscience at Imperial College London and research leader at the Sainsbury Wellcome Centre for Neural Circuits and Behaviour. She develops mathematical models to predict synaptic plasticity for both medical applications and the design of human-like machines.

Ilana B. Witten is an American neuroscientist and professor of psychology and neuroscience at Princeton University. Witten studies the mesolimbic pathway, with a focus on the striatal neural circuit mechanisms driving reward learning and decision making.

<span class="mw-page-title-main">Ila Fiete</span> American physicist

Ila Fiete is an Indian–American physicist and computational neuroscientist, and a Professor in the Department of Brain and Cognitive Sciences within the McGovern Institute for Brain Research at the Massachusetts Institute of Technology. Fiete builds theoretical models and analyses neural data to uncover how neural circuits perform computations and how the brain represents and manipulates information involved in memory and reasoning.

Jessica Cardin is an American neuroscientist who is an associate professor of neuroscience at Yale University School of Medicine. Cardin's lab studies local circuits within the primary visual cortex to understand how cellular and synaptic interactions flexibly adapt to different behavioral states and contexts to give rise to visual perceptions and drive motivated behaviors. Cardin's lab applies their knowledge of adaptive cortical circuit regulation to probe how circuit dysfunction manifests in disease models.

<span class="mw-page-title-main">Eberhard Fetz</span> American neuroscientist, academic and researcher

Eberhard Erich Fetz is an American neuroscientist, academic and researcher. He is a Professor of Physiology and Biophysics and DXARTS at the University of Washington.

Cyriel Marie Antoine Pennartz is a Dutch neuroscientist serving as professor and head of the Department of Cognitive and Systems Neuroscience at the University of Amsterdam, the Netherlands. He is known for his research on memory, motivation, circadian rhythms, perception and consciousness. Pennartz’ work uses a multidisciplinary combination of techniques to understand the relationships between distributed neural activity and cognition, including in vivo electrophysiology and optical imaging, animal behavior and computational modelling.

References

  1. chadcampbell (2022-09-23). "Science, Tech and AI Leaders Convene to Launch Kempner Institute". Chan Zuckerberg Initiative. Retrieved 2023-09-15.
  2. Institute, Kempner; Lang, Deborah (2023-09-05). "Computational neuroscientist Kanaka Rajan, leader in using AI and machine learning to study the brain, to join Harvard Medical School faculty and serve as a founding faculty member at the Kempner Institute". Kempner Institute. Retrieved 2023-09-15.
  3. 1 2 3 "Princeton Genomics RajanCV" (PDF). Princeton Genomics. Archived from the original (PDF) on January 27, 2018. Retrieved May 10, 2020.
  4. Dutt, Ela (20 February 2019). "12 researchers of Indian-origin win prestigious Sloan fellowships | News India Times". News India Times. Retrieved 2021-03-08.
  5. "Kanaka Rajan | Materials Science and Engineering". mse.stanford.edu. Archived from the original on 2019-12-12. Retrieved 2020-05-13.
  6. Rajan, Kanaka (2009). Spontaneous and stimulus-driven network dynamics (Thesis). Bibcode:2009PhDT........17R. OCLC   420930632. ProQuest   304863659.
  7. Abbott, Larry F.; Rajan, Kanaka; Sompolinsky, Haim (2011). "Interactions between Intrinsic and Stimulus-Evoked Activity in Recurrent Neural Networks". The Dynamic Brain. pp. 65–82. doi:10.1093/acprof:oso/9780195393798.003.0004. ISBN   978-0-19-539379-8.
  8. Rajan, Kanaka; Abbott, L. F. (2 November 2006). "Eigenvalue Spectra of Random Matrices for Neural Networks". Physical Review Letters. 97 (18): 188104. Bibcode:2006PhRvL..97r8104R. doi:10.1103/PhysRevLett.97.188104. PMID 17155583.
  9. Rajan, Kanaka; Abbott, L. F.; Sompolinsky, Haim (7 July 2010). "Stimulus-dependent suppression of chaos in recurrent neural networks". Physical Review E. 82 (1): 011903. arXiv: 0912.3513 . Bibcode:2010PhRvE..82a1903R. doi:10.1103/PhysRevE.82.011903. PMC   10683875 . PMID   20866644. S2CID   946870.
  10. "Kanaka Rajan - Google Scholar Citations". scholar.google.com. Retrieved 2020-06-10.
  11. "Neuroscience Faculty | Icahn School of Medicine". Icahn School of Medicine at Mount Sinai. Retrieved 2020-05-13.
  12. "566: Dr. Kanaka Rajan: Creating Computational Models to Determine How the Brain Accomplishes Complex Tasks". People Behind the Science Podcast. 2020-08-10. Retrieved 2021-03-08.
  13. Rajan, Kanaka; Bialek, William (8 November 2013). "Maximally Informative 'Stimulus Energies' in the Analysis of Neural Responses to Natural Signals". PLOS ONE. 8 (11): e71959. Bibcode:2013PLoSO...871959R. doi:10.1371/journal.pone.0071959. PMC 3826732. PMID 24250780.
  14. Rajan, Kanaka; Harvey, Christopher D.; Tank, David W. (April 2016). "Recurrent Network Models of Sequence Generation and Memory". Neuron. 90 (1): 128–142. arXiv:1603.04687. doi:10.1016/j.neuron.2016.02.009. PMC 4824643. PMID 26971945.
  15. "People". Rajan Lab - Brain Research & AI in NY. Retrieved 2020-05-13.
  16. 1 2 "Research". Rajan Lab - Brain Research & AI in NY. Retrieved 2020-06-10.
  17. "BI 054 Kanaka Rajan: How Do We Switch Behaviors? | Brain Inspired" . Retrieved 2020-06-10.
  18. "Deisseroth Lab, Stanford University". web.stanford.edu. Retrieved 2020-06-10.
  19. Andalman, Aaron S.; Burns, Vanessa M.; Lovett-Barron, Matthew; Broxton, Michael; Poole, Ben; Yang, Samuel J.; Grosenick, Logan; Lerner, Talia N.; Chen, Ritchie; Benster, Tyler; Mourrain, Philippe; Levoy, Marc; Rajan, Kanaka; Deisseroth, Karl (May 2019). "Neuronal Dynamics Regulating Brain and Behavioral State Transitions". Cell. 177 (4): 970–985.e20. doi: 10.1016/j.cell.2019.02.037 . PMC   6726130 . PMID   31031000.
  20. "Tracking Information Across the Brain". Simons Foundation. 2020-05-28. Retrieved 2020-06-10.
  21. "Announcements | NSF - National Science Foundation". www.nsf.gov. Retrieved 2020-06-10.
  22. NIH BRAIN Initiative "Multi-region 'Network of Networks' Recurrent Neural Network Models of Adaptive and Maladaptive Learning" Research Grants.
  23. "Computational Neuroscience Working Group | Interagency Modeling and Analysis Group". www.imagwiki.nibib.nih.gov. Retrieved 2020-06-11.
  24. "Home | Interagency Modeling and Analysis Group". www.imagwiki.nibib.nih.gov. Retrieved 2020-06-11.
  25. "Kanaka Rajan | Icahn School of Medicine". Icahn School of Medicine at Mount Sinai. Retrieved 2022-11-14.
  26. "Allen Institute announces 2021 Next Generation Leaders". Allen Institute. 2021-11-08. Archived from the original on 2021-11-08. Retrieved 2021-11-09.
  27. Twitter. https://twitter.com/sinaibrain/status/1428100013444456449. Retrieved 2021-08-19.
  28. 1 2 "Funding". Rajan Lab - Brain Research & AI in NY. Retrieved 2020-06-10.
  29. "FBI Newsletter – Spring 2020". Issuu. 2 April 2020. Retrieved 2020-06-10.
  30. "Icahn School of Medicine at Mount Sinai". sloan.org. Retrieved 2020-06-10.
  31. "Two Mount Sinai Neuroscientists Named 2019 Sloan Research Fellows | Mount Sinai - New York". Mount Sinai Health System. Retrieved 2020-06-10.
  32. "Biophysics Theory Postdoc Kanaka Rajan receives Scholar Award from McDonnell Foundation | Neuroscience". pni.princeton.edu. Archived from the original on 2021-09-08. Retrieved 2020-06-10.
  33. "Sloan Research Fellowship", Wikipedia, 2020-04-09, retrieved 2020-06-10