| Rosalyn J. Moran | |
|---|---|
| Alma mater | University College Dublin |
| Scientific career | |
| Institutions | University of Bristol; Virginia Tech Carilion School of Medicine and Research Institute; King's College London; University College London |
| Thesis | (2004) |
Rosalyn J. Moran is an Irish and British neuroscientist and computational psychiatrist. She is deputy director of the King's College London Institute for Artificial Intelligence. Her research seeks to understand neural algorithms through the study of brain connectivity.
Moran grew up in Ireland, where she studied applied mathematics at the local boys' school.[1] She was an undergraduate and postgraduate student in electronic engineering at University College Dublin, where her doctoral research applied information theory to biomedical signal processing.[2] During her PhD she met a scientist who was combining electrical and chemical analyses of schizophrenia, which prompted her to pursue a career in neuroscience.[1] She was a postdoctoral researcher at University College London, supported by the Wellcome Centre for Human Neuroimaging.[citation needed]
Moran moved to the Virginia Tech Carilion School of Medicine and Research Institute in 2012,[3] where she spent four years as an assistant professor. She returned to the United Kingdom in 2016 and joined the University of Bristol as a senior lecturer.[4] In 2018, she was made associate professor at King's College London. She became deputy director of the King's Institute for Artificial Intelligence in 2022.[citation needed]
Moran's research combines artificial intelligence, Bayesian inference and experimental neurobiology to understand brain connectivity and neural processing.[5] She is interested in how neurotransmitters (e.g. noradrenaline and serotonin) are involved in decision making. She uses deep networks to model disease, with a focus on neurodegenerative diseases and schizophrenia.[6]
Moran has investigated the free energy principle, a general-purpose model of the brain and human behaviour. The free energy principle is based on surprise minimisation: brains work to minimise free energy. Moran has argued that the free energy principle offers an alternative rationale for generative artificial intelligence.[7]
Computational neuroscience is a branch of neuroscience which employs mathematics, computer science, theoretical analysis and abstractions of the brain to understand the principles that govern the development, structure, physiology and cognitive abilities of the nervous system.
Functional neuroimaging is the use of neuroimaging technology to measure an aspect of brain function, often with a view to understanding the relationship between activity in certain brain areas and specific mental functions. It is primarily used as a research tool in cognitive neuroscience, cognitive psychology, neuropsychology, and social neuroscience.
Functional integration is the study of how brain regions work together to process information and effect responses. Though functional integration frequently relies on anatomic knowledge of the connections between brain areas, the emphasis is on how large clusters of neurons – numbering in the thousands or millions – fire together under various stimuli. The large datasets required for such a whole-scale picture of brain function have motivated the development of several novel and general methods for the statistical analysis of interdependence, such as dynamic causal modelling and statistical parametric mapping. These datasets are typically gathered in human subjects by non-invasive methods such as EEG/MEG, fMRI, or PET. The results can be of clinical value by helping to identify the regions responsible for psychiatric disorders, as well as to assess how different activities or lifestyles affect the functioning of the brain.
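The kind of interdependence analysis described above can be illustrated with a toy functional-connectivity computation. This is a minimal sketch with simulated time series, not any method actually used in the studies discussed here:

```python
import numpy as np

# Simulated time series for three "regions"; region b is coupled to
# region a, while region c is independent (all values are synthetic).
rng = np.random.default_rng(1)
a = rng.normal(size=200)
b = 0.9 * a + 0.3 * rng.normal(size=200)   # coupled to a
c = rng.normal(size=200)                   # independent noise

# A simple interdependence measure: the 3x3 correlation matrix.
fc = np.corrcoef([a, b, c])
```

Real analyses of functional integration use far richer models (e.g. dynamic causal modelling), but the correlation matrix above captures the basic idea of quantifying statistical dependence between regional signals.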
Neurophilosophy or the philosophy of neuroscience is the interdisciplinary study of neuroscience and philosophy that explores the relevance of neuroscientific studies to the arguments traditionally categorized as philosophy of mind. The philosophy of neuroscience attempts to clarify neuroscientific methods and results using the conceptual rigor and methods of philosophy of science.
Pendleton Read Montague, Jr. is an American neuroscientist and popular science author. He is the director of the Human Neuroimaging Lab and Computational Psychiatry Unit at the Fralin Biomedical Research Institute at VTC in Roanoke, Virginia, where he also holds the title of the inaugural Virginia Tech Carilion Vernon Mountcastle Research Professor. Montague is also a professor in the department of physics at Virginia Tech in Blacksburg, Virginia and professor of Psychiatry and Behavioral Medicine at Virginia Tech Carilion School of Medicine.
Connectomics is the production and study of connectomes: comprehensive maps of connections within an organism's nervous system. More generally, it can be thought of as the study of neuronal wiring diagrams with a focus on how structural connectivity, individual synapses, cellular morphology, and cellular ultrastructure contribute to the makeup of a network. The nervous system is a network made of billions of connections and these connections are responsible for our thoughts, emotions, actions, memories, function and dysfunction. Therefore, the study of connectomics aims to advance our understanding of mental health and cognition by understanding how cells in the nervous system are connected and communicate. Because these structures are extremely complex, methods within this field use a high-throughput application of functional and structural neural imaging, most commonly magnetic resonance imaging (MRI), electron microscopy, and histological techniques in order to increase the speed, efficiency, and resolution of these nervous system maps. To date, dozens of large-scale datasets have been collected spanning the nervous system, including various areas of cortex, cerebellum, the retina, the peripheral nervous system and neuromuscular junctions.
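As a minimal illustration of a wiring diagram, a directed network can be represented as an adjacency matrix; the four-neuron circuit below is entirely made up for illustration:

```python
import numpy as np

# Toy wiring diagram: adjacency matrix of a 4-neuron directed network
# (entry [i, j] = 1 means a synapse from neuron i to neuron j).
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]])

out_degree = A.sum(axis=1)   # synapses each neuron sends
in_degree = A.sum(axis=0)    # synapses each neuron receives

# Counts of length-2 paths between neurons are given by A @ A.
paths2 = A @ A
```

Real connectomes contain millions to billions of such entries, which is why the field leans on high-throughput imaging and automated reconstruction.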
Bayesian approaches to brain function investigate the capacity of the nervous system to operate in situations of uncertainty in a fashion that is close to the optimal prescribed by Bayesian statistics. This term is used in behavioural sciences and neuroscience and studies associated with this term often strive to explain the brain's cognitive abilities based on statistical principles. It is frequently assumed that the nervous system maintains internal probabilistic models that are updated by neural processing of sensory information using methods approximating those of Bayesian probability.
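A standard textbook instance of such Bayes-optimal behaviour is precision-weighted combination of two noisy sensory cues. The numbers below are illustrative, not experimental data:

```python
# Two noisy Gaussian "sensory cues" about the same hidden quantity.
mu_visual, var_visual = 2.0, 1.0   # visual estimate and its variance
mu_haptic, var_haptic = 4.0, 4.0   # haptic estimate and its variance

# Bayes-optimal fusion of two Gaussians: a precision-weighted average.
w_visual = (1 / var_visual) / (1 / var_visual + 1 / var_haptic)
mu_combined = w_visual * mu_visual + (1 - w_visual) * mu_haptic
var_combined = 1 / (1 / var_visual + 1 / var_haptic)

print(mu_combined, var_combined)   # 2.4 0.8
```

The more reliable (higher-precision) cue dominates the combined estimate, and the fused variance is smaller than either cue's alone, which is the behavioural signature these studies look for.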
Karl John Friston FRS FMedSci FRSB is a British neuroscientist and theoretician at University College London. He is an authority on brain imaging and theoretical neuroscience, especially the use of physics-inspired statistical methods to model neuroimaging data and other random dynamical systems. Friston is a key architect of the free energy principle and active inference. In imaging neuroscience he is best known for statistical parametric mapping and dynamic causal modelling. In October 2022, he joined VERSES Inc, a California-based cognitive computing company focusing on artificial intelligence designed using the principles of active inference, as Chief Scientist.
The free energy principle is a theoretical framework suggesting that the brain reduces surprise or uncertainty by making predictions based on internal models and updating them using sensory input. It highlights the brain's objective of aligning its internal model with the external world to enhance prediction accuracy. This principle integrates Bayesian inference with active inference, where actions are guided by predictions and sensory feedback refines them. It has wide-ranging implications for comprehending brain function, perception, and action.
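One common toy formulation of this idea treats perception as gradient descent on a free-energy-like objective. The sketch below assumes a single Gaussian hidden cause with a Gaussian likelihood; the names and numbers are illustrative, not taken from any published model:

```python
# "Perception" as gradient descent on a free-energy-like objective
# F(mu) = (y - mu)^2 / (2 var_y) + (mu - prior_mean)^2 / (2 var_prior),
# i.e. prediction error plus deviation from the prior.
y = 2.0                      # sensory observation
prior_mean = 0.0             # prior belief about the hidden cause
var_y, var_prior = 1.0, 1.0  # likelihood and prior variances

mu = 0.0                     # current belief about the hidden cause
for _ in range(200):
    grad = -(y - mu) / var_y + (mu - prior_mean) / var_prior
    mu -= 0.1 * grad         # descend the free-energy gradient

# The belief settles at the precision-weighted posterior mean (here 1.0),
# i.e. minimising this objective implements Bayesian inference.
```

Active inference extends the same picture by letting actions, not just beliefs, reduce the objective; that part is omitted here.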
Generalized filtering is a generic Bayesian filtering scheme for nonlinear state-space models. It is based on a variational principle of least action, formulated in generalized coordinates of motion. Note that "generalized coordinates of motion" are related to—but distinct from—generalized coordinates as used in (multibody) dynamical systems analysis. Generalized filtering furnishes posterior densities over hidden states generating observed data using a generalized gradient descent on variational free energy, under the Laplace assumption. Unlike classical filtering, generalized filtering eschews Markovian assumptions about random fluctuations. Furthermore, it operates online, assimilating data to approximate the posterior density over unknown quantities, without the need for a backward pass. Special cases include variational filtering, dynamic expectation maximization and generalized predictive coding.
In neuroscience, predictive coding is a theory of brain function which postulates that the brain is constantly generating and updating a "mental model" of the environment. According to the theory, such a mental model is used to predict input signals from the senses that are then compared with the actual input signals from those senses. Predictive coding is a member of a wider set of theories that follow the Bayesian brain hypothesis.
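The core intuition, that only the unpredicted part of the input needs to be passed on, can be sketched in a few lines (illustrative numbers; the learning rate of 1 is chosen so the model tracks the input exactly):

```python
# Predictive coding sketch: transmit only the prediction error.
signal = [10.0, 10.5, 11.0, 11.2, 11.1]   # incoming sensory samples
prediction = 0.0                          # internal model's current guess
errors = []                               # what gets passed up the hierarchy

for s in signal:
    e = s - prediction    # only the unpredicted part is "surprising"
    errors.append(e)
    prediction += e       # update the internal model toward the input
```

After the first sample, the transmitted errors are small residuals rather than the full signal, which is the efficiency argument often made for predictive coding.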
Dynamic causal modeling (DCM) is a framework for specifying models, fitting them to data and comparing their evidence using Bayesian model comparison. It uses nonlinear state-space models in continuous time, specified using stochastic or ordinary differential equations. DCM was initially developed for testing hypotheses about neural dynamics. In this setting, differential equations describe the interaction of neural populations, which directly or indirectly give rise to functional neuroimaging data, e.g. functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG) or electroencephalography (EEG). Parameters in these models quantify the directed influences or effective connectivity among neuronal populations, which are estimated from the data using Bayesian statistical methods.
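A stripped-down version of such a state-space model, a two-region linear system driven by a boxcar input, can be simulated directly. The connectivity values below are hypothetical, and real DCM involves fitting such models to data with Bayesian methods rather than just simulating them:

```python
import numpy as np

# Toy linear neural state equation dx/dt = A x + C u
# (a simplified cousin of the bilinear model used in DCM for fMRI).
A = np.array([[-1.0, 0.0],
              [ 0.5, -1.0]])   # effective connectivity: region 1 drives region 2
C = np.array([1.0, 0.0])       # external input enters region 1 only

dt, T = 0.01, 1000
x = np.zeros(2)
trace = []
for t in range(T):
    u = 1.0 if t < 500 else 0.0      # boxcar stimulus: on for the first half
    x = x + dt * (A @ x + C * u)     # forward Euler integration step
    trace.append(x.copy())
trace = np.array(trace)
```

Region 2 responds even though the input never reaches it directly; in DCM, recovering the off-diagonal entry of `A` from measured responses is exactly what "estimating effective connectivity" means.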
Bayesian model reduction is a method for computing the evidence and posterior over the parameters of Bayesian models that differ in their priors. A full model is fitted to data using standard approaches. Hypotheses are then tested by defining one or more 'reduced' models with alternative priors, which usually – in the limit – switch off certain parameters. The evidence and parameters of the reduced models can then be computed from the evidence and estimated (posterior) parameters of the full model using Bayesian model reduction. If the priors and posteriors are normally distributed, then there is an analytic solution which can be computed rapidly. This has multiple scientific and engineering applications: these include scoring the evidence for large numbers of models very quickly and facilitating the estimation of hierarchical models.
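In the univariate Gaussian case the reduction really is a few lines of arithmetic. The sketch below checks the analytic reduction against a direct evidence calculation for a toy one-parameter, one-observation model; all numbers are illustrative:

```python
import math

# Toy setting: one parameter theta, one observation y ~ N(theta, 1),
# full prior N(0, 1), reduced ("shrunk") prior N(0, 0.25).
y = 2.0
p0, m0 = 1.0, 0.0        # full prior precision and mean
p0_r, m0_r = 4.0, 0.0    # reduced prior precision and mean

# Full-model posterior (conjugate Gaussian update, unit noise precision).
p = p0 + 1.0
m = (1.0 * y + p0 * m0) / p

# Bayesian model reduction: the reduced posterior and the evidence ratio
# follow from the full posterior and the two priors alone (no refitting).
p_r = p + p0_r - p0
m_r = (p * m + p0_r * m0_r - p0 * m0) / p_r
c = p * m**2 + p0_r * m0_r**2 - p0 * m0**2 - p_r * m_r**2
evidence_ratio_bmr = math.sqrt(p * p0_r / (p0 * p_r)) * math.exp(-0.5 * c)

# Direct check: the marginal likelihood of y under each prior.
def norm_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

evidence_ratio_direct = norm_pdf(y, 0.0, 0.25 + 1.0) / norm_pdf(y, 0.0, 1.0 + 1.0)
```

The two ratios agree exactly, which is the point: once the full model is fitted, alternative priors can be scored without touching the data again.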
Yee-Whye Teh is a professor of statistical machine learning in the Department of Statistics, University of Oxford. Prior to 2012 he was a reader at the Gatsby Charitable Foundation computational neuroscience unit at University College London. His work is primarily in machine learning, artificial intelligence, statistics and computer science.
Irene Mary Carmel Tracey is Vice-Chancellor of the University of Oxford and former Warden of Merton College, Oxford. She is also Professor of Anaesthetic Neuroscience in the Nuffield Department of Clinical Neurosciences and formerly Pro-Vice-Chancellor at the University of Oxford. She is a co-founder of the Oxford Centre for Functional Magnetic Resonance Imaging of the Brain (FMRIB), now the Wellcome Centre for Integrative Neuroimaging. Her team’s research is focused on the neuroscience of pain, specifically pain perception and analgesia as well as how anaesthetics produce altered states of consciousness. Her team uses multidisciplinary approaches including neuroimaging.
Dimitri Van De Ville is a Swiss and Belgian computer scientist and neuroscientist specialized in dynamical and network aspects of brain activity. He is a professor of bioengineering at EPFL and the head of the Medical Image Processing Laboratory at EPFL's School of Engineering.
The manifold hypothesis posits that many high-dimensional data sets that occur in the real world actually lie along low-dimensional latent manifolds inside that high-dimensional space. As a consequence of the manifold hypothesis, many data sets that appear to initially require many variables to describe, can actually be described by a comparatively small number of variables, likened to the local coordinate system of the underlying manifold. It is suggested that this principle underpins the effectiveness of machine learning algorithms in describing high-dimensional data sets by considering a few common features.
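The hypothesis can be illustrated in the linear case: data embedded in many dimensions but generated from few latent coordinates has only a few non-negligible singular values. This is a synthetic sketch, not a claim about any particular dataset:

```python
import numpy as np

# 500 points in 20 dimensions that are secretly generated from only
# 2 latent coordinates, i.e. they lie on a 2-D linear "manifold".
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))   # the low-dimensional description
basis = rng.normal(size=(2, 20))     # embedding into 20-D space
data = latent @ basis

# Singular values of the centred data expose the intrinsic dimension:
# only the first two are (numerically) non-zero.
s = np.linalg.svd(data - data.mean(axis=0), compute_uv=False)
```

Real-world manifolds are typically curved rather than linear, so nonlinear methods are needed in practice, but the rank structure above is the essential phenomenon.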
Martina F. Callaghan is an Irish medical physicist who is the Director of the Wellcome Centre for Human Neuroimaging at the UCL Queen Square Institute of Neurology. Her research considers the development of in-vivo histology using MRI.
Karla Loreen Miller is an American neuroscientist and professor of biomedical engineering at the University of Oxford. Her research investigates the development of neuroimaging techniques, with a particular focus on Magnetic Resonance Imaging (MRI), neuroimaging, diffusion MRI and functional magnetic resonance imaging. She was elected a Fellow of the International Society for Magnetic Resonance in Medicine in 2016.
Alan Anticevic is a Croatian neuroscientist known for his contributions to the fields of cognitive neuroscience, computational psychiatry, and neuroimaging studies of severe psychiatric illnesses.