Predictive coding

In neuroscience, predictive coding (also known as predictive processing) is a theory of brain function which postulates that the brain is constantly generating and updating a "mental model" of the environment. According to the theory, this mental model is used to predict input signals from the senses, which are then compared with the actual input signals from those senses. Predictive coding is a member of a wider set of theories that follow the Bayesian brain hypothesis.

Origins

Theoretical ancestors to predictive coding date back as early as 1860 with Helmholtz's concept of unconscious inference. [1] Unconscious inference refers to the idea that the human brain fills in visual information to make sense of a scene. For example, if something is relatively smaller than another object in the visual field, the brain uses that information as a likely cue of depth, such that the perceiver ultimately (and involuntarily) experiences depth. The understanding of perception as the interaction between sensory stimuli (bottom-up) and conceptual knowledge (top-down) continued to be established by Jerome Bruner who, starting in the 1940s, studied the ways in which needs, motivations and expectations influence perception, research that came to be known as 'New Look' psychology. In 1981, McClelland and Rumelhart examined the interaction between processing features (lines and contours), which form letters, which in turn form words. [2] While such features feed forward to suggest letters and words, they found that people identified letters faster when the letters were situated in the context of a word than when they appeared in a non-word without semantic context. McClelland and Rumelhart's parallel processing model describes perception as the meeting of top-down (conceptual) and bottom-up (sensory) elements.

In the late 1990s, the idea of top-down and bottom-up processing was translated into a computational model of vision by Rao and Ballard. [3] Their paper demonstrated that there could be a generative model of a scene (top-down processing), which would receive feedback via error signals (how much the visual input varied from the prediction), which would subsequently lead to updating the prediction. The computational model was able to replicate well-established receptive field effects, as well as less understood extra-classical receptive field effects such as end-stopping.

In 2004, Rick Grush proposed a model of neural perceptual processing according to which the brain constantly generates predictions based on a generative model (what Grush called an ‘emulator’) and compares those predictions to the actual sensory input. [4] The difference, or ‘sensory residual’, would then be used to update the model so as to produce a more accurate estimate of the perceived domain. On Grush’s account, the top-down and bottom-up signals would be combined in a way sensitive to the expected noise (i.e., uncertainty) in the bottom-up signal, so that in situations in which the sensory signal was known to be less trustworthy, the top-down prediction would be given greater weight, and vice versa. The emulation framework was also shown to be hierarchical, with modality-specific emulators providing top-down expectations for sensory signals and higher-level emulators providing expectations about the distal causes of those signals. Grush applied the theory to visual perception, visual and motor imagery, language, and theory-of-mind phenomena.

General framework

Conceptual schematic of predictive coding with two levels

Predictive coding was initially developed as a model of the sensory system, in which the brain solves the problem of modelling the distal causes of sensory input through a version of Bayesian inference. It assumes that the brain maintains active internal representations of those distal causes, which enable it to predict the sensory inputs. [5] A comparison between predictions and sensory input yields a difference measure (e.g. prediction error, free energy, or surprise) which, if sufficiently large relative to the expected statistical noise, causes the internal model to update so that it better predicts sensory input in the future.

If, instead, the model accurately predicts driving sensory signals, activity at higher levels cancels out activity at lower levels, and the internal model remains unchanged. Thus, predictive coding inverts the conventional view of perception as a mostly bottom-up process, suggesting that it is largely constrained by prior predictions, where signals from the external world only shape perception to the extent that they are propagated up the cortical hierarchy in the form of prediction error.

Prediction errors can be used not only for inferring distal causes, but also for learning them via neural plasticity. [3] Here the idea is that the representations learned by cortical neurons reflect the statistical regularities in the sensory data. This idea is also present in many other theories of neural learning, such as sparse coding; the central difference is that predictive coding learns not only the connections to sensory inputs (i.e., the receptive field) but also the top-down predictive connections from higher-level representations. This makes predictive coding similar to other models of hierarchical learning, such as Helmholtz machines and deep belief networks, which however employ different learning algorithms. Thus, the dual use of prediction errors for both inference and learning is one of the defining features of predictive coding. [6]
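This dual use of prediction errors for inference and learning can be illustrated with a small numerical sketch. The following Python code is a minimal, illustrative single-level linear model in the spirit of Rao and Ballard's scheme, not an implementation from any of the cited papers; all dimensions, learning rates and variable names are assumptions chosen for the example. A latent representation r generates a top-down prediction W·r of the input; the prediction error first updates r (inference) and then, more slowly, the generative weights W (learning).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8-dimensional inputs generated from 2 underlying "causes" plus noise.
true_W = rng.normal(size=(8, 2))
X = true_W @ rng.normal(size=(2, 500)) + 0.05 * rng.normal(size=(8, 500))

W = rng.normal(scale=0.1, size=(8, 2))      # top-down generative weights (learned)
lr_r, lr_W, n_inference_steps = 0.05, 0.01, 100

for epoch in range(20):
    total_error = 0.0
    for x in X.T:
        r = np.zeros(2)                      # latent representation for this input
        for _ in range(n_inference_steps):   # inference: adjust r to reduce the error
            error = x - W @ r                # prediction error (input minus top-down prediction)
            r += lr_r * (W.T @ error)
        error = x - W @ r
        W += lr_W * np.outer(error, r)       # learning: the same error updates the weights
        total_error += np.sum(error ** 2)
    print(f"epoch {epoch}: summed squared prediction error = {total_error:.2f}")
```

In this toy setting the summed squared prediction error typically decreases over epochs as the weights come to reflect the statistical regularities of the data.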

Precision weighting

The precision of incoming sensory inputs is their reliability (formally, the inverse variance of the signal), which depends on signal noise and other factors. Estimates of precision are crucial for effectively minimizing prediction error, because they allow sensory inputs and predictions to be weighted according to their reliability. [7] For instance, the noise in the visual signal varies between dawn and dusk, such that greater conditional confidence is assigned to sensory prediction errors in broad daylight than at nightfall. [8] Similar approaches are successfully used in other algorithms performing Bayesian inference, e.g., Bayesian filtering in the Kalman filter.
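As a hedged sketch of how such weighting can work in practice, the code below combines a top-down prediction with a noisy sensory sample, each weighted by its precision (inverse variance), in the same spirit as a Kalman filter update. The function name and all numbers are illustrative assumptions, not quantities from the cited work.

```python
import numpy as np

def precision_weighted_estimate(prior_mean, prior_var, observation, obs_var):
    """Combine a top-down prediction and a sensory sample, each weighted by its precision."""
    prior_precision = 1.0 / prior_var
    obs_precision = 1.0 / obs_var            # low sensory noise -> high precision -> more weight
    posterior_precision = prior_precision + obs_precision
    posterior_mean = (prior_precision * prior_mean
                      + obs_precision * observation) / posterior_precision
    return posterior_mean, 1.0 / posterior_precision

# The same sensory sample is trusted more in "daylight" (low noise) than at "nightfall" (high noise).
prediction, prediction_var, sample = 10.0, 4.0, 14.0
print(precision_weighted_estimate(prediction, prediction_var, sample, obs_var=0.5))  # daylight
print(precision_weighted_estimate(prediction, prediction_var, sample, obs_var=8.0))  # nightfall
```

With low observation noise the estimate is pulled close to the sensory sample, whereas with high noise it stays close to the prior prediction.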

It has also been proposed that such weighting of prediction errors in proportion to their estimated precision is, in essence, attention, [9] and that the process of devoting attention may be neurobiologically accomplished by ascending reticular activating systems (ARAS) optimizing the “gain” of prediction error units. However, it has also been argued that precision weighting can only explain "endogenous spatial attention", but not other forms of attention. [10]

Active inference

The same principle of prediction error minimization has been used to provide an account of behavior in which motor actions are not commands but descending proprioceptive predictions. In this scheme of active inference, classical reflex arcs are coordinated so as to selectively sample sensory input in ways that better fulfill predictions, thereby minimizing proprioceptive prediction errors. [9] Indeed, Adams et al. (2013) review evidence suggesting that this view of hierarchical predictive coding in the motor system provides a principled and neurally plausible framework for explaining the agranular organization of the motor cortex. [11] This view suggests that “perceptual and motor systems should not be regarded as separate but instead as a single active inference machine that tries to predict its sensory input in all domains: visual, auditory, somatosensory, interoceptive and, in the case of the motor system, proprioceptive." [11]
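A deliberately simplified caricature of this idea is sketched below: rather than revising the model, a reflex-like loop changes the body state until proprioceptive input matches the descending prediction, so the prediction error is minimized by action. This is only a toy illustration of the concept, not the active inference formalism of the cited work; all quantities are invented for the example.

```python
# Illustrative caricature of active inference: instead of updating the internal model,
# the "reflex arc" changes the body state so that proprioceptive input comes to match
# the descending prediction.
predicted_joint_angle = 30.0     # descending proprioceptive prediction (degrees)
actual_joint_angle = 10.0        # current proprioceptive input
gain = 0.3                       # reflex gain applied to the prediction error

for step in range(20):
    prediction_error = predicted_joint_angle - actual_joint_angle
    actual_joint_angle += gain * prediction_error   # action moves the limb, not the model
    print(f"step {step}: angle = {actual_joint_angle:.2f}, error = {prediction_error:.2f}")
```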

Neural theory in predictive coding

Much of the early work that applied a predictive coding framework to neural mechanisms came from sensory processing, particularly in the visual cortex. [3] [12] These theories assume that the cortical architecture can be divided into hierarchically stacked levels, which correspond to different cortical regions. Every level is thought to house (at least) two types of neurons: "prediction neurons", which aim to predict the bottom-up inputs to the current level, and "error neurons", which signal the difference between input and prediction. Prediction neurons are thought to be mainly deep (infragranular) pyramidal neurons and error neurons mainly superficial pyramidal neurons, while interneurons take on various supporting functions. [12]

Within cortical regions, there is evidence that different cortical layers may facilitate the integration of feedforward and feedback projections across hierarchies. [12] These cortical layers have therefore been assumed to be central in the computation of predictions and prediction errors, with the basic unit being a cortical column. [12] [13] A common view is that prediction errors are computed by superficial pyramidal neurons and passed up the hierarchy via feedforward connections, while predictions are encoded by deep pyramidal neurons and passed down via feedback connections. [12] [14]

However, thus far there is no consensus on how the brain most likely implements predictive coding. Some theories, for example, propose that supragranular layers contain not only error, but also prediction neurons. [12] It is also still debated through which mechanisms error neurons might compute the prediction error. [15] Since prediction errors can be both negative and positive, but biological neurons can only show positive activity, more complex error coding schemes are required. To circumvent this problem, more recent theories have proposed that error computation might take place in neural dendrites instead. [16] [17] The neural architecture and computations proposed in these dendritic theories are similar to what has been proposed in Hierarchical temporal memory theory of cortex.
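One scheme sometimes discussed for the sign problem is to represent each signed prediction error with two rectified, non-negative channels, one signalling "input greater than prediction" and the other "prediction greater than input". The sketch below illustrates such a split; it is an illustrative coding scheme, not a claim about which of the proposed solutions the brain actually uses, and the variable names are assumptions.

```python
import numpy as np

def split_error(input_signal, prediction):
    """Represent a signed prediction error with two non-negative 'populations'."""
    error = input_signal - prediction
    positive_error = np.maximum(error, 0.0)    # active when input exceeds the prediction
    negative_error = np.maximum(-error, 0.0)   # active when the prediction exceeds the input
    return positive_error, negative_error

x = np.array([1.0, 0.2, 0.7])
pred = np.array([0.5, 0.6, 0.7])
pos, neg = split_error(x, pred)
print(pos, neg)       # [0.5 0.  0. ] [0.  0.4 0. ]
print(pos - neg)      # recovers the signed error: [ 0.5 -0.4  0. ]
```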

Applying predictive coding

Perception

The empirical evidence for predictive coding is most robust for perceptual processing. As early as 1999, Rao and Ballard proposed a hierarchical visual processing model in which higher-order visual cortical areas send down predictions and feedforward connections carry the residual errors between the predictions and the actual lower-level activities. [3] According to this model, each level in the hierarchical network (except the lowest level, which represents the image) attempts to predict the responses at the next lower level via feedback connections, and the error signal is used to correct the estimate of the input signal at each level concurrently. [3] Emberson et al. established top-down modulation in infants using a cross-modal audiovisual omission paradigm, showing that even infant brains hold expectations about future sensory input that are carried down to sensory cortices, and that they are capable of expectation-based feedback. [18] Functional near-infrared spectroscopy (fNIRS) data showed that the infant occipital cortex responded to an unexpected visual omission (with no visual information input) but not to an expected visual omission. These results suggest that in a hierarchically organized perceptual system, higher-order neurons send predictions down to lower-order neurons, which in turn send prediction error signals back up.
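To make the hierarchical arrangement concrete, the sketch below extends the single-level example from the General framework section to two levels: each level predicts the activity of the level below through feedback weights, and only the residual errors influence the level above. The weights are fixed and random purely for illustration, and the whole example is an assumption-laden schematic rather than Rao and Ballard's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed, random generative (feedback) weights, purely for illustration.
W1 = rng.normal(scale=0.3, size=(16, 8))   # level-1 representation (8) predicts the image (16)
W2 = rng.normal(scale=0.3, size=(8, 4))    # level-2 representation (4) predicts level 1 (8)

image = rng.normal(size=16)                # "sensory" input at the lowest level
r1, r2, lr = np.zeros(8), np.zeros(4), 0.05

for step in range(200):
    e0 = image - W1 @ r1                   # error between the image and level 1's prediction
    e1 = r1 - W2 @ r2                      # error between level 1 and level 2's prediction
    r1 += lr * (W1.T @ e0 - e1)            # level 1 is driven by the error below and constrained from above
    r2 += lr * (W2.T @ e1)                 # level 2 only receives the residual error from level 1

print("remaining low-level prediction error:", np.linalg.norm(image - W1 @ r1))
```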

Interoception

There have been several competing models for the role of predictive coding in interoception.

In 2013, Anil Seth proposed that our subjective feeling states, otherwise known as emotions, are generated by predictive models that are actively built out of causal interoceptive appraisals. [19] In relation to how we attribute internal states of others to causes, Sasha Ondobaka, James Kilner, and Karl Friston (2015) proposed that the free energy principle requires the brain to produce a continuous series of predictions with the goal of reducing the amount of prediction error that manifests as “free energy”. [20] These errors are then used to model anticipatory information about what the state of the outside world will be and attributions of causes of that world state, including understanding of causes of others’ behavior. This is especially necessary because, to create these attributions, our multimodal sensory systems need interoceptive predictions to organize themselves. Therefore, Ondobaka posits that predictive coding is key to understanding other people's internal states.

In 2015, Lisa Feldman Barrett and W. Kyle Simmons proposed the Embodied Predictive Interoception Coding model, a framework that unifies Bayesian active inference principles with a physiological framework of corticocortical connections. [21] Using this model, they posited that agranular visceromotor cortices are responsible for generating predictions about interoception, thus, defining the experience of interoception.

Contrary to the inductive notion that emotion categories are biologically distinct, Barrett proposed later the theory of constructed emotion, which is the account that a biological emotion category is constructed based on a conceptual category—the accumulation of instances sharing a goal. [22] [23] In a predictive coding model, Barrett hypothesizes that, in interoception, our brains regulate our bodies by activating "embodied simulations" (full-bodied representations of sensory experience) to anticipate what our brains predict that the external world will throw at us sensorially and how we will respond to it with action. These simulations are either preserved if, based on our brain's predictions, they prepare us well for what actually subsequently occurs in the external world, or they, and our predictions, are adjusted to compensate for their error in comparison to what actually occurs in the external world and how well-prepared we were for it. Then, in a trial-error-adjust process, our bodies find similarities in goals among certain successful anticipatory simulations and group them together under conceptual categories. Every time a new experience arises, our brains use this past trial-error-adjust history to match the new experience to one of the categories of accumulated corrected simulations that it shares the most similarity with. Then, they apply the corrected simulation of that category to the new experience in the hopes of preparing our bodies for the rest of the experience. If it does not, the prediction, the simulation, and perhaps the boundaries of the conceptual category are revised in the hopes of higher accuracy next time, and the process continues. Barrett hypothesizes that, when prediction error for a certain category of simulations for x-like experiences is minimized, what results is a correction-informed simulation that the body will reenact for every x-like experience, resulting in a correction-informed full-bodied representation of sensory experience—an emotion. In this sense, Barrett proposes that we construct our emotions because the conceptual category framework our brains use to compare new experiences, and to pick the appropriate predictive sensory simulation to activate, is built on the go.

Computer science

With the rising popularity of representation learning, the theory has also been actively pursued and applied in machine learning and related fields. [24] [25] [26]

Challenges

One of the biggest challenges in testing predictive coding has been the lack of precision about exactly how prediction error minimization works. [27] In some studies, an increase in the BOLD signal has been interpreted as an error signal, while in others it has been interpreted as indicating changes in the input representation. [27] A crucial question that needs to be addressed is what exactly constitutes an error signal and how it is computed at each level of information processing. [12] Another challenge is predictive coding's computational tractability. According to Kwisthout and van Rooij, the subcomputation at each level of the predictive coding framework potentially hides a computationally intractable problem, amounting to "intractable hurdles" that computational modelers have yet to overcome. [28]

Future research could focus on clarifying the neurophysiological mechanisms and computational models of predictive coding.

See also

Bayesian approaches to brain function
Computational neuroscience
Cortical column
Dehaene–Changeux model
Efficient coding hypothesis
Floris de Lange
Free energy principle
Gain field encoding
Hierarchical temporal memory
Interoception
Memory-prediction framework
Models of neural computation
Neural binding
Neural coding
Neural oscillation
Neurorobotics
On Intelligence
Thalamocortical radiations
Theory of constructed emotion

References

  1. "Helmholtz's Treatise on Physiological Optics - Free". 2018-03-20. Archived from the original on 20 March 2018. Retrieved 2022-01-05.
  2. McClelland, J. L. & Rumelhart, D. E. (1981). "An interactive activation model of context effects in letter perception: I. An account of basic findings". Psychological Review. 88 (5): 375–407. doi:10.1037/0033-295X.88.5.375.
  3. 1 2 3 4 5 Rao, Rajesh P. N.; Ballard, Dana H. (1999). "Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects". Nature Neuroscience. 2 (1): 79–87. doi:10.1038/4580. PMID   10195184.
  4. Grush, Rick (2004). "The emulation theory of representation: Motor control, imagery, and perception". Behavioral and Brain Sciences. 27 (3): 377–396. doi:10.1017/S0140525X04000093. ISSN   0140-525X. PMID   15736871. S2CID   514252.
  5. Clark, Andy (2013). "Whatever next? Predictive brains, situated agents, and the future of cognitive science". Behavioral and Brain Sciences. 36 (3): 181–204. doi: 10.1017/S0140525X12000477 . PMID   23663408. S2CID   220661158.
  6. Friston, Karl (August 2018). "Does predictive coding have a future?". Nature Neuroscience. 21 (8): 1019–1021. doi:10.1038/s41593-018-0200-7. ISSN   1546-1726.
  7. Friston, Karl J.; Feldman, Harriet (2010). "Attention, Uncertainty, and Free-Energy". Frontiers in Human Neuroscience. 4: 215. doi: 10.3389/fnhum.2010.00215 . PMC   3001758 . PMID   21160551.
  8. Hohwy, Jakob (2012). "Attention and Conscious Perception in the Hypothesis Testing Brain". Frontiers in Psychology. 3: 96. doi: 10.3389/fpsyg.2012.00096 . PMC   3317264 . PMID   22485102.
  9. 1 2 Friston, Karl (2009). "The free-energy principle: A rough guide to the brain?". Trends in Cognitive Sciences. 13 (7): 293–301. doi:10.1016/j.tics.2009.04.005. PMID   19559644. S2CID   9139776.
  10. Ransom M. & Fazelpour S (2015). Three Problems for the Predictive Coding Theory of Attention. http://mindsonline.philosophyofbrains.com/2015/session4/three-problems-for-the-predictive-coding-theory-of-attention/
  11. 1 2 Adams, Rick A.; Shipp, Stewart; Friston, Karl J. (2013). "Predictions not commands: Active inference in the motor system". Brain Structure and Function. 218 (3): 611–643. doi:10.1007/s00429-012-0475-5. PMC   3637647 . PMID   23129312.
  12. 1 2 3 4 5 6 7 Bastos, Andre M.; Usrey, W. Martin; Adams, Rick A.; Mangun, George R.; Fries, Pascal; Friston, Karl J. (2012). "Canonical Microcircuits for Predictive Coding". Neuron. 76 (4): 695–711. doi:10.1016/j.neuron.2012.10.038. PMC   3777738 . PMID   23177956.
  13. Bennett, Max (2020). "An Attempt at a Unified Theory of the Neocortical Microcircuit in Sensory Cortex". Frontiers in Neural Circuits. 14: 40. doi: 10.3389/fncir.2020.00040 . PMC   7416357 . PMID   32848632.
  14. Keller, Georg B.; Mrsic-Flogel, Thomas D. (October 2018). "Predictive Processing: A Canonical Cortical Computation". Neuron. 100 (2): 424–435. doi:10.1016/j.neuron.2018.10.003. PMC   6400266 .
  15. Millidge, Beren; Seth, Anil; Buckley, Christopher (2022-01-19). "Predictive Coding: a Theoretical and Experimental Review". arXiv: 2107.12979 [cs.AI].
  16. Mikulasch, Fabian A.; Rudelt, Lucas; Wibral, Michael; Priesemann, Viola (January 2023). "Where is the error? Hierarchical predictive coding through dendritic error computation". Trends in Neurosciences. 46 (1): 45–59. arXiv: 2205.05303 . doi:10.1016/j.tins.2022.09.007.
  17. Whittington, James C.R.; Bogacz, Rafal (March 2019). "Theories of Error Back-Propagation in the Brain". Trends in Cognitive Sciences. 23 (3): 235–250. doi:10.1016/j.tics.2018.12.005. PMC   6382460 .
  18. Emberson, Lauren L.; Richards, John E.; Aslin, Richard N. (2015). "Top-down modulation in the infant brain: Learning-induced expectations rapidly affect the sensory cortex at 6 months". Proceedings of the National Academy of Sciences. 112 (31): 9585–9590. Bibcode:2015PNAS..112.9585E. doi: 10.1073/pnas.1510343112 . PMC   4534272 . PMID   26195772.
  19. Seth, Anil K. (2013). "Interoceptive inference, emotion, and the embodied self". Trends in Cognitive Sciences. 17 (11): 565–573. doi: 10.1016/j.tics.2013.09.007 . PMID   24126130. S2CID   3048221.
  20. Ondobaka, Sasha; Kilner, James; Friston, Karl (2017). "The role of interoceptive inference in theory of mind". Brain and Cognition. 112: 64–68. doi:10.1016/j.bandc.2015.08.002. PMC   5312780 . PMID   26275633.
  21. Barrett, Lisa Feldman; Simmons, W. Kyle (2015). "Interoceptive predictions in the brain". Nature Reviews Neuroscience. 16 (7): 419–429. doi:10.1038/nrn3950. PMC   4731102 . PMID   26016744.
  22. Barrett, Lisa Feldman (2016). "The theory of constructed emotion: An active inference account of interoception and categorization". Social Cognitive and Affective Neuroscience. 12 (1): 1–23. doi:10.1093/scan/nsw154. PMC   5390700 . PMID   27798257.
  23. Barrett, L.F. (2017). How emotions are made: The secret life of the brain. New York: Houghton Mifflin Harcourt. ISBN   0544133315
  24. Millidge, Beren; Salvatori, Tommaso; Song, Yuhang; Bogacz, Rafal; Lukasiewicz, Thomas (2022-02-18). "Predictive Coding: Towards a Future of Deep Learning beyond Backpropagation?". arXiv: 2202.09467 [cs.NE].
  25. Ororbia, Alexander G.; Kifer, Daniel (2022-04-19). "The Neural Coding Framework for Learning Generative Models". Nature Communications. 13 (1): 2064. doi: 10.1038/s41467-022-29632-7 . PMC   9018730 . PMID   35440589.
  26. Hinton, Geoffrey E. (2007). "Learning multiple layers of representation". Trends in Cognitive Sciences. 11 (10): 428–434. doi:10.1016/j.tics.2007.09.004. PMID   17921042. S2CID   15066318.
  27. 1 2 Kogo, Naoki; Trengove, Chris (2015). "Is predictive coding theory articulated enough to be testable?". Frontiers in Computational Neuroscience. 9: 111. doi: 10.3389/fncom.2015.00111 . PMC   4561670 . PMID   26441621.
  28. Kwisthout, Johan; Van Rooij, Iris (2020). "Computational Resource Demands of a Predictive Bayesian Brain". Computational Brain & Behavior. 3 (2): 174–188. doi:10.1007/s42113-019-00032-3. hdl: 2066/218854 . S2CID   196045530.