Author | Jeff Hawkins & Sandra Blakeslee
---|---
Language | English
Subject | Psychology
Publisher | Times Books
Publication date | 2004
Publication place | United States
Media type | Paperback
Pages | 272
ISBN | 0-8050-7456-2
OCLC | 55510125
Dewey Decimal | 612.8/2 22
LC Class | QP376 .H294 2004
On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines is a 2004 book [1] by Jeff Hawkins and Sandra Blakeslee. The book explains Hawkins' memory-prediction framework theory of the brain and describes some of its consequences.
Hawkins' basic idea is that the brain is a mechanism for predicting the future: hierarchical regions of the brain predict their future input sequences, perhaps not always far in the future, but far enough to be of real use to an organism. As such, the brain is a feed-forward hierarchical state machine with special properties that enable it to learn. [1] : 208–210, 222
This state machine actually controls the behavior of the organism. Because it is a feed-forward state machine, it responds in advance to future events predicted from past data.
The hierarchy is capable of memorizing frequently observed sequences of patterns (cognitive modules) and developing invariant representations. Higher levels of the cortical hierarchy predict the future on a longer time scale, or over a wider range of sensory input. Lower levels interpret or control limited domains of experience, or sensory or effector systems. Connections from higher-level states predispose selected transitions in the lower-level state machines.
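The hierarchical sequence memory described above can be illustrated with a toy sketch. This is not an implementation from the book; the bigram memory, the fixed triple-length chunking, and the two-level structure are all simplifying assumptions made for illustration only.

```python
# Illustrative sketch (not from the book): each level of a hierarchy
# memorizes observed transitions and predicts the next input; the
# upper level sees a coarser, slower stream of "names" standing in
# for invariant representations of lower-level sequences.

from collections import defaultdict

class Level:
    """Memorizes bigram transitions and predicts the next symbol."""
    def __init__(self):
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, symbol):
        if self.prev is not None:
            self.transitions[self.prev][symbol] += 1
        self.prev = symbol

    def predict(self):
        if self.prev is None or not self.transitions[self.prev]:
            return None
        counts = self.transitions[self.prev]
        return max(counts, key=counts.get)

# A two-level hierarchy: the lower level sees raw symbols; the upper
# level sees completed triples (an assumed stand-in for learned
# lower-level sequences).
lower, upper = Level(), Level()
stream = list("abcabcabc")
for i, s in enumerate(stream):
    lower.observe(s)
    if i % 3 == 2:                      # every completed triple
        upper.observe(tuple(stream[i - 2:i + 1]))

print(lower.predict())                  # after 'c', predicts 'a'
```

In this toy version, "prediction" is just the most frequent observed successor; the point is only that each level anticipates its own input stream, with higher levels operating on a longer time scale.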
Hebbian learning is part of the framework: learning physically alters neurons and their connections as it takes place. [1] : 48, 164
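The core Hebbian rule ("cells that fire together wire together") can be written in a few lines. The learning rate and the activity values below are arbitrary illustrative choices, not parameters from the book.

```python
# Minimal Hebbian rule sketch: a weight grows when pre- and
# post-synaptic activity coincide. Learning rate is an assumption.

def hebbian_step(w, pre, post, lr=0.1):
    """Return updated weights: dw[i][j] = lr * pre[i] * post[j]."""
    return [[w[i][j] + lr * pre[i] * post[j]
             for j in range(len(post))]
            for i in range(len(pre))]

w = [[0.0, 0.0], [0.0, 0.0]]
pre, post = [1.0, 0.0], [1.0, 1.0]
w = hebbian_step(w, pre, post)
print(w)   # only the row for the active presynaptic cell changes
```

Because the update is the product of pre- and post-synaptic activity, connections from the silent presynaptic cell are left untouched, which is the "physical alteration by coincident activity" the text refers to.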
Vernon Mountcastle's formulation of a cortical column is a basic element in the framework. Hawkins places particular emphasis on the role of the interconnections from peer columns, and the activation of columns as a whole. He strongly implies that a column is the cortex's physical representation of a state in a state machine. [1] : 50, 51, 55
Because Hawkins' motivation is to create intelligent machines, he reasons as an engineer: a failure to find a natural occurrence of some process in his framework does not signal a fault in the memory-prediction framework per se, but merely that nature has performed Hawkins' functional decomposition in a different, unexpected way. For example, for the purposes of the framework, nerve impulses can be taken to form a temporal sequence; phase encoding would be one possible implementation of such a sequence, but these details are immaterial to the framework.
His predictions use the visual system as a prototype for some examples (Predictions 2, 8, 10, and 11); others cite the auditory system (Predictions 1, 3, 4, and 7).
1. In all areas of cortex, Hawkins (2004) predicts "we should find anticipatory cells", cells that fire in anticipation of a sensory event.
2. In primary sensory cortex, Hawkins predicts, for example, that "we should find anticipatory cells in or near V1, at a precise location in the visual field (the scene)". It has been experimentally determined that, after mapping the angular positions of objects in the visual field, there is a one-to-one correspondence between cells in the scene and the angular positions of those objects. Hawkins predicts that when the features of a visual scene are stored in memory, anticipatory cells should fire before the corresponding objects are actually seen in the scene.
3. In layers 2 and 3, predictive activity (neural firing) should stop propagating at specific cells, corresponding to a specific prediction. Hawkins does not rule out anticipatory cells in layers 4 and 5.
4. Learned sequences of firings comprise a representation of temporally constant invariants. Hawkins calls the cells which fire in this sequence "name cells". Hawkins suggests that these name cells are in layer 2, physically adjacent to layer 1. Hawkins does not rule out the existence of layer 3 cells with dendrites in layer 1, which might perform as name cells.
5. By definition, a temporally constant invariant will be active during a learned sequence. Hawkins posits that these cells will remain active for the duration of the learned sequence, even if the remainder of the cortical column is shifting state. Since we do not know the encoding of the sequence, we do not yet know the definition of ON or active; Hawkins suggests that the ON pattern may be as simple as a simultaneous AND (i.e., the name cells simultaneously "light up") across an array of name cells.
6. Hawkins' novel prediction is that certain cells are inhibited during a learned sequence: a class of cells in layers 2 and 3 should NOT fire during a learned sequence, and the axons of these "exception cells" should fire only if a local prediction is failing. This prevents flooding the brain with the usual sensations, leaving only exceptions for post-processing.
7. If an unusual event occurs (the learned sequence fails), the "exception cells" should fire, propagating up the cortical hierarchy to the hippocampus, the repository of new memories.
8. Hawkins predicts a cascade of predictions, when recognition occurs, propagating down the cortical column (with each saccade of the eye over a learned scene, for example).
9. Pyramidal cells should be capable of detecting coincident events on thin dendrites, even for a neuron with thousands of synapses. Hawkins posits a temporal window (presuming time-encoded firing) which is necessary for his theory to remain viable.
10. Hawkins posits, for example, that if the inferotemporal (IT) level has learned a sequence, that eventually cells in V4 will also learn the sequence.
11. Hawkins predicts that "name cells" will be found in all regions of the cortex.
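Predictions 4–7 can be caricatured in code: a "name cell" stays active as long as its learned sequence keeps arriving, and an exception is raised the moment a prediction fails. The state-tracking scheme below is an invented illustration, not Hawkins' proposed neural encoding.

```python
# Toy sketch of a "name cell" (cf. Predictions 4-7): a unit that stays
# ON for the duration of a learned sequence while the lower-level state
# changes on every input, and goes OFF when the prediction fails
# (an "exception"). The encoding is an assumption of this sketch.

class NameCell:
    def __init__(self, sequence):
        self.sequence = list(sequence)
        self.pos = 0          # position within the learned sequence
        self.active = False

    def step(self, symbol):
        if symbol == self.sequence[self.pos]:
            self.active = True
            self.pos = (self.pos + 1) % len(self.sequence)
        else:                 # prediction failed: exception signal
            self.active = False
            self.pos = 0
        return self.active

cell = NameCell("abc")
print([cell.step(s) for s in "abcab"])   # stays ON while the sequence holds
print(cell.step("z"))                    # exception: cell goes OFF
```

The constant activity of `cell` across the whole sequence is the "temporally constant invariant" of Prediction 5, and the transition to OFF plays the role of the exception signal that would propagate up the hierarchy in Prediction 7.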
The entorhinal cortex (EC) is an area of the brain's allocortex, located in the medial temporal lobe, whose functions include being a widespread network hub for memory, navigation, and the perception of time. The EC is the main interface between the hippocampus and neocortex. The EC-hippocampus system plays an important role in declarative (autobiographical/episodic/semantic) memories and in particular spatial memories including memory formation, memory consolidation, and memory optimization in sleep. The EC is also responsible for the pre-processing (familiarity) of the input signals in the reflex nictitating membrane response of classical trace conditioning; the association of impulses from the eye and the ear occurs in the entorhinal cortex.
The visual cortex of the brain is the area of the cerebral cortex that processes visual information. It is located in the occipital lobe. Sensory input originating from the eyes travels through the lateral geniculate nucleus in the thalamus and then reaches the visual cortex. The area of the visual cortex that receives the sensory input from the lateral geniculate nucleus is the primary visual cortex, also known as visual area 1 (V1), Brodmann area 17, or the striate cortex. The extrastriate areas consist of visual areas 2, 3, 4, and 5.
The cerebral cortex, also known as the cerebral mantle, is the outer layer of neural tissue of the cerebrum of the brain in humans and other mammals. It is the largest site of neural integration in the central nervous system, and plays a key role in attention, perception, awareness, thought, memory, language, and consciousness. The cerebral cortex is the part of the brain responsible for cognition.
The neocortex, also called the neopallium, isocortex, or the six-layered cortex, is a set of layers of the mammalian cerebral cortex involved in higher-order brain functions such as sensory perception, cognition, generation of motor commands, spatial reasoning and language. The neocortex is further subdivided into the true isocortex and the proisocortex.
A cortical column is a group of neurons forming a cylindrical structure through the cerebral cortex of the brain, perpendicular to the cortical surface. The structure was first identified by Vernon Benjamin Mountcastle in 1957. He later identified minicolumns as the basic units of the neocortex, which are arranged into columns. Each contains the same types of neurons, connectivity, and firing properties. Columns are also called hypercolumns, macrocolumns, functional columns, or sometimes cortical modules. Neurons within a minicolumn (microcolumn) encode similar features, whereas a hypercolumn "denotes a unit containing a full set of values for any given set of receptive field parameters". A cortical module is defined either as synonymous with a hypercolumn (Mountcastle) or as a tissue block of multiple overlapping hypercolumns.
The receptive field, or sensory space, is a delimited medium where some physiological stimuli can evoke a sensory neuronal response in specific organisms.
The memory-prediction framework is a theory of brain function created by Jeff Hawkins and described in his 2004 book On Intelligence. This theory concerns the role of the mammalian neocortex and its associations with the hippocampi and the thalamus in matching sensory inputs to stored memory patterns and how this process leads to predictions of what will happen in the future.
Neural oscillations, or brainwaves, are rhythmic or repetitive patterns of neural activity in the central nervous system. Neural tissue can generate oscillatory activity in many ways, driven either by mechanisms within individual neurons or by interactions between neurons. In individual neurons, oscillations can appear either as oscillations in membrane potential or as rhythmic patterns of action potentials, which then produce oscillatory activation of post-synaptic neurons. At the level of neural ensembles, synchronized activity of large numbers of neurons can give rise to macroscopic oscillations, which can be observed in an electroencephalogram. Oscillatory activity in groups of neurons generally arises from feedback connections between the neurons that result in the synchronization of their firing patterns. The interaction between neurons can give rise to oscillations at a different frequency than the firing frequency of individual neurons. A well-known example of macroscopic neural oscillations is alpha activity.
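The synchronization of coupled units described above is commonly illustrated with the Kuramoto model, a standard toy model of coupled oscillators; it is used here purely as an illustration, and all parameter values are arbitrary.

```python
# Illustrative sketch: the Kuramoto model of coupled oscillators,
# standing in for interacting neurons whose feedback coupling pulls
# their phases into synchrony. All parameters are arbitrary.

import math, random

random.seed(0)
n, coupling, dt = 20, 2.0, 0.05
freqs = [random.gauss(1.0, 0.1) for _ in range(n)]       # natural frequencies
phases = [random.uniform(0, 2 * math.pi) for _ in range(n)]

def order_parameter(phases):
    """Magnitude of the mean phase vector: 1.0 means full synchrony."""
    re = sum(math.cos(p) for p in phases) / len(phases)
    im = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(re, im)

for _ in range(400):
    pull = [coupling / n * sum(math.sin(q - p) for q in phases)
            for p in phases]
    phases = [p + dt * (w + m) for p, w, m in zip(phases, freqs, pull)]

print(round(order_parameter(phases), 2))   # close to 1: synchronized
```

With coupling well above the critical value, initially scattered phases lock together, which is the macroscopic oscillation that an electroencephalogram would pick up from a synchronized population.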
A neuronal ensemble is a population of nervous system cells involved in a particular neural computation.
Neural binding is the neuroscientific aspect of what is commonly known as the binding problem: the interdisciplinary difficulty of creating a comprehensive and verifiable model for the unity of consciousness. "Binding" refers to the integration of highly diverse neural information in the forming of one's cohesive experience. The neural binding hypothesis states that neural signals are paired through synchronized oscillations of neuronal activity that combine and recombine to allow for a wide variety of responses to context-dependent stimuli. These dynamic neural networks are thought to account for the flexibility and nuanced response of the brain to various situations. The coupling of these networks is transient, on the order of milliseconds, and allows for rapid activity.
The efficient coding hypothesis was proposed by Horace Barlow in 1961 as a theoretical model of sensory coding in the brain. Within the brain, neurons communicate with one another by sending electrical impulses referred to as action potentials or spikes. One goal of sensory neuroscience is to decipher the meaning of these spikes in order to understand how the brain represents and processes information about the outside world.
Neural coding is a neuroscience field concerned with characterising the hypothetical relationship between the stimulus and the neuronal responses, and the relationship among the electrical activities of the neurons in the ensemble. Based on the theory that sensory and other information is represented in the brain by networks of neurons, it is believed that neurons can encode both digital and analog information.
In neuroanatomy, a topographic map is the ordered projection of a sensory surface or an effector system to one or more structures of the central nervous system. Topographic maps can be found in all sensory systems and in many motor systems.
Hierarchical temporal memory (HTM) is a biologically constrained machine intelligence technology developed by Numenta. Originally described in the 2004 book On Intelligence by Jeff Hawkins with Sandra Blakeslee, HTM is primarily used today for anomaly detection in streaming data. The technology is based on neuroscience and the physiology and interaction of pyramidal neurons in the neocortex of the mammalian brain.
Recurrent thalamo-cortical resonance, or thalamocortical oscillation, is an observed phenomenon of oscillatory neural activity between the thalamus and various cortical regions of the brain. It is proposed by Rodolfo Llinás and others as a theory for the integration of sensory information into the whole of perception in the brain. Thalamocortical oscillation is proposed to be a mechanism of synchronization between different cortical regions of the brain, a process known as temporal binding. This is possible through the existence of thalamocortical networks, groupings of thalamic and cortical cells that exhibit oscillatory properties.
The neural correlates of consciousness (NCC) are the minimal set of neuronal events and mechanisms sufficient for the occurrence of the mental states to which they are related. Neuroscientists use empirical approaches to discover neural correlates of subjective phenomena; that is, neural changes which necessarily and regularly correlate with a specific experience. The set should be minimal because, under the materialist assumption that the brain is sufficient to give rise to any given conscious experience, the question is which of its components are necessary to produce it.
Bayesian approaches to brain function investigate the capacity of the nervous system to operate in situations of uncertainty in a fashion that is close to the optimal prescribed by Bayesian statistics. This term is used in behavioural sciences and neuroscience and studies associated with this term often strive to explain the brain's cognitive abilities based on statistical principles. It is frequently assumed that the nervous system maintains internal probabilistic models that are updated by neural processing of sensory information using methods approximating those of Bayesian probability.
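The central operation assumed by Bayesian approaches is belief updating by Bayes' rule. The two-hypothesis "edge detection" setup below is an invented illustration of that operation, not a model from any specific study.

```python
# Sketch of the core Bayesian-brain operation: a prior belief over a
# hidden cause is updated by the likelihood of noisy sensory input.
# The hypotheses and numbers are assumptions of this illustration.

def posterior(prior, likelihood):
    """Bayes' rule over a discrete set of hypotheses."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

prior = {"edge": 0.5, "no_edge": 0.5}
# likelihood of the observed retinal input under each hypothesis
likelihood = {"edge": 0.8, "no_edge": 0.2}
print(posterior(prior, likelihood))   # belief shifts toward "edge"
```

The claim of the research program is that neural processing approximates this computation, not that neurons literally evaluate these formulas.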
There are many types of artificial neural networks (ANN).
Neural decoding is a neuroscience field concerned with the hypothetical reconstruction of sensory and other stimuli from information that has already been encoded and represented in the brain by networks of neurons. Reconstruction refers to the ability of the researcher to predict what sensory stimuli the subject is receiving based purely on neuron action potentials. Therefore, the main goal of neural decoding is to characterize how the electrical activity of neurons elicits activity and responses in the brain.
In neuroscience, predictive coding is a theory of brain function which postulates that the brain is constantly generating and updating a "mental model" of the environment. According to the theory, such a mental model is used to predict input signals from the senses, which are then compared with the actual input signals from those senses. Predictive coding is a member of a wider set of theories that follow the Bayesian brain hypothesis.
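The compare-and-correct loop of predictive coding reduces, in its simplest form, to updating an internal estimate by a fraction of the prediction error. The scalar setting and gain value below are assumptions of this sketch.

```python
# Minimal predictive-coding sketch: a unit predicts its input, and only
# the prediction error drives the update of the internal estimate.
# The gain value is an assumption of this illustration.

def update(estimate, observation, gain=0.2):
    error = observation - estimate     # prediction error
    return estimate + gain * error     # correct the model by the error

estimate = 0.0
for obs in [1.0] * 20:                 # a constant sensory input
    estimate = update(estimate, obs)
print(round(estimate, 2))              # converges toward 1.0
```

Once the estimate matches the input, the error (and hence the update) vanishes, which captures the idea that a well-predicted input produces little signal for the rest of the system to process.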