Echoic memory is the sensory memory register specific to auditory information (sounds). Once an auditory stimulus is heard, it is stored in memory so that it can be processed and understood.[1] Unlike most visual stimuli, which a person can choose how long to view and can reassess repeatedly, auditory stimuli are usually transient and cannot be reassessed. Because echoic memories are heard only once, they are stored for slightly longer periods of time than iconic memories (visual memories).[2] Auditory stimuli are received by the ear one at a time before they can be processed and understood.
Echoic memory can be thought of as a "holding tank": a sound is held, unprocessed, until the following sound is heard, and only then can it be made meaningful.[3] This sensory store is capable of holding large amounts of auditory information, but only for a short period of time (3–4 seconds). The echoic sound resonates in the mind and is replayed for this brief period shortly after being heard.[4] Echoic memory encodes only relatively primitive aspects of the stimuli, such as pitch, and its processing localizes to non-association brain regions.[5]
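The "holding tank" behaviour described above can be illustrated with a toy model. This is purely illustrative: the 4-second retention window is the upper estimate quoted above, and the class and method names (`EchoicBuffer`, `hear`, `recall`) are hypothetical, not from any cited work.

```python
class EchoicBuffer:
    """Toy model of echoic memory: sounds persist for a fixed
    retention window and then decay (become unrecallable)."""

    RETENTION_S = 4.0  # upper estimate of echoic store duration, in seconds

    def __init__(self):
        self._trace = []  # list of (arrival_time_s, sound) pairs

    def hear(self, sound, t):
        """Register a sound arriving at time t (seconds)."""
        self._trace.append((t, sound))

    def recall(self, t):
        """Return the sounds still available at time t: only those
        heard within the last RETENTION_S seconds survive."""
        return [s for (t0, s) in self._trace if 0 <= t - t0 <= self.RETENTION_S]
```

For example, a tone heard at t = 0 s can still be recalled at t = 3.5 s, but by t = 5 s it has decayed out of the store.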
Shortly after George Sperling's partial report studies of the visual sensory memory store, researchers began investigating its counterpart in the auditory domain. The term echoic memory was coined in 1967 by Ulric Neisser to describe this brief representation of acoustic information. It was initially studied using partial report paradigms similar to those used by Sperling; modern neuropsychological techniques have since enabled estimates of the capacity, duration, and location of the echoic memory store. Using Sperling's model as an analogue, researchers continue to apply his work to the auditory sensory store using partial and whole report experiments, finding that echoic memory can store information for up to 4 seconds.[6] However, different durations have been proposed for the persistence of the echo once the auditory signal has been presented: Guttman and Julesz suggested that it may last approximately one second or less, while Eriksen and Johnson suggested that it can last up to 10 seconds.[7]
Baddeley's model of working memory consists of the visuospatial sketchpad, which is related to iconic memory, and a phonological loop, which attends to auditory information processing in two ways. The first is a phonological store holding the words we hear; it tends to retain information for 3–4 seconds before decay, a much longer duration than iconic memory (less than 1,000 ms). The second is a sub-vocal rehearsal process that keeps refreshing the memory trace using one's "inner voice", repeating the words in a loop in the mind.[8] However, this model fails to provide a detailed description of the relationship between the initial sensory input and ensuing memory processes.
A short-term memory model proposed by Nelson Cowan attempts to address this problem by describing verbal sensory memory input and storage in more detail. It suggests a pre-attentive sensory storage system that can hold a large amount of accurate information over a short period of time. This system consists of an initial input phase of 200–400 ms and a secondary phase that transfers the information into a more long-term store, to be integrated into working memory, which begins to decay after 10–20 s.[9]
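The two-phase structure of Cowan's account can be sketched as a simple lookup, using only the 200–400 ms and 10–20 s figures quoted above. This is an illustrative caricature, and the function name `store_phase` is hypothetical:

```python
def store_phase(ms_since_onset):
    """Rough sketch of Cowan's two-phase sensory storage: an initial
    high-fidelity input phase (up to ~400 ms), then a secondary phase
    feeding working memory that has decayed by ~20 s."""
    if ms_since_onset <= 400:
        return "initial input phase"
    elif ms_since_onset <= 20_000:
        return "secondary phase (integrating into working memory)"
    else:
        return "decayed"
```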
Following Sperling's (1960) procedures on iconic memory tasks, later researchers were interested in testing the same phenomenon for the auditory sensory store. Echoic memory is measured with behavioural tasks in which participants are asked to repeat a sequence of tones, words, or syllables presented to them, usually requiring attention and motivation. The best-known partial report task presented participants with auditory stimuli in the left ear, the right ear, and both ears simultaneously.[6] Participants were then asked to report the spatial location and category name of each stimulus. Results showed that spatial location was far easier to recall than semantic information when information from one ear was inhibited over the other. Consistent with results on iconic memory tasks, performance in the partial report conditions was far superior to the whole report condition. In addition, performance decreased as the interstimulus interval (the length of time between presentation of the stimulus and recall) increased.
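The logic of the partial report advantage can be shown with a toy readout model: items sit in a store that decays a few seconds after presentation, and reporting each item takes time, so a cue that restricts report to a few items lets all of them be read out before decay. The specific numbers here (4 s retention, 1 s of readout per item, 9 total items as 3 per channel) are illustrative assumptions, not values from the cited study:

```python
def items_recalled(n_items, readout_s_per_item=1.0, retention_s=4.0):
    """Toy model: the sensory store decays retention_s seconds after
    presentation; the participant reads out one item per
    readout_s_per_item seconds.  Only items read out before the
    store decays are recalled."""
    recalled = 0
    for i in range(n_items):
        finish_time = (i + 1) * readout_s_per_item
        if finish_time <= retention_s:
            recalled += 1
    return recalled

# Whole report: all 9 items (3 per channel x 3 channels) must be read out.
whole = items_recalled(9) / 9    # -> only 4 of 9 recalled
# Partial report: a cue restricts readout to the 3 cued items.
partial = items_recalled(3) / 3  # -> all 3 recalled
```

Under these assumptions the recalled proportion is far higher in the partial report condition, mirroring the pattern observed experimentally.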
Auditory backward recognition masking is one of the most successful tasks for studying audition. It involves presenting participants with a brief target stimulus, followed by a second stimulus (the mask) after an interstimulus interval.[10] The length of the interstimulus interval controls how long the auditory information is available in memory. Performance, as indicated by the accuracy of target identification, increased as the interstimulus interval increased up to 250 ms. The mask does not affect the amount of information obtained from the stimulus; rather, it acts as interference for further processing.
A more objective, independent measure of auditory sensory memory that does not require focused attention is the mismatch negativity task,[11] which records changes in brain activation using electroencephalography. It records elements of auditory event-related potentials elicited 150–200 ms after a stimulus. That stimulus is an unattended, infrequent "oddball" or deviant stimulus presented among a sequence of standard stimuli, so that the deviant stimulus is compared against a memory trace.[12]
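The standard/deviant presentation used in mismatch negativity work can be sketched as a simple sequence generator. This is illustrative only: the 10% deviant rate and the constraint that two deviants never occur back to back are common conventions in oddball designs, not parameters from the cited studies, and the function name `oddball_sequence` is hypothetical.

```python
import random

def oddball_sequence(n_trials, deviant_prob=0.1, seed=0):
    """Generate a standard/deviant trial sequence for an oddball
    paradigm: deviants occur with probability deviant_prob, and
    two deviants are never presented consecutively."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == "deviant":
            seq.append("standard")  # enforce no adjacent deviants
        elif rng.random() < deviant_prob:
            seq.append("deviant")
        else:
            seq.append("standard")
    return seq
```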
Auditory sensory memory has been found to be stored in the primary auditory cortex contralateral to the ear of presentation.[13] Echoic memory storage involves several different brain areas, owing to the different processes in which it is involved. The majority of the regions involved are located in the prefrontal cortex, which houses executive control[10] and is responsible for attentional control. The phonological store and the rehearsal system appear to constitute a left-hemisphere-based memory system, as increased brain activity has been observed in these areas.[14] The major regions involved are the left posterior ventrolateral prefrontal cortex, the left premotor cortex, and the left posterior parietal cortex. Within the ventrolateral prefrontal cortex, Broca's area is the main location responsible for verbal rehearsal and the articulatory process. The dorsal premotor cortex is used in rhythmic organization and rehearsal, and the posterior parietal cortex plays a role in localizing objects in space.
The cortical areas believed to underlie the auditory sensory memory exhibited by the mismatch negativity response have not been precisely localized. However, results have shown comparable activation in the superior temporal gyrus and the inferior temporal gyrus.[15]
Age-related increases in activation within the neural structures responsible for echoic memory have been observed, showing that proficiency in processing auditory sensory information increases with age.[14]
Findings of a mismatch negativity study also suggest that the duration of auditory sensory memory increases with age, most markedly between the ages of two and six years, from 500 to 5,000 ms. Children 2 years of age exhibited a mismatch negativity response at interstimulus intervals between 500 and 1,000 ms; 3-year-olds from 1 to 2 seconds; 4-year-olds over 2 seconds; and 6-year-olds from 3 to 5 seconds. These developmental and cognitive changes occur at a young age and extend into adulthood, until the duration eventually decreases again in old age.[9]
Researchers have found shortened echoic memory duration in former late talkers and in children with precordial catch syndrome[citation needed] or oral clefts, with information decaying before 2,000 ms. However, this reduced echoic memory is not predictive of language difficulties in adulthood.[16]
One study found that when words were presented at increasing rates, younger subjects outperformed adult subjects.[17]
Echoic memory capacity itself, however, appears to be independent of age.[17]
Children with deficits in auditory memory have been shown to have developmental language disorders. [12] These problems are difficult to assess since performance could be due to their inability to understand a given task, rather than a problem with their memory.
People with unilateral damage to the dorsolateral prefrontal cortex and temporal-parietal cortex following a stroke were measured using the mismatch negativity test. In the control group, the mismatch negativity amplitude was largest in the right hemisphere regardless of whether the tone was presented to the right or the left ear.
Mismatch negativity was greatly reduced in patients with temporal-parietal damage when the auditory stimulus was presented to the ear contralateral to the lesioned side of the brain. This is consistent with the theory that auditory sensory memory is stored in the auditory cortex contralateral to the ear of presentation.[13] Further research on stroke patients with a reduced auditory memory store has shown that listening daily to music or audio books improved their echoic memory, demonstrating a positive effect of music in neural rehabilitation after brain damage.[18]