Storage (memory)

In the psychology of memory, storage is one of three fundamental stages, along with encoding and retrieval. Memory is the process of storing and later recalling information that was previously acquired. Storage refers to placing newly acquired information into memory, where it is modified in the brain for easier retention. Encoding this information makes retrieval easier for the brain, so that it can be recalled and brought into conscious thought. Modern memory psychology differentiates between two distinct types of memory storage: short-term memory and long-term memory. Several models of memory have been proposed over the past century, some suggesting different relationships between short- and long-term memory to account for different ways of storing memories.

Types

Short-term memory

Short-term memory is encoded in auditory, visual, spatial, and tactile forms, and is closely related to working memory. Baddeley suggested that information stored in short-term memory continuously deteriorates, which can eventually lead to forgetting in the absence of rehearsal. [1] George A. Miller suggested that the capacity of short-term memory storage is about seven items, plus or minus two (also known as the magical number seven), [2] but this number has since been shown to be subject to considerable variability, depending on the size, similarity, and other properties of the chunks. [3] Memory span varies; it is lower for multisyllabic words than for shorter words. In general, memory span for verbal contents (letters, words, and digits) depends on the time it takes to speak these contents aloud and on their degree of lexicality (how closely they relate to the words or vocabulary of a language, as distinct from its grammar and construction). Characteristics such as a longer spoken duration for each word (the word-length effect) or similarity between words lead to fewer words being recalled.

Chunking

Chunking is the process of grouping pieces of information together into "chunks". [4] This allows the brain to take in more information at a given time by reducing it to more specific groups. [4] Through chunking, the external environment is linked to the internal cognitive processes of the brain. [4] Given the limited capacity of working memory, this type of storage is necessary for memory to function properly. [4] The exact number of chunks that can be held in working memory is not settled, but estimates range from one to three chunks. [5] Recall is measured not in terms of the individual items being remembered, but in terms of the chunks they are grouped into. [6] This type of memory storage is typically effective: it has been found that once the first item in a chunk appears, the remaining items can be recalled immediately. [7] Errors do occur, but they are more common at the beginning of a chunk than in its middle. [6] Chunks can be recalled with long-term or working memory. [8] Simple chunks of information can be recalled without passing through long-term memory; a sequence such as ABABAB can be held in working memory alone. [8] More difficult sequences, such as a phone number, must be split into chunks and may have to pass through long-term memory to be recalled. [8] The spacing used in phone numbers is a common chunking method, as the grouping allows the digits to be remembered in clusters rather than individually. [9]
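The phone-number grouping described above can be illustrated with a short sketch. This is purely illustrative, not a cognitive model; the function name and the 3-3-4 chunk sizes are assumptions chosen to match the North American phone-number convention:

```python
def chunk(digits, sizes=(3, 3, 4)):
    """Group a digit string into chunks, as in a phone number."""
    chunks, start = [], 0
    for size in sizes:
        chunks.append(digits[start:start + size])
        start += size
    return chunks

# Ten digits become three memorable clusters instead of ten separate items.
print(chunk("5551234567"))  # ['555', '123', '4567']
```

Recall then operates on three groups rather than ten digits, mirroring how the chunked form reduces the load on working memory.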

Chunking was introduced by George A. Miller, who suggested that this way of organizing and processing information allows for more effective retention of material from the environment. [4] Miller developed the idea that a chunk is a collection of similar items, and that naming the chunk allows its items to be recalled more easily. [9] Other researchers described the items within a chunk as strongly connected to each other, but not to items in other chunks. [7] In their findings, each chunk holds only the items pertaining to its topic, unrelated to any other chunk or the items within it. [7] A restaurant menu displays this type of chunking: the entrée category displays nothing from the dessert category, and the dessert category displays nothing from the entrée category. [9]

Psychologist and master chess player Adriaan de Groot supported the theory of chunking through his experiment on chess positions and different levels of expertise. [4] When presented with positions of pieces from chess tournament games, the experts were more accurate at recalling the positions. [4] However, when the groups were given random positions to remember, de Groot found that all groups performed poorly at the recall task, regardless of the participants' knowledge of chess. [4] Further research into chunking has greatly influenced the study of memory development, expertise, and immediate recall. [8] Behavioral and imaging studies have also suggested that chunking applies to habit learning, motor skills, language processing, and visual perception. [9]

Rehearsal

Rehearsal is the process by which information is retained in short-term memory through conscious repetition of a word, phrase, or number. If information has sufficient meaning to the person, or if it is repeated enough, it can be encoded into long-term memory. There are two types of rehearsal: maintenance rehearsal and elaborative rehearsal. Maintenance rehearsal consists of constantly repeating the word or phrase to be remembered; repeating a phone number until it is dialed is one of the best examples. Maintenance rehearsal is mainly used for the short-term recall of information. Elaborative rehearsal involves associating old information with new information.

Long-term memory

In contrast to short-term memory, long-term memory refers to the ability to hold information for a prolonged time, and it is possibly the most complex component of the human memory system. The Atkinson–Shiffrin model of memory (Atkinson & Shiffrin 1968) suggests that items stored in short-term memory move to long-term memory through repeated practice and use. Long-term storage may be similar to learning: the process by which information that may be needed again is stored for recall on demand. [10] The process of locating this information and bringing it back into working memory is called retrieval. Knowledge that is easily recalled is explicit knowledge, whereas most long-term memory is implicit knowledge and is not readily retrievable. Scientists speculate that the hippocampus is involved in the creation of long-term memories. It is unclear exactly where long-term memories are stored, although there is evidence that they are distributed across various parts of the nervous system. [11] Long-term memory is relatively permanent. A memory can be recalled, which, according to the dual-store memory search model, strengthens the long-term memory; forgetting may occur when the memory fails to be recalled on later occasions.

Models

Several memory models have been proposed to account for different types of recall, including cued recall, free recall, and serial recall. To explain the recall process, a memory model must identify how an encoded memory can reside in memory storage for a prolonged period until it is accessed again during recall. Not all models use the terminology of short-term and long-term memory to explain memory storage: the dual-store theory and a modified version of the Atkinson–Shiffrin model (Atkinson & Shiffrin 1968) use both short- and long-term memory stores, but others do not.

Multi-trace distributed memory model

The multi-trace distributed memory model suggests that memories being encoded are converted into vectors of values, with each scalar quantity of a vector representing a different attribute of the item to be encoded. Such a notion was first suggested by the early theories of Hooke (1969) and Semon (1923). A single memory is distributed across multiple attributes, or features, so that each attribute represents one aspect of the memory being encoded. Such a vector of values is then added to a memory array, or matrix, composed of different traces or memory vectors. Every time a new memory is encoded, it is converted into a vector, or trace, of scalar quantities representing a variety of attributes, which is then added to the pre-existing and ever-growing memory matrix composed of multiple traces, hence the name of the model.

Once memory traces are stored in the matrix, retrieving a specific memory requires cueing the matrix with a probe, which is used to calculate the similarity between the test vector and the vectors stored in the memory matrix. Because the memory matrix constantly grows as new traces are added, one would have to perform a parallel search through all the traces within it to calculate the similarity, the result of which can be used either for associative recognition or, with a probabilistic choice rule, for cued recall.
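A minimal sketch of this store-and-probe scheme, assuming hypothetical attribute vectors and using cosine similarity as the comparison (the model itself does not prescribe a particular similarity measure):

```python
def cosine(a, b):
    """Similarity between a probe vector and a stored trace."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

memory_matrix = []          # grows with every encoded trace

def encode(trace):
    # each new memory is appended as a row of the ever-growing matrix
    memory_matrix.append(list(trace))

def probe(cue):
    # a "parallel" comparison of the cue against every stored trace
    return [cosine(cue, trace) for trace in memory_matrix]

encode([1, 0, 1, 0])        # each entry stands for one attribute
encode([0, 1, 0, 1])
scores = probe([1, 0, 1, 0])
```

The first trace matches the probe perfectly (similarity 1.0) while the second, sharing no attributes, scores 0.0; a recognition decision or a probabilistic choice rule would then operate on these scores.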

While it has been claimed that human memory is capable of storing a great amount of information, to the extent that some have thought it infinite, the presence of such an ever-growing matrix within human memory sounds implausible. In addition, the model implies that recall requires a parallel search across every single trace in the ever-growing matrix, which raises doubt about whether such computations can be completed in a short amount of time. Such doubts, however, have been challenged by the findings of Gallistel and King, [12] who present evidence of the brain's enormous computational abilities that could support such a parallel search.

Neural network models

The multi-trace model has two key limitations: first, the notion of an ever-growing matrix in human memory sounds implausible; second, computing similarity against the millions of traces that would be present in the memory matrix sounds far beyond the scope of the human recall process. The neural network model addresses this, as it overcomes the limitations posed by the multi-trace model while retaining its useful features.

The neural network model assumes that neurons form a highly interconnected network with one another; each neuron is characterized by an activation value, and the connection between two neurons by a weight value. The interaction between neurons is characterized by the McCulloch–Pitts dynamical rule, [13] and the changes in weights and connections between neurons that result from learning are represented by the Hebbian learning rule. [14] [15]

Anderson [16] showed that the combination of the Hebbian learning rule and the McCulloch–Pitts dynamical rule allows a network to generate a weight matrix that can store associations between different memory patterns; this matrix is the form of memory storage in the neural network model. The major difference between the matrix of the multiple-trace hypothesis and the weight matrix of the neural network model is that while a new memory extends the existing matrix under the multiple-trace hypothesis, the weight matrix of the neural network model does not extend; rather, its weights are updated with the introduction of each new association between neurons.

Using the weight matrix and a learning/dynamical rule, neurons cued with one value can retrieve a different value that is, ideally, a close approximation of the desired target memory vector.
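A sketch in the spirit of such a linear associator (the patterns and vector size are made up for illustration): one cue-target association is stored via a Hebbian outer-product update, and retrieval is a matrix-vector product. Because the example cue has unit length, retrieval here is exact rather than approximate:

```python
n = 4
W = [[0.0] * n for _ in range(n)]   # fixed-size weight matrix, initially zero

def learn(cue, target):
    # Hebbian outer-product update: W += target x cue^T
    for i in range(n):
        for j in range(n):
            W[i][j] += target[i] * cue[j]

def recall(cue):
    # retrieval is a matrix-vector product
    return [sum(w * c for w, c in zip(row, cue)) for row in W]

cue = [0.5, 0.5, -0.5, -0.5]        # unit-length cue pattern
target = [1.0, -1.0, 1.0, -1.0]
learn(cue, target)
retrieved = recall(cue)             # equals target since cue has unit length
```

When several associations share the matrix, their cues interfere unless they are orthogonal, which is why retrieval in general yields only an approximation of the target.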

Because Anderson's weight matrix retrieves only an approximation of the target item when cued, modified versions of the model were sought that could recall the exact target memory when cued. The Hopfield net [17] is currently the simplest and most popular neural network model of associative memory; it allows the recall of a clear target vector when cued with a partial or 'noisy' version of that vector.

The weight matrix of the Hopfield net, which stores the memory, closely resembles the weight matrix proposed by Anderson. Again, when a new association is introduced, the weight matrix is 'updated' to accommodate the new memory; it is then stored until the matrix is cued by a different vector.
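A toy Hopfield net along these lines (the stored pattern and network size are arbitrary choices for illustration): Hebbian training on a single ±1 pattern, then recall from a noisy cue by repeated threshold updates, which drive the network state back to the stored pattern:

```python
def train(patterns, n):
    # Hebbian weight matrix with zero self-connections
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=5):
    # synchronous threshold updates move the state toward a stored pattern
    for _ in range(steps):
        state = [1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
                 for row in W]
    return state

stored = [1, -1, 1, -1, 1, -1]
W = train([stored], n=6)
noisy = [1, -1, 1, -1, 1, 1]        # last element flipped
restored = recall(W, noisy)         # recovers the stored pattern exactly
```

This is the sense in which the Hopfield net recalls a clear target vector: the noisy cue falls within the stored pattern's basin of attraction, and the dynamics clean it up.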

Dual-store memory search model

First developed by Atkinson and Shiffrin (1968), and refined by others, including Raaijmakers and Shiffrin, [18] the dual-store memory search model, now referred to as SAM (search of associative memory), remains one of the most influential computational models of memory. The model uses both short-term memory, termed the short-term store (STS), and long-term memory, termed the long-term store (LTS) or episodic matrix, in its mechanism.

When an item is first encoded, it is introduced into the short-term store. While the item stays there, vector representations in the long-term store go through a variety of associations. Items introduced into the short-term store undergo three types of association: autoassociation, the self-association in the long-term store; heteroassociation, the inter-item association in the long-term store; and context association, the association between the item and its encoded context. For each item in the short-term store, the longer it resides there, the stronger its associations become with itself, with the other items that co-reside in the short-term store, and with its encoded context.

The size of the short-term store is defined by a parameter, r. When an item is introduced into a short-term store that is already occupied by the maximum number of items, an item already residing in the store is probabilistically displaced. [19]

As items co-reside in the short-term store, their associations are constantly updated in the long-term store matrix. The strength of association between two items depends on the amount of time the two items spend together in the short-term store, a phenomenon known as the contiguity effect. Items that are contiguous have greater associative strength and are often recalled together from the long-term store.
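The buffer mechanism and the contiguity effect can be sketched as follows. The buffer size r=4, the item names, and the uniform random displacement rule are illustrative assumptions, not the calibrated SAM parameters:

```python
import random

def present(buffer, item, r, assoc, rng):
    # a full short-term store displaces a randomly chosen resident item
    if len(buffer) >= r:
        buffer.pop(rng.randrange(len(buffer)))
    buffer.append(item)
    # every pair co-residing at this time step gains associative strength,
    # so items that spend longer together accumulate stronger associations
    for i in range(len(buffer)):
        for j in range(i + 1, len(buffer)):
            pair = tuple(sorted((buffer[i], buffer[j])))
            assoc[pair] = assoc.get(pair, 0) + 1

rng = random.Random(0)
buffer, assoc = [], {}
for item in "ABCDEF":
    present(buffer, item, r=4, assoc=assoc, rng=rng)
# the store never holds more than r items; the newest item is always
# present, while earlier items may have been displaced
```

Running the presentation list through this buffer shows contiguity directly: pairs that co-resided over many time steps end up with larger association counts than pairs that barely overlapped.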

Furthermore, the primacy effect, observed in memory recall paradigms, shows that the first few items in a list have a greater chance of being recalled than others, even though earlier items also have a greater chance of eventually dropping out of the STS. An item that manages to stay in the STS for an extended amount of time forms stronger autoassociations, heteroassociations, and context associations than others, ultimately leading to greater associative strength and a higher chance of being recalled.

The recency effect in recall experiments, where the last few items in a list are recalled exceptionally well compared with other items, can be explained by the short-term store. When the study of a given memory list has finished, what remains in the short-term store is likely to be the last few items introduced. Because the short-term store is readily accessible, such items are recalled before any item stored in the long-term store. This recall accessibility also explains the fragile nature of the recency effect: even the simplest distractors can cause a person to forget the last few items in the list, because the last items have not had enough time to form meaningful associations within the long-term store. If this information is pushed out of the short-term store by distractors, the probability of the last items being recalled is expected to be even lower than that of the pre-recency items in the middle of the list.

The dual-store SAM model also utilizes a memory store that can itself be classified as a type of long-term storage: the semantic matrix. The long-term store in SAM represents episodic memory, which deals only with new associations formed during the study of an experimental list; pre-existing associations between list items must therefore be represented in a different matrix, the semantic matrix. The semantic matrix remains a separate source of information that is not modified by the episodic associations formed during the experiment. [20]

Thus, two types of memory storage, the short- and long-term stores, are used in the SAM model. In the recall process, items residing in the short-term store are recalled first, followed by items residing in the long-term store, where the probability of recall is proportional to the strength of the associations present within the long-term store. A further memory store, the semantic matrix, is used to explain the semantic effects associated with memory recall.

References

  1. Kumaran, D. (Apr 2008). "Short-Term Memory and the Human Hippocampus". Journal of Neuroscience. 28 (15): 3837–3838. doi:10.1523/JNEUROSCI.0046-08.2008. PMC 6670459. PMID 18400882.
  2. Miller, G.A. (1956). "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information". Psychological Review. 101 (2): 343–352. doi:10.1037/0033-295X.101.2.343. hdl:11858/00-001M-0000-002C-4646-B. PMID 8022966. S2CID 15388016.
  3. Baddeley, A.D. (November 1966). "Short-term memory for word sequences as a function of acoustic, semantic and formal similarity" (PDF). Quarterly Journal of Experimental Psychology. 18 (4): 362–5. doi:10.1080/14640746608400055. PMID 5956080. S2CID 32498516.
  4. Gobet, F.; Lane, P.; Croker, S.; Cheng, P.; Jones, G.; Oliver, I.; Pine, J. (2001). "Chunking mechanisms in human learning". Trends in Cognitive Sciences. 5 (6): 236–243. doi:10.1016/s1364-6613(00)01662-4. ISSN 1364-6613. PMID 11390294. S2CID 4496115.
  5. Oztekin, I.; McElree, B. (2010). "Relationship between measures of working memory capacity and the time course of short-term memory retrieval and interference resolution". Journal of Experimental Psychology: Learning, Memory, and Cognition. 36 (2): 383–97. doi:10.1037/a0018029. PMC 2872513. PMID 20192537.
  6. Yamaguchi, M.; Randle, J.M.; Wilson, T.L.; Logan, G.D. (2017). "Pushing typists back on the learning curve: Memory chunking improves retrieval of prior typing episodes". Journal of Experimental Psychology: Learning, Memory, and Cognition. 43 (9): 1432–1447.
  7. Thalmann, M.; Souza, A.S.; Oberauer, K. (2018). "How does chunking help working memory?" (PDF). Journal of Experimental Psychology: Learning, Memory, and Cognition. 45 (1): 37–55. doi:10.1037/xlm0000578. ISSN 1939-1285. PMID 29698045. S2CID 20393039.
  8. Chekaf, M.; Cowan, N.; Mathy, F. (2016). "Chunk formation in immediate memory and how it relates to data compression". Cognition. 155: 96–107. doi:10.1016/j.cognition.2016.05.024. PMC 4983232. PMID 27367593.
  9. Fonollosa, J.; Neftci, E.; Rabinovich, M. (2015). "Learning of chunking sequences in cognition and behavior". PLOS Computational Biology. 11 (11): e1004592. doi:10.1371/journal.pcbi.1004592. PMC 4652905. PMID 26584306.
  10. Peterson, L. (1966). Short-term memory. Retrieved October 30, 2014, from http://www.nature.com/scientificamerican/journal/v215/n1/pdf/scientificamerican0766-90.pdf
  11. Warren, S. (1997). Remember this: Memory and the Brain. Retrieved November 1, 2014, from https://serendipstudio.org/biology/b103/f97/projects97/Warren.html
  12. Gallistel, C.R.; King, A.P. (2009). Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience. Wiley-Blackwell.
  13. McCulloch, W.S.; Pitts, W. (1943). "A logical calculus of the ideas immanent in nervous activity". Bulletin of Mathematical Biophysics. 5 (4): 115–133. doi:10.1007/BF02478259.
  14. Hebb, D.O. (1949). Organization of Behavior.
  15. Moscovitch, M. (2006). "The cognitive neuroscience of remote episodic, semantic and spatial memory". Current Opinion in Neurobiology. 16 (2): 179–190. doi:10.1016/j.conb.2006.03.013. PMID   16564688. S2CID   14109875.
  16. Anderson, J.A. (1970). "Two Models for Memory Organization using Interacting Traces". Mathematical Biosciences. 8 (1–2): 137–160. doi:10.1016/0025-5564(70)90147-1.
  17. Hopfield, J.J. (1982). "Neural Networks and Physical Systems with Emergent Collective Computational Abilities". Proceedings of the National Academy of Sciences. 79 (8): 2554–2558. doi: 10.1073/pnas.79.8.2554 . PMC   346238 . PMID   6953413.
  18. Raaijmakers, J.G.; Shiffrin, R.M. (1981). "Search of associative memory". Psychological Review. 88 (2): 93–134. doi:10.1037/0033-295X.88.2.93.
  19. Phillips, J.L.; Shiffrin, R.M. (1967). "The effects of list length on short-term memory". Journal of Verbal Learning and Verbal Behavior. 6 (3): 303–311. doi:10.1016/s0022-5371(67)80117-8.
  20. Nelson, D.L.; McKinney (1998). "Interpreting the influence of implicitly activated memories on recall and recognition". Psychological Review. 105 (2): 299–324. doi:10.1037/0033-295x.105.2.299. PMID 9577240.
