Autoassociative memory

Autoassociative memory, also known as auto-association memory or an autoassociation network, is any type of memory that can retrieve a piece of stored data from only a small sample of that piece. Such memories are very effective at de-noising an input, i.e. removing interference from it, and can be used to determine whether a given input is “known” or “unknown”.

In reference to computer memory, the idea of associative memory is also referred to as content-addressable memory (CAM).

The net is said to recognize a “known” vector if it produces a pattern of activation on the output units that is the same as one of the vectors stored in it.

Background

Traditional memory

Traditional (address-based) computer memory stores data at a unique address and can recall that data only upon presentation of the complete, exact address.

Autoassociative memory

Autoassociative memories are capable of retrieving a piece of data upon presentation of only partial information from that piece of data. Hopfield networks [1] have been shown [2] to act as autoassociative memories, since they can reconstruct stored data when presented with only a portion of it.
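
A minimal sketch of this idea is given below, assuming bipolar (+1/−1) pattern vectors, Hebbian (outer-product) storage and a single recall pass; the function names, the stored pattern and the amount of corruption are invented purely for illustration and are not taken from the references above.

# Sketch of a one-pass linear autoassociative memory (assumes bipolar
# patterns and Hebbian outer-product storage; illustrative only).
import numpy as np

def train(patterns):
    """Store bipolar (+1/-1) patterns in a weight matrix via Hebbian outer products."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)           # no self-connections
    return W

def recall(W, cue):
    """One-pass recall: threshold the weighted sum driven by the (partial or noisy) cue."""
    return np.where(W @ cue >= 0, 1, -1)

stored = np.array([[1, -1, 1, 1, -1, -1, 1, -1]])
W = train(stored)
noisy = stored[0].copy()
noisy[:2] *= -1                      # flip two bits to simulate a corrupted cue
print(recall(W, noisy))              # reproduces the stored pattern

Because the cue differs from the stored pattern in only two positions, one thresholded pass recovers the full vector; cues that need several passes motivate the iterative nets described next.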

Iterative Autoassociative Net

In some cases, an auto-associative net does not reproduce a stored pattern on the first pass, but if the result of the first pass is fed back into the net, the stored pattern is reproduced. [3] Such nets come in three further kinds: the recurrent linear auto-associator, [4] the Brain-State-in-a-Box net, [5] and the discrete Hopfield net. The Hopfield network is the best-known example of an autoassociative memory.

Hopfield Network

Hopfield networks serve as content-addressable ("associative") memory systems with binary threshold nodes, and they have been shown to act as autoassociative memories, since they can reconstruct stored data from a portion of that data. [6]
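
The sketch below, assuming bipolar states, Hebbian weights and asynchronous updates (one illustrative variant among several), shows both the storage rule and the iterative recall described above: the state is repeatedly fed back through the network until it stops changing.

# Sketch of Hopfield-style iterative autoassociative recall (assumes
# bipolar states, Hebbian storage, asynchronous updates; illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def hopfield_weights(patterns):
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p) / n      # Hebbian outer-product rule
    np.fill_diagonal(W, 0)
    return W

def hopfield_recall(W, state, sweeps=10):
    state = state.copy()
    for _ in range(sweeps):
        changed = False
        for i in rng.permutation(len(state)):    # asynchronous unit updates
            new = 1 if W[i] @ state >= 0 else -1
            if new != state[i]:
                state[i] = new
                changed = True
        if not changed:              # fixed point reached: a stored (or spurious) pattern
            break
    return state

patterns = rng.choice([-1, 1], size=(2, 32))     # two random stored patterns
W = hopfield_weights(patterns)
cue = patterns[0].copy()
cue[:8] *= -1                                    # corrupt a quarter of the bits
# True if the cue lies within the stored pattern's basin of attraction
print(np.array_equal(hopfield_recall(W, cue), patterns[0]))

Whether a corrupted cue settles back onto the stored pattern depends on how many patterns are stored and how badly the cue is corrupted; with two patterns over 32 units and a quarter of the bits flipped, recall typically succeeds.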

Heteroassociative memory

Heteroassociative memories, on the other hand, can recall an associated piece of data from one category upon presentation of data from another category. For example, associative recall may transform the input pattern “banana” into the different pattern “monkey”. [7]

Bidirectional associative memory (BAM)

Bidirectional associative memories (BAM) [8] are artificial neural networks that have long been used for performing heteroassociative recall.
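
A minimal sketch of such bidirectional recall is given below, assuming bipolar pattern pairs and the correlation (outer-product) weight matrix commonly used for BAMs; the specific pairs and layer sizes are invented for illustration.

# Minimal sketch of a bidirectional associative memory (BAM) -- assumes
# bipolar pattern pairs and an outer-product correlation matrix; illustrative.
import numpy as np

def bam_weights(pairs):
    """pairs: list of (x, y) bipolar vectors; W maps the X layer to the Y layer."""
    return sum(np.outer(x, y) for x, y in pairs)

def bam_recall(W, x, steps=5):
    """Bounce activity between the two layers until it settles."""
    for _ in range(steps):
        y = np.where(x @ W >= 0, 1, -1)          # X -> Y
        x = np.where(W @ y >= 0, 1, -1)          # Y -> X
    return x, y

# Associate a 6-bit "input category" pattern with a 4-bit "output category" pattern.
pairs = [(np.array([1, -1, 1, -1, 1, -1]), np.array([1, 1, -1, -1])),
         (np.array([-1, -1, 1, 1, 1, 1]),  np.array([-1, 1, -1, 1]))]
W = bam_weights(pairs)
x_cue = pairs[0][0].copy()
x_cue[0] *= -1                                   # corrupt the X-side cue
print(bam_recall(W, x_cue)[1])                   # recovers the associated Y pattern [1, 1, -1, -1]

Starting from a corrupted X-layer cue, activity is passed back and forth between the two layers until it settles on a stored pair, here recovering the Y pattern associated with the first pair.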

Example

For example, the sentence fragments presented below are sufficient for most English-speaking adult humans to recall the missing information.

  1. "To be or not to be, that is _____."
  2. "I came, I saw, _____."

Many readers will realize the missing information is in fact:

  1. "To be or not to be, that is the question."
  2. "I came, I saw, I conquered."

This demonstrates the capability of autoassociative networks to recall a whole pattern from some of its parts.

Related Research Articles

Artificial neural network – Computational model used in machine learning, based on connected, hierarchical functions

Artificial neural networks (ANNs), usually simply called neural networks (NNs), are computing systems vaguely inspired by the biological neural networks that constitute animal brains.

Unsupervised learning – Machine learning technique

Unsupervised learning (UL) is a type of algorithm that learns patterns from untagged data. The hope is that, through mimicry, the machine is forced to build a compact internal representation of its world. In contrast to supervised learning (SL), where data is tagged by a human, e.g. as "car" or "fish", UL exhibits self-organization that captures patterns as neuronal predilections or probability densities. The other levels in the supervision spectrum are reinforcement learning, where the machine is given only a numerical performance score as its guidance, and semi-supervised learning, where a smaller portion of the data is tagged. Two broad methods in UL are neural networks and probabilistic methods.

Boltzmann machine

A Boltzmann machine is a type of stochastic recurrent neural network. It is a Markov random field. It was translated from statistical physics for use in cognitive science. The Boltzmann machine is based on a stochastic spin-glass model with an external field, i.e. a Sherrington–Kirkpatrick model that is a stochastic Ising model, applied to machine learning.

A Hopfield network is a form of recurrent artificial neural network and a type of spin-glass system popularised by John Hopfield in 1982, having been described earlier by Little in 1974 and building on Ernst Ising's work with Wilhelm Lenz on the Ising model. Hopfield networks serve as content-addressable ("associative") memory systems with binary threshold nodes. Hopfield networks also provide a model for understanding human memory.

Recurrent neural network – Computational model used in machine learning

A recurrent neural network (RNN) is a class of artificial neural networks where connections between nodes form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behavior. Derived from feedforward neural networks, RNNs can use their internal state (memory) to process variable length sequences of inputs. This makes them applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition.

Neural network – Structure in biology and artificial intelligence

A neural network is a network or circuit of neurons, or in a modern sense, an artificial neural network, composed of artificial neurons or nodes. Thus a neural network is either a biological neural network, made up of real biological neurons, or an artificial neural network, for solving artificial intelligence (AI) problems. The connections of the biological neuron are modeled as weights. A positive weight reflects an excitatory connection, while negative values mean inhibitory connections. All inputs are modified by a weight and summed. This activity is referred to as a linear combination. Finally, an activation function controls the amplitude of the output. For example, an acceptable range of output is usually between 0 and 1, or it could be −1 and 1.

Holonomic brain theory, also known as The Holographic Brain, is a branch of neuroscience investigating the idea that human consciousness is formed by quantum effects in or between brain cells. This is opposed by traditional neuroscience, which investigates the brain's behavior by looking at patterns of neurons and the surrounding chemistry, and which assumes that any quantum effects will not be significant at this scale. The entire field of quantum consciousness is often criticized as pseudoscience, as detailed on the main article thereof.

Quantum neural network – Quantum mechanics in neural networks

Quantum neural networks are computational neural network models which are based on the principles of quantum mechanics. The first ideas on quantum neural computation were published independently in 1995 by Subhash Kak and Ron Chrisley, engaging with the theory of quantum mind, which posits that quantum effects play a role in cognitive function. However, typical research in quantum neural networks involves combining classical artificial neural network models with the advantages of quantum information in order to develop more efficient algorithms. One important motivation for these investigations is the difficulty of training classical neural networks, especially in big data applications. The hope is that features of quantum computing such as quantum parallelism or the effects of interference and entanglement can be used as resources. Since the technological implementation of a quantum computer is still at an early stage, such quantum neural network models are mostly theoretical proposals that await full implementation in physical experiments.

A cultured neuronal network is a cell culture of neurons that is used as a model to study the central nervous system, especially the brain. Often, cultured neuronal networks are connected to an input/output device such as a multi-electrode array (MEA), thus allowing two-way communication between the researcher and the network. This model has proved to be an invaluable tool to scientists studying the underlying principles behind neuronal learning, memory, plasticity, connectivity, and information processing.

Memory is the process of storing and recalling information that was previously acquired. Memory occurs through three fundamental stages: encoding, storage, and retrieval. Storing refers to the process of placing newly acquired information into memory, which is modified in the brain for easier storage. Encoding this information makes the process of retrieval easier for the brain where it can be recalled and brought into conscious thinking. Modern memory psychology differentiates between the two distinct types of memory storage: short-term memory and long-term memory. Several models of memory have been proposed over the past century, some of them suggesting different relationships between short- and long-term memory to account for different ways of storing memory.

Hierarchical temporal memory (HTM) is a biologically constrained machine intelligence technology developed by Numenta. Originally described in the 2004 book On Intelligence by Jeff Hawkins with Sandra Blakeslee, HTM is primarily used today for anomaly detection in streaming data. The technology is based on neuroscience and the physiology and interaction of pyramidal neurons in the neocortex of the mammalian brain.

Neural cliques are network-level memory coding units in the hippocampus. They are functionally organized in a categorical and hierarchical manner. Researchers investigating the role of neural cliques have gained insight into the process of storing memories in the brain. Research evidence suggests that memory of events is achieved not through memorization of exact event details but through recreation of select images based on cognitive significance. This process enables the brain to exhibit large storage capacity and facilitates the capacity for abstract reasoning and generalization. Although several studies converge in demonstrating that real-time patterns of memory traces and sensory inputs are retained in the form of neural cliques, the topic remains an area of active research aimed at fully understanding this biological code.

Bidirectional associative memory (BAM) is a type of recurrent neural network. BAM was introduced by Bart Kosko in 1988. There are two types of associative memory, auto-associative and hetero-associative. BAM is hetero-associative, meaning given a pattern it can return another pattern which is potentially of a different size. It is similar to the Hopfield network in that they are both forms of associative memory. However, Hopfield nets return patterns of the same size.

There are many types of artificial neural networks (ANN).

Sparse distributed memory (SDM) is a mathematical model of human long-term memory introduced by Pentti Kanerva in 1988 while he was at NASA Ames Research Center. It is a generalized random-access memory (RAM) for long binary words. These words serve as both addresses to and data for the memory. The main attribute of the memory is sensitivity to similarity, meaning that a word can be read back not only by giving the original write address but also by giving one close to it, as measured by the number of mismatched bits.

An attractor network is a type of recurrent dynamical network that evolves toward a stable pattern over time. Nodes in the attractor network converge toward a pattern that may be fixed-point, cyclic, chaotic or random (stochastic). Attractor networks have largely been used in computational neuroscience to model neuronal processes such as associative memory and motor behavior, as well as in biologically inspired methods of machine learning. An attractor network contains a set of n nodes, which can be represented as vectors in a d-dimensional space where n > d. Over time, the network state tends toward one of a set of predefined states on a d-manifold; these are the attractors.

An artificial neural network's learning rule or learning process is a method, mathematical logic or algorithm which improves the network's performance and/or training time. Usually, this rule is applied repeatedly over the network. It is done by updating the weights and bias levels of a network when a network is simulated in a specific data environment. A learning rule may accept existing conditions of the network and will compare the expected result and actual result of the network to give new and improved values for weights and biases. Depending on the complexity of the actual model being simulated, the learning rule of the network can be as simple as an XOR gate or mean squared error, or as complex as the result of a system of differential equations.

Catastrophic interference, also known as catastrophic forgetting, is the tendency of an artificial neural network to completely and abruptly forget previously learned information upon learning new information. Neural networks are an important part of the network approach and connectionist approach to cognitive science. With these networks, human capabilities such as memory and learning can be modeled using computer simulations. Catastrophic interference is an important issue to consider when creating connectionist models of memory. It was originally brought to the attention of the scientific community by research from McCloskey and Cohen (1989), and Ratcliff (1990). It is a radical manifestation of the 'sensitivity-stability' dilemma or the 'stability-plasticity' dilemma. Specifically, these problems refer to the challenge of making an artificial neural network that is sensitive to, but not disrupted by, new information. Lookup tables and connectionist networks lie on opposite sides of the stability–plasticity spectrum. The former remains completely stable in the presence of new information but lacks the ability to generalize, i.e. infer general principles, from new inputs. On the other hand, connectionist networks like the standard backpropagation network can generalize to unseen inputs, but they are very sensitive to new information. Backpropagation models can be considered good models of human memory insofar as they mirror the human ability to generalize, but these networks often exhibit less stability than human memory. Notably, these backpropagation networks are susceptible to catastrophic interference. This is an issue when modelling human memory, because unlike these networks, humans typically do not show catastrophic forgetting.

In psychology, associative memory is defined as the ability to learn and remember the relationship between unrelated items. This would include, for example, remembering the name of someone or the aroma of a particular perfume. This type of memory deals specifically with the relationship between these different objects or concepts. A normal associative memory task involves testing participants on their recall of pairs of unrelated items, such as face-name pairs. Associative memory is a declarative memory structure and episodically based.

References

  1. Hopfield, J. J. (1 April 1982). "Neural networks and physical systems with emergent collective computational abilities". Proceedings of the National Academy of Sciences of the United States of America. 79 (8): 2554–2558. Bibcode:1982PNAS...79.2554H. doi:10.1073/pnas.79.8.2554. PMC 346238. PMID 6953413.
  2. Coppin, Ben. Artificial Intelligence Illuminated. Books.google.co.uk. Retrieved 2013-11-20.
  3. Kalita, Jugal. "Pattern Association or Associative Networks" (PDF).
  4. Thomas, Michael S. C.; McClelland, James L. "Connectionist models of cognition" (PDF).
  5. Golden, Richard M. (1986-03-01). "The "Brain-State-in-a-Box" neural model is a gradient descent algorithm". Journal of Mathematical Psychology. 30 (1): 73–80. doi:10.1016/0022-2496(86)90043-X. ISSN 0022-2496.
  6. Coppin, Ben (2004). Artificial Intelligence Illuminated. Jones & Bartlett Learning. ISBN 978-0-7637-3230-1.
  7. Hirahara, Makoto (2009), "Associative Memory", in Binder, Marc D.; Hirokawa, Nobutaka; Windhorst, Uwe (eds.), Encyclopedia of Neuroscience, Berlin, Heidelberg: Springer, p. 195, doi:10.1007/978-3-540-29678-2_392, ISBN 978-3-540-29678-2
  8. Kosko, B. (1988). "Bidirectional Associative Memories" (PDF). IEEE Transactions on Systems, Man, and Cybernetics. 18 (1): 49–60. doi:10.1109/21.87054.