Holographic associative memory

Holographic associative memory (HAM) is an information storage and retrieval system based on the principles of holography. Holograms are made by using two beams of light, called a "reference beam" and an "object beam". Together they produce an interference pattern on the film that encodes them both. Afterwards, by reproducing the reference beam, the hologram recreates a visual image of the original object. In theory, one could use the object beam to do the same thing: reproduce the original reference beam. In HAM, the pieces of information act like the two beams: each can be used to retrieve the other from the stored pattern. It can be thought of as an artificial neural network which mimics the way the brain uses information. The information is presented in abstract form by a complex vector which may be expressed directly by a waveform possessing frequency and magnitude. This waveform is analogous to the electrochemical impulses believed to transmit information between biological neuron cells.

Definition

HAM is part of the family of analog, correlation-based, associative, stimulus-response memories, where information is mapped onto the phase orientation of complex numbers. It can be considered a complex-valued artificial neural network. The holographic associative memory exhibits some remarkable characteristics: holographs have been shown to be effective for associative memory tasks, generalization, and pattern recognition with changeable attention. The ability to dynamically localize a search is central to natural memory.[1] For example, in visual perception, humans tend to focus on specific objects in a pattern, and they can effortlessly change that focus from object to object without requiring relearning. HAM provides a computational model which can mimic this ability by creating a representation of the focus. At the heart of this memory lies a bi-modal representation of patterns and a hologram-like complex spherical weight state-space. Besides the usual advantages of associative computing, this technique also has excellent potential for fast optical realization, because the underlying hyper-spherical computations can be implemented naturally with optical computation.

It is based on the principle of storing information in the form of stimulus-response patterns, where information is represented by the phase-angle orientations of complex numbers on a Riemann surface.[2] A very large number of stimulus-response patterns may be superimposed, or "enfolded", on a single neural element. Stimulus-response associations may be both encoded and decoded in one non-iterative transformation. Unlike connectionist neural networks, the mathematical basis requires no optimization of parameters and no error backpropagation. The principal requirement is for stimulus patterns to be made symmetric or orthogonal in the complex domain; HAM typically employs sigmoid pre-processing, in which raw inputs are orthogonalized and converted to Gaussian distributions.
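The encoding and decoding steps can be illustrated with a minimal numerical sketch, shown below. The array sizes, the NumPy implementation, and the particular sigmoid-to-phase mapping are illustrative assumptions rather than details of the published model; the sketch only shows several stimulus-response pairs being enfolded onto a single complex correlation matrix in one non-iterative step and then recalled, with the response carried by phase and a confidence measure carried by magnitude.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def to_phase(x):
    # Sigmoid pre-processing: map each real-valued element onto a phase angle
    # in (0, 2*pi), i.e. onto a unit-magnitude complex number.
    return np.exp(1j * 2.0 * np.pi / (1.0 + np.exp(-x)))

# Illustrative sizes: 10 stimulus-response pairs, 256-element stimuli, 8-element responses.
n_pairs, n_stim, n_resp = 10, 256, 8
S = to_phase(rng.normal(size=(n_pairs, n_stim)))   # stimulus phase vectors
R = to_phase(rng.normal(size=(n_pairs, n_resp)))   # response phase vectors

# Enfold all associations onto one complex correlation matrix in a single,
# non-iterative step (no parameter optimization, no error backpropagation).
H = S.conj().T @ R                                 # shape (n_stim, n_resp)

# Recall: present a stored stimulus and decode the associated response.
probe = S[3]
decoded = probe @ H / n_stim

phase_error = np.abs(np.angle(decoded * R[3].conj()))  # response is carried by phase
confidence = np.abs(decoded)                           # recognition is carried by magnitude
print(f"mean phase error: {phase_error.mean():.3f} rad, mean magnitude: {confidence.mean():.2f}")
```

Probing with a stimulus that was never stored produces a much smaller decoded magnitude, so the model reports low confidence rather than an arbitrary response.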

Principles of operation

  1. Stimulus-response associations are both learned and expressed in one non-iterative transformation; no backpropagation of error terms or iterative processing is required.
  2. The method forms a non-connectionist model in which a very large set of analog stimulus-response patterns, or complex associations, can be superimposed within an individual neuron cell.
  3. The generated phase angle communicates response information, and magnitude communicates a measure of recognition (or confidence in the result).
  4. The process permits the neural system to establish a dominance profile over stored information, thus exhibiting a memory profile of any range, from short-term to long-term memory.
  5. The process follows the non-disturbance rule: prior stimulus-response associations are minimally influenced by subsequent learning (see the sketch after this list).
  6. The information is presented in abstract form by a complex vector which may be expressed directly by a waveform possessing frequency and magnitude. This waveform is analogous to electrochemical impulses believed to transmit information between biological neuron cells.
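The non-disturbance rule in item 5 can be checked numerically with the same kind of toy model. As before, the sizes and the sigmoid-to-phase mapping are illustrative assumptions; the sketch superimposes one additional association onto an already-encoded correlation matrix and measures how much the recall of a previously stored pair changes.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def to_phase(x):
    # Sigmoid pre-processing onto unit-magnitude complex phases, as in the earlier sketch.
    return np.exp(1j * 2.0 * np.pi / (1.0 + np.exp(-x)))

n_pairs, n_stim, n_resp = 10, 256, 8
S = to_phase(rng.normal(size=(n_pairs, n_stim)))
R = to_phase(rng.normal(size=(n_pairs, n_resp)))
H = S.conj().T @ R                        # associations already enfolded

def recall_error(H, s, r):
    # Mean absolute phase error between the decoded and the stored response.
    decoded = s @ H / n_stim
    return np.abs(np.angle(decoded * r.conj())).mean()

before = recall_error(H, S[0], R[0])

# Superimpose one new association onto the same matrix, without revisiting old pairs.
s_new = to_phase(rng.normal(size=n_stim))
r_new = to_phase(rng.normal(size=n_resp))
H = H + np.outer(s_new.conj(), r_new)

after = recall_error(H, S[0], R[0])
print(f"recall error for a previously stored pair: {before:.3f} -> {after:.3f} rad")
```

With many fewer superimposed patterns than stimulus elements, the added association changes the recall error of the old pair only slightly.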

See also

Neural network (machine learning) - Computational model used in machine learning, based on connected, hierarchical functions

Neural networks are machine learning models inspired by the neuronal organization found in the biological neural networks of animal brains.

Neuromorphic computing is an approach to computing that is inspired by the structure and function of the human brain. A neuromorphic computer/chip is any device that uses physical artificial neurons to do computations. In recent times, the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems. The implementation of neuromorphic computing on the hardware level can be realized by oxide-based memristors, spintronic memories, threshold switches, transistors, among others. Training software-based neuromorphic systems of spiking neural networks can be achieved using error backpropagation, e.g., using Python based frameworks such as snnTorch, or using canonical learning rules from the biological learning literature, e.g., using BindsNet.

The receptive field, or sensory space, is a delimited medium where some physiological stimuli can evoke a sensory neuronal response in specific organisms.

Optical neural network

An optical neural network is a physical implementation of an artificial neural network with optical components. Early optical neural networks used a photorefractive volume hologram to interconnect arrays of input neurons to arrays of output neurons, with synaptic weights in proportion to the multiplexed hologram's strength. Volume holograms were further multiplexed using spectral hole burning to add a wavelength dimension to the spatial ones, achieving four-dimensional interconnects between two-dimensional arrays of neural inputs and outputs. This work led to extensive research on alternative methods that use the strength of the optical interconnect to implement neuronal communications.

A recurrent neural network (RNN) is one of the two broad types of artificial neural network, characterized by the direction of the flow of information between its layers. In contrast to the uni-directional feedforward neural network, it is a bi-directional artificial neural network, meaning that it allows the output from some nodes to affect subsequent input to the same nodes. Their ability to use internal state (memory) to process arbitrary sequences of inputs makes them applicable to tasks such as unsegmented, connected handwriting recognition or speech recognition. The term "recurrent neural network" is used to refer to the class of networks with an infinite impulse response, whereas "convolutional neural network" refers to the class of networks with a finite impulse response. Both classes of networks exhibit temporal dynamic behavior. A finite-impulse recurrent network is a directed acyclic graph that can be unrolled and replaced with a strictly feedforward neural network, while an infinite-impulse recurrent network is a directed cyclic graph that cannot be unrolled.

Neural circuit - Network or circuit of neurons

A neural circuit is a population of neurons interconnected by synapses to carry out a specific function when activated. Multiple neural circuits interconnect with one another to form large scale brain networks.

Holonomic brain theory is a branch of neuroscience investigating the idea that human consciousness is formed by quantum effects in or between brain cells. Holonomic refers to representations in a Hilbert phase space defined by both spectral and space-time coordinates. Holonomic brain theory is opposed by traditional neuroscience, which investigates the brain's behavior by looking at patterns of neurons and the surrounding chemistry.

Neuronal tuning refers to the hypothesized property of brain cells by which they selectively represent a particular type of sensory, association, motor, or cognitive information. Some neuronal responses have been hypothesized to be optimally tuned to specific patterns through experience. Neuronal tuning can be strong and sharp, as observed in primary visual cortex, or weak and broad, as observed in neural ensembles. Single neurons are hypothesized to be simultaneously tuned to several modalities, such as visual, auditory, and olfactory. Neurons hypothesized to be tuned to different signals are often hypothesized to integrate information from the different sources. In computational models called neural networks, such integration is the major principle of operation. The best examples of neuronal tuning can be seen in the visual, auditory, olfactory, somatosensory, and memory systems, although due to the small number of stimuli tested the generality of neuronal tuning claims is still an open question.

Quantum neural network - Quantum mechanics in neural networks

Quantum neural networks are computational neural network models which are based on the principles of quantum mechanics. The first ideas on quantum neural computation were published independently in 1995 by Subhash Kak and Ron Chrisley, engaging with the theory of quantum mind, which posits that quantum effects play a role in cognitive function. However, typical research in quantum neural networks involves combining classical artificial neural network models with the advantages of quantum information in order to develop more efficient algorithms. One important motivation for these investigations is the difficulty of training classical neural networks, especially in big data applications. The hope is that features of quantum computing such as quantum parallelism or the effects of interference and entanglement can be used as resources. Since the technological implementation of a quantum computer is still in a premature stage, such quantum neural network models are mostly theoretical proposals that await their full implementation in physical experiments.

The term hybrid neural network can have two meanings:

  1. Biological neural networks interacting with artificial neuronal models, and
  2. Artificial neural networks with a symbolic part.

Neural coding is a neuroscience field concerned with characterising the hypothetical relationship between the stimulus and the neuronal responses, and the relationship among the electrical activities of the neurons in the ensemble. Based on the theory that sensory and other information is represented in the brain by networks of neurons, it is believed that neurons can encode both digital and analog information.

Computational neurogenetic modeling (CNGM) is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biology, as well as engineering.

Echo state network - Type of reservoir computer

An echo state network (ESN) is a type of reservoir computer that uses a recurrent neural network with a sparsely connected hidden layer. The connectivity and weights of hidden neurons are fixed and randomly assigned. The weights of output neurons can be learned so that the network can produce or reproduce specific temporal patterns. The main interest of this network is that although its behavior is non-linear, the only weights that are modified during training are for the synapses that connect the hidden neurons to output neurons. Thus, the error function is quadratic with respect to the parameter vector, and it can be minimized by solving a linear system.
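The fixed-reservoir, trained-readout idea can be sketched in a few lines. The reservoir size, the spectral-radius scaling, and the toy prediction task below are illustrative assumptions; the point is that only the readout weights are fitted, by solving a ridge-regularised linear least-squares problem.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Illustrative sizes for a toy echo state network; only the readout is trained.
n_in, n_res, n_steps = 1, 100, 500

# Fixed, randomly assigned input and reservoir weights (never modified by training).
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep the spectral radius below 1

# Toy task: predict a phase-shifted copy of a sine wave.
t = np.linspace(0.0, 20.0 * np.pi, n_steps)
u = np.sin(t)[:, None]
y = np.sin(t + 0.3)[:, None]

# Drive the reservoir with the input and collect its states.
x = np.zeros(n_res)
states = np.empty((n_steps, n_res))
for k in range(n_steps):
    x = np.tanh(W_in @ u[k] + W @ x)
    states[k] = x

# Train only the readout weights: the error is quadratic in them,
# so a ridge-regularised linear least-squares solve suffices.
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ y)
print("readout training MSE:", float(np.mean((states @ W_out - y) ** 2)))
```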

Spiking neural network - Artificial neural network that mimics neurons

Spiking neural networks (SNNs) are artificial neural networks (ANN) that more closely mimic natural neural networks. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not transmit information at each propagation cycle, but rather transmit information only when a membrane potential—an intrinsic quality of the neuron related to its membrane electrical charge—reaches a specific value, called the threshold. When the membrane potential reaches the threshold, the neuron fires, and generates a signal that travels to other neurons which, in turn, increase or decrease their potentials in response to this signal. A neuron model that fires at the moment of threshold crossing is also called a spiking neuron model.
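The threshold-and-fire behaviour can be illustrated with a leaky integrate-and-fire neuron, one common spiking neuron model; all constants in the sketch below are illustrative assumptions rather than values from any particular publication.

```python
import numpy as np

# A minimal leaky integrate-and-fire neuron.
dt, tau = 1e-3, 20e-3                        # time step and membrane time constant (s)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0    # resting, threshold, and reset potentials

rng = np.random.default_rng(seed=3)
current = rng.uniform(0.8, 1.6, size=1000)   # noisy input current over 1 s

v = v_rest
spike_times = []
for step, i_in in enumerate(current):
    # The membrane potential integrates the input and leaks back toward rest.
    v += dt / tau * (-(v - v_rest) + i_in)
    if v >= v_thresh:                        # threshold crossing: the neuron fires
        spike_times.append(step * dt)
        v = v_reset                          # reset after the spike
print(f"{len(spike_times)} spikes in 1 s of simulated input")
```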

Reservoir computing is a framework for computation derived from recurrent neural network theory that maps input signals into higher dimensional computational spaces through the dynamics of a fixed, non-linear system called a reservoir. After the input signal is fed into the reservoir, which is treated as a "black box," a simple readout mechanism is trained to read the state of the reservoir and map it to the desired output. The first key benefit of this framework is that training is performed only at the readout stage, as the reservoir dynamics are fixed. The second is that the computational power of naturally available systems, both classical and quantum mechanical, can be used to reduce the effective computational cost.

Hierarchical temporal memory (HTM) is a biologically constrained machine intelligence technology developed by Numenta. Originally described in the 2004 book On Intelligence by Jeff Hawkins with Sandra Blakeslee, HTM is primarily used today for anomaly detection in streaming data. The technology is based on neuroscience and the physiology and interaction of pyramidal neurons in the neocortex of the mammalian brain.

Models of neural computation are attempts to elucidate, in an abstract and mathematical fashion, the core principles that underlie information processing in biological nervous systems, or functional components thereof.

There are many types of artificial neural networks (ANN).

Catastrophic interference, also known as catastrophic forgetting, is the tendency of an artificial neural network to abruptly and drastically forget previously learned information upon learning new information. Neural networks are an important part of the network approach and connectionist approach to cognitive science. With these networks, human capabilities such as memory and learning can be modeled using computer simulations.

Artificial neural networks (ANNs) are models created using machine learning to perform a number of tasks. Their creation was inspired by neural circuitry. While some of the computational implementations of ANNs relate to earlier discoveries in mathematics, the first implementation of ANNs was by psychologist Frank Rosenblatt, who developed the perceptron. Little research was conducted on ANNs in the 1970s and 1980s, with the AAAI calling that period an "AI winter".

References

  1. Khan, J. I. (1998). "Characteristics of multidimensional holographic associative memory in retrieval with dynamically localizable attention". IEEE Transactions on Neural Networks. 9 (3): 389–406. doi:10.1109/72.668882. ISSN 1045-9227.
  2. Sutherland, John G. (1 January 1990). "A holographic model of memory, learning and expression". International Journal of Neural Systems. 1 (3): 259–267. doi:10.1142/S0129065790000163.

Further reading