Pulse-coupled networks

Pulse-coupled networks or pulse-coupled neural networks (PCNNs) are neural models derived from modeling the cat's visual cortex and developed for high-performance biomimetic image processing. [1]

In 1989, Eckhorn introduced a neural model to emulate the mechanism of the cat's visual cortex. [2] The Eckhorn model provided a simple and effective tool for studying the visual cortex of small mammals, and was soon recognized as having significant application potential in image processing.

In 1994, Johnson adapted the Eckhorn model to an image processing algorithm, calling this algorithm a pulse-coupled neural network.

The basic property of Eckhorn's linking-field model (LFM) is the coupling term: the primary (feeding) input is modulated by a biased offset factor driven by the linking input. These inputs drive a threshold variable that decays from an initial high value; when the threshold decays below the internal activity, the neuron fires, the threshold is reset to a high value, and the process starts over. This differs from the standard integrate-and-fire model, which accumulates the input until it passes an upper limit and effectively "shorts out" to cause the pulse.
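
In the discrete form in which the model is usually implemented (a conventional restatement of the PCNN equations, as summarized for example in the review [1], rather than Eckhorn's original notation), each neuron $ij$ evolves as

$$
\begin{aligned}
F_{ij}[n] &= e^{-\alpha_F} F_{ij}[n-1] + V_F \sum_{kl} M_{ijkl} Y_{kl}[n-1] + S_{ij},\\
L_{ij}[n] &= e^{-\alpha_L} L_{ij}[n-1] + V_L \sum_{kl} W_{ijkl} Y_{kl}[n-1],\\
U_{ij}[n] &= F_{ij}[n]\left(1 + \beta L_{ij}[n]\right),\\
Y_{ij}[n] &= \begin{cases} 1, & U_{ij}[n] > \Theta_{ij}[n-1], \\ 0, & \text{otherwise,} \end{cases}\\
\Theta_{ij}[n] &= e^{-\alpha_\Theta} \Theta_{ij}[n-1] + V_\Theta Y_{ij}[n],
\end{aligned}
$$

where $S$ is the external stimulus, $F$ the feeding input, $L$ the linking input, $U$ the internal activity, $\Theta$ the dynamic threshold, $\beta$ the linking strength, and $M$ and $W$ the feeding and linking weight kernels. The modulatory product $F(1 + \beta L)$ is the coupling term described above.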

LFM uses this difference to sustain pulse bursts, something the standard model does not do at the single-neuron level. It is valuable to understand, however, that a detailed analysis of the standard model must include a shunting term, due to the floating voltage levels in the dendritic compartment(s); in turn, this causes an elegant multiple-modulation effect that enables a true higher-order network (HON). [3] [4] [5]

A PCNN is a two-dimensional neural network. Each neuron in the network corresponds to one pixel in an input image, receiving its corresponding pixel's color information (e.g. intensity) as an external stimulus. Each neuron also connects with its neighboring neurons, receiving local stimuli from them. The external and local stimuli are combined in an internal activation system, which accumulates them until they exceed a dynamic threshold, producing a pulse output. Through iterative computation, PCNN neurons produce temporal series of pulse outputs, which carry information about the input images and can be used for various image processing applications, such as image segmentation and feature generation. Compared with conventional image processing methods, PCNNs have several significant merits, including robustness against noise, independence of geometric variations in input patterns, and the capability of bridging minor intensity variations in input patterns.
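
The iteration is compact enough to state directly. Below is a minimal sketch in Python of a simplified PCNN in which the feeding input is reduced to the raw pixel intensity; the kernel and parameter values are illustrative assumptions, not values from the literature:

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn(image, steps=20, beta=0.2, alpha_theta=0.2, v_theta=20.0):
    """Simplified PCNN sketch: one neuron per pixel, 3x3 linking neighbourhood."""
    S = image.astype(float)               # external stimulus (pixel intensity)
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])  # local linking weights (assumed)
    Y = np.zeros_like(S)                  # pulse outputs, initially silent
    theta = np.full_like(S, v_theta)      # dynamic threshold, starts high
    pulses = []
    for _ in range(steps):
        L = convolve(Y, kernel, mode="constant")            # local stimuli from neighbors
        U = S * (1.0 + beta * L)                            # internal activation
        Y = (U > theta).astype(float)                       # fire where activity beats threshold
        theta = np.exp(-alpha_theta) * theta + v_theta * Y  # decay; jump up on firing
        pulses.append(Y.copy())
    return pulses                         # temporal series of binary pulse images
```

Summing or otherwise time-coding the binary images in `pulses` gives the pulse signatures used for segmentation and feature generation.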

A simplified PCNN called a spiking cortical model was developed in 2009. [6]

Applications

PCNNs are useful for image processing, as discussed in a book by Thomas Lindblad and Jason M. Kinser. [7]

PCNNs have been used in a variety of image processing applications, including image segmentation, pattern recognition, feature generation, face extraction, motion detection, region growing, image denoising, [8] and image enhancement. [9]

Multidimensional pulse image processing of chemical structure data using PCNN has been discussed by Kinser et al. [10]

They have also been applied to the all-pairs shortest path problem. [11]
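
The gist of such schemes can be pictured as an autowave: a pulse started at a source node travels along each edge for a time proportional to the edge weight, and every node's first firing time is its shortest-path distance from the source. The sketch below illustrates that general idea under assumed conventions; it is not the matrix-multiplication algorithm of [11]:

```python
import numpy as np

def autowave_distances(W, source):
    """First-firing times of a pulse wave started at `source`.
    W[i, j] is the non-negative travel time of edge i -> j (np.inf if absent)."""
    n = W.shape[0]
    fire_time = np.full(n, np.inf)   # time at which each neuron first pulses
    fire_time[source] = 0.0
    frontier = {source}              # neurons whose firing time just improved
    while frontier:
        next_frontier = set()
        for i in frontier:
            for j in range(n):
                t = fire_time[i] + W[i, j]   # pulse from i reaches j at time t
                if t < fire_time[j]:         # earliest pulse so far wins
                    fire_time[j] = t
                    next_frontier.add(j)
        frontier = next_frontier
    return fire_time
```

Running this from every source yields all-pairs distances; the algorithm in [11] instead organizes the propagation through matrix multiplication.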

Related Research Articles

Neural network (machine learning): Computational model used in machine learning, based on connected, hierarchical functions

In machine learning, a neural network is a model inspired by the organization of biological neural networks in animal brains.

Color constancy: How humans perceive color

Color constancy is an example of subjective constancy and a feature of the human color perception system which ensures that the perceived color of objects remains relatively constant under varying illumination conditions. A green apple for instance looks green to us at midday, when the main illumination is white sunlight, and also at sunset, when the main illumination is red. This helps us identify objects.

Computational neuroscience is a branch of neuroscience which employs mathematics, computer science, theoretical analysis and abstractions of the brain to understand the principles that govern the development, structure, physiology and cognitive abilities of the nervous system.

Image segmentation: Partitioning a digital image into segments

In digital image processing and computer vision, image segmentation is the process of partitioning a digital image into multiple image segments, also known as image regions or image objects. The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics.

The receptive field, or sensory space, is a delimited region in which certain physiological stimuli can evoke a sensory neuronal response in specific organisms.

Neural network (biology): Structure in nervous systems

A neural network, also called a neuronal network, is an interconnected population of neurons. Biological neural networks are studied to understand the organization and functioning of nervous systems.

Winner-take-all is a computational principle applied in computational models of neural networks by which neurons compete with each other for activation. In the classical form, only the neuron with the highest activation stays active while all other neurons shut down; however, other variations allow more than one neuron to be active, for example the soft winner-take-all, in which a power function is applied to the neurons.
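
A minimal sketch of both variants (illustrative only; the power exponent is an assumed parameter):

```python
import numpy as np

def hard_wta(a):
    """Classical winner-take-all: only the most active neuron stays on."""
    out = np.zeros_like(a)
    out[np.argmax(a)] = a.max()
    return out

def soft_wta(a, p=3.0):
    """Soft variant: a power function sharpens the competition
    while leaving every neuron with some activation."""
    powered = np.power(a, p)
    return powered / powered.sum()

print(hard_wta(np.array([0.2, 0.9, 0.5])))  # [0.  0.9 0. ]
print(soft_wta(np.array([0.2, 0.9, 0.5])))  # winner dominates, losers shrink
```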

Neural oscillation: Brainwaves, repetitive patterns of neural activity in the central nervous system

Neural oscillations, or brainwaves, are rhythmic or repetitive patterns of neural activity in the central nervous system. Neural tissue can generate oscillatory activity in many ways, driven either by mechanisms within individual neurons or by interactions between neurons. In individual neurons, oscillations can appear either as oscillations in membrane potential or as rhythmic patterns of action potentials, which then produce oscillatory activation of post-synaptic neurons. At the level of neural ensembles, synchronized activity of large numbers of neurons can give rise to macroscopic oscillations, which can be observed in an electroencephalogram. Oscillatory activity in groups of neurons generally arises from feedback connections between the neurons that result in the synchronization of their firing patterns. The interaction between neurons can give rise to oscillations at a different frequency than the firing frequency of individual neurons. A well-known example of macroscopic neural oscillations is alpha activity.

Efficient coding hypothesis

The efficient coding hypothesis was proposed by Horace Barlow in 1961 as a theoretical model of sensory coding in the brain. Within the brain, neurons communicate with one another by sending electrical impulses referred to as action potentials or spikes. One goal of sensory neuroscience is to decipher the meaning of these spikes in order to understand how the brain represents and processes information about the outside world. Barlow hypothesized that the spikes in the sensory system form a neural code for efficiently representing sensory information, where "efficiently" means that the code minimizes the number of spikes needed to transmit a given signal. This is somewhat analogous to transmitting information across the internet, where different file formats can be used to transmit a given image: different formats require different numbers of bits to represent the same image at a given distortion level, and some are better suited to representing certain classes of images than others. According to this model, the brain is thought to use a code suited to representing the visual and audio information characteristic of an organism's natural environment.

Neural coding is a neuroscience field concerned with characterising the hypothetical relationship between the stimulus and the neuronal responses, and the relationship among the electrical activities of the neurons in the ensemble. Based on the theory that sensory and other information is represented in the brain by networks of neurons, it is believed that neurons can encode both digital and analog information.

Synaptic gating

Synaptic gating is the ability of neural circuits to gate inputs by either suppressing or facilitating specific synaptic activity. Selective inhibition of certain synapses has been studied thoroughly, and recent studies have supported the existence of permissively gated synaptic transmission. In general, synaptic gating involves a mechanism of central control over neuronal output. It includes a sort of gatekeeper neuron, which has the ability to influence transmission of information to selected targets independently of the parts of the synapse upon which it exerts its action.

Spiking neural network: Artificial neural network that mimics neurons

Spiking neural networks (SNNs) are artificial neural networks (ANNs) that more closely mimic natural neural networks. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not transmit information at each propagation cycle, but rather transmit information only when a membrane potential (an intrinsic quality of the neuron related to its membrane electrical charge) reaches a specific value, called the threshold. When the membrane potential reaches the threshold, the neuron fires and generates a signal that travels to other neurons which, in turn, increase or decrease their potentials in response to this signal. A neuron model that fires at the moment of threshold crossing is also called a spiking neuron model.
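
The threshold-and-reset mechanism is easiest to see in the leaky integrate-and-fire model, the simplest spiking neuron; the sketch below uses illustrative constants:

```python
def lif_spike_train(currents, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward
    rest, integrates the input current, and emits a spike (then resets)
    whenever it crosses the threshold."""
    v, spikes = v_reset, []
    for i_in in currents:
        v += (dt / tau) * (v_reset - v + i_in)  # leaky integration step
        if v >= v_thresh:                       # threshold crossing: fire
            spikes.append(1)
            v = v_reset                         # reset after the spike
        else:
            spikes.append(0)
    return spikes

print(lif_spike_train([1.5] * 40))  # constant drive yields a regular spike train
```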

Hierarchical temporal memory (HTM) is a biologically constrained machine intelligence technology developed by Numenta. Originally described in the 2004 book On Intelligence by Jeff Hawkins with Sandra Blakeslee, HTM is primarily used today for anomaly detection in streaming data. The technology is based on neuroscience and the physiology and interaction of pyramidal neurons in the neocortex of the mammalian brain.

A modular neural network is an artificial neural network characterized by a series of independent neural networks moderated by some intermediary. Each independent neural network serves as a module and operates on separate inputs to accomplish some subtask of the task the network hopes to perform. The intermediary takes the outputs of each module and processes them to produce the output of the network as a whole. The intermediary only accepts the modules' outputs—it does not respond to, nor otherwise signal, the modules. As well, the modules do not interact with each other.
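
A toy sketch of the arrangement (all names, shapes, and the tanh modules are made-up illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)

def module(x, w):
    """One independent module: a single tanh layer over its own input slice."""
    return np.tanh(w @ x)

w_a = rng.normal(size=(4, 8))    # module A weights
w_b = rng.normal(size=(4, 8))    # module B weights
w_out = rng.normal(size=(2, 8))  # intermediary weights

def modular_net(x):
    out_a = module(x[:8], w_a)   # module A sees only the first input slice
    out_b = module(x[8:], w_b)   # module B sees only the second input slice
    # The intermediary reads only the modules' outputs and never signals back.
    return w_out @ np.concatenate([out_a, out_b])

print(modular_net(rng.normal(size=16)))
```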

The network of the human nervous system is composed of nodes that are connected by links. The connectivity may be viewed anatomically, functionally, or electrophysiologically. These perspectives are presented in several Wikipedia articles, including Connectionism, Biological neural network, Artificial neural network, and Computational neuroscience, as well as in books by Ascoli, G. A. (2002), Sterratt, D., Graham, B., Gillies, A., & Willshaw, D. (2011), Gerstner, W., & Kistler, W. (2002), and Rumelhart, D. E., McClelland, J. L., and the PDP Research Group (1986), among others. The focus of this article is a comprehensive view of modeling a neural network. Once an approach based on the perspective and connectivity is chosen, the models are developed at microscopic, mesoscopic, or macroscopic (system) levels. Computational modeling refers to models that are developed using computing tools.

A convolutional neural network (CNN) is a regularized type of feed-forward neural network that learns features by itself via filter (kernel) optimization. The vanishing and exploding gradients seen during backpropagation in earlier neural networks are prevented by using regularized weights over fewer connections. For example, each neuron in a fully connected layer would require 10,000 weights to process an image of 100 × 100 pixels, whereas a cascaded 5 × 5 convolution kernel needs only 25 shared weights. Higher-layer features are extracted from wider context windows, compared to lower-layer features.
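
The parameter saving in that example works out as follows (a back-of-the-envelope check using the shapes assumed in the text):

```python
# Fully connected: every neuron sees every pixel of a 100 x 100 image.
dense_weights_per_neuron = 100 * 100  # 10,000 weights per neuron

# Convolutional: one 5 x 5 kernel is shared across all image positions.
conv_weights_per_kernel = 5 * 5       # 25 weights, reused everywhere

print(dense_weights_per_neuron // conv_weights_per_kernel)  # 400x fewer weights
```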

Binocular neurons are neurons in the visual system that assist in the creation of stereopsis from binocular disparity. They have been found in the primary visual cortex where the initial stage of binocular convergence begins. Binocular neurons receive inputs from both the right and left eyes and integrate the signals together to create a perception of depth.

A binding neuron (BN) is an abstract concept of the processing of input impulses in a generic neuron, based on their temporal coherence and the level of neuronal inhibition. Mathematically, the concept may be implemented by most neuronal models, including the well-known leaky integrate-and-fire model. The BN concept originated in 1996 and 1998 papers by A. K. Vidybida.

DeepDream: Software program

DeepDream is a computer vision program created by Google engineer Alexander Mordvintsev that uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, creating deliberately over-processed, dream-like images reminiscent of a psychedelic experience.

Carsen Stringer: American computational neuroscientist

Carsen Stringer is an American computational neuroscientist and Group Leader at the Howard Hughes Medical Institute Janelia Research Campus. Stringer uses machine learning and deep neural networks to visualize large scale neural recordings and then probe the neural computations that give rise to visual processing in mice. Stringer has also developed several novel software packages that enable cell segmentation and robust analyses of neural recordings and mouse behavior.

References

  1. Zhan, K.; Shi, J.; Wang, H.; Xie, Y.; Li, Q. (2017). "Computational mechanisms of pulse-coupled neural networks: A comprehensive review". Archives of Computational Methods in Engineering. 24 (3): 573–588. doi:10.1007/s11831-016-9182-3. S2CID 57453279.
  2. Eckhorn, R.; Reitboeck, H. J.; Arndt, M.; Dicke, P. (1989). "A neural network for feature linking via synchronous activity: results from cat visual cortex and from simulations". In Cotterill, R. M. J. (ed.). Models of Brain Function. Cambridge: Cambridge University Press. pp. 255–272.
  3. Johnson, J. L.; Padgett, M. L. (1999). "PCNN models and applications". IEEE Transactions on Neural Networks. 10 (3): 480–498. doi:10.1109/72.761706. PMID 18252547. (For the shunting terms.)
  4. Giles, C. Lee; Sun, Guo-Zheng; Chen, Hsing-Hen; Lee, Yee-Chun; Chen, Dong (1989). "Higher Order Recurrent Networks and Grammatical Inference". Advances in Neural Information Processing Systems. 2. Retrieved 2024-03-12.
  5. Giles, C.; Griffin, R.; Maxwell, T. (1987). "Encoding Geometric Invariances in Higher-Order Neural Networks". Neural Information Processing Systems. Retrieved 2024-03-12.
  6. Zhan, Kun; Zhang, Hongjuan; Ma, Yide (December 2009). "New Spiking Cortical Model for Invariant Texture Retrieval and Image Processing". IEEE Transactions on Neural Networks. 20 (12): 1980–1986. doi:10.1109/TNN.2009.2030585. PMID 19906586.
  7. Lindblad, Thomas; Kinser, Jason M. (2005). Image Processing Using Pulse-Coupled Neural Networks (2nd, rev. ed.). Berlin; New York: Springer. ISBN 3-540-24218-X.
  8. Zhang, Y. (2008). "Improved Image Filter based on SPCNN". Science in China Series F. 51 (12): 2115–2125. doi:10.1007/s11432-008-0124-z. S2CID 22319368.
  9. Wu, L. (2010). "Color Image Enhancement based on HVS and PCNN". Science China Information Sciences. 53 (10): 1963–1976. doi:10.1007/s11432-010-4075-9.
  10. Kinser, Jason M.; Waldemark, Karina; Lindblad, Thomas; Jacobsson, Sven P. (May 2000). "Multidimensional pulse image processing of chemical structure data". Chemometrics and Intelligent Laboratory Systems. 51 (1): 115–124. doi:10.1016/S0169-7439(00)00065-4.
  11. Wei, G.; Wang, S. (2011). "A novel algorithm for all pairs shortest path problem based on matrix multiplication and pulse coupled neural network". Digital Signal Processing. 21 (4): 517–521. Bibcode:2011DSP....21..517Z. doi:10.1016/j.dsp.2011.02.004.