In psychology, parallel processing is the ability of the brain to simultaneously process incoming stimuli of differing quality. [1] Parallel processing is associated with the visual system in that the brain divides what it sees into four components: color, motion, shape, and depth. These are individually analyzed and then compared to stored memories, which helps the brain identify what is being viewed. [2] The brain then combines all of these components into the field of view that is seen and comprehended. [3] This is a continual and seamless operation. For example, someone standing between two groups of people carrying on two different conversations may be able to pick up only some information from each conversation at the same time. [4] Some experimental psychologists have linked parallel processing to the Stroop effect (observed in the Stroop test, where the name of a color is printed in a mismatched ink color). [5] In the Stroop effect, people's selective attention reveals an inability to attend to all stimuli at once. [6]
In 1990, the American psychologist David Rumelhart proposed the model of parallel distributed processing (PDP) in hopes of studying neural processes through computer simulations. [7] According to Rumelhart, the PDP model represents information processing as interactions between elements called units, with the interactions being either excitatory or inhibitory in nature. [8] Parallel distributed processing models are neurally inspired, emulating the organisational structure of the nervous systems of living organisms, and a general mathematical framework is provided for them. [9]
Parallel processing models assume that information is represented in the brain using patterns of activation. Information processing encompasses the interactions of neuron-like units linked by synapse-like connections, which can be either excitatory or inhibitory. Each unit's activation level is updated as a function of the connection strengths and the activation levels of the other units. A set of response units is activated by the propagation of activation patterns, and the connection weights are eventually adjusted through learning. [10]
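These assumptions can be sketched numerically. The following is a minimal illustration only: the three-unit network, the weight values, and the logistic squashing function are assumptions chosen for the sketch, not details taken from the PDP literature.

```python
import numpy as np

# Activation pattern over three neuron-like units.
activation = np.array([0.2, 0.8, 0.5])

# Synapse-like connection strengths: positive weights are excitatory,
# negative weights are inhibitory.
weights = np.array([
    [0.0,  0.6, -0.4],
    [0.3,  0.0,  0.5],
    [-0.2, 0.7,  0.0],
])

# Each unit's net input is a weighted sum of the other units' activations.
net_input = weights @ activation

# Update every unit's activation level from its net input
# (a logistic squash into (0, 1) is assumed here for illustration).
activation = 1.0 / (1.0 + np.exp(-net_input))

print(activation)
```

Learning would then adjust `weights` over time; a concrete weight-update rule is a separate modeling choice.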
In contrast to parallel processing, serial processing involves sequential processing of information, without any overlap of processing times. [11] The distinction between these two processing models is most evident when a visual stimulus must be located and processed among others (a task called visual search).
In the case of serial processing, the elements are searched one after another until the target is found; otherwise, the search continues to the end of the display to confirm that the target is absent. This reduces accuracy and increases response time for displays with more objects.
In the case of parallel processing, on the other hand, all objects are processed simultaneously, although completion times may vary. This may or may not reduce accuracy, but the time course is similar irrespective of display size. [12]
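The contrasting time courses can be caricatured in a toy timing sketch. The per-item cost and the flat parallel time below are arbitrary assumptions used only to show the qualitative difference: serial search time grows with display size, parallel search time does not.

```python
PER_ITEM_TIME = 50  # assumed cost (ms) to process one object

def serial_search_time(display_size, target_position=None):
    """Items are checked one by one; the search stops at the target,
    or exhausts the display when the target is absent."""
    checked = target_position if target_position is not None else display_size
    return checked * PER_ITEM_TIME

def parallel_search_time(display_size):
    """All items are processed at once, so time is flat in display size."""
    return PER_ITEM_TIME

for n in (4, 8, 16):
    print(n, serial_search_time(n), parallel_search_time(n))
```

Real search data are noisier than this, which is one reason serial and parallel accounts can be hard to distinguish experimentally.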
However, there are concerns about the efficiency of parallel processing models for complex tasks, which are discussed later in this article.
There are eight major aspects of a parallel distributed processing model: [8]
Processing units: These units may include abstract elements such as features, shapes and words, and are generally categorised into three types: input, output and hidden units.
State of activation: This is a representation of the state of the system. The pattern of activation is represented using a vector of N real numbers, over the set of processing units. It is this pattern that captures what the system is representing at any time.
Output function: An output function maps the current state of activation to an output signal. The units interact with their neighbouring units by transmitting signals whose strengths are determined by their degree of activation, which in turn determines the degree to which they affect their neighbours.
Pattern of connectivity: The pattern of connectivity determines how the system will react to an arbitrary input. The total pattern of connectivity is represented by specifying the weights for every connection. A positive weight represents an excitatory input and a negative weight represents an inhibitory input.
Propagation rule: A net input is produced for each type of input by rules that take the output vector and combine it with the connectivity matrices. The more complex the pattern of connectivity, the more complex these rules are.
Activation rule: A new state of activation is produced for each unit by combining the net input impinging on that unit with its current state of activation.
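The propagation and activation rules can be sketched together in a few lines. Everything concrete here is an assumption for illustration: the split into separate excitatory and inhibitory connectivity matrices, the decay term, and all numeric values are not Rumelhart's exact formulation.

```python
import numpy as np

output = np.array([1.0, 0.0, 0.5])  # output vector of the three units

# Assumed connectivity matrices, one per type of input.
excitatory = np.array([[0.0, 0.4, 0.2],
                       [0.5, 0.0, 0.1],
                       [0.3, 0.2, 0.0]])
inhibitory = np.array([[0.0, 0.1, 0.3],
                       [0.2, 0.0, 0.0],
                       [0.1, 0.4, 0.0]])

# Propagation rule: combine the output vector with each connectivity
# matrix to get a net input per type, then an overall net input.
net_excitation = excitatory @ output
net_inhibition = inhibitory @ output
net_input = net_excitation - net_inhibition

# Activation rule: the new state combines the net input with the
# current state (a linear decay toward zero is assumed here).
decay = 0.1
current = np.array([0.5, 0.5, 0.5])
new_activation = (1 - decay) * current + net_input

print(new_activation)
```

With these numbers the update yields activations of 0.40, 0.80 and 0.65 for the three units.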
Learning rule: The patterns of connectivity are modified through experience. The modifications can be of three types: the development of new connections, the loss of existing connections, and the modification of the strengths of connections that already exist. The first two can be considered special cases of the last: changing a connection's strength from zero to a positive or negative value amounts to forming a new connection, and changing it to zero amounts to losing an existing one.
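All three kinds of change can be expressed as weight updates on one matrix. A simple Hebbian-style rule is assumed below purely for illustration; the PDP framework itself admits many different learning rules, and the values here are made up.

```python
import numpy as np

weights = np.array([[0.0, 0.8],
                    [0.3, 0.0]])

pre = np.array([1.0, 0.5])   # presynaptic activations
post = np.array([0.5, 1.0])  # postsynaptic activations
rate = 0.1

# Hebbian-style update: strengthen connections between co-active units.
# Note that weights[0, 0] moves from exactly zero to a nonzero value,
# i.e. a "new" connection forms as a special case of strength change.
weights = weights + rate * np.outer(post, pre)

# Losing an existing connection is likewise just setting its strength
# back to zero.
weights[0, 1] = 0.0

print(weights)
```

After the update, the previously absent connection (0,0) carries weight 0.05, the existing connection (1,0) has strengthened to 0.4, and connection (0,1) has been lost.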
Representation of the environment: In PDP models, the environment is represented as a time-varying stochastic function over the space of input patterns. [13] This means that at any given point, there is some probability that any of the possible input patterns is impinging on the input units. [9]
Rumelhart's book 'Parallel Distributed Processing' illustrates the model with an example of individuals who live in the same neighborhood and belong to different gangs. Other information is also included, such as their names, age groups, marital status, and occupations within their respective gangs. Rumelhart treated each category as a 'unit', with each individual having connections to the relevant units. For instance, if more information is sought about an individual named Ralph, that name unit is activated, revealing connections to Ralph's other properties, such as his marital status or age group. [8]
To sense depth, humans use both eyes to see three-dimensional objects. This sense is present at birth in humans and some animals, such as cats, dogs, owls, and monkeys. [14] Animals with wider-set eyes, such as horses and cows, have a harder time establishing depth. A special depth test for infants, named the visual cliff, was devised. [15] This test consisted of a table, half coated in a checkerboard pattern, with the other half a clear plexiglass sheet revealing a second checkerboard platform about a foot below. Although the plexiglass was safe to climb on, the infants refused to cross over because they perceived a visual cliff. This test indicated that most infants already have a good sense of depth. The phenomenon is similar to how adults perceive heights.
Certain cues help establish depth perception. Binocular cues arise from humans' two eyes, whose slightly different images are subconsciously compared to calculate distance. [16] This idea of two separate images is used by 3-D and VR filmmakers to give two-dimensional footage the element of depth. Monocular cues can be used by a single eye with hints from the environment, including relative height, relative size, linear perspective, lights and shadows, and relative motion. [15] Each hint establishes small facts about a scene, and together they form a perception of depth. Binocular and monocular cues are used constantly and subconsciously to sense depth.
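The comparison of the two eyes' images can be made concrete with the standard stereo triangulation formula, depth = focal length × baseline / disparity: the more an object shifts between the two views (its disparity), the closer it is. The numbers below (an eye-like 65 mm baseline and a camera-style focal length in pixels) are assumptions chosen for illustration.

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Distance to a point seen from two viewpoints a baseline apart."""
    return focal_length_px * baseline_m / disparity_px

# Closer objects shift more between the two views (larger disparity).
near = depth_from_disparity(800, 0.065, 40)  # large disparity -> near
far = depth_from_disparity(800, 0.065, 4)    # small disparity -> far
print(near, far)
```

This is the same geometry 3-D filmmakers exploit in reverse: presenting each eye a horizontally offset image creates disparity, and the visual system reads it back as depth.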
Limitations of parallel processing have been brought up in several analytical studies. The main limitations highlighted include the brain's capacity limits, attentional blink interference, limited processing capabilities, and information limitations in visual searches.
There are processing limits in the brain for the execution of complex tasks like object recognition. Not all parts of the brain can process at full capacity in parallel. Attention controls the allocation of resources to tasks; to work efficiently, attention must be guided from object to object. [17]
These limits to attentional resources sometimes lead to serial bottlenecks in parallel processing, meaning that parallel processing is obstructed by serial processing in between. However, there is evidence for coexistence of serial and parallel processes. [18]
The feature integration theory by Anne Treisman is one of the theories that integrates serial and parallel processing while taking into account attentional resources. It consists of two stages: a preattentive stage, in which basic features are registered in parallel, and a focused attention stage, in which those features are serially combined into perceived objects.
Attention is the concentration of awareness on some phenomenon to the exclusion of other stimuli. It is a process of selectively concentrating on a discrete aspect of information, whether considered subjective or objective. William James (1890) wrote that "Attention is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence." Attention has also been described as the allocation of limited cognitive processing resources. Attention is manifested by an attentional bottleneck, in terms of the amount of data the brain can process each second; for example, in human vision, only less than 1% of the visual input data can enter the bottleneck, leading to inattentional blindness.
Connectionism is the name of an approach to the study of human mental processes and cognition that utilizes mathematical models known as connectionist networks or artificial neural networks. Connectionism has had many 'waves' since its beginnings.
The consciousness and binding problem is the problem of how objects, background and abstract or emotional features are combined into a single experience.
In psychology, the Stroop effect is the delay in reaction time between congruent and incongruent stimuli.
James Lloyd "Jay" McClelland, FBA is the Lucie Stern Professor at Stanford University, where he was formerly the chair of the Psychology Department. He is best known for his work on statistical learning and Parallel Distributed Processing, applying connectionist models to explain cognitive phenomena such as spoken word recognition and visual word recognition. McClelland is to a large extent responsible for the large increase in scientific interest in connectionism in the 1980s.
A feedforward neural network (FNN) is one of the two broad types of artificial neural network, characterized by direction of the flow of information between its layers. Its flow is uni-directional, meaning that the information in the model flows in only one direction—forward—from the input nodes, through the hidden nodes and to the output nodes, without any cycles or loops, in contrast to recurrent neural networks, which have a bi-directional flow. Modern feedforward networks are trained using the backpropagation method and are colloquially referred to as the "vanilla" neural networks.
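A single forward pass through such a network can be sketched in a few lines. The weights below are fixed by hand purely for illustration (in practice they would be learned via backpropagation), and the ReLU activation is an assumed choice.

```python
import numpy as np

def relu(x):
    """Common hidden-layer nonlinearity: max(0, x) elementwise."""
    return np.maximum(0.0, x)

x = np.array([1.0, -1.0])        # input nodes

W1 = np.array([[0.5, -0.5],
               [1.0,  1.0]])     # input -> hidden weights (assumed)
W2 = np.array([[1.0, -1.0]])     # hidden -> output weights (assumed)

# Information flows in one direction only: input -> hidden -> output,
# with no cycles or loops, which is what makes the network feedforward.
hidden = relu(W1 @ x)
output = W2 @ hidden
print(output)
```

A recurrent network would differ precisely in feeding `hidden` (or `output`) back into an earlier layer on the next time step.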
A neural network, also called a neuronal network, is an interconnected population of neurons. Biological neural networks are studied to understand the organization and functioning of nervous systems.
Holonomic brain theory is a branch of neuroscience investigating the idea that human consciousness is formed by quantum effects in or between brain cells. Holonomic refers to representations in a Hilbert phase space defined by both spectral and space-time coordinates. Holonomic brain theory is opposed by traditional neuroscience, which investigates the brain's behavior by looking at patterns of neurons and the surrounding chemistry.
Affective neuroscience is the study of how the brain processes emotions. This field combines neuroscience with the psychological study of personality, emotion, and mood. The basis of emotions and what emotions are remains an issue of debate within the field of affective neuroscience.
David Everett Rumelhart was an American psychologist who made many contributions to the formal analysis of human cognition, working primarily within the frameworks of mathematical psychology, symbolic artificial intelligence, and parallel distributed processing. He also admired formal linguistic approaches to cognition, and explored the possibility of formulating a formal grammar to capture the structure of stories.
In cognitive psychology, the word superiority effect (WSE) refers to the phenomenon that people have better recognition of letters presented within words as compared to isolated letters and to letters presented within nonword strings. Studies have also found a WSE when letter identification within words is compared to letter identification within pseudowords and pseudohomophones.
Visual search is a type of perceptual task requiring attention that typically involves an active scan of the visual environment for a particular object or feature among other objects or features. Visual search can take place with or without eye movements. The ability to consciously locate an object or target amongst a complex array of stimuli has been extensively studied over the past 40 years. Practical examples of using visual search can be seen in everyday life, such as when one is picking out a product on a supermarket shelf, when animals are searching for food among piles of leaves, when trying to find a friend in a large crowd of people, or simply when playing visual search games such as Where's Wally?
Memory has the ability to encode, store and recall information. Memories give an organism the capability to learn and adapt from previous experiences as well as build relationships. Encoding allows a perceived item of use or interest to be converted into a construct that can be stored within the brain and recalled later from long-term memory. Working memory stores information for immediate use or manipulation, which is aided through hooking onto previously archived items already present in the long-term memory of an individual.
TRACE is a connectionist model of speech perception, proposed by James McClelland and Jeffrey Elman in 1986. It is based on a structure called "the TRACE," a dynamic processing structure made up of a network of units, which performs as the system's working memory as well as the perceptual processing mechanism. TRACE was made into a working computer program for running perceptual simulations. These simulations are predictions about how a human mind/brain processes speech sounds and words as they are heard in real time.
The logogen model of 1969 is a model of speech recognition that uses units called "logogens" to explain how humans comprehend spoken or written words. Logogens are a vast number of specialized recognition units, each able to recognize one specific word. This model provides for the effects of context on word recognition.
The network of the human nervous system comprises nodes that are connected by links. The connectivity may be viewed anatomically, functionally, or electrophysiologically. These views are presented in several Wikipedia articles, including Connectionism, Biological neural network, Artificial neural network, and Computational neuroscience, as well as in books by Ascoli, G. A. (2002); Sterratt, D., Graham, B., Gillies, A., & Willshaw, D. (2011); Gerstner, W., & Kistler, W. (2002); and Rumelhart, D. E., McClelland, J. L., and the PDP Research Group (1986), among others. Once an approach based on the perspective and connectivity is chosen, models are developed at microscopic, mesoscopic, or macroscopic (system) levels. Computational modeling refers to models that are developed using computing tools.
Catastrophic interference, also known as catastrophic forgetting, is the tendency of an artificial neural network to abruptly and drastically forget previously learned information upon learning new information. Neural networks are an important part of the network approach and connectionist approach to cognitive science. With these networks, human capabilities such as memory and learning can be modeled using computer simulations.
In psychology, the transposed letter effect is a test of how a word is processed when two letters within the word are switched.
In neuroscience, predictive coding is a theory of brain function which postulates that the brain is constantly generating and updating a "mental model" of the environment. According to the theory, such a mental model is used to predict input signals from the senses that are then compared with the actual input signals from those senses. With the rising popularity of representation learning, the theory is being actively pursued and applied in machine learning and related fields.
Artificial neural networks (ANNs) are models created using machine learning to perform a number of tasks. Their creation was inspired by biological neural circuitry. While some of the computational ideas behind ANNs relate to earlier discoveries in mathematics, the first implementation of ANNs was by the psychologist Frank Rosenblatt, who developed the perceptron. Little research was conducted on ANNs in the 1970s and 1980s, with the AAAI calling that period an "AI winter".