Behavioral contrast refers to a change in the strength of one response that occurs when the rate of reward of a second response, or of the first response under different conditions, is changed. For example, suppose that a pigeon in an operant chamber pecks a key for food reward. Sometimes the key is red, sometimes green, but food comes with equal frequency in either case. Then suddenly pecking the key when it is green brings food less frequently. Positive contrast is seen when the rate of response to the red key goes up, even though the frequency of reward in red remains unchanged. Likewise, increasing the reward to green tends to reduce the response rate to red (negative contrast). This sort of contrast effect may occur following changes in the amount, frequency, or nature of the reward, and it has been shown to occur with various experimental designs and response measures (e.g. response rate, running speed). [1] [2]
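As an illustration of the direction of these effects, the following Python sketch assumes a simple matching-law-style allocation in which responding in each schedule component is proportional to that component's share of the overall reinforcement rate; the schedule values are hypothetical and the model is not drawn from the cited studies.

```python
# Minimal sketch of behavioral contrast under a matching-law-style model.
# Assumption (not from the article): response output is allocated across two
# components in proportion to their relative reinforcement rates.

def response_rates(reward_rate_red, reward_rate_green, total_responses=100.0):
    """Split a fixed response output between the red and green components
    in proportion to each component's reinforcement rate."""
    total_reward = reward_rate_red + reward_rate_green
    red = total_responses * reward_rate_red / total_reward
    green = total_responses * reward_rate_green / total_reward
    return red, green

# Baseline: equal reward in red and green.
print(response_rates(60, 60))   # -> (50.0, 50.0)

# Green reward is cut; responding to red rises even though red's
# reward rate is unchanged (positive contrast).
print(response_rates(60, 20))   # -> (75.0, 25.0)

# Green reward is increased; responding to red falls (negative contrast).
print(response_rates(60, 180))  # -> (25.0, 75.0)
```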
In 1942, Crespi measured the speed of rats running to various amounts of reward at the end of an alley. He found that the greater the magnitude of reward, the faster the rat would run to get the reward. [3] In the middle of his experiment Crespi shifted some of his animals from a large reward to a small reward. These animals now ran even more slowly than control animals that had been trained on small reward throughout the experiment. This overshoot is an example of successive negative contrast. Likewise, other animals shifted from small to large reward ran faster than those trained on the larger reward throughout (successive positive contrast). Crespi originally called these effects depression and elation respectively, but, in 1949, Zeaman suggested changing the names to negative contrast and positive contrast. [4] The combined concept of behavioral contrast is sometimes also referred to as the Crespi effect.
In 1981, Bower discovered that positive contrast may be reduced because the response measure hits a ceiling. Thus, if contrast is the subject of an experiment, reward sizes may need to be adjusted to keep the response below such a ceiling. [5] In 1996, Flaherty suggested that negative contrast was related to frustration; that is, the sudden shift to a low reward causes frustration for the person or the animal, and this frustration interferes with the behavior the subject is performing. [6]
Burrhus Frederic Skinner was an American psychologist, behaviorist, author, inventor, and social philosopher. He was a professor of psychology at Harvard University from 1958 until his retirement in 1974.
Edward Lee Thorndike was an American psychologist who spent nearly his entire career at Teachers College, Columbia University. His work on comparative psychology and the learning process led to the theory of connectionism and helped lay the scientific foundation for educational psychology. He also worked on solving industrial problems, such as employee exams and testing. He was a member of the board of the Psychological Corporation and served as president of the American Psychological Association in 1912. A Review of General Psychology survey, published in 2002, ranked Thorndike as the ninth-most cited psychologist of the 20th century. Edward Thorndike had a powerful impact on reinforcement theory and behavior analysis, providing the basic framework for empirical laws in behavioral psychology with his law of effect. His contributions to behavioral psychology also had a major impact on education, where the law of effect remains influential in the classroom.
Operant conditioning is a type of associative learning process through which the strength of a behavior is modified by reinforcement or punishment. It is also a procedure that is used to bring about such learning.
Classical conditioning refers to a learning procedure in which a biologically potent stimulus is paired with a previously neutral stimulus. It also refers to the learning process that results from this pairing, through which the neutral stimulus comes to elicit a response that is usually similar to the one elicited by the potent stimulus. It was first studied by Ivan Pavlov in 1897.
In behavioral psychology, reinforcement is a consequence that strengthens an organism's future behavior whenever that behavior is preceded by a specific antecedent stimulus. This strengthening effect may be measured as a higher frequency of behavior, longer duration, greater magnitude, or shorter latency. There are two types of reinforcement: positive reinforcement, in which a rewarding stimulus is presented when the desired behaviour occurs, and negative reinforcement, in which an undesirable element in the person's environment is removed when the desired behaviour occurs. Rewarding stimuli, which are associated with "wanting" and "liking" and with appetitive behavior, function as positive reinforcers; the converse also holds: positive reinforcers provide a desirable stimulus. Reinforcement does not require an individual to consciously perceive an effect elicited by the stimulus; it is defined only by an observable strengthening of behavior. For example, a change of job, such as moving from a labourer's position to an office position, might serve as a negative reinforcer for someone who suffers from back problems.
Albert Bandura is a Canadian-American psychologist who is the David Starr Jordan Professor Emeritus of Social Science in Psychology at Stanford University.
The experimental analysis of behavior is a school of thought in psychology founded on B. F. Skinner's philosophy of radical behaviorism, and it defines the basic principles used in applied behavior analysis. A central principle was the inductive, data-driven examination of functional relations, as opposed to the kinds of hypothetico-deductive learning theory that had grown up in the comparative psychology of the 1920–1950 period. Skinner's approach was characterized by observation of measurable behavior which could be predicted and controlled. It owed its early success to the effectiveness of Skinner's procedures of operant conditioning, both in the laboratory and in behavior therapy.
Animal cognition encompasses the mental capacities of non-human animals. The study of animal conditioning and learning used in this field was developed from comparative psychology. It has also been strongly influenced by research in ethology, behavioral ecology, and evolutionary psychology; the alternative name cognitive ethology is sometimes used. Many behaviors associated with the term animal intelligence are also subsumed within animal cognition.
Behaviorism is a systematic approach to understanding the behavior of humans and other animals. It assumes that behavior is either a reflex evoked by the pairing of certain antecedent stimuli in the environment, or a consequence of that individual's history, including especially reinforcement and punishment contingencies, together with the individual's current motivational state and controlling stimuli. Although behaviorists generally accept the important role of heredity in determining behavior, they focus primarily on environmental events.
A cognitive shift, or shift in cognitive focus, is a change in mental focus or processing triggered by the brain's response to some external force.
Shaping is a conditioning paradigm used primarily in the experimental analysis of behavior. The method used is differential reinforcement of successive approximations. It was introduced by B. F. Skinner with pigeons and extended to dogs, dolphins, humans and other species. In shaping, the form of an existing response is gradually changed across successive trials towards a desired target behavior by reinforcing exact segments of behavior. Skinner's explanation of shaping was this:
We first give the bird food when it turns slightly in the direction of the spot from any part of the cage. This increases the frequency of such behavior. We then withhold reinforcement until a slight movement is made toward the spot. This again alters the general distribution of behavior without producing a new unit. We continue by reinforcing positions successively closer to the spot, then by reinforcing only when the head is moved slightly forward, and finally only when the beak actually makes contact with the spot. ... The original probability of the response in its final form is very low; in some cases it may even be zero. In this way we can build complicated operants which would never appear in the repertoire of the organism otherwise. By reinforcing a series of successive approximations, we bring a rare response to a very high probability in a short time. ... The total act of turning toward the spot from any point in the box, walking toward it, raising the head, and striking the spot may seem to be a functionally coherent unit of behavior; but it is constructed by a continual process of differential reinforcement from undifferentiated behavior, just as the sculptor shapes his figure from a lump of clay.
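The following Python sketch is a toy illustration of shaping by successive approximations, assuming a hypothetical one-dimensional "response form" and a reinforcement criterion that tightens slightly after each reinforced response; the names and numbers are illustrative and do not come from Skinner's account.

```python
import random

# Illustrative sketch: shaping a response toward a target form by reinforcing
# successive approximations. Responses within the current criterion distance
# of the target are "reinforced" (the reinforced form becomes the new baseline),
# and the criterion then tightens, demanding a closer approximation next time.

def shape(target=1.0, start=0.0, criterion=1.0, trials=200, seed=0):
    rng = random.Random(seed)
    position = start
    for _ in range(trials):
        response = position + rng.gauss(0.0, 0.1)    # behavior varies around its current form
        if abs(response - target) <= criterion:       # close enough to the current approximation?
            position = response                       # reinforce: this form becomes more probable
            criterion = max(0.05, criterion * 0.95)   # require a closer approximation next time
    return position, criterion

final_position, final_criterion = shape()
print(final_position, final_criterion)  # the response form ends up near the target
```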
Brain stimulation reward (BSR) is a pleasurable phenomenon elicited via direct stimulation of specific brain regions, originally discovered by James Olds and Peter Milner. BSR can serve as a robust operant reinforcer. Targeted stimulation activates the reward system circuitry and establishes response habits similar to those established by natural rewards, such as food and sex. Experiments on BSR soon demonstrated that stimulation of the lateral hypothalamus, along with other regions of the brain associated with natural reward, was both rewarding and motivation-inducing. Electrical brain stimulation and intracranial drug injections produce robust reward sensation due to a relatively direct activation of the reward circuitry. This activation is considered to be more direct than rewards produced by natural stimuli, as those signals generally travel through the more indirect peripheral nerves. BSR has been found in all vertebrates tested, including humans, and it has provided a useful tool for understanding how natural rewards are processed by specific brain regions and circuits, as well as the neurotransmission associated with the reward system.
The reward system is a group of neural structures responsible for incentive salience, associative learning, and positively-valenced emotions, particularly ones involving pleasure as a core component. Reward is the attractive and motivational property of a stimulus that induces appetitive behavior, also known as approach or consummatory behavior. A rewarding stimulus has been described as "any stimulus, object, event, activity, or situation that has the potential to make us approach and consume it is by definition a reward". In operant conditioning, rewarding stimuli function as positive reinforcers; however, the converse statement also holds true: positive reinforcers are rewarding.
In operant conditioning, punishment is any change in a human or animal's surroundings which, occurring after a given behavior or response, reduces the likelihood of that behavior occurring again in the future. As with reinforcement, it is the behavior, not the human/animal, that is punished. Whether a change is or is not punishing is determined by its effect on the rate that the behavior occurs, not by any "hostile" or aversive features of the change. For example, a painful stimulus which would act as a punisher for most people may actually reinforce some behaviors of masochistic individuals.
Errorless learning was an instructional design introduced by psychologist Charles Ferster in the 1950s as part of his studies on what would make the most effective learning environment. B. F. Skinner was also influential in developing the technique, noting that,
...errors are not necessary for learning to occur. Errors are not a function of learning or vice versa nor are they blamed on the learner. Errors are a function of poor analysis of behavior, a poorly designed shaping program, moving too fast from step to step in the program, and the lack of the prerequisite behavior necessary for success in the program.
In behavioral psychology, stimulus control is a phenomenon in operant conditioning that occurs when an organism behaves in one way in the presence of a given stimulus and another way in its absence. A stimulus that modifies behavior in this manner is either a discriminative stimulus (Sd) or a stimulus delta (S-delta). Stimulus-based control of behavior occurs when the presence or absence of an Sd or S-delta controls the performance of a particular behavior. For example, the presence of a stop sign (an Sd) at a traffic intersection signals the driver to stop and increases the probability that "braking" behavior will occur. Such behavior is said to be emitted rather than elicited: the stimulus does not force the behavior to occur, because stimulus control is a direct result of historical reinforcement contingencies, in contrast to reflexive behavior, which is elicited through respondent conditioning.
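As a rough illustration of how a reinforcement history can bring responding under stimulus control, the sketch below assumes a hypothetical learner whose probability of responding in each stimulus condition is nudged toward the outcome that responding produces; the update rule and values are illustrative only, not a model from the literature.

```python
import random

# Toy illustration: responding is reinforced only in the presence of the
# discriminative stimulus (Sd), never in its absence (S-delta), so the
# probability of responding comes under the control of the stimulus.

def train(trials=500, learning_rate=0.1, seed=1):
    rng = random.Random(seed)
    p_respond = {"Sd": 0.5, "S-delta": 0.5}          # start indifferent to the stimulus
    for _ in range(trials):
        stimulus = rng.choice(["Sd", "S-delta"])
        responded = rng.random() < p_respond[stimulus]
        if responded:
            reinforced = (stimulus == "Sd")           # reinforcement only when Sd is present
            target = 1.0 if reinforced else 0.0
            p_respond[stimulus] += learning_rate * (target - p_respond[stimulus])
    return p_respond

print(train())  # responding becomes much more likely in the presence of Sd than S-delta
```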
Parrot training, also called parrot teaching, is the application of training techniques to modify the behavior of household companion parrots. Training is used to deal with behavior problems such as biting and screaming, to train husbandry behaviors such as allowing claw trimming without restraint or accepting a parrot harness, and to teach various tricks.
Episodic-like memory is the memory system in animals that is comparable to human episodic memory. The term was first described by Clayton & Dickinson referring to an animal’s ability to encode and retrieve information about ‘what’ occurred during an episode, ‘where’ the episode took place, and ‘when’ the episode happened. This ability in animals is considered ‘episodic-like’ because there is currently no way of knowing whether or not this form of remembering is accompanied by conscious recollection—a key component of Endel Tulving’s original definition of episodic memory.
The Crespi effect is a behavioural contrast phenomenon first observed in rats by US psychologist Leo P. Crespi in 1942. He found that in a repeatedly performed task, such as finding food in a maze, the running speed of the rat is proportional to the size of the reward it obtained on the previous trial: the more food reward it received on the previous completion of the task, the faster it ran on the next attempt. The effect also works in reverse: when rats were shifted from a larger to a smaller reward, they ran more slowly than control rats that had always received the small reward.
Association in psychology refers to a mental connection between concepts, events, or mental states that usually stems from specific experiences. Associations are seen throughout several schools of thought in psychology including behaviorism, associationism, psychoanalysis, social psychology, and structuralism. The idea stems from Plato and Aristotle, especially with regard to the succession of memories, and it was carried on by philosophers such as John Locke, David Hume, David Hartley, and James Mill. It finds its place in modern psychology in such areas as memory, learning, and the study of neural pathways.