Stimulus control

In behavioral psychology (or applied behavior analysis), stimulus control is a phenomenon in operant conditioning (also called contingency management) that occurs when an organism behaves in one way in the presence of a given stimulus and another way in its absence. A stimulus that modifies behavior in this manner is either a discriminative stimulus (Sd) or stimulus delta (S-delta). Stimulus-based control of behavior occurs when the presence or absence of an Sd or S-delta controls the performance of a particular behavior. For example, the presence of a stop sign (Sd) at a traffic intersection alerts the driver to stop driving and increases the probability that "braking" behavior will occur. Such behavior is said to be emitted rather than forced: stimulus control is a result of historical reinforcement contingencies, as opposed to reflexive behavior, which is said to be elicited through respondent conditioning.

Some theorists believe that all behavior is under some form of stimulus control. [1] For example, in the analysis of B. F. Skinner, [2] verbal behavior is a complicated assortment of behaviors with a variety of controlling stimuli. [3]

Characteristics

The controlling effects of stimuli are seen in quite diverse situations and in many aspects of behavior. For example, a stimulus presented at one time may control responses emitted immediately or at a later time; two stimuli may control the same behavior; a single stimulus may trigger behavior A at one time and behavior B at another; a stimulus may control behavior only in the presence of another stimulus, and so on. These sorts of control are brought about by a variety of methods and they can explain many aspects of behavioral processes. [4]

In simple, practical situations, for example if one were training a dog using operant conditioning, optimal stimulus control might be described as follows: the behavior always occurs immediately upon presentation of the cue, it never occurs in the absence of the cue, it never occurs in response to some other cue, and no other behavior occurs in response to the cue. [5]

Establishing stimulus control through operant conditioning

Discrimination training

Operant stimulus control is typically established by discrimination training. For example, to make a light control a pigeon's pecks on a button, reinforcement occurs only when the button is pecked while the light is on. Over a series of trials the pecking response becomes more probable in the presence of the light and less probable in its absence, and the light is said to become a discriminative stimulus, or SD. [6] Virtually any stimulus that the animal can perceive may become a discriminative stimulus, and many different schedules of reinforcement may be used to establish stimulus control. For example, a green light might be associated with a VR 10 schedule and a red light with an FI 20-sec schedule, in which case the green light will control a higher rate of response than the red light.
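As a rough illustration (a toy simulation, not a model of any particular experiment), the logic of discrimination training can be sketched in code: responses are reinforced only in the presence of the light, and the tendency to respond is nudged toward the outcome each response produces. All parameter values here are invented for illustration.

```python
import random

def train(trials=2000, lr=0.1, seed=0):
    """Toy simulation of discrimination training: pecks are reinforced
    only when the light is on, so the probability of pecking in the
    light rises while the probability of pecking in the dark falls."""
    rng = random.Random(seed)
    p = {"light": 0.5, "dark": 0.5}  # probability of pecking in each state
    for _ in range(trials):
        state = rng.choice(["light", "dark"])
        pecked = rng.random() < p[state]
        if pecked:
            reinforced = (state == "light")  # the light acts as the SD
            target = 1.0 if reinforced else 0.0
            p[state] += lr * (target - p[state])  # nudge toward the outcome
    return p

p = train()
# After training, pecking is far more probable in the light than in the dark.
```

The key point the sketch captures is that nothing forces the response: differential reinforcement simply makes it much more probable in the presence of the SD.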

Generalization

After a discriminative stimulus is established, similar stimuli are found to evoke the controlled response. This is called stimulus generalization. As the stimulus becomes less and less similar to the original discriminative stimulus, response strength declines; measurements of the response thus describe a generalization gradient.

An experiment by Hanson (1959) [7] provides an early, influential example of the many experiments that have explored the generalization phenomenon. First a group of pigeons was reinforced for pecking a disc illuminated by a light of 550 nm wavelength, and never reinforced otherwise. Reinforcement was then stopped, and a series of different wavelength lights was presented one at a time. The results showed a generalization gradient: the more the wavelength differed from the trained stimulus, the fewer responses were produced. [7]
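An idealized gradient of this kind can be sketched numerically; the Gaussian shape and every number below are illustrative assumptions, not Hanson's data.

```python
import math

def generalization_gradient(test_wavelengths, trained_nm=550,
                            peak_rate=300, sigma=15):
    """Idealized generalization gradient: response rate falls off
    smoothly with distance from the trained wavelength, modeled here
    as a Gaussian decay (all parameters are made up for illustration)."""
    return {wl: peak_rate * math.exp(-((wl - trained_nm) ** 2)
                                     / (2 * sigma ** 2))
            for wl in test_wavelengths}

# Responding is strongest at the trained 550 nm value and declines
# symmetrically on either side.
rates = generalization_gradient([510, 530, 550, 570, 590])
```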

Many factors modulate the generalization process. One is illustrated by the remainder of Hanson's study, which examined the effects of discrimination training on the shape of the generalization gradient. Birds were reinforced for pecking at a 550 nm light, which looks yellowish-green to human observers. The birds were not reinforced when they saw a wavelength more toward the red end of the spectrum. Each of four groups saw a single unreinforced wavelength, either 555, 560, 570, or 590 nm, in addition to the reinforced 550 nm wavelength. The birds were then tested as before, with a range of unreinforced wavelengths. This procedure yielded sharper generalization gradients than did the simple generalization procedure used in the first part of the study. In addition, Hanson's experiment revealed a new phenomenon, called the "peak shift": the peak of the test gradients shifted away from the SD, such that the birds responded more often to a wavelength they had never seen before than to the reinforced SD. An earlier theory involving inhibitory and excitatory gradients partially explained the results. [8] A more detailed quantitative model of the effect was proposed by Blough (1975). [9] Other theories have been proposed, including the idea that the peak shift is an example of relational control; that is, the discrimination was perceived as a choice between the "greener" of two stimuli, and when a still greener stimulus was offered the pigeons responded even more rapidly to it than to the originally reinforced stimulus. [10]
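The excitatory/inhibitory account of peak shift can be illustrated by subtracting a hypothetical inhibitory gradient centered on the unreinforced wavelength from an excitatory gradient centered on the SD; the net gradient then peaks on the far side of the SD. All gradient shapes and parameter values below are invented for illustration, not fitted to data.

```python
import math

def excitatory(wl, sd=550, height=1.0, sigma=20):
    """Hypothetical excitatory gradient around the reinforced SD (550 nm)."""
    return height * math.exp(-((wl - sd) ** 2) / (2 * sigma ** 2))

def inhibitory(wl, s_delta=560, height=0.6, sigma=15):
    """Hypothetical inhibitory gradient around the unreinforced stimulus."""
    return height * math.exp(-((wl - s_delta) ** 2) / (2 * sigma ** 2))

def net(wl):
    """Net response strength: excitation minus inhibition."""
    return excitatory(wl) - inhibitory(wl)

# The peak of the net gradient lies below 550 nm, i.e. shifted away
# from the unreinforced wavelength, reproducing the peak-shift pattern.
peak = max(range(500, 601), key=net)
```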

Matching to sample

In a typical matching-to-sample task, a stimulus is presented in one location (the "sample"), and the subject chooses a stimulus in another location that matches the sample in some way (e.g., shape or color). [11] In the related "oddity" matching procedure, the subject responds to a comparison stimulus that does not match the sample. These are called "conditional" discrimination tasks because which stimulus is responded to depends on, or is "conditional" on, the sample stimulus.

The matching-to-sample procedure has been used to study a very wide range of problems. Of particular note is the "delayed matching to sample" variation, which has often been used to study short-term memory in animals. In this variation, the subject is exposed to the sample stimulus, and then the sample is removed and a time interval, the "delay", elapses before the choice stimuli appear. To make a correct choice the subject has to retain information about the sample across the delay. The length of the delay, the nature of the stimuli, events during the delay, and many other factors have been found to influence performance on this task. [12]
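The typical finding that accuracy declines toward chance as the retention interval grows can be sketched as an illustrative forgetting curve; the exponential form and the decay rate below are assumptions for illustration, not estimates from any particular study.

```python
import math

def dmts_accuracy(delay_s, decay=0.2, n_choices=2):
    """Illustrative forgetting curve for delayed matching to sample:
    accuracy falls exponentially from 1.0 toward chance (1/n_choices)
    as the delay between sample and choice grows. The decay rate is a
    made-up parameter, not taken from any experiment."""
    chance = 1 / n_choices
    return chance + (1 - chance) * math.exp(-decay * delay_s)

# Longer delays leave less retained information about the sample,
# so predicted accuracy approaches the 0.5 chance level.
accuracies = {d: round(dmts_accuracy(d), 3) for d in (0, 2, 5, 10, 20)}
```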

Cannabinoids

Psychoactive cannabinoids from the Cannabis plant (phytocannabinoids), from the body (endocannabinoids), and from the research lab (synthetic cannabinoids) produce their discriminative stimulus effects by stimulation of CB1 receptors in the brain. [13]

Related Research Articles

B. F. Skinner: American psychologist and social philosopher (1904–1990)

Burrhus Frederic Skinner was an American psychologist, behaviorist, author, inventor, and social philosopher. Considered the father of behaviorism, he was the Edgar Pierce Professor of Psychology at Harvard University from 1958 until his retirement in 1974.

Operant conditioning, also called instrumental conditioning, is a learning process where behaviors are modified through the association of stimuli with reinforcement or punishment. In it, operants—behaviors that affect one's environment—are conditioned to occur or not occur depending on the environmental consequences of the behavior.

Operant conditioning chamber: Laboratory apparatus used to study animal behavior

An operant conditioning chamber is a laboratory apparatus used to study animal behavior. The operant conditioning chamber was created by B. F. Skinner while he was a graduate student at Harvard University. The chamber can be used to study both operant conditioning and classical conditioning.

In reinforcement theory, it is argued that human behavior is a result of "contingent consequences" to human actions. The theory holds that "you get what you reinforce": given the right types of reinforcers, desirable behavior can be strengthened and negative behavior can be weeded out.

Radical behaviorism is a "philosophy of the science of behavior" developed by B. F. Skinner. It refers to the philosophy behind behavior analysis, and is to be distinguished from methodological behaviorism—which has an intense emphasis on observable behaviors—by its inclusion of thinking, feeling, and other private events in the analysis of human and animal psychology. The research in behavior analysis is called the experimental analysis of behavior and the application of the field is called applied behavior analysis (ABA), which was originally termed "behavior modification."

The experimental analysis of behavior is a science that studies the behavior of individuals across a variety of species. A key early scientist was B. F. Skinner who discovered operant behavior, reinforcers, secondary reinforcers, contingencies of reinforcement, stimulus control, shaping, intermittent schedules, discrimination, and generalization. A central method was the examination of functional relations between environment and behavior, as opposed to hypothetico-deductive learning theory that had grown up in the comparative psychology of the 1920–1950 period. Skinner's approach was characterized by observation of measurable behavior which could be predicted and controlled. It owed its early success to the effectiveness of Skinner's procedures of operant conditioning, both in the laboratory and in behavior therapy.

Behaviorism: Systematic approach to understanding the behavior of humans and other animals

Behaviorism is a systematic approach to understanding the behavior of humans and other animals. It assumes that behavior is either a reflex evoked by the pairing of certain antecedent stimuli in the environment, or a consequence of that individual's history, including especially reinforcement and punishment contingencies, together with the individual's current motivational state and controlling stimuli. Although behaviorists generally accept the important role of heredity in determining behavior, they focus primarily on environmental events.

Verbal Behavior: Psychology book

Verbal Behavior is a 1957 book by psychologist B. F. Skinner, in which he describes what he calls verbal behavior, or what was traditionally called linguistics. Skinner's work describes the controlling elements of verbal behavior with terminology invented for the analysis (echoics, mands, tacts, autoclitics, and others), as well as carefully defined uses of ordinary terms such as audience.

Animal training: Teaching animals specific responses to specific conditions or stimuli

Animal training is the act of teaching animals specific responses to specific conditions or stimuli. Training may be for purposes such as companionship, detection, protection, and entertainment. The type of training an animal receives will vary depending on the training method used, and the purpose for training the animal. For example, a seeing eye dog will be trained to achieve a different goal than a wild animal in a circus.

Shaping (psychology): Psychological paradigm for behavior analysis

Shaping is a conditioning paradigm used primarily in the experimental analysis of behavior. The method used is differential reinforcement of successive approximations. It was introduced by B. F. Skinner with pigeons and extended to dogs, dolphins, humans and other species. In shaping, the form of an existing response is gradually changed across successive trials towards a desired target behavior by reinforcing exact segments of behavior. Skinner's explanation of shaping was this:

We first give the bird food when it turns slightly in the direction of the spot from any part of the cage. This increases the frequency of such behavior. We then withhold reinforcement until a slight movement is made toward the spot. This again alters the general distribution of behavior without producing a new unit. We continue by reinforcing positions successively closer to the spot, then by reinforcing only when the head is moved slightly forward, and finally only when the beak actually makes contact with the spot. ... The original probability of the response in its final form is very low; in some cases it may even be zero. In this way we can build complicated operants which would never appear in the repertoire of the organism otherwise. By reinforcing a series of successive approximations, we bring a rare response to a very high probability in a short time. ... The total act of turning toward the spot from any point in the box, walking toward it, raising the head, and striking the spot may seem to be a functionally coherent unit of behavior; but it is constructed by a continual process of differential reinforcement from undifferentiated behavior, just as the sculptor shapes his figure from a lump of clay.

Extinction is a behavioral phenomenon observed in both operantly conditioned and classically conditioned behavior, which manifests itself by the fading of a non-reinforced conditioned response over time. When operant behavior that has been previously reinforced no longer produces reinforcing consequences, the behavior gradually stops occurring. In classical conditioning, when a conditioned stimulus is presented alone, so that it no longer predicts the coming of the unconditioned stimulus, conditioned responding gradually stops. For example, after Pavlov's dog was conditioned to salivate at the sound of a metronome, it eventually stopped salivating to the metronome after the metronome had been sounded repeatedly but no food came. Many anxiety disorders such as post-traumatic stress disorder are believed to reflect, at least in part, a failure to extinguish conditioned fear.

In operant conditioning, the matching law is a quantitative relationship that holds between the relative rates of response and the relative rates of reinforcement in concurrent schedules of reinforcement. For example, if two response alternatives A and B are offered to an organism, the ratio of response rates to A and B equals the ratio of reinforcements yielded by each response. This law applies fairly well when non-human subjects are exposed to concurrent variable interval schedules; its applicability in other situations is less clear, depending on the assumptions made and the details of the experimental situation. The generality of the matching law is a subject of current debate.
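The strict form of the matching law is easy to state as a computation; the reinforcement rates below are hypothetical numbers chosen only to make the arithmetic concrete.

```python
def matching_response_share(r_a, r_b):
    """Strict matching law: the proportion of responses allocated to
    alternative A equals the proportion of reinforcement obtained on A,
    i.e. B_A / (B_A + B_B) = R_A / (R_A + R_B)."""
    return r_a / (r_a + r_b)

# Hypothetical concurrent VI schedules yielding 40 and 20 reinforcers
# per hour: strict matching predicts two-thirds of responses go to A.
share_a = matching_response_share(40, 20)
```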

Behavioral momentum is a theory in quantitative analysis of behavior and is a behavioral metaphor based on physical momentum. It describes the general relation between resistance to change and the rate of reinforcement obtained in a given situation.

Errorless learning was an instructional design introduced by psychologist Charles Ferster in the 1950s as part of his studies on what would make the most effective learning environment. B. F. Skinner was also influential in developing the technique, noting that,

...errors are not necessary for learning to occur. Errors are not a function of learning or vice versa nor are they blamed on the learner. Errors are a function of poor analysis of behavior, a poorly designed shaping program, moving too fast from step to step in the program, and the lack of the prerequisite behavior necessary for success in the program.

Charles Ferster

Charles Bohris Ferster was an American behavioral psychologist. A pioneer of applied behavior analysis, he developed errorless learning and was a colleague of B.F. Skinner's at Harvard University, co-authoring the book Schedules of Reinforcement (1957).

Discrimination learning is defined in psychology as the ability to respond differently to different stimuli. This type of learning is used in studies regarding operant and classical conditioning. Operant conditioning involves the modification of a behavior by means of reinforcement or punishment. In this way, a discriminative stimulus acts as an indicator of when a behavior will persist and when it will not. Classical conditioning involves learning through association when two stimuli are paired together repeatedly. This conditioning demonstrates discrimination through specific micro-instances of reinforcement and non-reinforcement. This phenomenon is considered to be more advanced than learning styles such as generalization, yet it simultaneously acts as a basic unit of learning as a whole. The complex and fundamental nature of discrimination learning allows psychologists and researchers to perform more in-depth research that supports psychological advancements. Research on the basic principles underlying this learning style has its roots in neuropsychology sub-processes.

Match-to-sample task

Short-term memory for learned associations has been studied using the match-to-sample task. The basic procedure begins by presenting a subject with a stimulus that they will be required to remember, known as the 'sample'. They are then required to identify from a subsequent set of stimuli one that 'matches' the sample, known as the comparison stimuli. While the correct comparison stimulus option often matches the sample identically, the task can require a symbolic match or a matching of similar features.

James "Jim" A. Dinsmoor was an American experimental psychologist who published work in the field of the experimental analysis of behavior.

The differential outcomes effect (DOE) is a theory in behaviorism, a branch of psychology, that shows that a positive effect on accuracy occurs in discrimination learning between different stimuli when unique rewards are paired with each individual stimulus. The DOE was first demonstrated in 1970 by Milton Trapold in an experiment with rats. Rats were trained to discriminate between a clicker and a tone by pressing the left and right levers. Half of the rats were trained using the differential outcomes procedure (DOP), where the clicker was paired with sucrose and the tone with food pellets. The remaining rats were trained with only sucrose or only food pellets. The rats trained with the DOP were significantly more accurate than those trained with only one type of reinforcement. Since then it has been established through many experiments that the DOE exists in most species capable of learning.

Association in psychology refers to a mental connection between concepts, events, or mental states that usually stems from specific experiences. Associations are seen throughout several schools of thought in psychology including behaviorism, associationism, psychoanalysis, social psychology, and structuralism. The idea stems from Plato and Aristotle, especially with regard to the succession of memories, and it was carried on by philosophers such as John Locke, David Hume, David Hartley, and James Mill. It finds its place in modern psychology in such areas as memory, learning, and the study of neural pathways.

References

  1. Baum, William M. (2005). Understanding Behaviorism: Behavior, Culture, and Evolution (2nd ed.). Malden, MA: Blackwell Publishing. ISBN 140511262X.
  2. Skinner, B. F. (1957). Verbal Behavior. Acton, MA: Copley Publishing Group. ISBN 1-58390-021-7.
  3. Skinner, B. F. (1992). Verbal Behavior. Acton, MA: Copley. ISBN 1583900217.
  4. Catania, A. C. (1992). Learning (3rd ed.). Englewood Cliffs, NJ: Prentice Hall.
  5. Pryor, Karen (2002). Don't Shoot the Dog!. Ringpress Books. ISBN 1-86054-238-7.
  6. Watanabe, S.; Sakamoto, K.; Wakita, M. (1995). "Pigeons' discrimination of paintings by Monet and Picasso". Journal of the Experimental Analysis of Behavior. 63 (2): 165–174. doi:10.1901/jeab.1995.63-165. PMC 1334394. PMID 16812755.
  7. Hanson, H. M. (1959). "Effects of discrimination training on stimulus generalization". Journal of Experimental Psychology. 58 (5): 321–334. doi:10.1037/h0042606. PMID 13851902.
  8. Spence, K. W. (1937). "The differential response in animals to stimuli varying within a single dimension". Psychological Review. 44: 430–444. doi:10.1037/h0062885.
  9. Blough, D. S. (1975). "Steady state data and a quantitative model of operant generalization and discrimination". Journal of Experimental Psychology: Animal Behavior Processes. 104: 3–21. doi:10.1037/0097-7403.1.1.3.
  10. Rachlin, Howard (1991). Introduction to Modern Behaviorism (3rd ed.). New York: W. H. Freeman. ISBN 0716721767.
  11. Blough, D. S. (1959). "Delayed matching in the pigeon". Journal of the Experimental Analysis of Behavior. 2 (2): 151–160. doi:10.1901/jeab.1959.2-151. PMC 1403892. PMID 13801643.
  12. Bouton, M. E. Learning and Behavior: A Contemporary Synthesis (2nd ed.). Sunderland, MA: Sinauer.
  13. Wiley, Jenny L.; Owens, R. Allen; Lichtman, Aron H. (2016). "Discriminative Stimulus Properties of Phytocannabinoids, Endocannabinoids, and Synthetic Cannabinoids". Current Topics in Behavioral Neurosciences. 39: 153–173. doi:10.1007/7854_2016_24. ISBN 978-3-319-98559-6. ISSN 1866-3370. PMID 27278640.
