Behavior Analysis and Behaviorism
Q & A
The questions below are examples
of the kinds of questions asked by people interested in basic
and applied behavior analysis. If you want to add a question,
please use the BAAM suggestions page. We cannot answer every question
or provide treatment advice. But we can address general inquiries
about basic and applied behavior analysis, as well as questions
about the history, philosophy, and theory of behaviorism.
Applied Behavior Analysis
People keep saying to give rewards for good behavior. Didn't that study by Lepper, Greene, and Nisbett* show that external rewards undermine intrinsic motivation? Didn't they show that kids who were promised a certificate for drawing drew less later on?
They did. But the rewards had little to do with the effect. The nature
of the social interaction was everything in the study. Your textbook
probably didn’t tell you that the kids who showed low interest
in drawing drew more after receiving unexpected rewards.
Consider the deal the teacher offered to the children: "You draw, and
I’ll give you this certificate.” Sounds like an assignment
to me. Hardly anyone finds assignments fun, even the "A"
students. The reward isn’t a reward anymore. It has become
a conditioned aversive stimulus. After that, there is little reason
to expect drawing to become more interesting.
The key to avoiding the problem is to reinforce naturally. You don't
need the “deal.”
Lepper, M. R., Greene, D., & Nisbett, R. E. (1973). Undermining children's
intrinsic interest with extrinsic rewards: A test of the “overjustification”
hypothesis. Journal of Personality and Social Psychology.
What is incidental teaching?
Incidental teaching is one of the
most important things a parent, teacher, or therapist can do.
It is actually just paying attention and reinforcing appropriate
behaviors whenever they occur, even if they occur outside of a
formal behavior program. If a child who does not ordinarily respond
quickly or appropriately to a parent's request does so without
complaining or delay, the parent should immediately and strongly
reinforce the behavior. You should always remember to "Catch
them being good."
I use incidental teaching and it
seems to work well. I also know that giving a higher rate of reinforcement
will help. But I'm so busy that I just can't reinforce more often.
What can I do?
Increasing the rate of reinforcement
isn't the only thing you can do. You can also increase the quality,
size, and duration of reinforcement; it is also often possible
to decrease the delay of reinforcement. Changes in these things
have the same effect as changing the rate of reinforcement. For
example, doubling the size of a reinforcer has the same effect
as doubling the rate of reinforcement. Reinforcing twice as quickly
is the same as doubling the rate. Tripling the duration of reinforcement
is the same as tripling the rate. Thus, consider what would happen
if you were to keep the rate of reinforcement the same but (1)
tripled the duration of reinforcement by using descriptive praise
instead of just saying "good job"; (2) doubled the quality of reinforcement by using the descriptive praise; and (3) cut
the delay of reinforcement by half by reinforcing twice as quickly.
This would be the same as taking the original reinforcer and multiplying it by three for the duration, two for the quality, and two for being twice as fast (3 x 2 x 2 = 12). That is, without having to reinforce more often, you have potentially increased the power of the reinforcer by a factor of 12.
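The arithmetic above can be sketched as a toy model. (The multiplicative combination rule and the function name here are illustrative assumptions for the sake of the example, not an established quantitative law.)

```python
# Toy model of reinforcer "power": assume (for illustration only) that
# relative changes in duration, quality, and speed combine multiplicatively.

def reinforcer_power(duration_mult=1.0, quality_mult=1.0, speed_mult=1.0):
    """Estimated power of a reinforcer relative to a baseline of 1.0."""
    return duration_mult * quality_mult * speed_mult

# Descriptive praise: 3x duration, 2x quality, delivered twice as quickly,
# with the rate of reinforcement left unchanged.
improved = reinforcer_power(duration_mult=3, quality_mult=2, speed_mult=2)
print(improved)  # 12.0
```

The point of the sketch is only that several small improvements compound, even with the rate of reinforcement held constant.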
Good applied behavior analysts know
that it is much easier to get teachers, parents, or therapists
to reinforce better than to reinforce more often. Reinforcing more often is extra work, and might require a significant change in the teacher's routine. Reinforcing better can work just as well, but doesn't require a change in the teacher's routine or any extra work. Of
course, if the teacher can reinforce more often too, that will
improve things even more.
What is "descriptive praise"
and why should I do it?
Descriptive praise is a verbal reinforcer
that includes a description of the behavior that you are reinforcing.
Instead of just saying "good" or "good job,"
you should say, for instance, "Johnny, that's a really nice
picture" or "Sally, you've hung up your coat so nicely."
Generally, you should include the person's name. Always use a
positive description of the behavior. (Don't say to Sally, "I
like the way you didn't throw your coat on the floor.")
Descriptive praise is better than
simple praise because it increases the duration and quality of
the praise, and identifies the behavior being reinforced. By using
the person's name, you increase the chances that the person will
notice that you are delivering a verbal reinforcer and directly
engage with you. If you are reinforcing the behavior of a person
who is non-verbal, the descriptive praise will attach verbal labels
to the activity, perhaps teaching the person more words. Even
a completely non-verbal person will benefit. Longer social contact
is likely to be more reinforcing than a brief contact (even if the
person doesn't understand).
What is the difference between
a reinforcer and a reward?
"Reinforcer" is a technical
term; "reward" is an everyday term. A reinforcer is
any event that, when it follows a response, increases that response
in the future. A reward is an everyday term for a reinforcer,
but is less specific. Sometimes things are called rewards even
if they have no effect on behavior. Sometimes a promise is called
a reward, such as when a reward is offered for the return of a
lost dog. No great harm is done in using the term reward for reinforcer,
but it is important to remember that if the reward has no effect
on behavior, it is not really a reward at all.
Do behavior programs only use candy
and food as reinforcers? That is, is it all M&Ms?
No. Applied behavior analysts are
taught to use the most natural reinforcers possible in the situation--the
ones that would occur in the "real world." Descriptive
praise, smiles, social interactions, earning points, tokens, and
other things are preferred to food. Although you will see candy
and food used in programs for people with severe developmental
disabilities, most behavior programs actually don't use "appetitive
reinforcers" at all.
Giving candy or food isn't what
usually happens in the world, and behavior that is rewarded only
with these things is not likely to last very long outside of the
treatment setting. In fact, all good programs include a way to
transfer "contrived" contingencies to "real world"
reinforcers. In technical terms, all good interventions "program
for maintenance and generalization."
When behavior problems are serious
or very few functional behaviors exist, food and other easily
delivered reinforcers are used to quickly establish new functional
behaviors. It is important to deliver a powerful, reliable reinforcer
as quickly as possible when the desired response occurs. Small
pieces of food can be given quickly, and a person who has no verbal
behavior need not "understand" that they are rewarding.
Small tokens can work too if the child understands what they mean.
However, and this is important, you will notice that the delivery
of food in a good behavior program is always accompanied by verbal
praise, a smile, or other social indicators of approval. This
is done on purpose. A well-designed behavior plan should move
as quickly as possible from reinforcers like food and candy to
something more natural. Eventually, you hope to rely exclusively
on real, everyday reinforcers like verbal approval, or even just
a smile. The late Donald Baer of the University of Kansas called
this "entry into natural communities of reinforcement,"
and he was one of the first to study the power of social reinforcement
with young children.
I can't seem to find a good reinforcer for my child. She has autism
and doesn't seem very interested in anything but playing with the
water in the sink, watching Sponge-Bob, looking out of the window,
and sitting in her room spinning things. We've tried verbal praise,
tokens, hugs, and candy but she just doesn't respond.
There are three parts to this answer.
The first is that this is a good question. Many otherwise well
designed behavior programs fail because the therapist was insensitive
to the need to have a wide variety of powerful, easy-to-deliver
reinforcers. Sometimes we just don't notice when good reinforcers
are losing their power. We can actually cause newly established
behavior to weaken by continuing to deliver reinforcers that are
not very reinforcing. This is called "extinction." This
is why good behavior analysts have always emphasized using the widest possible variety of natural reinforcers. Before
starting a behavior program, a good behavior analyst will do a
functional analysis or at least a good assessment to determine
what the potential reinforcers are.
Second, there are probably lots
of reinforcers available to you. You just haven't found them yet.
This isn't a criticism. It is common for people working with people with developmental disabilities to have fewer reinforcers because their children or clients are just not interested in many of the things that are reinforcing for most other people.
But if you are going to do a behavior program, you simply need
to find a wide variety of easily delivered reinforcers. Ideally,
you should use smiles, verbal praise, and social interactions
as reinforcers. In fact, if you are working with a person with
autism, making social interactions more reinforcing is one of
your most important goals. Before this happens, you might have
to use food. But it doesn't have to be candy or pieces of cereal.
Anything will do if she will eat it, or just wants it. Don't worry
at this stage if it's nutritious (although if it is nutritious
that's even better). You are also going to want to have a variety
of things ready. Remember that food preferences can change daily
and even from moment to moment.
Third, reinforcers are not just
objects. You have already listed a bunch of good reinforcers in
your question. Anything your daughter does can be used as a reinforcer.
Even problem behavior can be a reward. This is known as the "Premack
Principle," using a common behavior to reinforce a less common
one. Look for other things your daughter does a lot. The opportunity
to watch Sponge Bob would be an excellent reinforcer because you
could use a DVD to show the program whenever you need to. You
can establish a connection between tokens and these activities
to make the tokens reinforcing. If she likes Sponge Bob, the DVD
case itself might be a good "token" because it's associated
with the show. Even if you use tangible or edible reinforcers,
don't forget the social ones. Smile, look at your daughter, and
use descriptive praise whenever you deliver any other reinforcer,
even if it is an activity. By pairing social signals with the
activity, you make yourself and your approval more reinforcing
by association. Eventually, just your attention and approval will
be reinforcing, and you will be able to dispense with the objects
and food altogether.
I have read about behaviorists using a lot of punishment. Do they
actually do this?
No. Most behavior analysts use only
positive reinforcement in their programs. Behavior they want to
reduce or eliminate is ignored (extinction) usually while better
alternative behaviors are reinforced. Good behavior analysts know
that people who deliver punishment become punishing by association.
Sometimes "time out" might be used for serious behavior
problems. But even time-out is substantially a form of extinction,
with other components added. A good behavior analyst will use
time-out only for short periods, no more than a few minutes at
most, and even then very judiciously. Time-out might consist of
nothing more than removing learning materials and attention for
a few seconds. In very rare and serious cases of severe head banging,
a device called SIBIS might be used. SIBIS provides a brief,
annoying shock to the leg when the wearer hits his or her head
sufficiently hard. SIBIS is highly effective, sometimes reducing
head banging to zero in one or two trials. It is considered a
treatment of last resort, and its use is heavily regulated and
remains controversial even within the behavior analytic community.
Responsible applied behavior analysts never use it without also
having a good program of positive reinforcement for functional
behaviors. And, a plan to fade SIBIS (or any kind of punishment)
is essential. Behavior management using positive reinforcement
is now so effective that programmed punishment of any kind is
not very common.
What is clicker training?
Clicker training is a technique
for animal training that uses the sound of a clicker as a conditioned
reinforcer for shaping behavior. Properly done, clicker training
involves no punishment at all. First, you make the clicker reinforcing
by repeatedly pairing clicks with food. (Click first then immediately
deliver the food. Always deliver the food in the same place.)
When the animal comes immediately to get food when the clicker
is sounded, then you are ready to teach. Now, shape the behavior
you want by immediately clicking the clicker when the animal emits
an approximation or part of the final desired behavior.
Clicker training was developed by
B. F. Skinner in conjunction with his experimental research on
animal behavior in the 1930s and 1940s. He published an article
on the technique in the magazine Scientific American in
1951. Clicker training has been popularized by Karen Pryor, an
ethologist and dolphin trainer. It has now become the standard
method of training animals for performances, work, and obedience.
It has been used with animals ranging from whales and dolphins
to dogs and cats.
Are cats harder to teach than dogs? Are cats less intelligent?
It is dangerous to get in the middle
of a dog versus cat discussion, but we will take the risk. Cats
are not really less intelligent than dogs. Or, at least we don't
know who is smarter because they can't easily fill in the little
circles on the standardized test forms.
The issue really isn't whether dogs or cats are easier to train. The issue is the range and type of
reinforcers cats and dogs are sensitive to. Dogs are highly social
pack animals. Their behavior can be strongly reinforced by just
a little attention from other members of their pack, especially
the pack leader (you). Thus, there are many opportunities for
a dog's behavior to be shaped by incidental, attention-based reinforcement.
This is one of the reasons dogs seem to take on many human characteristics.
They are taught to be more human by attention we give to human-like
responses. It is also likely that they teach us to be more dog-like
because we respond to their social attention. Cats are less generally
social and less sensitive to attention as a reinforcer. Their
behavior is less likely to be shaped by incidental attention than
dogs'. Unlike dogs, which are hunter/scavengers and will consume
a very wide range of foods at almost any time, cats consume a
narrower range of foods and will often do so only at specific
times. Thus, food reinforcers are almost always more effective
with dogs. However, as cat owners know, cats will quickly learn
to run to the kitchen as soon as they hear the can opener. Thus, clicker
training for a cat can be easily done if it is the cat's regular
dinner time, and the clicker is paired with a favorite food. (J.
I am teaching my rat to turn in a circle using clicker training. Things
were going fairly well. I was reinforcing each part of a turn, and
he was getting almost all the way around. But now he has “regressed.”
He mostly sits and sniffs the air. I hardly ever get a whole circle
and it takes forever for him to try again. What is going on?
Forensic behavior analysis is difficult.
So this is just a guess. I am going to assume that your rat is
hungry and the food is reinforcing.
It sounds like you are reinforcing
a second or two after the rat has completed a partial turn. You
should be reinforcing while he is turning. The problem with reinforcing
after the turn is completed is that you are essentially reinforcing
the behavior of being stationary. That is, the one response that
is most consistently and immediately reinforced is standing still,
waiting for food. Starting a new turn is farthest from the reinforcer.
Therefore, starting a new turn is becoming weaker while stopping
is becoming stronger.
Animal trainers make a point of
keeping the animal moving. If there is movement, there is behavior
to shape from. If the animal is not moving, you don’t have
much to work with. Reinforce during the turn, not after, and differentially
reinforce quick starts (especially at the beginning of training).
That means if he starts a new turn very quickly, reinforce that
immediately. Of course, reinforce longer and longer turns, but
be a little variable too so he doesn't learn to stop at any one
point to anticipate the reinforcer.
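The advice above amounts to a simple decision rule: reinforce during the turn once the current criterion is met, differentially reinforce quick starts, and vary the criterion so no single stopping point is predictable. A rough sketch, in which every name and threshold is a hypothetical choice for illustration rather than part of any published protocol:

```python
import random

def should_reinforce(turn_fraction, start_latency_s, criterion):
    """Reinforce mid-turn once the current criterion is met, or
    reinforce an unusually quick start (differential reinforcement
    of quick starts)."""
    quick_start = start_latency_s < 1.0  # "quick" threshold is an assumption
    return turn_fraction >= criterion or quick_start

def next_criterion(criterion):
    """Require longer and longer turns, but with some variability so the
    animal doesn't learn to stop at any one point."""
    target = min(1.0, criterion + 0.1)                    # gradually raise the bar
    return max(0.1, target - random.uniform(0.0, 0.15))   # ...with a little slack
```

For example, with a criterion of 0.4 of a full turn, a half turn earns the click, and so does a very quick restart even if the turn itself is short.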
If you do this, it won't be long
before he gets that complete circle. (James T. Todd, 02-01-2006)
Conceptual and Theoretical Issues
My instructor says that the Brelands' "instinctive drift" shows the "fundamental weakness of operant theory." What is "instinctive drift" and why is it so harmful to operant theory?
Instinctive drift doesn't harm operant theory at all. Your instructor has been taken in by generations of academic folklore. "Instinctive
drift" (or sometimes "instinctual drift"), said
to be the gradual shifting of learned behavior back towards instinctual
behavior, exists primarily in textbooks and the imaginations of
critics of behavior analysis. Instinctive drift is not, in fact,
the inevitable outcome of conditioning. And when it does seem
to occur, it is a perfectly obvious and predictable outcome of
operant theory, not a violation of it.
Here is what happens: You observe some species-specific behavior during conditioning.
The dog rolls over on cue but begs and whines too. The rat bites
the lever that it also presses for food. The raccoon washes the
large "coin" you are shaping it to put in the bank.
Should these things happen? Sometimes. If they do, you shouldn't be surprised.
If you are
reinforcing behavior with food, you are also pairing the food
with various objects and events in the training situation. The
food will reinforce certain behaviors, and these will increase
in probability. But through classical conditioning the association
between the food delivery and things in the training context will
cause those things to become conditioned stimuli for food-related
behavior. Pairing the moving lever with food deliveries will cause
the rat to treat the lever like food and bite it. The association
between you and the food will cause the dog to treat you like
a food source, eliciting begging. Pairing the coin and the food
will cause the coin to become a conditioned stimulus that elicits
food-related behavior in the raccoon.
The only way
to see "instinctive drift" as a problem for operant
theory is to forget, as the Brelands did in 1961 when they published "The Misbehavior of Organisms," that many things are taught
during conditioning, not just the target behavior. If you use
food reinforcement, then you are also eliciting and conditioning
food-related behaviors whether you want to or not. Sometimes the
elicited behaviors will be strong enough to interfere with the
operant behavior. Just as often or more so, they will not interfere.
It is the job of the scientist to figure out which situation will
occur and why. It seems odd that students of Skinner, the man who
called attention to the importance of elicitation as a fundamental
behavioral process with his distinction between operant and respondent
behavior, would completely forget about the respondent part. Of
course, the Brelands' article would not have been nearly as interesting
if they had gotten the answer right.
My textbook says that behaviorists
believe that all behavior is learned. Do behavior analysts believe
that all behavior is learned?
No. All behaviorists know that organisms
are born with many specific, sometimes very complex patterns of
behavior. No one believes that spiders learn to spin webs. No
one seriously questions that the tendency of border collies to
herd sheep (versus poodles, for instance) is inborn and due to
selective breeding. But the question is not always where the behavior
came from, but what is making it occur right now, how it might
be changed, or what you might have to do to make it happen in
the future. Behaviorists are often concerned that people believe
that an innate behavior is also hard to change. Some behavior
is hard to change, and some is not. The degree to which an
unlearned behavior can be changed is as much a function of our
knowledge about the behavior as the behavior itself. It was not
that long ago that the behaviors associated with developmental
disabilities were thought to be almost impossible to change. Applied
behavior analysts often seem to disregard the unlearned origins
of some of the behaviors. The strange self-stimulatory behavior
often exhibited by people with autism certainly has biological
origins. But, the applied behavior analyst is interested in what
will make it change, and is leery of the idea that just because
the behavior might have unlearned origins, it will be hard to
change. They would rather assume it can be changed and be proven wrong (temporarily, until they figure out what to do) than not
try at all.
The first behaviorist, John B. Watson, wrote a great deal about
unlearned behavior. He had done considerable research on the naturally
occurring behavior of animals, particularly sea birds. Thus, he
knew a great deal about instinctive behavior. He even said that
it is impossible to know exactly what an animal has learned without
knowing about its unlearned behavior. He also believed that humans
have many unlearned behaviors. In his book Behaviorism,
he even included a chart of unlearned reflexes in children. But
for Watson, the issue wasn't so much where the behavior came from,
but what could be done with the environment. You can't test the
limits of behavior by just assuming that something is inborn and
cannot be changed. Even if it is inborn, it is certainly possible
to change it. Watson did object to the concept of instinct. He
did not deny that animals often exhibit complex patterns of unlearned
behavior. But, he was concerned that the concept of instinct was
being used to avoid making a real analysis of why and when the behavior occurs.
B. F. Skinner also wrote a great deal about unlearned behavior.
For instance, in 1966, he wrote an article called "The Ontogeny
and Phylogeny of Behavior" for the journal Science
which was about how learned and unlearned behavior evolve and
interact. He also believed that the tendency to react emotionally
to aversive stimuli was innate. Imitation, too, might be unlearned
-- although obviously refined quickly by the imitator's successes
and failures with good and bad imitations. Skinner, like Watson,
was concerned about the tendency to attribute all kinds of behavior
to genetics. This, he believed, caused scientists to fail to identify
the variables that actually make the behavior happen. Terms such
as "instinct" were used in place of a real analysis of behavior.
I heard that shaping was "invented" by B. F. Skinner during
World War II. I thought he had shaped a rat named Pliny to drop
a marble down a tube in the 1930s.
According to a 2004 article by Gail Peterson in the Journal of the Experimental Analysis of Behavior,
B. F. Skinner had not actually hand-shaped an operant response
before 1943, when he was developing a guided bomb using pigeons. A previous attempt to teach a rat named Pliny to drop
a ball down a tube, first reported in Life magazine in
1937, did not involve response shaping. The environment was shaped
around the rat in a manner that would now be called "errorless
learning." According to Peterson, the discovery of shaping
led Skinner to significantly alter his perspective on verbal behavior,
and look more closely at human behavior generally:
This insight stimulated him
to coin a new term (shaping), and also led directly to a shift
in his perspective on verbal behavior from an emphasis on antecedents
and molecular topographical details to an emphasis on consequences
and more molar, functional properties in which the social dyad
inherent to the shaping process became the definitive property
of verbal behavior. Moreover, the insight seems to have emboldened
Skinner to explore the greater implications of his behaviorism
for human behavior writ large, an enterprise that characterized
the bulk of his post World War II scholarship. (p. 317)
Skinner eventually published a popular
account of hand shaping, "How to Teach Animals," in
1951 in Scientific American magazine.
Peterson, G. B. (2000). The discovery of shaping, or B. F. Skinner's
big surprise. The Clicker Journal: The Magazine for Animal
Trainers, No.43 (July/August), 6-13.
Peterson, G. B. (2004). A day
of great illumination: B. F. Skinner's discovery of shaping.
Journal of the Experimental Analysis of Behavior, 82.
Skinner, B. F. (1951). How
to teach animals. Scientific American, 185, 26-29.
Why did Pavlov use dogs?
Pavlov used dogs in research on
behavior because he was actually a digestive physiologist. At
that time, dogs were commonly used because of the similarity of
the dog's digestive system to the human digestive system. When
he discovered the conditioned reflex, Pavlov was actually studying
the relationship between salivary secretions and food consumption
when he noticed that the dogs were salivating before they were
given food. It looked like they were starting to salivate when
the assistant started the experiment. Pavlov was an extremely
careful experimenter, so he replaced the assistant with a bell.
He repeatedly rang the bell then delivered the food. Before long
the dog would salivate when it heard the bell. He recognized the
importance of this discovery, and began to study the conditioned
reflex in earnest. Because all of his equipment and laboratory
were set up for dogs, he continued to use them in his behavioral research.
What is "preference for free choice" and why is it important
to consider in applied behavior analysis?
Research has shown that organisms
prefer situations that offer a greater variety of choices, even
if they do not avail themselves of the choices. That is, even
if you always choose the same item on a menu, you will still find
the menu itself less appealing if it has fewer items. This general finding applies to people, pigeons, and probably everything in between.
The classic experiment on preference
for free choice was done by A. Charles Catania and Terje Sagvolden
and published in 1980 in the Journal of the Experimental Analysis
of Behavior, "Preference
for Free Choice Over Forced Choice in Pigeons."
The design was simple. In the first
stage of each trial, pigeons could peck one of two keys. One key
produced a "free choice" situation in which the pigeon
saw a row of four keys: three green and one red. Pecks on the
other key produced a "forced-choice" situation in which
the pigeon saw one green key and three red keys. In either situation,
pecking a green key produced food. Pecking a red key produced
nothing. The arrangement of the colors varied from trial to trial.
Even though all the pigeons reliably
pecked a green key in either situation, always earning food, they
selected the free-choice situation about 70% of the time. This shows
that just having a choice is reinforcing, even if the rate of
the reinforcement in both situations is exactly the same.
There are at least two reasons that free choice is reinforcing. One reason
is that a free-choice situation offers a greater number of reinforcers
relative to a forced-choice situation. We know from research on
the matching law that organisms distribute responses in proportion
to the relative reinforcement value of the different response
options. In the case of the free choice situation, there are three
conditioned reinforcers (green keys). Forced choice has just one
conditioned reinforcer. The reinforcement value of the free-choice
situation is essentially 75%; the value of the forced-choice situation
is 25%. This closely matches the pigeon's behavior--selecting
the free choice situation about 70% of the time.
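The matching-law account above can be computed directly. In this minimal sketch, the 3-versus-1 key counts come from the experiment's design; the variable names are illustrative.

```python
# Matching-law sketch of Catania & Sagvolden (1980): the free-choice
# display has 3 green (food-producing) keys, the forced-choice display 1.
free_value = 3 / (3 + 1)     # relative value of the free-choice situation
forced_value = 1 / (3 + 1)   # relative value of the forced-choice situation

# Matching law: responses distribute in proportion to relative value.
predicted_free_choice = free_value / (free_value + forced_value)
print(predicted_free_choice)  # 0.75, close to the ~70% the pigeons showed
```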
The second reason involves prior learning. Organisms have learned that if
all other things are equal, having a choice is more likely to
lead to reinforcement than not having a choice. Having a choice
means that if one of the reinforcers is not appealing, another
might be reinforcing. If there is only one potential reinforcer
available, as in the forced-choice situation, the choice might
not be reinforcing at that time. Having a choice also means
that if the organism randomly selects a potential reinforcer,
the choice is more likely to be reinforcing. Fewer choices means
fewer chances that something might be reinforcing. In other words,
organisms have learned that options mean more reinforcers, and
they choose to have options.
There are significant practical implications
to this. If you are delivering reinforcers to someone, you should
try to offer an array of choices, not just the one thing you think
the person might want. Consider a situation in which you are working
with a client with a developmental disability who likes baseball
cards. A new card might have worked every time in the past, but
it might not work at this moment. Maybe the baseball season is
over; maybe he has that card; maybe he has lost interest in baseball
cards. If you offered ice cream, a token, a candy bar, and a baseball
card at the same time, it is highly likely that at least one will
be reinforcing. You are also making the reinforcement exchange
even more reinforcing than it might have been simply by giving
the choice. You also give yourself the opportunity to teach your
client to learn to look at and consider the choices. This is a
very important functional independent living skill. (A very good
analysis of the practical issues and difficulties involved in
balancing the right of people with developmental disabilities
to have effective treatment with their right to have free choices
was written by Diane Bannerman-Juracyk and her colleagues at the
University of Kansas.)
Catania, A. C., & Sagvolden, T. (1980). Preference for free choice
over forced choice in pigeons. Journal of the Experimental
Analysis of Behavior, 34, 77-86.
Bannerman, D. J., Sheldon, J. B., Sherman, J. A., & Harchik, A. E.
(1990). Balancing the right to habilitation with the right to
personal liberties: The rights of people with developmental
disabilities to eat too many doughnuts and take a nap. Journal
of Applied Behavior Analysis, 23, 79-89.
What is B. F. Skinner's full name?
Burrhus Frederic Skinner. Burrhus
was his mother's maiden name. "Burrhus" was a troublesome
name for a child, and he was actually known as "Fred"
to his friends.
What is John B. Watson's middle name?
He was John Broadus Watson.