“They’re Just Tired”: The Worst Scapegoat Explanation for Behavior


Why are they acting that way? “They’re just tired.” It’s one of those clichés that never goes away, because it’s so easy to use. You can apply it to any situation at all to explain away patterns of maladaptive or cranky behavior. Screaming? Tired. Throwing things? Tired. Hitting their siblings? Tired. It’s the explanation that’s got it all… except that it’s not true all the time. Exhaustion does exist, and sleeping poorly does affect behavior, but there’s a risk in assuming a cause without looking at the exact conditions surrounding the behavior. It’s more work to do so, but it’s worth it.

In Behavior Analysis, we call that kind of thing an “explanatory fiction.” It’s not directly untruthful, but it dodges reality through convenience and circular reasoning. Why do they do that thing we don’t like? Oh! They’re tired. It’s not hard to see the practical appeal. Everyone in their life has been cranky or acted miserably when stretched too thin. The problem comes from the assumption. That assumption kills the curiosity and the need to dig for a more sophisticated answer, and it sets up an expectation bias: we ask around post hoc to confirm the broad theory. Did they sleep well last night? Oh! Well, there was that one time when ____. Anything we get that conforms to our “theory of tiredness” closes the book. Open and shut case. We miss the real reason. We miss the real point. There’s risk in that: we fail to catch the patterns that become habits that hurt further down the line, and we blind ourselves to teachable moments.

The way to avoid all of these pitfalls, and to explore the real reason behind these target behaviors, is to begin the search right when we spot them. Better still, we can give context to what happened before the behavior occurred. A great psychologist named B.F. Skinner called this the Three-Term Contingency, and it is a great way to get an idea of the triggers, causes, and/or maintaining factors for behaviors that ought not to happen. It breaks an episode down into three things to study: the Antecedent, which occurs before the behavior (“What exactly set this off?”); the Behavior, the exact thing we are looking at; and the Consequence, which happens after the behavior occurs (“What did this behavior get, or what did it let them escape?”). It’s not enough just to ask the questions, though. We should document them too. Write it down. Take notes. Get numbers. How many times are you seeing this specific behavior? We call that Frequency. How long does the behavior last? We call that Duration. We can use this information to inform our conceptualization of the behavior’s function. Finding the function lets us adapt the environment to help decrease the behavior, and it also helps the learner find a better way to get what they are after. Even if it is a nap.
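The ABC-plus-measurement routine above can be sketched as a simple data log. This is an illustrative sketch, not a clinical data-collection tool, and all of the example records are made up:

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class ABCRecord:
    """One observation: Antecedent, Behavior, Consequence, plus duration."""
    antecedent: str    # what happened right before
    behavior: str      # the exact behavior observed
    consequence: str   # what the behavior got (or let them escape)
    duration_s: float  # how long the episode lasted, in seconds

def summarize(log):
    """Frequency (count per behavior) and total duration per behavior."""
    freq = Counter(r.behavior for r in log)
    dur = Counter()
    for r in log:
        dur[r.behavior] += r.duration_s
    return freq, dur

# Hypothetical observations:
log = [
    ABCRecord("asked to clean up", "tantrum", "chore delayed", 120),
    ABCRecord("sibling took toy", "hitting", "toy returned", 10),
    ABCRecord("asked to clean up", "tantrum", "chore delayed", 90),
]
freq, dur = summarize(log)
print(freq["tantrum"], dur["tantrum"])  # frequency and total duration
```

Even this tiny log already hints at a function: the same antecedent and the same consequence keep showing up around “tantrum,” which points toward escape rather than tiredness.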

Let’s talk Functions of behavior. In Behavior Analysis, there are four common categories that make this a simple framework to work with: Attention, Access (to something or someone), Escape (getting away from or avoiding something), or Automatic Reinforcement (which is internal, invisible, and mediated by the self). A pattern of behavior that occurs again and again, regardless of how they slept the night before, might point us toward one of these. Or more than one: a behavior can also be “multiply maintained.” We can see this as a complication, or as a better truth than a simple off-hand answer. Assuming that fatigue and tiredness are the leading factors only gives us the solution of a nap. That may delay the behavior’s recurrence, but if you see it again and again, it’s time to look deeper. The nap is not the answer, only a temporary respite from the behavior. The contingency and the history of reinforcement haven’t gone anywhere. Bottom line: it’s more complicated than that, and it probably isn’t going away that easily.


Trade the Nap for some Differential Reinforcement

Now it’s time to get serious. If we’ve gotten this far, tracked the behavior as observably as possible, and ruled out our original assumption of an internal factor like “tiredness,” then we need an answer we can use in the world of the awake. Thankfully, behavior is like the dinosaurs: it can undergo extinction (that means go away), or it can get stronger if you feed it (reinforce it). The “bad behaviors,” the maladaptive ones that don’t help the learner or their situation, can be extinguished by simply withholding the thing that reinforces them. What is the behavior after? Don’t let it get that. What is it avoiding? Don’t let it avoid that either.

Hard work, right?

But that’s not the end of it. You can’t just take away a behavior and leave a void. You need to replace it. So, when a maladaptive behavior aims to get something, and has adapted to get that thing, you find a better behavior to replace it. The “bad behavior”? Doesn’t get it. The “good behavior”? That gets it. That’s differential reinforcement: reinforcing the good, useful stuff and not reinforcing the stuff that isn’t helpful. Here’s a handful of techniques that follow that principle:

The ol’ DRO (Differential Reinforcement of Other Behaviors): This technique is where you reinforce the “other” behaviors. Everything except the thing you want to go away. If you’re targeting a tantrum, you reinforce every other behavior that is not tantrum related. Some people even fold in some timed intervals (preplanned periods of time) and reward gaps of “other” behaviors so long as the target behavior does not occur. Can they go 5 minutes without a tantrum? Great. How about 10? Progress.
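The interval-based flavor of DRO can be scored in a few lines. Here is a hedged sketch; the interval length, session length, and tantrum times are all invented for illustration:

```python
def dro_intervals(events, interval_s, session_s):
    """Split a session into fixed intervals; an interval 'earns' reinforcement
    only if no target behavior occurred anywhere inside it.
    `events` is a list of timestamps (seconds) when the target behavior occurred."""
    earned = []
    start = 0
    while start < session_s:
        end = start + interval_s
        if not any(start <= t < end for t in events):
            earned.append((start, end))  # reinforce: interval was tantrum-free
        start = end
    return earned

# A 30-minute session, 5-minute intervals, tantrums at 4 and 17 minutes in:
tantrums = [240, 1020]
clean = dro_intervals(tantrums, interval_s=300, session_s=1800)
print(len(clean))  # → 4 of 6 intervals earned reinforcement
```

Stretching `interval_s` over time (5 minutes, then 10) is exactly the “Can they go 5 minutes? Great. How about 10?” progression described above.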

“Not that, this instead!” DRI (Differential Reinforcement of Incompatible Behaviors): This isn’t a wide net like the DRO procedure. Here, a set of behaviors is picked because they make the target “bad behavior” impossible. Let’s say our learner plays the bagpipes too loudly and is losing friends fast. What’s a good DRI for that? Anything that makes playing the bagpipes impossible. Try the flute. Or jump rope. Or fly a kite. Hold a microphone and sing. It’s all the same, just so long as it’s physically impossible to do both the replacement and the original target (bagpipes, etc.) that we aim to decrease.

“The right choice” DRA (Differential Reinforcement of Alternative Behavior): This is the laser-targeted, surgical-precision version of DRI. It follows a similar principle: get a behavior reinforced that is NOT the maladaptive one. Except with DRA, the replacement is a single target, and it’s most often one that is more effective and socially appropriate. DRI doesn’t care whether the new behavior and the old target behavior share a function or purpose. DRA, in most cases, does. You aim for a better alternative behavior to take the place of the old maladaptive one.

 

The research on all three is varied, but they are tried and true ways to get one behavior to go away while building better ones in its place. Some are easier to use in certain situations than others. I invite you to explore the research. It’s fascinating stuff. It’s also a lot more effective long-term than assuming the explanatory fiction and hoping the behavior goes away. Why not take action? Why not take control of real factors that could be used for real good and real change?

But not right now. You should take a nap. You look tired.

 

 

Just kidding.

 

References:

Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis. Upper Saddle River, NJ: Pearson/Merrill-Prentice Hall.

Image Credits:

http://www.pexels.com

Love, Psychologically


There are some things that are just fun to study because of their vast importance. Love is one of them. There are as many theories about love as there are grains of sand on a shore, but if you’re a scientist, especially a behavioral scientist, you want to focus on the aspects that can be studied: things that we can at least see, hear, or touch, so that we can come to some agreement on their existence. So it might not be so much an invisible force called “Love” we’re building terminology around, but rather “loving”: the romantic relationship, the affiliation between people; what they do, how they do it, how it is maintained. What makes loving, and being loved, a unique experience and one that people tend to pursue for years (while others, sometimes, for much shorter).

As humans, we cannot see any invisible qualia of romantic “love,” but we can see how people respond to one another, how they draw selective attention, how that attention strengthens and becomes a bond, and how they share in that exchange of affiliation, that relationship. If we think about “love” as magical and inexplicable, that makes it very hard to study, doesn’t it? But if we look at what it “looks like,” what people “do” or “exhibit,” then we get somewhere. Love happens so often that there must be some common features, and since we are all human, we must share aspects and patterns that overarch large groups of us. Even entire populations must share some feature, some pattern, that we can call “loving.” How else would there be so much advice out there?

There has been psychological research on this. An abundance of it. Dorothy Tennov’s work on “Love and Limerence,” Keith Davis’ “Relationship Rating,” Beverly Fehr’s “Love and Commitment,” and even Marshall Dermer’s behavioral account of “Romantic Loving.” These are just a few of many (there are thousands) that will be used to explore some theoretical frameworks for what makes a working relationship work, what its features are, and the appeal of the specific patterns of behavior that make up a “loving” affiliation.

We have to assume a little here. Everyone is different, so an account of loving that leans on specifics is where we would lose effectiveness. If we assume everyone finds brightly colored eyes reinforcing (rewarding/appealing), when in fact many find darker eyes reinforcing, then we’ve assumed too much. If, on the other hand, we treat every preference as entirely subjective and can come to no conclusions, then we assume too little. We have to find a middle ground that might not explain everything but explains enough. We want an account of “loving” that is stable, desired, and describes a fully functioning relationship.


What is Love & Loving? (and what’s not?)

Let’s lay out some ground rules for interpreting this framework. To best interpret the research, and to create something we can actually put into testable practice, we need to keep it in the realm of reality. So when we talk about “Love” going forward, we are going to talk about events and behaviors from ourselves and others. Some may be private (inside our head), some may be public (an action we engage in with another person), but all of these things can be more or less concretely defined. Let’s call the process of experiencing and doing these things “loving.” You can engage in loving with another person, and they can engage in loving events/behaviors with you. Sounds fun. Now that we have an operational definition to work with, what might it exclude? Let’s talk about Limerence.

Dorothy Tennov developed this concept in 1979 to explain the experience of being “head over heels” with someone. It’s intense. It’s all-consuming. Even a little obsessive. As she and another researcher, Lynn Willmott, describe it: “an involuntary potentially inspiring state of adoration and attachment to a limerent object (the target of infatuation) involving intrusive and obsessive thoughts, feelings and behaviors from euphoria to despair, contingent on perceived emotional reciprocation.” Let’s break this down and make it a little more “behavioral.” Limerence is like love, except people exhibit:

  • Intrusive and obsessive thoughts about the person (Private Events).
  • Attachment to a Limerent Object (the person they are obsessed with). Thoughts and interactions with this individual become highly reinforcing, and behaviors seeking them are thus highly reinforced.
  • Reciprocity determines “euphoria” or “despair.” If the Limerent Object (the person being obsessed over) gives a specific type of perceived behavior, it can be either incredibly reinforcing (rewarding) or incredibly aversive. These are two very extreme states.

According to Tennov, this is the type of “loving” we might hope to turn into a relationship or affiliation of “loving” behaviors between two people, but as it stands it cannot maintain itself. It’s not stable. It’s an intense flash, but it’s based on perception and obsession (highly repeated private events, or “thoughts,” about that person). These behaviors do not operate in a healthy way to create or build a relationship. The person seeks out the other intensely, but, you might notice, does not hold that person in a regard where a relationship could flourish. This is the type of limerence Tennov found to be dangerous.

It’s not a feeling so much as it is a pattern, and she found three ways that it subsides.

Consummation, where the feelings are reciprocated and, ideally, the limerence becomes a healthier form of attachment. This is the best-case scenario.

Starvation, or as behaviorists call it “Extinction”, where the behaviors of obsession/seeking are not reinforced; the other person doesn’t respond. The seeker gets nothing of what they were seeking, so the seeking undergoes behavioral extinction because it no longer serves its function. This is a painful process, the “despair” Tennov spoke of.

Then there’s Transference, where the limerence stays but the limerent object changes. The person they are focusing on gets replaced with another person, and the cycle of intense emotion, intrusive thoughts, etc., continues in another direction. In behavioral terms, the response class remains, but the target of those behaviors changes. This kind of seeking also seems deeply unhealthy and hard to sustain a balanced life around.

By our original operational definition of “loving,” limerence is not going to work, conventionally. We cannot apply these patterns to a broad population and hope for good outcomes. This is where Keith Davis’ research and Marshall Dermer’s behavioral account of loving help us out. These researchers took features of “loving” relationships and broke them down into components that most people tend to exhibit. On top of that, they came up with strategies that might maintain them. Having a loving relationship is good, but maintaining it is also worth looking into. You might have noticed the words reinforcer and reinforcing used a few times. Humans rely on patterns; it’s a big part of how we operate. Think of reinforcers as “things” that keep a pattern going, and reinforcement as the process of strengthening that pattern. Let’s talk “loving” reinforcement and these components of caring.


Features of “Loving” and Reinforcing (Maintaining) Them

They (Davis and Todd, as well as Dermer) break “loving” down into three classes of features: Caring, Passion, and Friendship. These are behaviors and traits exhibited in regular, consistent patterns. They are a common part of a functioning relationship or affiliation. I’ll present a few words from the researchers, then follow up with some actual behaviors a person might engage in.

Features of Caring:

  1. The person “gives their utmost” to the other. Behaviorally speaking, we mean that the effort put into engaging with the other person, and acting for their benefit, is high. Some might say forgoing one’s own reinforcers (rewards) so that the other person is reinforced (rewarded). Here are some examples.
      • Engaging regularly.
      • Being present and focused during engaging.
      • Potentially putting in maximum effort for that individual.
      • Potentially sacrificing their own rewarding opportunities for the sake of the other’s.

     

  2. The person “championing and advocating” for the other. This is not a quid pro quo situation based on measuring out little bits of effort and support, this is committing to the betterment of that person. It involves social reinforcement.
      • Socially praising that person for actions.
      • Socially praising and supporting efforts of that person.
      • Putting forward resources and social effort for the successes, or approximate successes of that other person.

     

Features of Passion:

  1. “Fascinating” about the other. By fascinating, they mean engaging in thinking or imagining about the other person even when that person is absent. (Think of this as a tempered version of the limerence we spoke about above). These events are what behavioral psychologists call “private events”. They are not observable to anyone else but the respondent.
      • Thinking about the other person regularly.
      • Imagining the other person regularly.

     

  2. Mutual “desiring and experiencing sexual intimacy”. This one is the more obvious “passion” feature. These are both overt and covert (private) behaviors, but most importantly, this behavior is shared between both simultaneously. The reinforcement (rewarding) from one to another is mutual or shared.
      • Engaging and reinforcing “desiring” behaviors between one another.
      • Engaging and reinforcing “sexual intimacy” behaviors between one another.

     

  3. “Desiring mutual exclusivity” with the other person. This is where behaviors are used specifically with one another. One person presents specific and unique behaviors toward the other and does not engage in those behaviors broadly with people outside of the relationship.
      • Unique thoughts or feelings about the other.
      • Unique ways of speaking or responding to one another.
      • Unique patterns of daily behavior with one another.

     

Features of Friendship:

  1. “Enjoying one’s company”. At a very basic level, being around someone should be enjoyable if a relationship is to be maintained. This enjoyment could come from:
      • Enjoyment gained from a shared history and specific important events.
      • Enjoyment gained from conditioning: shared desirable features that have become attributed to one another.
      • Enjoying the repertoire of social behaviors, or the activities, that person engages in regularly.

     

  2. “Being able to confide” in the individual. Sharing information that has the risk of being exploited, or showing vulnerability. Being able to express specific thoughts or intents with the other person and not expecting a reprisal or betrayal on the part of the other.
      • Sharing secrets, hopes, dreams, aspirations that represent vulnerability.
      • Being able to speak frankly and honestly on topics.

     

  3. “Behaving spontaneously”. With strangers, predictability is the best bet at cooperation and interaction so that no one is put off. This feature represents a tolerance for spontaneity and surprise where there is the potential for the unexpected, and in a sense, a chance of the unknown or risk.
      • Engaging in behaviors that are novel towards the other, with the other in mind.
      • Engaging in novel activities with the other.

     

  4. “Understanding” the other. The verbal behavior (spoken words) makes sense to the other and is not misinterpreted.
      • Shared meanings of certain histories or features.
      • A shared understanding of tone of voice.
      • A shared understanding of facial expressions or other predictors others might not pick up on.

     

  5. “Respecting the other”. This is where the judgment, intents, and meaning of the other person are held in a regard that is not distrustful or disingenuous.
      • Allowing one person to engage in an activity and having faith in that other person’s ability.
      • Engaging socially in terms that promote dignity and value the other.

     


Reinforcing the Relationship

This is a lot to juggle at one time. If all of these features are important for both people to engage in while in that state of loving, and the relationship is to be maintained for long periods, there must be some way for people to have the time and ability to do so, right? This is where we discuss how and when we can use the features above as practical behaviors, and how to make those practical behaviors reinforcers (rewards). Reinforcers aren’t just prizes or tangible objects; they can be ANY behavior or change in a stimulus that strengthens another behavior. It’s not just one direction, either. One person can reinforce another’s behavior and have that person reinforce theirs right back. It becomes a cycle, an interaction where both sides are engaging in these loving features, those romantic behaviors, and being strengthened by one another’s. Here are some suggestions from the research.

Reinforcing a Relationship with Generic and Abundant Reinforcers

Don’t let the word generic scare you off. It does not mean boring or unoriginal. It means using the common stuff, and using it often, to strengthen the romantic/loving behaviors in the other person. These are things you have a lot of, or behaviors that are low-cost to you, that you can use repeatedly and consistently. This sort of behavioral framework is good for maintaining a relationship.

Given the opportunity, how many Friendship, Caring, and Passion behaviors could you exhibit abundantly an hour? How about per day? Or month? Try looking at these.

  • Smiling
  • Laughing
  • Engaging in a positive tone.
  • Taking the time to understand a point of view.
  • Physical closeness.
  • “Checking in”- frequent social interaction.

Just to name a few. These are easy, quick, require little effort, and can maintain a relationship, or a series of interactions, through those quick and abundant reinforcement effects. Remember: a big surprise is great, but if you get absolutely nothing from the individual in between, not a smile, not a word, big surprises aren’t strong enough. The relationship gets frayed, thin. That’s why you use “generic and abundant” social reinforcers from your assumedly impressive romantic repertoire of skills.

Reinforcing a Relationship with Scarce and Idiosyncratic Reinforcers

Now the big surprises come in. These can’t maintain a long and complex relationship by themselves. They are, by definition, scarce, and therefore very interesting. These are things you cannot provide to another person very often, and they are varied enough that the other person probably would not be able to expect them. These are the high-shock-value interactions or rewards, the things that provide revitalization. Remember the spontaneity feature? This is where it comes in. These reinforcers matter when the generic and abundant ones lose efficacy: when something is too common and predictable, people habituate to it, and it loses its reinforcing features. You need to throw a little “strange” out there to mix up the predictable delivery of those romantic reinforcers. You can’t expect the scarce, big reinforcers to maintain a relationship on their own, but without them, the generic and abundant ones undergo habituation. The mixture of both is where the long-term maintenance of romantic behaviors on both sides finds a good equilibrium.

What about these? Are there any scarce or idiosyncratic reinforcers you could think up from the Caring, Friendship, and Passion categories? Can you think of a few specific reinforcers you enjoy? Can you think of a few another person might? Try them out and see if they work, or engage in some confiding behaviors to request them. You might just learn something!

Comments? Questions? Leave them below!

References:

  1. Cooper, J. O., Heron, T. E., & Heward, W. L. (1987). Applied behavior analysis. Columbus: Merrill Pub. Co.
  2. Willmott, L., & Bentley, E. (2014). Love and limerence: Harness the limbic brain. United States: Lathbury House Limited.
  3. Tennov, D. (1999). Love and limerence: The experience of being in love. Lanham, MD: Scarborough House.
  4. Davis, K. E. (1999). What attachment styles and love styles add to the understanding of relationship commitment and stability. Handbook of Interpersonal Commitment and Relationship Stability, 221-237. doi:10.1007/978-1-4615-4773-0_13
  5. Davis, K. E., & Todd, M. J. (1982). Friendship and love relationships. In K. E. Davis & T. O. Mitchell (Eds.), Advances in descriptive psychology (Vol. 2, pp. 79-112).
  6. Dermer, M. L. (2006). Towards understanding the meaning of affectionate verbal behavior; towards creating romantic loving. The Behavior Analyst Today, 7(4), 452-480.

 

Image Credits: http://www.pixabay.com

“Natural Selection” and Human Behavior


Let’s talk about evolution. Or better yet, let’s talk about human behavior and how our understanding of it was influenced by evolutionary theory. For context, we should mention B.F. Skinner, a researcher at Harvard in the 1950s who had far-reaching impacts on the field of psychology, and on an emerging school within it called behaviorism, especially in its terminology and later usage. Most of his work is still held in great regard today and, although not taken as holy writ, has been the foundation of later research and adaptations of the original work. Skinner viewed behavior in a novel way, one influenced by Darwin’s evolutionary theory.

“Reflexes are intimately concerned with the well-being of the organism. Reflex behavior which involves the external environment is important in the same way. If a dog’s foot is injured when it steps on a sharp object, it is important that the leg should be flexed rapidly so that the foot is withdrawn… Such biological advantages “explain” reflexes in an evolutionary sense: individuals who are most likely to behave in these ways are presumably most likely to survive and pass on the adaptive characteristics to their offspring.” (Skinner, 1953).

His work on conditioning was different from Ivan Pavlov’s. While Pavlov worked specifically with reflexes and stimulus pairing, Skinner worked with learned (or operant) behavior and used the philosophical lens of adaptation to do so.

“The process of conditioning also has a survival value. Since the environment changes from generation to generation, particularly the external rather than the internal, appropriate reflex responses cannot always develop as inherited mechanisms… Since nature cannot foresee, so to speak, that an object with a particular appearance will be edible, the evolutionary process can only provide a mechanism by which the individual will acquire responses to particular features of a given environment after they have been encountered. Where inherited behavior leaves off, the inherited modifiability of the process of conditioning takes over.” (Skinner, 1953).

B.F. Skinner at Harvard, circa 1950

Selection by Consequences

In Skinner’s theoretical framework for the analysis of behavior, he sets reflexes apart from behaviors emitted as modifiable responses to recurring conditions: operant behavior that has been altered in some way by past consequences. Or, in layman’s terms, by experience.

A behavior that occurs under conditions similar to ones experienced before will either have taken on adaptive features to better access or avoid the relevant stimulus, or, if the original behavior failed, will be less likely to be emitted again under those conditions. This is the foundation of behavioral learning theory, and what Skinner called “selection by consequences.” This form of selection focuses on the consequences of behavior in order to predict and describe the rate at which behaviors occur in the future. Behaviors, in this sense, can be either strengthened or weakened via selection. We can see this in some of the terminology still used in behavioral science today:

Reinforcement- Responses from the environment that increase the probability of a behavior being repeated (i.e., when a behavior is “rewarded,” it happens more often).

Punishment- Responses from the environment that decrease the likelihood of a behavior being repeated (i.e., when a behavior is “punished,” it happens less often).

Extinction- When a conditioned stimulus is no longer paired with an unconditioned stimulus, or when a learned (operant) behavior is no longer reinforced, leading to a decrease in future usage of that behavior (i.e., when there is no more “reward,” the behavior has no purpose to recur).
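As a caricature only, the three terms above can be sketched as a toy simulation in which a response probability climbs under reinforcement, drops under punishment, and decays under extinction. The update rule and step size here are invented for illustration and are not a model from the behavioral literature:

```python
def update(p, consequence, step=0.1):
    """Nudge the probability of a response given its consequence."""
    if consequence == "reinforced":
        p += step * (1 - p)    # strengthened: moves toward 1
    elif consequence == "punished":
        p -= step * p          # weakened: moves toward 0
    elif consequence == "none":
        p -= step * p / 2      # extinction: slow decay when nothing follows
    return p

p = 0.5
for _ in range(20):
    p = update(p, "reinforced")
print(round(p, 2))  # high after a history of reinforcement
for _ in range(50):
    p = update(p, "none")
print(round(p, 2))  # decays back down under extinction
```

The point of the sketch is the shape of the process, not the numbers: a behavior with a long reinforcement history starts high and only fades gradually once the reinforcer stops coming, which is why extinction feels slow in practice.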

 

In each of the terms above, we can see how environmental conditions, and the usefulness of a behavior, describe and predict how it is used and why. This had vast philosophical repercussions for psychology at the time. Viewing learning and experience in an evolutionary sense had wide-reaching advantages in the field. During this period (the 1950s), psychoanalysis was still the mainstay of many professionals, but it had glaring weaknesses in treating habitual disorders, or even features within disorders. The Freudian “talking cure” was adept at having individuals speak about their internal events and their past, and at conceptualizing the Id, Ego, and Superego as explanatory factors for their behaviors and interpretations of their dreams and thoughts; but there was no directly observable translation to healthy action following this treatment. It also produced wild variation in interpretation among professionals with similar training.

 

pexels-photo-29545

The Unconscious Mind vs. Selection By Consequences

One psychoanalyst might attribute excessive smoking to a childhood event, while another might attribute it to the symbolism of the cigarette or flame itself. The explanation did not come from the event, or from any clue in the environment around the individual; it was all estimation of events that could not be seen. When the past was used as a descriptor, it usually meant formative childhood and adolescent experiences, not the direct past or future. In a strict behavioral sense, these are explanatory fictions: circular explanations that cannot be proven or disproven.

Comparing this to Skinner’s fledgling analysis of behavior, you can see the drastic differences in a hypothetical example of cigarette smoking:

Freudian, psychoanalytic interpretation (“The Unconscious Mind”): “The unconscious mind acts as a repository, a cauldron, of primitive wishes and impulses… The making of a fire and everything connected therewith is filled through and through with sex symbolism.” (Freud, A General Introduction to Psychoanalysis, 1935)

Skinnerian, behavior-analytic interpretation (“Selection by Consequences”): The first time a cigarette was lit and smoked, that behavior was reinforced by its consequence. The probability of future smoking behavior was increased by whichever stimulus acted as the reinforcer (taste, chemical interaction, social attention, etc.).

 

You will notice that under the behavior-analytic interpretation, the behavior is adaptive. If smoking that cigarette was pleasing to the individual, they would seek it more in the future. If it was aversive, they would be more likely to avoid it. It is adaptation within the lifetime of the individual. It requires no intergenerational passing of information or traits. It adapts because it serves a function.

Analyses classifying behaviors by function, studies of complex social phenomena, and even accounts of verbal behavior itself have been conducted using this evolutionary-minded theory of why behaviors occur, asking the question “for what reason?” But this is not limited to direct experience. Under this explanation, a behavior does not even need direct influence from a specific condition. This is where “rule-governed” behavior comes in. Let’s take a look at a rule-governed contingency that might affect the smoking behavior above:

  • SURGEON GENERAL WARNING: Tobacco Smoke Increases The Risk Of Lung Cancer And Heart Disease, Even In Nonsmokers.

According to Skinner, the "rule" serves as a contingency-specifying stimulus. Humans are able to learn from the experiences of others, and can adapt our behavior based on observation and instruction. Those stimuli serve as the consequences that either reinforce, or punish, behavior, which in turn affects the future probability of those behaviors being emitted.

One could either smoke a cigarette and find it displeasing, or they could be given a warning. Supposing that the cigarette, and the instruction, carried enough punishing value, the smoking behavior would decrease. Both are viable consequence events that can affect rates of future behavior.

This topic focused specifically on changes in the behavior of an individual, but the analysis can be applied at a much broader scope as well (especially with rule-governed behaviors). There is growing interest in the field researching what are called meta-contingencies. This theoretical framework does not exclude "thoughts" either; it labels them as "private events", behaviors in their own right. While we may not touch on that today, keep these theories in mind. They might be adaptive for you.


Questions? Comments? Leave them below!


  1. Cooper, J. O., Heron, T. E., & Heward, W. L. (1987). Applied behavior analysis. Columbus: Merrill Publishing Co.
  2. Skinner, B. F. (1945). The operational analysis of psychological terms. Psychological Review, 52, 270–277.
  3. Skinner, B. F. (1953). Science and human behavior. New York: Macmillan.
  4. Darwin, C. (1872). The origin of species. New York: D. Appleton.
  5. Freud, S. (1935). A general introduction to psychoanalysis. New York: Washington Square Press.


Image Credits: http://www.pixabay.com, Getty Images, North American Energy Advisory (2017)


What is Reinforcement?


Reinforcement or reward?

Have you ever heard the term reinforcement (in the context of learning or psychology) and wondered what it might mean? Reinforcement, as originally studied, is a term from behavioral psychology describing the phenomenon of a consequence event strengthening the behavior that came before it, increasing the probability of that behavior occurring in the same setting or situation in the future. In essence, it's a condition that teaches a behavior to occur more often. It's a cornerstone of learning. [1,3,4]

To help get a good handle on this, let’s break out a time-line:

Step 1 (Before Behavior): Antecedent

  • Precedes the behavior.
  • Can be a setting or other stimulus (an interaction, etc.).
  • Example: Walking into an ice cream shop.

Step 2: Behavior

  • The behavior we are looking at and tracking.
  • It is operant: it operates on the environment, or acts on it in some way.
  • Example: Ordering and paying for an ice cream.

Step 3 (After Behavior): Reinforcement

  • A stimulus following the behavior that is desirable to the person.
  • Example: Receiving and eating the ice cream.

The breakdown above demonstrates a common type of data collection on behavior (called A.B.C., for Antecedent-Behavior-Consequence), and illustrates the concept of the three-term contingency. To understand reinforcement of a behavior, you need all three pieces: what comes before the behavior, the behavior itself, and what happens after. [1,4]
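As a rough sketch, an A.B.C. record can be thought of as a simple three-field data structure. The field names and example entries below are my own illustration, not a standardized data-collection form:

```python
from dataclasses import dataclass

# A minimal sketch of an A.B.C. (Antecedent-Behavior-Consequence) record.
# Field names here are illustrative, not a standard clinical format.
@dataclass
class ABCRecord:
    antecedent: str   # what happened right before the behavior
    behavior: str     # the observable behavior itself
    consequence: str  # what followed the behavior

log = [
    ABCRecord("walked into an ice cream shop",
              "ordered and paid for an ice cream",
              "received and ate the ice cream"),
]

# All three pieces must be present to reason about reinforcement.
entry = log[0]
print(entry.antecedent, "->", entry.behavior, "->", entry.consequence)
```

Collecting many such records over time is what lets you look for patterns: the same behavior repeatedly followed by the same kind of consequence.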

So what makes this different from a reward? Why is the terminology of reinforcement so important? To answer that question, we have to think ahead. A reward can be a one-time event. It often follows a behavior that we want to give credit to, but only in reinforcement are we looking for future evidence that it raises the probability of the behavior happening again. If someone does something and gets a reward, that could be all that happens. If that person continues to do that behavior in a similar setting, following that "reward", we can call that reinforcement. Reinforcement is all about future behavior from past consequences. [1,3,4]

Looking at this from a learning theory perspective, do you think it is more beneficial to apply a rewarding stimulus before a behavior, or after? Think of this common scenario:

A child is screaming in a shopping cart. A guardian says "Sshhh!! Here, have this chocolate!", and gives the child chocolate mid-scream. The screaming stops, at least for the duration of eating it. In this situation, the chocolate was given before the behavior the guardian wanted to see (quiet, not screaming), but after the screaming behavior. It is plausible that the screaming behavior will increase in the future in order to get access to chocolate. [3,4]

What about this scenario? A child is screaming in a shopping cart. A guardian says “Sshhhh! If you lower your voice you can have this chocolate.”, and the child eventually stops screaming. The guardian then provides the chocolate, which the child accepts and eats. Here, the chocolate came after the appropriate behavior was demonstrated. It is plausible that the longer duration of quiet, or the act of quieting following the request from the guardian would increase in the future.  [3,4]

Can you spot the difference and importance of when and how reinforcement is used?


Types of Reinforcement

There are as many types of reinforcers as there are stimuli in the environment that people enjoy or desire. These stimuli are practically endless in number, and they vary from person to person. Certain things are naturally reinforcing biologically; these are called Primary Reinforcers. They include food, drink, and stimulatory pleasure. Then there are Secondary Reinforcers: reinforcers that are conditioned, or learned. They include things like money, reading, and music. [1,2,4]

Primary Reinforcer Examples: Food, air, water, warmth, physical contact.
Secondary Reinforcer Examples: Money, verbal praise ("Good job!"), high scores or grades, trophies.

That’s not all. There are even variations of Reinforcement itself. If reinforcers are the stimuli, then reinforcement is the method by which they are applied. These come in two forms: Positive Reinforcement and Negative Reinforcement. When we use these terms, keep in mind that they are not a subjective judgement of “good” or “bad”. Positive Reinforcement is the addition of a stimulus that increases the future probability of behavior. This could be any of the examples listed above in terms of Primary or Secondary reinforcers. Negative Reinforcement is slightly different: it is the removal of an aversive stimulus, which increases the probability of future behavior. Both positive and negative reinforcement increase behavior in the future. [1,3,4]

Positive Reinforcement Example: A person says hello, and they are greeted with a smile and handshake.

  • The smile and handshake are added, and will influence future levels of saying “hello”.
Negative Reinforcement Example: A person says “Can you turn that down?”, and the other person turns down their loud stereo.

  • The volume level of the loud stereo is removed, and will influence future levels of requesting to “turn it down”.
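The distinction between the two forms can be summed up in a small sketch. This is a toy restatement of the definitions, not a clinical decision tool:

```python
# Toy sketch of the positive/negative reinforcement distinction: both
# strengthen future behavior; they differ only in whether a stimulus is
# added (positive) or removed (negative).
def classify_reinforcement(stimulus_added: bool, behavior_increases: bool) -> str:
    if not behavior_increases:
        # By definition, reinforcement must strengthen the behavior.
        return "not reinforcement"
    return "positive reinforcement" if stimulus_added else "negative reinforcement"

# The smile and handshake are added after "hello":
print(classify_reinforcement(stimulus_added=True, behavior_increases=True))
# The loud stereo is removed after the request:
print(classify_reinforcement(stimulus_added=False, behavior_increases=True))
```

Note the guard clause: if the behavior does not increase in the future, neither label applies, no matter how pleasant the stimulus seemed.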

Misrepresentations

Let’s touch back on our original example. There are sometimes situations that give out false positives of reinforcement, or incorrectly strengthen behaviors we do not mean to increase. Think about the first example of the child in the shopping cart:

  • A child is screaming in a shopping cart. A guardian says “Sshhh!! Here, have this chocolate!”, and gives the child chocolate mid-scream. The screaming stops, at least for the duration of eating it.

The screaming stops, doesn’t it? The request of “Ssshh!!” was obeyed, wasn’t it? Not exactly. There was no time for the child to show any operant behavior toward that request. The chocolate was provided following the screaming, and in fact interrupted it. This is closer to bribery than it is to reinforcement, but since, in this situation, chocolate is a desirable stimulus, it can still have a reinforcing effect on the behavior that preceded it: the screaming. It is not the request the child learns to follow; it is the continuous screaming that achieved a more desirable condition. [3]

In that moment, both scenarios did achieve limited quietness, but over the longer scope of future shopping-cart trips, the condition where the screaming was interrupted by chocolate lends itself to higher future rates of recurrence. [1,2,3,4]

Reinforcement can be accidental, and the learner can even be totally unaware of it happening. It does not require conscious effort, awareness, or focus in order for this type of learning to occur, either. In each case, behaviors are strengthened by the placement in time of the reinforcer (and its effectiveness/desirability as a reinforcer). Too soon, and you may strengthen an unintended behavior. Too late, and you may miss the chance for that reinforcer to have an effect. [2,3,4].
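The timing point can be illustrated with a toy sketch of the two shopping-cart scenarios. The event names below are my own shorthand; the rule being illustrated is simply that the behavior immediately preceding the reinforcer is the one strengthened:

```python
# Toy illustration of reinforcer timing: whichever behavior immediately
# precedes the reinforcer is the one strengthened. Event names are
# shorthand for the shopping-cart scenarios above.
def behavior_strengthened(events, reinforcer="chocolate_given"):
    i = events.index(reinforcer)
    return events[i - 1] if i > 0 else None

# Scenario 1: chocolate interrupts the screaming -> screaming is strengthened.
print(behavior_strengthened(["screaming", "chocolate_given", "quiet"]))
# Scenario 2: chocolate follows quieting -> quieting is strengthened.
print(behavior_strengthened(["screaming", "quieting", "chocolate_given"]))
```

Same reinforcer, same events; only its position in the sequence changes which behavior it strengthens.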

Questions? Comments? Write them below!


References:

[1] Skinner, B. F. (1974). About behaviorism. New York: Knopf.

[2] Ferster, C. B., & Skinner, B. F. (1957). Schedules of reinforcement. New York: Appleton-Century-Crofts.

[3] Mowrer, O. H. (1960). Learning theory and behavior. New York: Wiley.

[4] Skinner, B. F. (1965). Science and human behavior. New York: Free Press.

Photo Credits: http://www.unsplash.com