Why we don’t always prompt: Behavior Analysis meets Vygotsky.


In the early 20th century, a developmental psychologist named Lev Vygotsky was working on theories of learning and development in parallel with many of the behaviorist traditions. If you were to ask a graduate student taking behavior analytic courses who Vygotsky was, they would most likely shrug their shoulders and wonder why that was important. He isn’t Watson. He isn’t Pavlov. He isn’t Thorndike. He isn’t Skinner. He isn’t Lindsley. So, why would a behaviorist ever care? Because his work ties in so closely to the behaviorist tradition that in some cases you could use his terminology and frameworks interchangeably and still see the same results. His work can help clarify why we, as behavior analysts, trainers, educators, and even parents, should not prompt every single time we see a child begin to struggle with a task.

To an educator or professional following the behaviorist tradition, it’s not all that hard to describe. Prompts help the learner reach a reinforcement threshold that their response likely could not have reached on its own. Shaping describes a process by which an emergent behavior, similar in some way to a target behavior, is reinforced through successive approximations until it becomes the terminal target behavior. Basically, it’s taking an “okay” behavior attempt and rewarding the responses that look closer to improvement until the behavior is “perfected” enough to reach more naturalistic reinforcement in the broader environment. To a behaviorist, that means looking at what the learner has in their repertoire, what they can do right now, and planning to reward the responses that move them toward some end goal. But wait, how exactly do we know when to intervene? And why don’t we intervene every time we see the learner encounter difficulty?

The trouble is that sometimes a learner does not actually learn from being prompted too much. Sometimes that reinforcement only contacts the effort the learner expends to receive prompting. Sometimes they become dependent on those prompts, and then it is the educator doing the behavior and the learner receiving the reinforcement. They don’t improve because they have no need to improve. They get the prize every time their educator does it for them. The behavior the educator prompts might never transfer through modeling. Why should it, if the reinforcer comes anyway? This is where Vygotsky comes in. Vygotsky believed that there is a Zone of Proximal Development.

Lev Vygotsky was not a behaviorist. In many ways, he was against the methodological behaviorism popular at the time, which focused purely on observable stimulus-response relationships. Vygotsky also believed that learning drew not just from a present environment of contingencies, but from a broader wealth of cultural and societal forces that accumulate through generations and have impacts not directly related to the behaviors at hand. When it comes to the Zone of Proximal Development, however, his theories coincide with what behaviorists would conceptualize as both repertoires and the necessary thresholds for prompting. Vygotsky believed there was a level at which a learner could successfully accomplish tasks without assistance, and a level at the other end of their developmental range that they could not reach without considerable help in the form of prompting. Between the two, however, was a zone where a learner could accomplish tasks with some collaboration and prompting, and eventually surpass them to a level of independence. That zone differs from individual to individual, but within it, prompting (or collaboration, as he called it) was at its most effective.

Think of it like this:

Zone of the learner’s “actual” development:
These are responses that the learner can perform, and tasks that the learner can complete, without any assistance from others.
*Behaviorist Footnote: Think of this as the responses already in the learner’s repertoire. These are “easy”.

Zone of Proximal Development:
These are tasks and responses that the learner can accomplish with the assistance and prompting of others.
*Behaviorist Footnote: Think of this as the area of “shapable” responses that are likely to lead to independent future responses. Vygotsky called this support “scaffolding”, and the process of “shaping” is synonymous.

The limit of their current developmental ability:
These are tasks and responses that are beyond the learner’s ability to accomplish and can only be produced with considerable support and assistance.
*Behaviorist Footnote: The client can be prompted through these tasks, but is unlikely to be able to reproduce them, even with shaping procedures, at this time.

This framework delineates an interesting range where a learner needs, and could use, the help of an educator to prompt them, and where they do not. In the initial range, prompting is unnecessary and might actually hinder the learner from engaging in those responses in their most independent forms. Learners who engage in the “easy” responses and contact reinforcement in the broader environment are more likely to emit those responses again in the future. Prompting too much here could stifle that. In the next range, the Zone of Proximal Development, prompting could actually be of the most use! These are responses that are viable for occurring and reaching natural reinforcement; they just need a little help at first to get there. Here, prompting in the form of modeling or shaping could help the learner take their initial responses and bring them to their terminal, most effective, independent forms. This is the exciting part. This zone is where the work put in by the educator could return the maximum benefit to the learner. Now, we have to be careful not to reach for the moon here. The final zone is where, even with prompting, the learner is unlikely to be able to shape their responses successfully. This, for example, is trying to teach a learner to run before they can walk. They need those foundational responses before they can even be prompted toward a more advanced terminal response. An educator who comes across this scenario would be wise to dial the expectations back.
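For readers who think in code, the three zones can be sketched as a toy decision rule. This is only an illustration: the numeric “difficulty” and “ability” scales below are invented for the sketch, and real assessment of a learner’s repertoire is observational, not numeric.

```python
# Toy illustration of the three-zone framework described above.
# The numeric scales are invented; this is not a clinical tool.

def zone_for_task(task_difficulty, independent_level, assisted_level):
    """Classify a task relative to a learner's current repertoire.

    independent_level: hardest task the learner completes alone.
    assisted_level: hardest task the learner completes with prompting.
    """
    if task_difficulty <= independent_level:
        return "actual development: already in repertoire, do not prompt"
    if task_difficulty <= assisted_level:
        return "zone of proximal development: prompt and shape here"
    return "beyond current ability: dial expectations back"

# A learner who manages difficulty 3 alone and difficulty 6 with help:
print(zone_for_task(2, independent_level=3, assisted_level=6))
print(zone_for_task(5, independent_level=3, assisted_level=6))
print(zone_for_task(9, independent_level=3, assisted_level=6))
```

The middle branch is the whole point of the post: it is the only region where prompting is expected to pay off.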

Between those two ranges of “easy” and “unlikely”, we find the responses that can be prompted for the most good. We would not prompt too much, and stifle the learner’s ability to contact reinforcement on their own, but nor would we fail to prompt at all, and miss those responses or behaviors that just need a little push. This is where a behaviorist, teacher, educator, or even parent, can take a thing or two from Vygotsky’s work. And if you’re a tried and true behaviorist who can’t believe that a cognitivist would be mentioned here, I’d suggest an open mind. You might even be surprised about the similarities between Vygotsky and Skinner on private events and “inner speech”. We can touch on that later, but for now, think about the zone of proximal development in your life and practice; what could use a little help?

Likes? Comments? Questions? Leave them all below!

References:

Burkholder, E. O., & Peláez, M. (2000). A behavioral interpretation of Vygotsky’s theory of thought, language, and culture. Behavioral Development Bulletin, 9(1), 7-9.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2018). Applied behavior analysis. Pearson.
Ormrod, J. E. (2019). Human learning. Pearson.

A Behaviorist’s Take on Far Cry 5


Forewarning to the regular readers: I’m talking about video games today. In particular, a fantastic action-adventure game I was turned on to by friends, called Far Cry 5. That’s not the entire truth; I’ve played the predecessors too, but this one stands out to me narratively because it has a story based around social control. As a Board Certified Behavior Analyst, I’m drawn to these things. Imagine a world not so different from ours, where a doomsday religious cult takes control of a part of Montana and spreads a violent vision across the state, corrupting the citizens into a new lifestyle of brutalization and indoctrination. That calls for a hero, right? That’s the game. What makes this interesting to a behaviorist is how it uses those social forces in-game to create fictional forms of coercion that in many ways match the existing psychological science of conditioning. I like this game. It’s complex, it’s fun, and I’m going to be testing myself in its new Infamous difficulty mode over the next two weeks and during Extra Life to rack up some more donations for the local Children’s Miracle Network hospital near me (link here and below). I’ll also try to keep spoilers beyond the psychological methodology to a minimum. Let’s get on to the psychology.

In the game, there are several bosses who control sections of the map. Each of them represents a different form of that control. Spoiler alert, but honestly, no large reveals here. Joseph Seed is the big boss. He’s a sort of preacher borrowing from several religious traditions to deliver his idea of a “collapse” of society and a vision for a simpler future. He relies on a group/mob mentality, social reinforcement (a semi-Bandura style of vicarious punishment), and a form of authority that borrows from his own charisma and the religious texts he cites. Not too out of the ordinary. His doomsday cult also employs sub-bosses. John, a former lawyer, is obsessed with having his devotees say YES, and uses similar group and social coercion. Faith uses a toxic mix of drugs called Bliss to create hallucinogen-induced indoctrination. Believable to a degree. Then there’s my favorite and the reason for this post: Jacob. Jacob is a little different. He’s said to have a soldier’s background, but he uses a method of conditioning, which he refers to as basic classical conditioning, with drug-related assistance. This puts his subjects into murderous rages/trances when he plays the song “Only You” by The Platters. He tries to make his method sound simple. He tries to make you believe it’s just simple stimulus pairing through classical conditioning.

Jacob does abhorrent experiments with these methods on both animals and humans, causing devastation and treachery across that part of the story. It’s very tragic. The thing is… he’s not just using classical conditioning. A conditioned stimulus eliciting a conditioned response? Not quite. There’s more to it. He tries to explain his method several times and even uses the standard definition of classical conditioning to describe how he creates these diabolical effects, but when we look at the practice there’s a sinister amount of complexity that he leaves out. This fictional boss might think that it’s simply food deprivation, a song, and practice in his chairs/training chambers that do it, but he’s selling himself short. He’s actually using both classical conditioning and operant conditioning. That fiend.


Jacob’s Classical Conditioning

It might surprise you, but Jacob didn’t invent this form of conditioning. It has its origins with a researcher named Ivan Pavlov and the well-known experiments with bells and salivation. There we see a neutral stimulus paired with an unconditioned stimulus until it becomes a conditioned stimulus that elicits a conditioned response. Basic stimulus-response psychology. Now, in this fictional world of Far Cry 5, the bad guy Jacob references these things, and even Pavlov (“Pavlovian”) once or twice. I think, narratively, it makes sense. He’s training killers. He sees his conditioned stimulus (a song) and their response (murderous rages) as synonymous with that process. Except… when we look at the training, it’s not that clean. There are parts that seem to follow this method; mainly, he is engaging in a stimulus pairing procedure that works on a learned behavior change for the individual. The environmental event (or stimulus) precedes the response he is looking for. That makes sense too. Even the cutscenes play out the process correctly! We assume the original neutral stimulus, “Only You” by The Platters, does not produce murderous rages in an ordinary person. He needs to make that connection happen in his victims by pairing stimuli. Jacob pairs that neutral stimulus with an unconditioned stimulus (threat, through some form of hallucinogenic and visual process) that elicits an unconditioned response (attack). Then, following this pairing, he presents the newly conditioned stimulus (the “Only You” song) to elicit the newly conditioned response (attack). Makes sense, right? Somewhat. But look at the training methods a little deeper and we get some complexity. He has the stimuli he wants available. He has the song. He has the wolf pictures and the predatory images of wolves killing deer, but he also adds something else in… reinforcement and punishment during his trials.
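The pairing process can even be put into numbers with the Rescorla-Wagner model, a standard mathematical treatment of classical conditioning: on each song-plus-threat trial, the song’s associative strength climbs toward the maximum the unconditioned stimulus will support. The learning rate and trial count below are arbitrary values chosen for illustration, not anything from the game.

```python
# Rescorla-Wagner sketch of stimulus pairing. V is the associative
# strength of the (initially neutral) song; lam is the asymptote set
# by the unconditioned stimulus; rate is alpha*beta, chosen arbitrarily.

def pair_stimuli(trials, rate=0.3, lam=1.0):
    v = 0.0  # the song starts with no learned association
    history = []
    for _ in range(trials):
        v += rate * (lam - v)  # delta-V = alpha * beta * (lambda - V)
        history.append(v)
    return history

strengths = pair_stimuli(10)
# Strength rises fastest on early trials, then levels off near lambda:
print([round(v, 2) for v in strengths])
```

The negatively accelerating curve is why early pairings matter most, and why Jacob’s repeated trials eventually stop adding much: the association is already near its ceiling.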


Operant Conditioning through Discrete Trial Training (DTT)

The reason I like the Jacob missions so much is that they use real-world conditioning methods. They just undersell them a little. Jacob, the big bad guy I hated through two playthroughs of this game, uses both classical conditioning and operant conditioning to make his process work. Also some fictional drugs and hallucinogens, but let’s focus on what we know. Operant conditioning differs from classical (or “Pavlovian”) conditioning in one major way: it focuses on the subject emitting a specific response, followed by a reinforcer, in order to increase the frequency of that behavior or shape it toward a targeted goal. When someone mentions B.F. Skinner, or Skinner boxes, this is the type of conditioning they are talking about. Again, MINOR SPOILERS. Jacob does this to our character the first time he catches us. It’s not just the classical conditioning of the song to the natural response of attacking when threatened. He trains our character to make that stimulus-response relationship stronger, and introduces faster and more vicious shaped behaviors into the character’s repertoire. It’s tragic. It’s sad. But his method is theoretically sound. You see, he uses what we behaviorists call Discrete Trials. The situation for each trial is identical. The discriminative stimulus (SD) that sets it off is the same each time. Here is where the operant part comes in. The character is tasked with eliminating all enemies using the provided weapons, within an interval time frame, to complete the task and receive reinforcement for the chained behaviors. This follows the three-term contingency known as A-B-C: Antecedent, Behavior, Consequence. Let’s break it down.

(ANTECEDENT), aka discriminative stimulus – the “Only You” song and the visual presentation of threat-related stimuli.

(BEHAVIOR) – eliminating targets.

(CONSEQUENCE) – added time on the interval, allowing more time to complete the task for further reinforcement, plus verbal praise from Jacob (“Good”, “Cull the weak”, etc.). This is reinforcement.

Or… (CONSEQUENCE) in the form of punishment: fail the task, by being killed by enemies or by running out the time interval, and you meet the punishment contingencies of starting over from the beginning and verbal reprimands (“No”, “You are weak”, “You are not a soldier”, etc.).
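The A-B-C contingency can be sketched as a toy discrete-trial loop. The response names and consequence strings below are illustrative paraphrases of the trial described above, not anything taken verbatim from the game.

```python
# Toy discrete-trial simulation of the three-term contingency.
# One trial: antecedent (SD) -> behavior -> consequence.

def run_trial(antecedent, response, time_remaining):
    """Deliver the consequence for one discrete trial.

    antecedent is the SD that occasions the trial (here, the song);
    it sets the occasion but the consequence hinges on the response.
    """
    if response == "eliminate targets" and time_remaining > 0:
        return {"consequence": "reinforcement",
                "feedback": "Good. Cull the weak.",       # praise
                "next_interval": time_remaining + 10}      # added time
    return {"consequence": "punishment",
            "feedback": "You are weak. Start over.",       # reprimand
            "next_interval": 0}                            # reset

ok = run_trial("'Only You' plays", "eliminate targets", time_remaining=5)
print(ok["consequence"], "->", ok["feedback"])
bad = run_trial("'Only You' plays", "freeze", time_remaining=5)
print(bad["consequence"], "->", bad["feedback"])
```

Run in a loop, trials like this are exactly how a repertoire gets shaped: the same SD, the same response requirement, and a consequence contingent on performance.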

In other words, Jacob is shaping repertoires. He’s not just pairing stimuli. He is creating a series of trained responses, operants if you will, to be completed in the presence of his conditioned stimuli in a way that he controls. These are the fundamental ingredients of all learning, but he has twisted them a little to make this heroic character fall right into a trap of uncontrollable lapses in judgment and cruel ways of responding that are uncharacteristic, or that perhaps were a part of the character from the start. Chilling, right? But like a rat in a maze, or a box, the character must follow these contingencies in order to progress. Press the lever, get the cheese. Shoot the opponents, get the praise and the progress.


Meta Game Talk: Conditioning The Players

Let’s talk a little about the big picture here. Yes, Jacob is fictional. Yes, this heroic character is fictional too. But when we look at the game through the lens of how it works on player reinforcement and punishment, we can actually see ourselves in this box. We are also conditioned, if we choose to play the game and continue to play it, in a way that shapes and sharpens our behavioral repertoires. The same Discrete Trial Training that Jacob puts our character through, we are also participating in, and we contact that same reinforcement and punishment as though it were our own (broadly speaking). We want to succeed. We want to continue. We want to win.

So, we get faster. We get more accurate. We learn the patterns. “This is why we train,” as Jacob says so many times during these repeated trials. Each time, he gives us a little more of a challenge. Each time, he progresses us to different response repertoires to enact on the challenges in our way. It’s fun. In some ways, it’s a representation of the game as a whole. There are many reinforcers out there to get. Many contingencies to engage with. Even multiple endings (that’s the part that got me playing it twice).

I learned to shoot through both enemies in the revolver scene from the left. I learned to take the submachine gun in the next room and work from low to high, right, center, to left. For the shotgun, I turned corners with two lefts and one right at head level and tapped at the first sign of movement. For the rifle, I stayed low and aimed in short bursts, leading a clear line through the middle, and for the LMG… well, let’s not give it all away just yet. Your repertoires need honing too, and there are many variations that work.

That’s the fun.


The Behaviorist’s Take:

5/5 Stars for me. This game has been a joy to relax with. It’s challenging, but still can be taken in small parts and missions as time allows. It’s not too much of a time sink for someone on a professional schedule, and not too much of a learning curve for putting half an hour a day in. The story is strong, the emotional bond between the heroic character and the sympathetic (and often funny) people they meet is also a great time. They even let you make your own custom levels and challenges for your fellow players in an Arcade mode. I dig it.

As I mentioned above, this will be my game for the Extra Life 2018 charity event taking place the first week of November. I am, believe it or not, the weakest player on my team, but I love talking behaviorism and psychology and will be doing it all day to support the locals in Philadelphia, raising charity funds for the Children’s Hospital of Philadelphia (CHOP). I’m not only an outside fan of their great work with children; I often have direct contact with the children’s hospital in my day-to-day work with young populations and can’t speak highly enough about their commitment. Extra Life is a legitimate charity, and 100% of the funds go directly to the children’s hospital. I’m leaving my link below and will be overjoyed if readers could contribute in some part to my goal so I can hold my head up high this year. Any amount at all. I’ll be streaming and will be happy to respond to any comments. Have ideas that I missed? I love those. Send those too.

Extra Life Donation Link

Comments? Like? Questions? Leave them below!

References:

Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis. Upper Saddle River, NJ: Pearson/Merrill-Prentice Hall.

Far Cry 5 [Software]. (2018). Ubisoft Montreal, Ubisoft Toronto.

Image Credits:

Christian Sawyer, M.Ed., BCBA (original Photography/Screenshots)

Steam. http://www.steam.com- Far Cry 5 Logo

Why I Leave My Political Hat At Home


Opinion piece time. I leave my political hat at home. Or at least I try to. I save my belief systems about policy and voting for conversations with friends, Twitter (if I can’t help myself), and the local networking events where politicians from town hang out; that way it’s contextual. I’m friends with the local school board. I’m on a first-name basis with the mayor of my town. I catch up and chat with the local councilmembers. I have a political life that is just as strong as my professional life. It’s not easy to split the two. More often than not, deliberating on a choice at work does touch on several pieces of what makes my moral compass orient the way it does. I believe in compassion. I am a behavior analyst; it’s from the behaviorist tradition. It is observational, data-driven, research-based. I don’t allow personal opinion to impact decisions with clients. Thankfully, data does that for me. Is this effective? Yes or no. Why? Well, the data suggests…

I can’t just put a phase change line on a client’s progress graph because my opinion about a far-reaching political event somehow relates. It’s unfair. It’s my lens getting shifted, which impacts more than me if it’s not reined in. The clients are individuals, deserving of individual care. Outside of that, it also means that I have people working with that client who report to me: RBTs (Registered Behavior Technicians). They worked hard to get that credential. They’ve passed their tests and gone through their supervised hours. They are professionals. Would it be fair for me to walk into work with a political or ideological idea in my head and try to press it on them? Of course not. That’s not their job. Their responsibility is to the client, based on the real-world observable responses and data they see and collect. They depend on my unclouded experience and judgment. Even if they were to be outspoken about a political view (which happens), I can’t let that color my opinion of them or how I treat their judgment. It could. It easily could. But that’s my professional line drawn in the sand.

Here’s a common counter I’ve heard: Things are getting bad here. We need to speak out. We need to take a political stance in our personal and professional lives.

If it involves the vaccine pseudoscience? I’ll bite. I can justify that because the evidence is there and it relates to my work.

But here’s the pickle. The people who bring up that counterargument assume something. They assume that because we share a job title, do the same thing, and care about the same pursuits, we hold the same political opinions, and that I’d be an addition to their circle. When those political views have already been expressed, I can be pretty sure whether I agree or not. It’s a mixed bag, but surprisingly to some, I don’t share the expected viewpoints. Were they looking for differing viewpoints? I can’t be sure, but it doesn’t feel like it. Is it worth turning a workplace contentious? Is the workplace the place, and the time, to deal with these issues?

“But Chris, surely you don’t support _____.”
“You work with kids though. How could you ____?”
“If you’re not ____ then you’re ____.”
“_____ did something terrible. You can’t support ____ could you?”

I have nuanced viewpoints. They don’t follow a single ideology, or politician. That potentially makes it even worse. My political stance might not align with anyone who is unipolar in their support or views. The world is a big place. The United States is a big place. Pennsylvania is a big place. There are a lot of different people with valid but different views. In my personal life, I can vote with my conscience. I can even refuse to vote if it aligns with my conscience. I can protest who I want to protest. I can talk to local politicians from both parties. I can talk with local third-party candidates. I’m outspoken on education in these settings and with these people. But they don’t report to me. They aren’t my professional peers either. It’s the context that makes sense to me. If I meet someone from work, off the clock, and they want to talk about these issues; then I would be perfectly fine putting my thoughts out there. Discuss. Change my mind. Sure. I’d have to draw a line somewhere though. It can’t get heated. Even the small stuff would have to be calm and rational and most importantly; wouldn’t be evident at work the next day.

In my profession as a Board Certified Behavior Analyst, the board (BACB) that governs how supervisors treat supervisees is pretty clear in many respects. Dual relationships, abuses of power, conflicts of interest: they all have some clear delineation. Politics isn’t mentioned specifically, but imagine a case where an outspoken supervisor espoused their views and acted on perceived implications of those views at work. Would that affect the people directly reporting to them? How sure could we be that it wouldn’t? I stepped into work on November 9th, 2016. I felt it. Whatever it was, it was there. Putting that into the supervisory relationship is a dangerous game, in my opinion. I’m not saying other people can’t do it, but it’s not something I’d feel comfortable with, given the potential to go bitter.

I believe that if something needs changing, it can be done with every opportunity that a citizen has. That goes for maintaining a high held value or traditional ideal. People are free to do both. Bringing that explicitly to the workplace, with a position of influence and supervision responsibility, has risks. I’d much prefer to leave that particular hat at home.

 

References:
Just me.

Photo Credits: http://www.pexels.com

Tabletop Roleplaying with a Behavior Analyst


There is a vast array of opinions on role-playing games. Stereotypes about them are prevalent in the popular culture of movies and television shows, mainly depicting socially inept cliches rolling dice and spouting an incomprehensible language of their own. That type of depiction does get laughs, but it is also unlike anything I’ve seen in reality. I was influenced by those caricatures of role-players too. For a long time I did not understand the appeal of piling into a dark basement to play a game about pretend people, where nothing really mattered and there were so many rules to learn. Where’s the fun in that? It was the wrong outlook, but the right question. There was fun in it. It just took actually trying it out to find it for myself.

Tabletop role-playing is just a form of collective storytelling. If you’ve ever seen a fictional movie and been engrossed in it, or had an idea for a novel, those are the same types of precursor behaviors to putting yourself in someone else’s shoes. There’s a fun to that: taking on a different personality for a moment and seeing a viewpoint unlike our own. If we want to get psychological about it, there might be some aspects of Adlerian play theory, or Bandura’s social learning through vicarious reinforcement, in there. The gist of it is: one person sets the stage of the story and determines the rules of how the game is played, and the players take on roles and navigate that world for a collective goal (most of the time).

If you’re the type of person who likes making materials like token boards, graphs, or craft projects- this is right in your wheelhouse too.

It’s best to start off as a player before deciding to run your own game. You get to understand group dynamics and how collective storytelling works. I was in my 20s when I first started this type of role playing. I started late. I tried a little of everything I could get invited to. Some people like settings with dragons and elves, but that’s not exactly my type of thing. I gravitated toward more realistic settings where interpersonal relationships and psychology were more grounded in humanity: fictional worlds not too different or fantastic from our own. What I learned quickly is that these games work on Skinnerian principles. Many things do, but role playing had a specific feel of reinforcement schedules that was familiar to me. The person who runs the game, sometimes called a referee, sometimes called a DM, sets the scale of what actions are reinforced and what are not.

Sometimes these are fixed schedules based on experience: points awarded that can be applied to the character’s skills and attributes to make them more proficient, or more hardy, for tackling the adventures. A measure of how much the character grows.

Sometimes these are variable-ratio schedules of items: in-game money, armor for your character, and tools they can use to tackle different obstacles. A measure of what the character has, or can spend.

The players themselves run into variability by natural consequence: every action they decide to have their character take, if it involves a specific skill or difficulty, comes with rolling a die to see if they succeed or fail.
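That success-or-fail roll is effectively a probabilistic schedule, and a quick simulation shows how the referee’s difficulty threshold sets the rate of reinforcement. The d20-style mechanic and the specific skill and difficulty numbers below are assumptions for illustration, not any particular game’s rules.

```python
import random

# d20-style skill check: success if roll + skill meets the difficulty.
# Numbers are invented; the point is that the referee's threshold
# controls how often the players contact reinforcement.

def check(skill, difficulty, rng):
    return rng.randint(1, 20) + skill >= difficulty

def success_rate(skill, difficulty, trials=10_000, seed=0):
    rng = random.Random(seed)
    wins = sum(check(skill, difficulty, rng) for _ in range(trials))
    return wins / trials

print(f"hard check (DC 18):  {success_rate(skill=2, difficulty=18):.0%}")
print(f"easy check (DC 10):  {success_rate(skill=2, difficulty=10):.0%}")
```

Lowering the difficulty from 18 to 10 roughly triples the reinforcement rate for the same skill, which is exactly the lever a referee pulls when rewarding a clever approach over a brute-force one.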

These can be run like any other Skinner box. Compound schedules appear to be the most interesting to players. A fixed ratio that can be expected: perhaps collecting something important for one of the protagonists at a decided location. Or maybe a variable ratio: deciding which foes give up which item or monetary reward for being bested. Some people run their games with combat in mind; every situation is a nail to be beaten down by a well-armed adventurer’s hammer. There’s a thrill to that kind of gameplay, but I find that it isn’t compelling enough for me. I prefer to create stories that have the opportunity for danger, but where the choice to engage in combat is sparsely reinforced and carries a greater opportunity for punishment. A live-by-the-sword, die-by-the-sword style of reinforcement schedule. There may be rewards to a quick and brutal choice, but a player can lose their character just as easily. I like using social stories in therapy to develop more adaptive skills. I use that same mindset when designing a game: why resort to violence when you can talk your way out of trouble?

Say there is a dark concrete room, dim lights, and seven enemies outnumbering and surrounding a poorly armed player group. If they choose combat, they would most likely lose. It might work. I would allow it. Let the dice roll and see if they succeed. But more often than not, a clever player can decide to roll their die in a very different way: persuasion. I set the mark much lower for that if they have the right pitch. They make a deal even the most brutal enemy couldn’t refuse. The die is rolled; they win. Now there is one enemy fewer, and one more temporary friend for the adventure. The other enemies aren’t just going to stick to their hostility: maybe they overheard that, maybe they’re swayed too, maybe this causes division in the enemy group. The player group capitalizes. They make bluff rolls. They make intimidation rolls. They make oratory rolls to back their fellow players up with a rousing speech. The tables turn, and now they’re on the side with the higher numbers, and that piece of the game is won.

That situation is harder for players to pull off. It takes more thought. More coordination. Turn taking. A minute or two to step away from the game, collect their ideas, then bring it back. I’m not trying to run a stressful table here; thinking is allowed. They devise a plan that works better than pulling a sword or pulling a trigger. I reinforce it. Experience for “defeating” an entire room: they did, after all. “Tangible reinforcers” in-game for the characters: they get a bartered deal they’d never get anywhere else if they’d been violent to these bad guys. Negative reinforcement: they avoid the aversive harm they only learn about after persuading their enemies, when they discover the enemies outmatched them in hidden weapons. The players used teamwork, not just haphazard dice throwing about blood and guts. Group bonus. More experience for everyone. Why not? They played the game their way and they played it smart. These were not just four people sitting around a table making their own random guesses for a quick and easy win; they came together with ideas that I would never have thought up for the story and won it themselves. They changed the story. Now it’s my turn to adjust my ideas to their new role-played reality.

Now…It doesn’t always play out that way. Variable reinforcement is a necessity in a game of rolling dice. So is variable punishment. Sometimes the dice roll, and there’s a failure. Or worse- a critical failure! Not only is the prize not won, or the intended action not completed; it was actually a detriment to even try. Players have crashed a car. Blown up a usually harmless household item. Set a pacifist character in the game into a fit of rage and spoiled a whole quest line. That bank vault actually had a skunk in it. It happens. It’s something like a gamble, but when the reinforcement flows heavier than the punishment, it’s all worth it. It evens out. It takes a strong story, it takes a coherent direction and narrative, but the players do all the heavy lifting. They think. They plan. They roll the dice. Everyone has a great time.
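The dice mechanics above are themselves a variable schedule of reinforcement and punishment. As a toy illustration (the d20, the natural-1/natural-20 critical rules, and the difficulty values below are conventions I’m assuming for the sketch, not any particular game’s actual rulebook), a few lines of Python show how a skill check against a difficulty class (DC) produces exactly this kind of intermittent schedule- and how lowering the mark for a clever plan means reinforcement contacts the behavior far more often:

```python
import random

def roll_check(dc, die=20, rng=random):
    """Roll one die against a difficulty class (DC).

    A natural 1 is a critical failure, a natural max is a critical success;
    otherwise the roll simply succeeds or fails against the DC.
    """
    roll = rng.randint(1, die)
    if roll == 1:
        return "critical_failure"
    if roll == die:
        return "critical_success"
    return "success" if roll >= dc else "failure"

def tally(dc, trials=10000, seed=42):
    """Tally outcome frequencies across many checks to see the variable schedule."""
    rng = random.Random(seed)
    counts = {"critical_success": 0, "success": 0,
              "failure": 0, "critical_failure": 0}
    for _ in range(trials):
        counts[roll_check(dc, rng=rng)] += 1
    return counts

# Persuasion "with the right pitch" gets a lower DC than raw combat,
# so reinforcement flows heavier than punishment on that schedule.
print(tally(dc=5))   # low DC: reinforcement contacts the behavior often
print(tally(dc=18))  # high DC: a much leaner, riskier schedule
```

The point of the sketch is only that the game master controls the schedule by setting the DC: the dice supply the variability, and the DC decides whether clever play is mostly reinforced or mostly punished.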

You get to see patterns in that. Make it more challenging the next time. More engaging. Take the next story point in a way that you’d never have thought of before.

Let’s not forget that even when the game is done, there’s a friendship there now. People got to know each other a little better. They got to see the people they talk to in a different light, more creative, more inventive. Sometimes some playful rivalries come out of it. There’s also a community out there with shared experiences that goes beyond individual play groups and tables. Thousands of other people playing the same game their way. I personally love the community. I have ideas about how to run the game, and I run them by others who play the same game but have done it better than me. I adapt. I improve. Sometimes, I even have an idea about how psychosis works in this imaginary world, and reach out to the internet with an interpretation of new rules… and the creator of the game itself (Maximum Mike Pondsmith) replies.


Talk about fun. Talk about reinforcement. I’ve learned never to underestimate what a good tabletop roleplaying game can be, or what it can bring to an otherwise ordinary afternoon. If you’ve never tried one? It’s never too late. Groups are out there for every age, every time commitment, and every skill level. Give it a shot. You might just like it.

 

Questions? Comments? Likes? Leave them below.

 


Remembering the Pre-Aversive Stimulus


There are some terms and concepts from behavioral psychology’s past that have found themselves buried in time. Tucked away in a journal here or there, but largely forgotten. Take the older research that tracked rates of behavior following “noxious stimuli”, for example- a phrase we don’t use anymore. Time has also changed the fascination with respondent conditioning and the idea that just two (or more) paired stimuli somewhere along the line could change responding for a lifetime. Powerful principles, which with progress now seem so mundane. Somewhere in there, we have the pre-aversive stimulus.

The pre-aversive stimulus played a large role in early behavioral science animal research describing patterns of responding, but the concept applies just as easily to humans. A pre-aversive stimulus, simply put, is a stimulus that reliably precedes an aversive stimulus. Have you ever heard the term avoidance responding? Some people in the field may call it “escape-maintained behavior”, but it is effectively just that: engaging in behavior (responding) to avoid a stimulus that was aversive in the past. Running away. Getting away. Dodging it. What signals that, then? The pre-aversive stimulus. It goes even further. Through respondent conditioning alone, the pre-aversive stimulus can take on features of the aversive stimulus and become a conditioned aversive stimulus itself. Then another pre-aversive stimulus could reliably precede that one, and with enough second-order conditioning you could get messy (over)generalization and find all sorts of related stimuli turning aversive. Generalized Anxiety Disorder theoretically works on this same principle. It’s not hard to see how this kind of thing can tangle up a person’s life- whether they are able to realize it and vocalize it or not.

 


Wait! Isn’t a pre-aversive stimulus just a kind of SD?

Let’s not jump to any conclusions and mistake a pre-aversive stimulus for an SD just yet. They have some things in common. They’re both stimuli (but so is almost everything else). They can both be considered antecedent stimuli when we look at the framework of the avoidance responding that sometimes follows them. They signal something. All good comparisons- but here’s a big distinction if you don’t remember: A discriminative stimulus (SD) signals reinforcer availability for a specific type of response.

The pre-aversive stimulus does not necessarily have to.

In some situations, you could conceptualize a case for negatively reinforced behavior, but that might muddy the definitions of both terms being used concurrently. They speak to different phenomena even though they could describe one particular stimulus. The big difference is that the cue for available reinforcement is not necessary for a pre-aversive stimulus. It is simply a stimulus that has commonly preceded something aversive, or bad.

Example: An individual has been stung by a wasp before. Maybe several times if they were unlucky. Prior to the stinging, they heard the buzzing around a wasp nest.

That buzzing could likely become a pre-aversive stimulus, and through respondent conditioning, a conditioned aversive stimulus itself in the future.

In the research, pre-aversive stimuli tended to evoke “anxiety” in respondents, which was quasi-operationalized as the conditioned emotional response (CER), also called conditioned suppression. That’s an important distinction to keep in mind. Here, a pre-aversive stimulus appears to suppress or decrease responding- not signal reinforcement for a response like an SD would.

Like freezing near a wasp nest when buzzing is heard. The usual comfortable walking pace (response) is suppressed in the presence of the buzzing sound (pre-aversive antecedent stimulus).
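Conditioned suppression was actually quantified in that older research with a simple suppression ratio, B / (A + B), where B is the number of responses during the pre-aversive stimulus and A is the number of responses in an equal-length baseline window just before it. A ratio of 0.5 means no change; 0.0 means responding stopped completely. As a small sketch (the wasp-nest numbers below are made up for illustration):

```python
def suppression_ratio(responses_during_cs, responses_pre_cs):
    """Suppression ratio from the conditioned suppression literature: B / (A + B).

    B = responses while the pre-aversive stimulus is present,
    A = responses in an equal-length baseline window just before it.
    0.5 means no suppression; 0.0 means responding stopped completely.
    """
    total = responses_during_cs + responses_pre_cs
    if total == 0:
        return 0.5  # no responding in either window: no measurable change
    return responses_during_cs / total

# Hypothetical numbers: 40 "walking" responses at baseline, dropping to 5
# once the buzzing (pre-aversive stimulus) is heard.
print(suppression_ratio(5, 40))   # strong conditioned suppression (near 0)
print(suppression_ratio(40, 40))  # no suppression (0.5)
```

The appeal of the ratio is that it measures the pre-aversive stimulus by what it takes away from ongoing behavior, rather than by any new response it signals- which is exactly the distinction from an SD.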

 


Anxiety! Conditioned Emotional Responses! Conditioned Suppression!

Respondent conditioning research has some fascinating lessons that are just as relevant today as they were decades ago. Sometimes in the day to day practice of behavior analysis- things get oversimplified for the sake of ease of practice.

Behavior goes up? Reinforcement is at work.

Behavior goes down? Punishment is at work.

To a degree, those definitions work. Even with our wasp nest example earlier, those initial stings could absolutely punish some future walking behavior. But we can’t forget about the little things- the little preceding stimuli that have so much to do with the actual phenomenon. The buzzing didn’t punish the walking. Don’t forget the antecedents. Don’t forget the respondent conditioning. Taking the time to examine just one more step explains the process so much more clearly.

What conditioned pre-aversive stimuli appear to evoke conditioned emotional responses in your day to day life? Do you see conditioned suppression of behavior, as a result, that would have otherwise been there? What pre-aversive stimuli could be “tagging on” to the effects of an aversive stimulus you’re aware of? Does it evoke any avoidance behavior?

Too simple? Laurence Miller’s (1969) work on compounding pre-aversive stimuli might whet your broader research appetite. Citation below.

Thoughts? Comment! Question! Like!

 

References:

Coleman, D. A., Hemmes, N. S., & Brown, B. L. (1986). Relative durations of conditioned stimulus and intertrial interval in conditioned suppression. Journal of the Experimental Analysis of Behavior, 46(1), 51-66. doi:10.1901/jeab.1986.46-51
Cooper, J. O., Heron, T. E., & Heward, W. L. (2018). Applied behavior analysis. S.l.: Pearson.
Miller, L. (1969). Compounding of pre-aversive stimuli. Journal of the Experimental Analysis of Behavior, 12(2), 293-299. doi:10.1901/jeab.1969.12-293
Ormrod, J. E. (2016). Human learning. Harlow, Essex, England: Pearson.
Image Credits:
http://www.pexels.com, photographer Hubert Mousseigne

A Dad’s Role in ABA Therapy


Don’t let the title fool you into thinking of this as a division. A father’s role in therapy is the same as a mother’s role in therapy, or any guardian’s role in therapy. Responsibility, respect, love, and contribution. That should be a given. But it’s not always treated that way.

A recent intake for a client stuck with me. In this intake we were discussing the child’s prior ABA services: how parent training was done, how programs were generalized, and what seemed to fit best with their prior therapy experiences. It’s good to get an idea of these things. Parent participation is important in therapy. Incalculably important. In this particular intake, the father mentioned that their prior BCBA tended to discard his suggestions on targets, or socially relevant behavior goals. This caused a second or two of awkward pause, where the mother jumped in with a humorous aside about how the BCBA got along much better with her. The thing is, you could see that the way the professional handled that situation limited the father’s future enthusiasm to engage with the process. Some people could easily mistake that for the “Dad being distant” cliche, and everything continues as those expectations play out. The problem is, we had a parent interested in a process, who had a voice, and that voice was silenced (ignored) and guided to a false consensus.

There are sometimes these unspoken things, or expectations, in parent roles. Some are traditional things that stick around; some are just artifacts of a bygone era that do more harm than good. Rooting those kinds of things out and making more functional alternatives tends to help the whole process along, relationship-wise and responsibility-wise, and makes everyone altogether wiser about how they’re behaving and what the expectations are for how therapy will work. Parenting is sometimes rule-governed, after all. In therapy, professionals like BCBAs can sometimes make unspoken rules with unintended consequences. Inferences here. Ignoring something there. The feeling I was getting from the situation above was that there had not been equal input in the last experience with ABA therapy. So, with a little back-stepping to basics, I wrote down all the suggestions both parents had for goals, and the funny thing was, Dad said more, and Mom was surprised. We all learned something. It sounds like a small thing, but imagine what a trend like this could have become long term.

I suggest some very simple ground rules, which should be very obvious:

A client’s mother can have great ideas about therapy goals.

A client’s father can have great ideas about therapy goals.

Any other suitable guardian can have great ideas about therapy goals.

The client themselves can have great ideas about therapy goals.

 

Sometimes these suggestions don’t make sense to us as professionals, sometimes they aren’t age appropriate, sometimes they don’t fit current skill levels, but we don’t just ignore them and silence the people who are invested in the client’s well-being and growth. The whole point here is that there should not be this great distinction between what the Mom can contribute and what the Dad can contribute. Once we assume one has better ideas, or more time, or more commitment, we do a disservice. Situations may play a role in what happens in actual practice, but those are going to be based on actualities, not preconceptions. Preconceptions acted on as though they are observations are not behavior analytic.

Now, there also may be things we notice between male parents and female parents that are a little different. Sometimes these things are stereotypical. Sometimes the interests follow the general expectations we see play out in daily life. We need to make sure we don’t assume too much from these. Treat every situation as though you will be proven wrong. Treat every situation as though you will learn something. Assuming too much is where we always get it wrong. Overlooking things is not scientific.

Data Point of One (Personal Experience Talking)- On a case, I once had a father who had a different viewpoint on some social goals. There were situations where the current social goals put the client in what the father called a “weak position” relative to their peers, based on some peer interactions that had gone a bad route. At face value, we could either say “NO! The client is expressing themselves! That’s good! What happened wasn’t their fault! Get out of here with that victim blaming!” or we could take a minute and understand the meaning and sentiment behind that worry. The client could be taken advantage of. Social hierarchies exist. Kids take advantage of other kids. Kids hurt other kids. The specific operant behaviors we were teaching here might actually have been reinforcing aggressive or hurtful verbal behavior from peers. It’s possible. We should probably take a look. Behavior does not occur in a vacuum. It ended up being more complicated than that, but the analysis was warranted. It helped.


Both parents can contribute. No matter the gender, no matter the outlook, if you find a parent who cares about their child enough to attend the meetings, put the time into the trainings, and stay enthusiastic about transferring and generalizing skills, you’ll find someone whose contribution to growth and progress cannot be overestimated. The more hands on deck for getting the client the skills, the better. We want more people on our team. We want more people showing love to the client to get them where they can thrive. A large support structure that loves and cares for an individual can make all the difference. We as professionals don’t get to decide who gets a voice and who doesn’t. That’s the lesson.

 

Comments? Questions? Thoughts? Leave them below.

 

Photos: http://www.pexels.com

Behavior Analysis and Personality Psychology


Applied Behavior Analysis and Personality Psychology at first glance have very little in common. Applied Behavior Analysis (ABA) comes from the behaviorist tradition of the purely observable, and Personality Psychology features variables that are often seen within the individual and outside of direct measurement. As time moves on in the field of psychology, and the behavioral fields specifically, there is a call for greater breadth and understanding from practitioners across more than one domain. Behaviorism as a field of psychology is alive and well, but sometimes practitioners can pigeonhole themselves (pardon the pun) into the strict traditionalist ideas of the early 20th century, leaving the cognitive revolution and relevant psychological progress aside.

Few people realize that this is not too large a gulf to bridge.

The topic of personality and temperament in individuals was touched on by B.F Skinner himself in “Science and Human Behavior” (1953) and “Beyond Freedom and Dignity” (1971), but as many would suspect, the meaning of the word personality was operationalized into a series of observable concepts such as “response tendencies”. These tendencies of responding were used to explain how individuals varied in their sensitivity to stimuli. It stands to reason that everyone has come across another individual who was not impacted by a stimulus in the same way as themselves. This is a basic part of humanity. This is the reason we need to clinically perform preference assessments. Individual differences occur regardless of standardized stimuli. No matter how precisely we form a potential reinforcer, no matter the amount or intensity, no matter how carefully a schedule is arranged, one person may respond differently to it than another. And that is not even counting motivating operation factors like deprivation and satiation. Sometimes people are affected by different things in different ways, and they respond to different things in different ways.

Personality Psychology concerns itself with these individual differences. It is a field interested in the unique differences in how individuals think, behave, and feel. Personality Psychology studies traits or factors based on the similarities and differences of individuals. Some models feature traits such as Extraversion, Neuroticism, and Psychoticism (Eysenck Personality Inventory); others Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism (the Big Five); others still add Honesty and Humility (HEXACO). Although there are many different theories on how these personality traits are formed, measured, and used predictively, they all aim to explain something that strict observation of antecedent or consequence stimuli appears to miss. Behaviorists and practitioners of Applied Behavior Analysis may look at these things and pump their brakes. After all, it seems like a challenge to align the methods found in Personality Psychology with the dimensions of behavior analysis that Baer et al. constructed in 1968. How does personality fit into a strictly behavioral framework? What about making a personality framework conceptually systematic? Could an experimenter even demonstrate control in a way that is analytic? Baer, Wolf, and Risley themselves said that self-reported verbal behavior could not be accepted as measurable unless it was substantiated independently. How do we do it, then?

First, we may want to take a step back and work on defining what we are looking at. Behaviorists and ABA practitioners are used to a functional analytic approach which aims to identify exactly that; functional relationships between the environment and clinically targeted behaviors. Personality Psychology, on the other hand, is a little more topographical in how traits are defined. They look at classifying traits by what they present as, how they appear, and reports of how people act, and think, with less emphasis on that environment link. One of the great researchers to bridge these two ways of studying personalities, tendencies, and behavior, was Jeffrey Gray who looked at the personality inventories and questionnaires of Hans Jürgen Eysenck, and developed a theoretical model which related these personality and temperament factors to behavioral inhibition (behaviors likely to be inhibited where cues of punishment or lack of reinforcement are found), and behavioral activation (behaviors likely to be activated in the presence of possible reinforcement or cues of no punishment). Here, personality traits of extraversion and introversion, for example, were related to dimensions of anxiety or impulsivity which could be easier to define and study behaviorally. Gray (1981) was interested in how these traits could explain “sensitivity” (higher responding) or “hypo-responsiveness” (lower responding) to punishment and reinforcement stimuli.

Would someone who was rated higher in extraversion/low-anxiety respond a certain way to social positive reinforcement?

Would someone who was rated higher in introversion/high-anxiety respond a certain way to social negative reinforcement?

These are some questions that might pique interest on both sides of the fence, Behavior Analytic and Personality Psychology alike. Take any one of the personality traits above, and you may find similar ways to study it behaviorally. The literature on this type of work is impressive. Gray’s work, which began in the 1970s, went on for over 30 years. There is a wealth of literature on his theoretical models: the Behavioral Inhibition System (BIS), which relates factors that reduce responding, and the Behavioral Activation System (BAS), which relates factors that increase response activation, both from Gray’s work in 1981. In 2000, Gray & McNaughton presented a third theoretical system, the FFFS (fight-flight-freeze system), to explain responses to unconditioned aversive stimuli in which emotionally regulated states of “fear and panic” play a role in defensive aggression or avoidance behaviors. These took neuropsychology into account and went even further, suggesting links to conflict avoidance in day-to-day human life. The literature on this is absolutely fascinating in how it brings behavior analytic concepts into a new arena.
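One way to see how reinforcement sensitivity could bridge the two fields is to treat BAS and BIS sensitivities as individual-difference parameters in a behavioral account. The sketch below is emphatically not Gray’s formal model- the function name, the additive rule, and the sensitivity values are all hypothetical, made up only to illustrate the idea that the same environment can produce net activation in one “personality” and net inhibition in another:

```python
def response_strength(history, bas_sensitivity, bis_sensitivity):
    """Toy model: accumulate response strength from a history of outcomes.

    history: list of ("reinforcement", magnitude) or ("punishment", magnitude).
    bas_sensitivity scales how much reinforcement activates responding (BAS-like);
    bis_sensitivity scales how much punishment cues inhibit it (BIS-like).
    Purely illustrative; not Gray's actual mathematical formulation.
    """
    strength = 0.0
    for kind, magnitude in history:
        if kind == "reinforcement":
            strength += bas_sensitivity * magnitude
        else:
            strength -= bis_sensitivity * magnitude
    return strength

# The exact same environment, two different "temperaments":
outcomes = [("reinforcement", 1.0), ("punishment", 1.0), ("reinforcement", 1.0)]
print(response_strength(outcomes, bas_sensitivity=1.2, bis_sensitivity=0.5))  # net activation
print(response_strength(outcomes, bas_sensitivity=0.5, bis_sensitivity=1.2))  # net inhibition
```

Holding the contingencies constant and varying only the sensitivity parameters is the conceptual move Gray’s framework invites: individual differences live in the parameters, not in the environment.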

Could we one day see Personality Psychologists talking about reinforcement and punishment sensitivity? How about Behavior Analysts considering traits when designing consequence strategies? At the very least, it’s a conversation neither field might have known to have on its own. We can only gain from stepping outside traditional boundaries and broadening our intellectual horizons.

Comments? Questions? Thoughts? Leave them below!

References:

Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1(1), 91-97.

Big Five personality traits. (2018, April 19). Retrieved from https://en.wikipedia.org/wiki/Big_Five_personality_traits
Farmer, R. F. (2005). Temperament, reward and punishment sensitivity, and clinical disorders: Implications for behavioral case formulation and therapy. International Journal of Behavioral Consultation and Therapy,1(1), 56-76. doi:10.1037/h0100735
Gray, J. A. (1981). A Critique of Eysenck’s Theory of Personality. A Model for Personality,246-276. doi:10.1007/978-3-642-67783-0_8
Gray, J. A., & McNaughton, N. (2000). The neuropsychology of anxiety: An enquiry into the functions of the septo-hippocampal system. Oxford: Oxford University Press.
Hans Eysenck. (2018, April 14). Retrieved from https://en.wikipedia.org/wiki/Hans_Eysenck

HEXACO model of personality structure. (2018, April 22). Retrieved from https://en.wikipedia.org/wiki/HEXACO_model_of_personality_structure

Skinner, B. F. (1953). Science and human behavior. New York: Macmillan.
Skinner, B. F. (1971). Beyond freedom and dignity. New York: Knopf.
Image Credits:

http://www.pexels.com

Beyond Good, Evil, Freedom, and Dignity

A comparison of concepts from B.F Skinner’s “Beyond Freedom and Dignity” and Friedrich Nietzsche’s “Beyond Good and Evil”.

 

There was something about these two books that piqued my interest, and it was not until reading them again, together, that I saw the similarities went beyond the titles. For those who have not been introduced to these individuals and their contributions: Friedrich Nietzsche was a 19th-century philosopher known for dealing with topics of existentialism and nihilism, and Burrhus Frederic (B.F) Skinner was a 20th-century psychologist and behaviorist interested in the natural science of behavior. Aside from the similarities in their names and in the titles of these two works, few parallels have been drawn between these figures. I think there is a great deal of conceptual overlap between these two books, and although the authors’ conclusions diverge sharply, their paths and their observations on the world and history are strikingly alike.

When it comes to B.F Skinner, I have long been interested in the academic and philosophic lineage of his work, and existentialist philosophers have never been a reference or topic I’ve noticed in it. Pragmatism, yes- Roy A. Moxley (2004) did an amazing piece on the influences of Charles Sanders Peirce and John Dewey on Skinner’s conceptualization of the three-term contingency and broader behavioral selectionist theory. But no Nietzsche. Not even once, as far as I could tell. It raises some questions, then, about how these two books came to be so similarly constructed. Both tackle a very similar topic, broad as it is: the actions of people and their morality (which, in my estimation, comes very close to dignity in Skinner’s usage). Both start with Western history and philosophy and even reference the same ancient Greek precepts as foundations to build their arguments and points from. Both lead up to their current history and take their contemporary issues into account when presenting their philosophical conclusions. I am not a professional book reviewer or a literary scholar, so this kind of literature exploration is outside of my wheelhouse, but I would like to lay out some pieces from both of these works to open the door comparatively. Both of these authors picked the right word: “Beyond”. Both works present a series of presuppositions of their contemporary times and aim to progress past them rationally.

Skinner and Nietzsche: The Problems of Their Times

Context is important when reading and interpreting both of these authors. They were both big thinkers. Brilliant. Both wildly controversial. That tends to mean they had opinions, unpopular ones, but ones that they put out into the world rigorously supported by the assertions in their work.

Nietzsche was born in 1844, in Germany, and served in the Franco-Prussian War, where he suffered injuries and illness from which he never fully recovered. “Beyond Good and Evil” was written after that. After the war, he wrote on the contemporary topics he believed were essential to human progress and critiqued entrenched falsehoods that he believed were subverting people’s potential and lives. Morality was a big subject for him. Unlike other existentialist philosophers of his time, he was not tentative or uncertain about it. He proposed that morality was separate from the Western religious belief systems and structures entrenched in society, and believed that the will had the power to transcend these societal limitations. Traditional morality (societal and religious), to him, was making people weak. They needed to improve themselves, with their own morality and their own will, to be strong. In “Beyond Good and Evil” (1886), Nietzsche suggests that the words “Good” and “Evil” were malleable concepts that change over time, not fixed ones. Fear, he proposed, was a motivator for morality, and it was a mistake to believe that “mass morality”, the moral beliefs of groups and society, had any higher importance than an individual’s personal morality. Hold onto that thought.

Skinner’s work in “Beyond Freedom and Dignity” (1972) came from a very different time historically. In the 1970s, the Cold War raised the probability of worldwide escalation and catastrophe. In the first chapter alone, Skinner broached the topics of overpopulation, global starvation, nuclear war, and disease. Skinner did some philosophical work himself, but his main focus was as a psychologist and behaviorist: treating psychology as a natural science, seeing human behavior as measurable and observable, and aiming scientific pursuit toward a “technology of behavior” that could solve the problems of our time. In many ways it was a utopian idea, and he expands on that vision in his fictional work “Walden Two”. Engineering society with this science was within humanity’s grasp. Skinner looked broadly at the ills of the world and believed there were pieces of cultural and societal misunderstanding holding it back. Like Nietzsche, his observations strayed away from metaphysical interpretation. Skinner believed that natural sciences like physics and biology had made leaps that psychology had not. People were still hung up on antiquated interpretations of human behavior. To Skinner, it was the environment and the history of reinforcement and punishment that could be used to describe human action. He believed that mentalistic concepts such as “inner capacities” were circular and led to no useful distinction of any phenomenon or process that could benefit scientific discovery. Human behavior could be shaped by the environment, and act on the environment as an operant. His work aimed to remove the ideas of absolute human freedom, and of dignity in the sense of the human being as the “fully autonomous man”; these were not practical representations of human behavior to Skinner. Full autonomy, free choice with no input from the environment, was nonsensical to him, which raised the question of how free will was actually free when it was under the control of environmental stimuli to begin with. Conceptualizing human behavior under the contingencies Skinner proposed, including reinforcement and punishment, removes those antiquated, pre-scientific distinctions, and by removing them, people would no longer be under false illusions and could take control of their behavior.

 

Where They Come Together, and Where They Differ

Both Nietzsche’s and Skinner’s lines of thought begin in a disagreement with contemporary society’s broader idea of humanity. For Nietzsche, it was a societal and religious misunderstanding of morality. For Skinner, it was a societal and historical, pre-scientific misunderstanding of human behavior. Both “Beyond Good and Evil” and “Beyond Freedom and Dignity” touch on similar points by their end: human behavior and morality. Both authors hit the same nail in two very different ways, using historical context, along with the interpretations and findings of their own work and lives, to do so. There are some interesting divergences too, mainly on the topic of science and empirical materialism. B.F Skinner was very much interested in the material world and observable findings, concerns which Nietzsche also had to grapple with nearly 100 years prior. In Nietzsche’s time, the late 19th century, these concepts were still budding; rational observation of the world was gaining ground, and the field of psychology was only just emerging. He describes some of his ideas on the topic of science and the metaphysical soul in “Beyond Good and Evil”:

“Between ourselves, it is not at all necessary to get rid of “the soul” thereby, and thus renounce one of the oldest and most venerated hypotheses—as happens frequently to the clumsiness of naturalists, who can hardly touch on the soul without immediately losing it. But the way is open for new acceptations and refinements of the soul-hypothesis; and such conceptions as “mortal soul,” and “soul of subjective multiplicity,” and “soul as social structure of the instincts and passions,” want henceforth to have legitimate rights in science. In that the NEW psychologist is about to put an end to the superstitions which have hitherto flourished with almost tropical luxuriance around the idea of the soul, he is really, as it were, thrusting himself into a new desert and a new distrust—it is possible that the older psychologists had a merrier and more comfortable time of it; eventually, however, he finds that precisely thereby he is also condemned to INVENT—and, who knows? perhaps to DISCOVER the new.

Psychologists should bethink themselves before putting down the instinct of self-preservation as the cardinal instinct of an organic being. A living thing seeks above all to DISCHARGE its strength—life itself is WILL TO POWER; self-preservation is only one of the indirect and most frequent RESULTS thereof. “- Nietzsche (1886)

You can see here that Nietzsche is still strongly proposing that even in the realm of science, psychology, and the soul, willpower is an overlooked and undeniably important factor. I do find an interesting subpoint in there, in the process of invention and discovery by new psychologists, who nearly a century later would include Skinner himself. Although Nietzsche was strongly against the idea of science reducing everything to material reality, and I believe would take strong opposition to Skinner’s treatment of mentalistic representations of “soul” and morality, there is a great deal they share in their ways of tackling the broader problems of their time, and in their interpretations of humanity as open to the future and unfixed. Humanity, to them, was not something that is and always will be the same. For very different reasons, Skinner and Nietzsche held a strange optimism about humanity, in the wide-open possibility of what either willpower, for Nietzsche, or contingencies, for Skinner, could do for humanity as a whole.

B. F. Skinner took a look at human morality himself in “Beyond Freedom and Dignity” when exploring the concept of cultural control: behavioral control arising from the contingencies of a broader group, including cultural, rule-governed behavior. He walked the line between cultural and biological evolution, each affecting the other to form a morality that was, in a sense, “created” by evolution and by sensitivity to cultural factors of control. Biological evolution makes us sensitive to the evolution of cultural contingencies. It’s a point that packs a punch.

“The practical question, which we have already considered, is how remote consequences can be made effective. Without help a person acquires very little moral or ethical behaviour under either natural or social contingencies. The group supplies supporting contingencies when it describes its practices in codes or rules which tell the individual how to behave and when it enforces those rules with supplementary contingencies. Maxims, proverbs, and other forms of folk wisdom give a person reasons for obeying rules. Governments and religions formulate the contingencies they maintain somewhat more explicitly, and education imparts rules which make it possible to satisfy both natural and social contingencies without being directly exposed to them.

This is all part of the social environment called a culture, and the main effect, as we have seen, is to bring the individual under the control of the remoter consequences of his behaviour. The effect has had survival value in the process of cultural evolution, since practices evolve because those who practise them are as a result better off. There is a kind of natural morality in both biological and cultural evolution. Biological evolution has made the human species more sensitive to its environment and more skilful in dealing with it. Cultural evolution was made possible by biological evolution, and it has brought the human organism under a much more sweeping control of the environment.”-Skinner (1972)

Two very different views, both rejecting the common cultural interpretations and frameworks for psychology, human behavior, and morality, yet leaving a wide berth for future change that is, in a sense, within humanity’s realm of control. I found those two shades of interpretation incredibly interesting, especially on morality. Remember that Nietzsche was well aware of the impact of “group morality” and warned against placing it above the individual’s morality. Skinner also nods to group forms of morality and seems to believe we are uniquely and biologically sensitive to them. I would love to have heard a conversation between the two of them on that. And this is just the tip of the iceberg. I suggest that anyone whose interest is piqued read both works and come to conclusions of their own.

By Christian Sawyer, M.Ed., BCBA

Thoughts? Comments? Questions? Leave them below!

 

References:

Moxley, R. A. (2004). Pragmatic selectionism: The philosophy of behavior analysis. The Behavior Analyst Today, 5(1), 108-125.

Nietzsche, F. W. (2007). Beyond good and evil. Filiquarian Publishing. (Original work published 1886)

Ozmon, H. (2012). Philosophical foundations of education. Upper Saddle River, NJ: Pearson.

Skinner, B. F. (1971). Beyond freedom and dignity. New York: Knopf.

 

Image Credits: Wikipedia

Overcoming the Fear of Failure


This is a topic I see very often in clinical practice. Not only that, but it affects everyone at some point in their lives. When I am working on skills with my clients who are able to vocalize and express these fears, I see a pattern common to everyone who has ever encountered something new. In Applied Behavior Analytic research, we sometimes operationalize this phenomenon as “aversion”, or the “presentation of an aversive novel stimulus”. Whatever we call it, it is the same thing: engaging with something new and uncomfortable in a goal-directed way is a challenge that we have to confront. Clinically, I prefer to have individuals guide their own process and become aware of their own specific aversions and behaviors. It makes the practice of confronting these stimuli as self-initiated and self-guided as possible.

I prefer the word confront because it has a better ring to it than “desensitization”. When it comes to coming face to face with a stimulus or situation where we have to either perform or adapt, confront just seems to carry the operant theme better than the passive “desensitizing”. Failure is a scary and aversive thing. We can define it as a condition where our operant behaviors are unsuccessful: efforts which are not reinforced. It’s perfectly natural to want to avoid a contingency with no reinforcement. When we face something we are afraid of, or a new situation where we are not sure we can succeed, we are facing that fear of failure. Maybe it is a fear of not being able to complete a required activity, or of putting yourself out there socially and not being received amiably. There is something universally human in that kind of hesitation. In ABA we call the avoidance that follows an “escape-maintained” behavior, and when the behavior serves no real purpose to protect us, it tends to hold us back. When failure is the fear, we tend not to even try.

In clinical practice, be it Applied Behavior Analysis (ABA) or any other Cognitive Behavior Therapy (CBT), the advice is all the same: it takes presentation (and sometimes repeated presentation) of that stimulus in a controlled situation until the aversive situation becomes neutral. This is called controlled exposure. That is where the real progress happens. When someone meets that situation, faces it, and comes through the other side fearing it less (or finding it less aversive), it is a step in the right direction. You may also hear the term “graduated exposure”, which denotes fading in the stimulus, or related stimuli, from least to most aversive in order to acclimate in steps. A common example: someone scared of spiders would first be shown a picture from across the room, then gradually brought closer to the picture before moving on to the real thing. Habituation is the term commonly used for becoming accustomed to something, to the point where the stimulus becomes tolerable, if not neutral.

These same principles can be used when actively trying to overcome a fear of failure. Generally, we come across things that are new to us. These can be either unconditioned stimuli (things we are “naturally” fearful of) or conditioned stimuli (things we have learned to be fearful of). Public speaking in front of large groups is an example of an unconditioned stimulus (for some; it can be conditioned for others), while taking tests is a common example of a conditioned stimulus. Both present a challenge that we have to act on (engage in operant behavior) in order to be reinforced. Whether you are helping someone in clinical practice or working on yourself, you can use these same foundational principles of graduated exposure. If the situation is not reinforcing in itself, keep in mind that you can always improvise your own reinforcement (reward) in order to make adapting easier. Pairing reinforcement with challenging situations can make them less aversive through a process called conditioning. The act of applying this process to yourself is called self-management.

pexels-photo-725255

Consider these steps when trying to formulate your own graduated exposure:

  1. Find the situation which you feel is important to engage in or achieve (Target).
  2. Break it down into its smallest components (Task Analysis).
  3. Pinpoint which part, exactly, is causing the most aversion or fear (Aversive Stimulus).
  4. Document, to the best of your ability, the behaviors you engage in along the way (Data Recording/Self-Monitoring). Do these behaviors help, or do they hinder?
  5. Practice engaging with a facsimile or similar situation where the stimulus or stakes are not so high (e.g., if public speaking is the target, try practicing a speech in front of one person first).
  6. Reinforce (reward) any toleration or approximation of success! This is the most important step.
  7. Gradually shape these practice simulations to match the “real” objective as closely as possible.
  8. Do not rush it. Challenge yourself, but be mindful that this is a process, not a race.

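The self-monitoring and reinforcement steps above lend themselves to simple record-keeping. Here is a minimal, hypothetical sketch in Python of what a graduated-exposure log might look like; the `ExposureTrial` structure, the 0-10 aversion ratings, and the public-speaking entries are all invented for illustration and are not taken from any clinical protocol.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ExposureTrial:
    """One practice contact with the feared situation (step 5)."""
    step: str          # description of the graduated step attempted
    aversion: int      # self-rated discomfort, 0 (neutral) to 10 (worst)
    reinforced: bool   # did you reward toleration/approximation? (step 6)

def aversion_trend(trials: List[ExposureTrial]) -> float:
    """Change in self-rated aversion from first trial to last.

    A negative value suggests habituation may be occurring.
    """
    if len(trials) < 2:
        return 0.0
    return float(trials[-1].aversion - trials[0].aversion)

# Hypothetical log for a public-speaking target
log = [
    ExposureTrial("speech in front of 1 friend", 7, True),
    ExposureTrial("speech in front of 3 friends", 5, True),
    ExposureTrial("speech at a small meeting", 3, True),
]
print(aversion_trend(log))  # prints -4.0: aversion dropped by 4 points
```

Even a log this crude makes the "do these behaviors help, or do they hinder?" question in step 4 answerable with data rather than memory.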
Take it slow. Document everything you can. Learn. Improve. The process is where the fear of failure is overcome. It often takes more than one contact with the situation to get accustomed. I’ve used this process on myself more times than I can count. As someone who has found large exams, public speaking to crowds, public competition, and even new and unfamiliar situations aversive, I can say the end goal is always the same: the situation is worth facing because the outcome is socially important, or beneficial to us, while the aversion, or fear, is neither helpful nor adaptive. Facing these situations, and designing the process oneself, is empowering.

Self-Management is one of the greatest strategies in ABA. If someone can find a way to manage their own behavior successfully, that is the ideal situation. Self-monitoring and self-management also have the unique bonus of being able to handle what behaviorists call “covert behaviors” (thoughts, etc.). Covert behaviors are not visible to outside observers but can still be tracked and recorded by the person experiencing them. Accuracy and specificity are important here, and can vastly improve a person’s insight into their own patterns of behavior. This doesn’t have to be a one-person job, either! Even though people can monitor their own behavior, they can also bring trusted friends, family, or collaborators into the process of reinforcement to help keep them on track.

Independence, and knowledge about yourself, while overcoming a challenge.

What could be better?

 

Comments? Questions? Leave them below!

 

References:

  1. Cooper, J. O., Heron, T. E., & Heward, W. L. (1987). Applied behavior analysis. Columbus: Merrill Pub. Co.
  2. Wood, S. E., & Wood, E. G. (1996). The world of psychology. Boston: Allyn and Bacon.

Photo Credits:

  1. pexels.com Pexels Stock Photos

 

Is the intelligence barrier real for occupation training?


 

This post is more speculative, an exploration of current research rather than the tried and true ABA topics I usually expound on. I saw something that struck me this morning on Twitter: the claim that an individual with an IQ below 80 could not be trained to work functionally in society. I know for a fact this is not the case, because I’ve seen it done and worked on it myself, but I wanted to get my sources in order to confront this Tweet.

It was harder than I expected.

I wanted a single, consensus answer, but unfortunately could not find one. I think I know why, and the answer does not specifically have to do with the IQ scores of the participants; it has to do with how that training is done in relation to the population. We’ll touch on the details of that below.

I have personally worked on hundreds of Applied Behavior Analytic cases, across a broad range of ages, abilities, intelligence, and skills. I have seen more successes than plateaus. I’ve seen employment aids and training work. The process is certainly challenging, but I dislike the idea of firm impossibilities. This may explain why I first took affront at that Tweet. The research is vast, but the narrative I’ve come to understand does not allow an IQ score alone to determine a cutoff for functionality in the workplace. Not exactly. Let’s look at the research I was familiar with:

 


Rusch & Hughes (1989), in the Journal of Applied Behavior Analysis: “Overview of Supported Employment”. They used the common term “supported employment” for individuals with disabilities and were focused mainly on those individuals sustaining paid work. The paid-work part was fairly important to them, and I’d argue that maintaining paid employment is a reasonable counter to the claim that training is ineffective with the target populations. This study did explore the “place and train” model, which later studies found to be less than optimal, but it did find a measure of success: some individuals did benefit from these methods. That’s the important finding. They were able to sustain paid work in society. The terminology for intelligence scoring in this study is outdated; we use the term Intellectual Disability these days, while they used the terms “mentally retarded”, qualified as “mildly”, “moderately”, and “profoundly”. Looking up the diagnostic criteria used at the time, we can see that Rusch and Hughes had the following distribution:

Out of 1,411 individuals with disabilities sustaining paid employment, the distribution across the IQ-based categories of the time was as follows:

  • 10% of these individuals fell within IQ score ranges of 20-25 (“profoundly mentally retarded”)
  • 45% of these individuals fell within IQ score ranges of 35-55 (“moderately mentally retarded”)
  • 38% of these individuals fell within IQ score ranges of 50-70 (“mildly mentally retarded”)
  • <8% of these individuals fell within IQ score ranges of 70-80 (“borderline mentally retarded”)
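To put those percentages in perspective, here is a quick back-of-the-envelope conversion into approximate head counts. This assumes the percentages apply to the full group of 1,411 employed individuals, and it treats the “<8%” borderline figure as exactly 8%, so that last count should be read as an upper bound:

```python
# Approximate counts per category, from the percentages reported above.
total = 1411  # employed individuals in Rusch & Hughes (1989)

percent_by_category = {
    "profound (IQ 20-25)": 0.10,
    "moderate (IQ 35-55)": 0.45,
    "mild (IQ 50-70)": 0.38,
    "borderline (IQ 70-80)": 0.08,  # reported as <8%; upper bound
}

counts = {label: round(total * share)
          for label, share in percent_by_category.items()}

for label, n in counts.items():
    print(f"{label}: ~{n} of {total}")
```

The shares sum to roughly 100% (101% here, from treating “<8%” as 8%), which is why they are best read as describing the whole employed sample rather than a subset of it.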

So even with the outdated “place and train” model, this study gives us evidence that supported training can be effective, disproving the Tweet, and this was as of 1989, referencing successes from decades prior. There is a place in society for individuals with a vast range of intelligence scores. Problem solved, right?

Wait just a minute. There are some challenges in the training process that cannot be overlooked. Challenges that might just hint at why people believe that supported training does not work. We see in Rusch and Hughes the successes of certain methods for a small portion of the population. Since then, longitudinal studies have raised more questions than they have answered, and more challenges than we thought were there.

 


Conroy & Spreat (2015)- Journal of Policy and Practice in Intellectual Disabilities

Conroy and Spreat titled their study a “Longitudinal Investigation of Vocational Engagement”, and were interested in how individuals with intellectual disabilities remained employed over a 15-year period.

An important point I want to bring up first is the concept of Self-Determination: the principle that all people have the right to make choices about their own lives. An individual, no matter their situation, can make choices about their own life freely, and that includes employment. So when we speak about supported employment, we are speaking of individuals who want to work and who maintain that employment freely.

What Conroy and Spreat studied were vocational attendance and quality-of-life data. They found a similar trend in individuals receiving both residential supports and day-to-day supports:

“The overall amount of vocational, prevocational, and nonvocational activities changed sharply during the 15‐year period. Vocational and prevocational activity declined, while nonvocational engagement more than doubled, both in numbers of people and hours. During the same time period, the number of employed individuals consistently declined, as did the total number of hours worked.”- (Conroy and Spreat, 2015)

So we see a trend here where worked hours decreased over time while nonvocational engagement increased in the studied population. Why could that be? According to Conroy and Spreat, it was due to “segregated forms of vocational activity”. These individuals were not working side by side with society, as we saw with the older “place and train” method in Rusch and Hughes; they were doing workshops and prevocational activities separately. Those factors, according to Conroy and Spreat, seemed to have a large effect on the downturn in worked hours.

Again, I see a theme here. The individuals themselves had no innate limitation preventing them from working those hours, but the vocational training and workshops appeared to play a role, whether in disinterest in maintaining employment or in a poor fit between the individuals and the particular skills. That system of separating workshops and prevocational skills from inclusion with the broader population just did not seem to be effective. So, what is an alternative?


Lattimore & Parsons (2006), in a Journal of Applied Behavior Analysis article titled “Enhancing Job-Site Training of Supported Workers With Autism: A Reemphasis on Simulation”, was a great find. It had everything I was looking for. I wanted an evidence-based, reasonable solution that placed individuals in the workplace (the job site), engaging with the broader population, with a degree of success. But they identified a challenge (and a solution to it) I had not seen before: job-site training alone is sometimes insufficient for quick skill acquisition. Simulation (prevocational training, like what we saw used in Conroy and Spreat) added to the job-site supports seemed to be the key to speeding that acquisition up.

“Job-site training occurred in a small publishing company during the regular work routine, and simulation training occurred in an adult education site for people with severe disabilities. Two pairs of workers received training on two job skills; one skill was trained at the job site and the other was trained using job-site plus simulation training. Results indicated that for 3 of the 4 comparisons, job-site plus simulation training resulted in a higher level of skill or more rapid skill acquisition than did job-site-only training. Results suggested that job-site training, the assumed best practice for teaching vocational skills, is likely to be more effective if supplemented with simulation training”- (Lattimore and Parsons, 2006)

In this study, adults with severe disabilities (roughly IQ 25-40 under the IQ-based severity criteria of earlier DSM editions) were trained under two conditions: job-site-only training, and job-site training supplemented with simulation. Interestingly, both were effective, but skill acquisition was much faster when simulation (off-site training) was provided as well. This combination was a fascinating read for me, because it tied together some of the factors that the previous two studies saw as challenges.

There is a mountain of research out there, and this just scratches the surface, but this exploration did seem to reinforce my original, anecdotal belief: an IQ score alone is not a barrier, and treating it as one ignores the power of effective training and applied behavioral therapy. This is a complex problem, one I might not have been able to boil down into a single tweet, but one I am happy to see researchers working toward solutions for every day.

 

Thoughts? Comments? Leave them below.

 

Sources:

Lattimore, L. P., Parsons, M. B., & Reid, D. H. (2006). Enhancing job-site training of supported workers with autism: A reemphasis on simulation. Journal of Applied Behavior Analysis, 39(1), 91-102.

Rusch, F. R., & Hughes, C. (1989). Overview of supported employment. Journal of Applied Behavior Analysis, 22(4), 351-363. doi:10.1901/jaba.1989.22-351

Spreat, S., & Conroy, J. W. (2015). Longitudinal Investigation of Vocational Engagement. Journal of Policy and Practice in Intellectual Disabilities, 12(4), 266-271. doi:10.1111/jppi.12136

 

Image Credits: http://www.pexels.com, http://www.pixabay.com