Philosophic Doubt- When Scientific Inquiry Matters

There are important assumptions, or attitudes of science, which ground scientific study across all disciplines: Determinism, Empiricism, Experimentation, Replication, Parsimony, and Philosophic Doubt. The last one holds a key role in how we deal with the information we gain from science, and what we do with it in the future. Philosophic Doubt is the attitude of science which encourages us to continuously question the information, rules, and facts that govern our interpretation and understanding of the world. It is what has practitioners of science question the underpinnings of their beliefs, and continue to do so, so that their understanding rests on consistently verifiable information. Philosophic Doubt cuts both ways- it can have a scientist test the truthfulness of what others regard as fact, but it also means they must turn the same level of scrutiny and skepticism on their own work. To some, Philosophic Doubt is a gift that has helped them expand their ideas and shape them beyond the first experimental steps. To others, it is a detrimental form of skepticism clawing at information or beliefs that they hold dear. These views are not new; in fact, we can find traces of this disagreement going back to the 19th century. Here we will explore the assumption of Philosophic Doubt, including proponents and detractors both old and new.

Why do we need Philosophic Doubt anyway?

Philosophic Doubt is important to science because it shapes how scientific work progresses. It has scientists test their own assumptions, hypotheses, and underlying beliefs, even the ones they hold precious, against replicable evidence and future findings. Philosophic Doubt drives experimentation, and it precedes replication as well. It underlies the empirical drive to seek evidence. Without philosophic doubt, science can go wrong. A hypothesis could be formed based on inaccurate information that is never retested. Subjective experience could entrench anecdotes in a study as if they represented a broader experience than they do. A scientist could start with what they want to find and cherry-pick only what fits their assumption. These are the risks of not taking Philosophic Doubt into account. Sometimes it boils down to the scientist wanting to be right, rather than keeping an open mind that they might not be. Holding the assumption that there is a benefit to questioning findings or previously accepted beliefs is not a slight against past experience or belief, but rather a better way of interpreting future information that might challenge it. Questioning is a part of science, but not everyone thought so.

“A Defence of Philosophic Doubt”

Arthur James Balfour, a 19th-century philosopher, debater, and statesman, took this topic head on in “A Defence of Philosophic Doubt”. Unlike today, opponents of Philosophic Doubt at the time were more interested in comparing the empirically-heavy scientific beliefs to a more open metaphysical series of alternatives- that is, they were more interested in comparing science to non-scientific belief systems as accounts of the truth of reality. When it came to psychology, there were idealists, realists, and stoics at each other's throats over concepts that could not be observed or proven. As you might already see, holding metaphysical constructs up against an assumption that has us continually question our arguments makes metaphysical assertions all the harder to defend. Scientific claims, however, withstand Philosophic Doubt a little more easily:

Under common conditions, water freezes at 32 degrees Fahrenheit

Employing Philosophic Doubt, we can continually circle back to this assertion and test it again, and again. Pragmatically, there comes a point where we only question these basic and well-founded particulars when we have reason to do so, but the doubt is always present: sometimes for precision, sometimes to be sure that we are building off the knowledge correctly, and sometimes to support the replication and experimentation that grow science. Balfour was a strong proponent of the natural sciences and of this kind of questioning. Science founded on observation and experimentation was something truly important to him. Keep in mind, the 19th century was shaped by scientific discovery at a pace never before seen. Balfour kept an even head about this, and believed in the assumptions of science as the path to understanding the natural world. Propositions which stated laws, or which stated facts, had to be built on concrete science and not just personal belief or anecdote. Some of his points we would take as obvious today- for example, when using comparative probability, would we run an experiment or trial just once, or twice? Multiple times? If we ran something like this just once, it wouldn't be comparative probability at all, and if we ran it twice and accepted that as the final answer we would miss out on further replication and experimentation on the subject. The curiosity that Philosophic Doubt embodies would keep the experimentation and replication going. Without Philosophic Doubt, we fall into the trap of never questioning initial assumptions or findings.

Another interesting thing about Balfour's work is that it came at a time when there was a great deal of belief in a mechanical universe that followed strict Newtonian laws. At the time, this was set against more metaphysical alternatives. Balfour cautioned everyone to keep using philosophic doubt and to question both belief systems- even if the “mechanical universe” was winning by a landslide at the time. If we stretch Balfour's points into the future, we can see how he might have found some vindication in later developments in physics- quantum mechanics, for example, where the Newtonian mechanical universe, once seen as sufficient to explain everything, falls a little short. Without that testing of the original tenets of physics, the use of Philosophic Doubt, we might not be where we are now. The analysis of Balfour's work could go on for entire chapters, but I would like to top it off with an excerpt on the evolution of beliefs, and the reluctance to test our own personal beliefs:

“If any result of ‘observation and experiment’ is certain, this one is so- that many erroneous beliefs have existed, and do exist in the world; so that whatever causes there may be in operation by which true beliefs are promoted, they must be either limited in their operation, or be counteracted by other causes of an opposite tendency. Have we then any reason to suppose that fundamental beliefs are specially subject to these truth-producing influences, or specially exempt from causes of error? This question, I apprehend, must be answered in the negative. At first sight, indeed, it would seem as if those beliefs were specially protected from error which are the results of legitimate reasoning. But legitimate reasoning is only a protection against error if it proceeds from true premises, and it is clear that this particular protection the premises of all reasoning never can possess. Have they, then, any other? Except the tendency above mentioned, I must confess myself unable to see that they have; so that our position is this- from certain ultimate beliefs we infer that an order of things exists by which all belief, and therefore all ultimate beliefs, are produced, but according to which any particular ultimate belief must be doubtful. Now this is a position which is self-destructive.

The difficulty only arises, it may be observed, when we are considering our own beliefs. If I am considering the beliefs of some other person, there is no reason why I should regard them as anything but the result of his time and circumstances.” -Arthur James Balfour, “A Defence of Philosophic Doubt” (1879).

Back to Basics- Science and Philosophic Doubt

In “Applied Behavior Analysis”, Cooper, Heron, and Heward begin their first chapter with the basics of what science is, specifically behavioral science, and the assumptions and attitudes of science, including Philosophic Doubt. Cooper et al. consider these concepts foundational to science as a whole and relate their importance to psychology and behavioral science. In their words:

“The attitude of philosophic doubt requires the scientist to continually question the truthfulness of what is regarded as fact. Scientific knowledge must always be viewed as tentative. Scientists must constantly be willing to set aside their most cherished beliefs and findings and replace them with the knowledge derived from new discoveries.

Good scientists maintain a healthy level of skepticism. Although being skeptical of others’ research may be easy, a more difficult but critical characteristic of scientists is that they remain open to the possibility- as well as look for evidence that their own findings and expectations are wrong.” -Cooper, Heron, Heward, “Applied Behavior Analysis”, (2017).

Bonus! B.F. Skinner
“Regard no practice as immutable. Change and be ready to change again. Accept no eternal verity. Experiment.”- B.F. Skinner, 1979

The sentiment behind Philosophic Doubt and science is one of openness and humility. Not only is the scientific work we read subject to doubt, but our own as well. The latter is the most difficult part- constantly challenging our own beliefs, our most cherished propositions and reasoning. To some, this expands the horizon of future knowledge infinitely; to others, it is a hard trail to follow. In either case, perhaps this has highlighted the importance of Philosophic Doubt, and how it ties in with the other assumptions of science as a challenging but inseparable part of the process.

Comments? Thoughts? Likes? Questions? Leave them below.

References:

1. Balfour, A. J. (1921). A defence of philosophic doubt: being an essay on the foundations of belief. London: Hodder & Stoughton.

2. Cooper, J. O., Heron, T. E., & Heward, W. L. (2017). Applied behavior analysis. Hoboken, NJ: Pearson.

3. Skinner, B. F. (1953). Science and human behavior. New York: Macmillan.

Token Economies: What Behavioral Science and Blockchain Technology Have In Common

“Token Economies”- two words springing up at Blockchain and Cryptocurrency summits and conferences with increasing regularity. Token Economies have been used by behavioral scientists and practitioners for decades, but recently the term has taken off in the field of Blockchain and Cryptocurrency technologies. Both fields use the term “Token Economy” for essentially the same idea; in technology conferences and summits, it is the original behavioral psychology definition that is used to describe the concept. The tech field has taken the original token economy concept and expanded it to apply to what some might call the future of commerce and currency. Exciting stuff. Here, I will break down the basic concepts of what a Token Economy is, how behavioral scientists and analysts use them, and the new application of the idea by Blockchain and Cryptocurrency developers.


The Token Economy

Let's break it all down. What is a token economy? A token economy is a system where tokens, or symbols, are used as conditioned reinforcers which can be traded in for a variety of other reinforcers later. It is not a bartering or prize system where objects, access, or services are given directly following a target behavior, but a system in which a conditioned stimulus (the token), without necessarily any intrinsic value of its own, is agreed upon to add up toward exchanging for or buying another reinforcing item. A common example that most of us are used to is money. Paper money, specifically, can be considered part of a token economy in that it is “traded in” toward some terminal reinforcing stimulus (or “backup reinforcer”, as it is called in behavior analysis). The paper money is a conditioned reinforcer because it has no necessary intrinsic value but has conditioned value for what it can eventually be used for within the token economy.
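If you like seeing the moving parts, here is a minimal sketch in Python of that earn-and-exchange logic. The token "prices" and backup reinforcers are made up purely for illustration; this is not a clinical protocol, just the bookkeeping idea.

```python
# Minimal token-economy sketch: tokens are earned contingent on target
# behaviors and later exchanged for backup reinforcers. All names and
# "prices" here are hypothetical examples.

class TokenEconomy:
    def __init__(self, backup_reinforcers):
        # backup_reinforcers: dict mapping item name -> token cost
        self.backup_reinforcers = backup_reinforcers
        self.balance = 0

    def deliver_token(self, target_behavior, count=1):
        """Deliver tokens contingent on an observed target behavior."""
        self.balance += count
        print(f"+{count} token(s) for '{target_behavior}' (balance: {self.balance})")

    def exchange(self, item):
        """Trade tokens in for a backup reinforcer, if affordable."""
        cost = self.backup_reinforcers.get(item)
        if cost is None:
            return f"'{item}' is not in this token economy."
        if self.balance < cost:
            return f"Not enough tokens for '{item}' (need {cost}, have {self.balance})."
        self.balance -= cost
        return f"Exchanged {cost} tokens for '{item}' (balance: {self.balance})."


economy = TokenEconomy({"extra recess": 5, "preferred snack": 3})
economy.deliver_token("completed math worksheet")
economy.deliver_token("raised hand to ask for help", count=2)
print(economy.exchange("preferred snack"))
```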

This was taken up originally by behavioral researchers in the 1960s as a form of contingency management for reinforcing “target behaviors”- prosocial learning, for example- in therapy situations. Reinforcers are important psychologically because, by definition, reinforcers change the rates of the behavior they follow. They can help teach life-changing skills, or alternatives to some destructive or undesirable behavior, quickly. But reinforcers can be tricky too. People can become bored or satiated with tangible rewards, such as food, but within a token economy, reinforcement can be delivered in the form of tokens and allow for a later exchange or choice among any number of possibilities desirable to that individual. By pairing these tokens with access to “primary reinforcers” (reinforcers that are biologically important) or other “secondary reinforcers” (stimuli that have learned value), the tokens themselves become rewarding and reinforcing- thereby creating a sustainable system of reinforcement that defies the satiation and boredom variables that researchers originally found to be barriers to progress. Alan Kazdin's work “The Token Economy” is a fantastic resource on the origins and research that began it all.

What can a token be? Nearly anything. But it has to be agreed upon as a token (given some value for exchange) in order to serve as a token for the purpose of trading it in, or buying with it. Giving someone a high five after a great job at work, for example, is not a token. It is a reward, and possibly a reinforcer, but it was not conditioned to have value, and cannot be saved or exchanged. Tokens also need not be physical, or tangible. They can be symbols, or recorded ledger entries, so long as that information can be used for exchange in the corresponding token economy. This is where blockchain and cryptocurrency technologies tie in to the original behavioral science understanding of a token economy. Can data, or information, serve as a token and be used in a token economy if it is agreed upon to have value and be worth exchanging? If you have heard of Bitcoin (a cryptocurrency), you already know the answer is yes.


Blockchains and Cryptocurrencies

What is Blockchain then? And what is a Cryptocurrency? Using our original definitions of tokens and token economies, for data or information to be considered tokens, they have to be exchangeable and have value that can be traded within the token economy. Blockchain technology solves this by creating units of data called “blocks”. These blocks, simply put, form a growing list of data records in which each block contains a “cryptographic hash” of the previous block. These linked blocks form a ledger which is resistant to tampering and forgery. In layman's terms, unlike most data that people can manipulate and come into contact with day to day, a “block” within this Blockchain cannot be altered after the fact and maintains a faithful record of time and transactions. Resistance to forgery means units cannot be counterfeited or spent twice, and resistance to alteration means that this data (the record of information) can be treated as reliable. If we create a currency using this technology, then we have the means to create units, or tokens, that are distinct, can be traded, and have a consistent and (for the purposes of this introduction) unalterable record of transaction. Assigning value to this creates a digital currency called a Cryptocurrency. Tokens. Transactions can take place using these blockchains, and they take place person to person (“peer to peer” or P2P): once a unit of cryptocurrency is exchanged from one person to another, it resembles very much a physical exchange of any other form of currency. This exchange does not require an intermediary, such as a bank, the way online banking with traditional currency does.
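To make the “blocks linked by cryptographic hashes” idea a little more concrete, here is a toy hash-chained ledger in Python. It only sketches the linking and tamper-detection concept; real blockchains layer consensus rules, mining or staking, and peer-to-peer replication on top of this.

```python
import hashlib
import json
import time

# Toy hash-chained ledger: each block stores the hash of the previous block,
# so altering any earlier record breaks every link that follows it.
# This is a sketch of the linking idea only, not a real cryptocurrency.

def block_hash(block):
    """Hash a block's contents using a deterministic JSON serialization."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(transactions, previous_hash):
    return {
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,
    }

# Build a tiny chain of two blocks (names and amounts are made up).
genesis = make_block([{"from": "mint", "to": "alice", "amount": 10}], previous_hash="0" * 64)
second = make_block([{"from": "alice", "to": "bob", "amount": 4}], previous_hash=block_hash(genesis))
chain = [genesis, second]

def chain_is_valid(chain):
    """Verify that each block really points at the hash of the block before it."""
    return all(chain[i]["previous_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

print(chain_is_valid(chain))                  # True
genesis["transactions"][0]["amount"] = 1000   # tamper with an old record...
print(chain_is_valid(chain))                  # False: the chain no longer verifies
```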

Blockchain and Cryptocurrency developers, then, are looking to create a form of token currency that can be traded within this broader token economy- one that is reliable enough to be used by enough people to catch on or become commercially viable, while still maintaining the benefits of a cryptocurrency (security, privacy, etc.) over traditional currency. These cryptocurrencies, these units of data, these blocks, have no intrinsic value themselves. They are tokens in the very real sense that the original behavioral research intended. Their usage and effects, then, appear to follow in the same vein. Currency can be reinforcing, reinforcement can alter behavior, and once a token takes on value through the conditioning process, it can become truly valuable in its own right as a “generalized reinforcer”- a reinforcer that is backed up by many other types of reinforcers. A dollar, for example, as a widely used currency, can be exchanged for a nearly countless number of goods, services, and transactions. This makes it a good generalized reinforcer. The more a token can be traded for, the better a generalized reinforcer it becomes.

Will a form of cryptocurrency, like Bitcoin, gain this same traction as a currency, or token, for accessing other reinforcers in trade? Many people say yes. That's where behavioral scientists and blockchain developers can both find excitement in each new development and innovation.

Likes? Comments? Questions? Did I get it wrong? Leave your comment below!

References:

  1. Kazdin, A. E. (1977). The token economy: A review and evaluation. New York, NY: Springer. doi:10.1007/978-1-4613-4121-5
  2. Blockchain. (2019, January 13). Retrieved from https://en.wikipedia.org/wiki/Blockchain
  3. Cooper, J. O., Heron, T. E., & Heward, W. L. (2018). Applied behavior analysis. Pearson.
  4. What is Simple Token (OST) [Audio blog post]. (2018, August 22). OST Live Podcast

Image Credits:

http://www.imgflip.com

http://www.smilemakers.com

May I have your attention please? The Nominal Stimulus vs. The Functional Stimulus


Hm?

What’s that?

Sorry, I wasn’t paying attention.

You'll see this happen in some case studies, research articles, classrooms, and even therapeutic practice. A situation laid out with everything in mind to elicit the predictable response. You ask “What's two plus two?” and eagerly await the “four!”…but it doesn't happen. You call out to someone who's wandered off, “Hey! Over here!”, and they keep on walking. You picked out your discriminative stimulus so well, but the response had little or nothing to do with it. You were missing the big piece of responding to stimuli that is absolutely obvious on paper, but so easily overlooked: Attention.

Stimulus-Response contingencies are a good place to start with explaining why this is so important, because they're often the simplest and easiest to explain. One thing happens, and a response follows it. The in-between that goes unsaid is that the respondent was actually able to perceive the stimulus; otherwise the response was either coincidental or unrelated. The stimulus as it was presented, which the learner never actually perceives or attends to, is called a Nominal Stimulus. It happened. It was presented purposefully. But it is not functioning as a discriminative stimulus. It plays no role in selecting the response. The individual is unaware that it even occurred. Nominal stimuli are the “everything else” in a situation that the intended respondent is not attending to.

Imagine a teacher in a classroom helping a student write their name. They first prompt by demonstrating how the name is written. The student does not copy it. So they take the student's hand and physically guide them through the name writing start to finish, then follow up with some great descriptive praise to reinforce. Great! The student learned something, right? They're more likely to at least approximate name writing in the future, right? How about the first letter?

Not if they were looking up at the ceiling the whole time. Nominal Stimulus.

The teacher may have set up a great visual demonstration, planned out a prompting strategy, and planned out a reinforcer to aid in learning the target behavior- but not one of those things was effective, or even met its intended definition, without the student's attention. What the teacher was actually looking for, with any of their attempts, was a Functional Stimulus.

A functional stimulus, attended to by the individual, that signals reinforcement for a specific behavior? That is the feature of the discriminative stimulus (SD) that evokes previously reinforced behavior. It's received by the respondent in a meaningful way.

The lesson in this distinction is that observers can sometimes assume stimulus-response relations, or failures in responding, when they are really working with situations that present Nominal Stimuli instead of Functional Stimuli. Without noting whether the respondent attended, one could simply document that a discriminative stimulus occurred when it had not. That would lead to inaccurate data, and further, to inaccurate intervention development based on those inaccuracies.

Check for attention. Always. It may not always be the easiest thing to discern. Auditory attending is not as easy to infer as visual attending is, but by keeping the nominal and functional stimuli in mind, you are in a better place to test for conditions that better facilitate both.
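One low-tech safeguard, if you collect data digitally, is to record an attention check alongside every stimulus presentation, so that only attended (functional) presentations count toward your stimulus-response data. A rough sketch, with hypothetical field names:

```python
# Sketch of trial records that separate nominal from functional presentations.
# Field names and entries are hypothetical; the point is that "stimulus
# presented" and "stimulus attended" are recorded as two different observations.

trials = [
    {"stimulus": "model of name writing", "attended": False, "responded": False},
    {"stimulus": "model of name writing", "attended": True,  "responded": True},
    {"stimulus": "verbal prompt",         "attended": True,  "responded": False},
]

functional = [t for t in trials if t["attended"]]       # attended presentations
nominal = [t for t in trials if not t["attended"]]      # presented but unattended

print(f"{len(functional)} functional presentation(s), {len(nominal)} nominal presentation(s)")
if functional:
    rate = sum(t["responded"] for t in functional) / len(functional)
    print(f"Correct responding given attention: {rate:.0%}")
```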

Let’s try one more example.

Take this guy in the car. He’s got his phone out. Just got a text. Now THAT was one sweet discriminative stimulus. Tons of reinforcement history signaling behind that one.


The street lights in front of him? Nominal stimuli.
The stop sign down the road? Nominal stimulus.
The cars on either side of him? Nominal stimuli.

Not all unattended stimuli are nominal stimuli exactly, but in a society, these signals (lights, signs, other people's proximity) are delivered with the intended purpose of changing or governing people's responses so that everyone drives in an orderly and safe(ish) way. Even when a person is partially attending to an array of stimuli around them, all supposedly “important” in one way or another, some don't actually register without specific attention.

One more example. Last one, I promise.


An instructor is working with a non-verbal child to build communication. They are seated at a desk. The child is staring off at one of the walls and engaging in continuous vocal stereotypy. The instructor is guiding a communication board- a page with the alphabet on it.

They… rapidly… move the board's position in front of the child's finger, anticipating and…prompting… the words “I W A N T L U N C H”. They stand up with glee and reinforce this…method… with a “Great job! Let's get lunch!”. The child continues to stare off at the wall, and continues the repetitive stereotypy until lunch is brought over.

What might that instructor infer from this process if they were not thinking about nominal stimuli? Well, they might infer that the child's responding was in any way impacted by the process. Or that the board and prompting were received in any way by the child. It could get a little confusing.

That's the importance of nominal and functional stimuli.

Questions? Comments? Likes? Leave them all below!

References:

Healy, A. F., & Weiner, I. B. (2013). Experimental psychology. Hoboken, NJ: Wiley.

Ormrod, J. E. (2012). Human learning. Boston, MA: Pearson.

How I Designed An Effective RBT Training Program


When the BACB introduced the Registered Behavior Technician (RBT) credential, I remember reading the email all the way back in 2013 and having my brain race over just how credentialing the entry-level ABA practitioner might work. It was, in a sense, revolutionary- how many people in psychology or education undergraduate programs would believe that they could work full time in a field related to their study before graduation, and be credentialed for it? I knew that eventually it would be my role to design an effective and efficient RBT training protocol to give new ABA practitioners a solid education in the basics of ABA and therapy implementation. I started as a therapist myself. I went through the process that eventually led to my BCBA graduate studies and certification. The question now was, if I were starting all over, what would I have wanted to set me up for the greatest chance of success?

The BACB Standards

The RBT Task List and the training guidelines are essential; the training has to cover them to count toward the credential. There are certain things you can expound on, and present with practical variations, but everyone passing the 40-hour training essentially has to learn the same material as it's listed by the BACB. Think of this as you would an approved course sequence for BCBA classes. I think it offers the best current structure for the material necessary at this level of service, so even if you are thinking of using training that does not necessarily require the RBT credential, this is a great “ABA style” guideline for the necessary skills.

I like to think of it this way: you don't want to train staff only for the challenges they face with clients right now, you want them to be prepared for all the appropriate behavior-analytic necessities down the line too.

When I designed my first RBT training protocol in 2015, I stuck to the RBT Task List and BACB standards for RBT training to the letter (I still do, but I have added to and improved upon it). My picks for the source material to fill out this training were B.F. Skinner's collected works, the Baer, Wolf, and Risley article, and my favorite, “Applied Behavior Analysis” by Cooper et al. For my first run at this, I broke it down into modules with discussion board posts, quizzes, and “chapter exams” for each section. While this material was being run, I also made a point of applying it to real-world client situations so that what trainees saw on the pages could be implemented. Every trainee would meet a real client, see how the material relates, and practice shadowing and running some of the techniques.

Some of the trainings I was seeing online were just Q&A material presentation with quizzes, but no actual practice. That gave me pause because I knew the next step required by the BACB was the competency assessment and that requires actual in vivo clinical skill usage for most sections.

Training to meet the standards of the Competency Assessment and the RBT Exam

My initial 40-hour training program was built from the ground up on the framework that the BACB set out. It was not until actual trainee feedback came back from both the competency assessments (clinical skills judged by a BCBA) and the results of the RBT Exam (a Pearson computer testing center exam) that I had enough information to expand on the areas where most trainees were finding difficulty. If you've taken the BCBA exam, the RBT exam is very similar in style. It's tough, but it has a smaller and more appropriate task list for the job. Multiple choice, but best answer- none of that easy “three answers are wrong, one answer is right” format. My second training, which I created at the tail end of 2015 and revised twice into 2016, aimed to address a common difficulty arising from the gap between the Competency Assessment and the RBT Exam. The competency assessment was a breeze for people who could use the language and run the data collection, graphing, and skill acquisition programs with clients in a real clinical setting, but when it came to the Pearson exam, using the terms to answer the questions was much tougher. Familiarity with terms was good enough to pass clinical muster, but that RBT Exam was a tough nut to crack.

I adjusted my training to fill this skill gap by testing terminology during the interview portion of the competency assessment. The BACB includes a great sheet with the RBT Competency checklist, which has a series of open-ended topics that the applicant discusses with the assessing BCBA. I took those and adapted some of the tougher relations- people could tell me what frequency data was, but they couldn't explain what continuous measurement was used for. People could tell me when we used partial and whole intervals, but couldn't describe why discontinuous measurement was appropriate for a situation. People could use prompting during a discrimination training program, but couldn't always figure out how to fade the stimulus prompts when they saw improvement. Could a 40-hour training really condense all of this down to meet the rigor of the exam? The competency assessment was a breeze for most applicants after the training, but the test required considerable additional studying if they had never practiced ABA clinical work before. So I kept going with it, tweaking my training to build in deliberate terminology repetition and to develop fluency during the practical, in-person time within the 40-hour training. Pass rates went up. Not to 100%, but higher than the first version.


Competency Exam as Feedback

The competency assessment step was really something interesting. It was where we saw the independent clinical skills of the RBT, therapeutically, with real clients and measurable results. It had interobserver agreement (IOA) built into it. It hit on every topic of the task list, but on the other hand, an applicant could potentially demonstrate only one of four skills and still pass a section, which had its own challenges. But we got great feedback from the applicants during the process: how they saw the training, how comfortable and proficient they felt they were, and how proficient they actually were.

The competency exam did allow for some limited roleplay where the in-vivo skills were impractical for the situation, but we used those sparingly. The real situations often challenged the applicants in ways that we the observers could not have thought up. There were teachable moments, there were even sometimes failures of the competency, but the next week they were back to try again.

We all learned a lot.

The trouble was that the competency assessment was technically separate from the 40-hour training. Someone could come to apply for a position, require a competency assessment, but already have a 40-hour certificate from an online training site. More often than not, when it came to skills like discrete trial training (DTT), or other skill acquisition routines that required more than objective maladaptive behavior measurement, these applicants would simply struggle. The prompting techniques were sloppy. “Least to most” was not in their vocabulary. “What is chaining?”- we'd hear. Orientation for the position was not enough, even with a legitimate online 40-hour training. This was also feedback. Was our training process overteaching? Was it too difficult or complex for what the RBT role was designed for? Were we demanding too much from our applicants during the RBT process? If so, how were we to measure that? And how exactly were the materials we were training and competency testing with serving us so well with applicant results on the RBT Exam through Pearson? Did these outside trainings have the same post-training measures? Did they use feedback? It would be impossible to survey them all. The answers came from somewhere else: the people who actually trained and tested with us the entire way through. Our first-year RBTs. They did my training, they passed the competency assessment, they passed the RBT Exam, and they were still working under qualified BCBAs directly in therapy every single day.


Registered Behavior Technicians as Models in the RBT Training Process

If I were to name one thing that gave the greatest leap in how well we could get new applicants through the process, pass the material, retain the material, and pass both the competency assessment (which was a little subjective, depending on the BCBA) and the RBT Exam (which was the be-all objective computer test), it was the inclusion of practiced Registered Behavior Technicians in the 40-hour initial training process itself. They had a viewpoint that a veteran BCBA might not. They went through the most recent RBT Task List updates. They passed their renewal competencies. They knew what these new applicants would have to know not only to pass the competency assessment, but also the RBT exam and prospective employment under a BCBA. They knew it all. It was late 2017. We had feedback from applicants, we had feedback from post-exam takers, and now we had feedback from the VIP RBTs that were involved in training the new staff.

We had the process down to a well-oiled machine. Sometimes we had people slip through the cracks. Sometimes we had no-calls and no-shows. Sometimes people just had test anxiety and had to retake. But the actual practice and feedback from all pieces and perspectives at all levels helped shape it into the form I currently use today. It kept it fresh. I cut out the parts that didn't help as much as hands-on practice- the discussion board posts. I added more hands-on hours into the 40. More terminology usage. More skill transfer checks. Same RBT Task List. Same BACB framework, but with a multi-level feedback and checks-and-balances system. Everybody had a part to play in the training of the applicants now, and those applicants held on to the things that they would teach to the next applicants that came through once they were RBTs. Then those RBTs wanted to be BCBAs. Tens. Maybe close to a hundred now.

Three years. Enough time for a graduate program. Enough time for 1500 hours of supervision. Enough time for a BCBA exam cycle. I saw the next generation grow up with ABA right before my eyes.

I love it.

 

Questions? Comments? Leave them below.

Image Credits: http://www.pexels.com

Beer and Behavior Analysis


There's been a shift in culture towards beer recently. Twenty years ago, if you saw the title “Beer and Behavior” you would absolutely expect a scathing speech about the abuses of drink. This is not going to be that. I assume everyone reading this is responsible. I'm interested in the modern context. The beer industry has grown, become more varied, and those varieties have become more available. Craft brewing has taken off to previously unforeseen heights, and different styles and personal recipes of beer are becoming available to the public like never before. It's amazing. People are demanding more beer, and craft brewers are making it.

Now, when there's socially significant behavior out there, it can be studied. When people engage with their environment, their society, over something they want and will pay for, it's worth knowing how that works. I wanted to see how we could apply some of the concepts we use in Applied Behavior Analysis (ABA) to get an idea of it. Behavior on the consumer side, and behavior on the provider side. That's where Midnight Oil Brewing Company came in to provide the setting for study and some insights on what the process is like on both sides of the bar. That night, in particular, they had nine of their craft beers on tap and a full house of people engaging in operant behaviors to gain access to them.

Now let’s talk behavior.

Beer can be a Reinforcer. Think of a reinforcer as a type of stimulus that resembles a reward. What makes a reinforcer special is that it maintains or increases the likelihood of the behavior that precedes it. Think of it like this-

A person walks up to the bar and asks for a beer, maybe a Serenity session ale, the bartender pours that beer and hands it to them.

Assuming that the beer is what they like, and they find it reinforcing, the consumer would be likely to return to that same bar and order again. That’s reinforcement. To break it down further- The consumer’s behavior (requesting) operates on the environment for access to that beer. Access to the beer is socially mediated by talking to the bartender and the eventual exchange of money, but if they get access to the beer and like it, the reinforcement acts on that requesting behavior’s presentation in the future. The requesting behavior happens again or might even happen more often. There was a big if in there though. The beer had to be enjoyable, or reinforcing, to the individual for it to work. People have different tastes, and as you may be aware, not all people like all types of beer.


Beer Flights can be a Preference Assessment for Reinforcers. A preference assessment is a tool used to figure out which stimuli are reinforcing at a given time. A varied set of stimuli is presented to an individual, who gets access to them and engages with them, and eventually you get a hierarchy out of that. By looking at what gets chosen more, you can tell which stimulus a person likes best at that given moment. Preferred stimuli make for great reinforcers for behavior. At a taproom or bar, we can use these preference assessments to determine our own hierarchies of the types of beer we enjoy. This can help us separate the types we do not like, and avoid selecting them in the future, from the types we do like.

A person has a flight of 9 beers in front of them. They try all nine, but only like and continue to drink the Stouts, Porters, and Saisons.

On the other side of the bar, a bartender can observe a person with a flight of beers, note which beers were selected and consumed in greater amounts, and use that information to make better suggestions for that person's next order. A little rapport building goes a long way. (I know that I tend to order more of the suggestions of a bartender who understands my preferences. Personal opinion- data point of one.) On the business side of things, having consumers repeatedly choose a selection of beers they enjoy can have long-term reinforcing effects on their return and future consumption. Imagine a person mistakenly trying a few beers in a row of a style they dislike. This could punish beer-seeking and buying behavior- the opposite of reinforcement. Knowing where to guide a consumer is useful information. The trend of behavior can go in both directions, and a preference assessment could be key in making the experience enjoyable for everyone.
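If you wanted to make that bartender's observation a little more formal, a simple tally of which styles get chosen (or finished) across repeated flights produces a rough preference hierarchy. Here is a small sketch with made-up selection data:

```python
from collections import Counter

# Toy preference assessment: count how often each style is chosen or finished
# across repeated flights, then rank styles into a preference hierarchy.
# The selection data below is invented for illustration.

flight_selections = [
    "stout", "porter", "saison", "stout", "stout",
    "porter", "ipa", "stout", "porter",
]

hierarchy = Counter(flight_selections).most_common()
for rank, (style, count) in enumerate(hierarchy, start=1):
    print(f"{rank}. {style} (chosen {count} times)")
```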

Taprooms can employ J. R. Kantor's Setting Events to create an environment that facilitates engagement from consumers, not only as paying customers but prosocially with one another. Some people call this ambiance. Some people call this the “feel” of a place. In early behavior analytic research, behaviorists like J. R. Kantor were interested in antecedent stimuli- “things” in the environment that could either prime behavior or set the occasion for (select) specific behaviors to occur. These are stimuli, variables in the environment, that may influence certain behaviors to occur over others.

Larger spaces with a higher number of tables could lead to a higher retention of served consumers, more bartenders responding to requests could lead to higher rates of (responsible) beer requests, larger tables could lead to groups forming, televisions playing a specific program could retain specific like-interested individuals, and play-oriented items like boardgames could provide alternative sources of reinforcement and retain consumers on the premises for longer.

The potential is endless, and many of these examples would have to be fine-tuned and tested for practicality, but these are all things that could be set in place before someone even steps foot in the door. Antecedents are powerful things. But Setting Events aren't the only concepts that explore them- there are also Motivating Operations. We've talked about Reinforcers, and even Punishers. These are stimuli that have an effect on future behavior, but a great researcher named Jack Michael noticed that there are factors that can momentarily affect the value of those stimuli, and the behaviors seeking them.

Thirst and Hunger can be Unconditioned Motivating Operations. When you see the word Motivating Operation, take the common well-known word of “Motivation” to guide your understanding of it. Unconditioned just means that it is something innate, or not learned. Unconditioned Motivating Operations (UMOs) are often based on natural biological drives, and in taprooms and bars, the most common ones we see are based on deprivation and satiation. Thirst is a great example of a UMO.

If a person is thirsty, a beer is more likely to be a strong reinforcer, and their behavior to seek it out is more likely. The same with hunger, as a UMO for food-seeking behavior, and food as a reinforcer.

The same, however, can go for satiation. If someone is full, that satiation acts as a UMO and abolishes the seeking behaviors and reinforcement value of food or drink.

Beer can involve Conditioned Motivating Operations too. Conditioned Motivating Operations (CMOs) are just like Unconditioned Motivating Operations; they momentarily alter the value of a reinforcer- like beer. The only difference is that these are conditioned, or learned. The research on these has gone back and forth. Some say their effects are noteworthy, and others say these theories don't hold much water. I think they make a great way of conceptualizing how preferences, or reinforcement values, can be affected by a person's learned history. To that end, I'm going to try to make a taproom or beer example for each type of CMO.

Surrogate Conditioned Motivating Operation (CMO-S)- A surrogate CMO is something that alters the value of a reinforcer because it has been paired with an Unconditioned Motivating Operation, and takes on its effects. Here’s a craft beer example:

Unconditioned Motivating Operation- Deprivation. The value of beer is going to be higher.

Surrogate Conditioned Motivating Operation- “Last Call”. The value of beer is going to be higher due to a paired deprivation scenario (UMO) in the past.

In these conditions, we can speculate that “Last Call” would have a behavior-altering effect in the same way deprivation does, and a value-altering effect on the beer as a reinforcer for requesting right before time runs out. The “Last Call” stimulus has been paired with deprivation (the UMO) often enough that it takes on some of that effect.

Reflexive Conditioned Motivating Operation (CMO-R)- A reflexive CMO alters the value of its own removal. Behaviorally, this is called “discriminated avoidance”: learned avoidance of a specific thing. Basically, a person is presented with something they've experienced in the past as aversive, and they want to get away from it. Just the presentation is enough to cue behaviors to avoid it. Here is a personal beer CMO-R I've experienced.

Conditioned Stimulus- A saison in the middle of a beer flight, which ruins the flavors of otherwise amazing beers tasted afterward.

Reflexive Conditioned Motivation Operation- Seeing the word Saison on a beer flight list. All behaviors that can get the bartender to NOT include it are altered (more likely).

Saisons on their own are okay beers and fairly neutral stimuli, but, again, personal data point of one, they ruin the palate for the tastes that follow them when they are in a beer flight (CMO-R). The presentation of a saison in a beer flight is enough for someone (me) to engage in behavior for its removal.

Transitive Conditioned Motivating Operation (CMO-T)- A transitive CMO is something a little broader, and looser, conceptually. It involves altering the value of another stimulus, generally through improvement. Like the other CMOs, this is also based on a person's learned history. Traditional examples often go for the blocking of a behavior chain, which makes the stimulus that resolves the block more valuable. I much prefer the “My Friend Has That Beer And Now I Want It Too” conceptualization of the transitive conditioned motivating operation. For this to work, it requires a learned history of a friend who often selects delicious beer. That delicious-beer-paired history also has a discriminative quality of “being better” than the person's own first choice. Their friend just picks the better beer every time. It's not fair. Let's play it out like this.

Person’s Requesting Behavior: “I’d like an Insomnia Stout”.

Friend’s Order Afterwards: “I’d like you to layer this Doc Brown Ale with the Dark Matter Stout on top.”

Transitive Conditioned Motivating Operation- This value-altering condition (the friend's order) may not have physically blocked the first response (the person's first request), but it is a stimulus presentation with a value-altering effect strong enough to create the need for a stimulus change.

Person’s Second Requesting Behavior: “NO WAIT! Cancel that first one. I also want that Doc Brown Ale with the Dark Matter Stout on top.”

What do you think? Has that happened to you before? Could it be explained by the transitive conditioned motivating operation? I think it just might.

So we’ve gone through some Behavior Analysis, and we’ve gone through some Beer. Do you have any other examples of common human behavior that could be explained by these terms, or others, behavior analytically?

Questions? Comments? Arguments? Leave them below!

References:

Cooper, J. O., Heron, T. E., & Heward, W. L. (2018). Applied behavior analysis. Pearson.
Wahler, R. G., & Fox, J. J. (1981). Setting events in applied behavior analysis: Toward a conceptual and methodological expansion. Journal of Applied Behavior Analysis,14(3), 327-338. doi:10.1901/jaba.1981.14-327
Big Thanks:
to Midnight Oil Brewing Company

“They’re Just Tired”- The Worst Scapegoat Explanation for Behavior


Why are they acting that way? “They're just tired.” It's one of those cliches that never goes away. It's just so easy to use. You can use it for any situation at all to explain away patterns of maladaptive or cranky behavior. Screaming? Tired. Throwing things? Tired. Hitting their siblings? Tired. It's the explanation that's got it all… except that it's not exactly true all the time. Exhaustion does exist, sleeping poorly does affect behavior, but there's a risk in assuming a cause without looking at the exact conditions surrounding the behavior. It's more work to do so, but it's worth it.

In Behavior Analysis, we call that kind of thing an “explanatory fiction”. It’s not directly untruthful, but it avoids reality through ease and circular reasoning. Why do they do that thing we don’t like? Oh! They’re tired. It’s not hard to see the practical ease in that either. Everyone in their life has been cranky or acted miserably when they’ve been stretched too thin. The problem comes from the assumption. That assumption takes away all the curiosity and the need to dig for a more sophisticated answer, and it also leads us to a bias of expectation. We’ll ask around post hoc to confirm the broad theory.  Did they sleep well last night? Oh! Well, there was that one time when ____. Anything we get that conforms to our “theory of tiredness” will close the book. Open and shut case. We miss the real reason. We miss the real point. There’s risk in that. We miss out on catching the patterns that become habits that hurt further down the line. We blind ourselves to teachable moments.

The way to avoid all of these pitfalls, and to explore the real reason behind these target behaviors, is to begin the search right when we spot them. It would be even better if we could give context to what happened before the behaviors occur. A great psychologist named B.F. Skinner called this the Three-Term Contingency, and it is a great way to actually get an idea of the triggers, causes, and/or maintaining factors for behaviors that ought not to happen. It breaks things down into three elements to study: the Antecedent, which occurs before the behavior (“What exactly set this off?”), the Behavior, which is the exact thing we are looking at, and the Consequence, which happens after the behavior occurs (“What did this behavior get, or what did it let them escape?”). Now, it's not enough just to ask the questions. We should probably document it too. Write it down. Take notes. Get numbers. How many times are you seeing this specific behavior? We call that Frequency. How long does that behavior last? We call that Duration. We can use this information to inform our conceptualization of what the behavior's function is. Finding the function lets us adapt the environment to help decrease the behavior, and it also helps the learner find a better way to get what they are after. Even if it is a nap.
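Even a very simple log is enough to capture the three-term contingency along with frequency and duration. Here is a rough sketch in Python; the field names and sample entries are hypothetical, not real client data:

```python
from collections import Counter

# Minimal ABC (Antecedent-Behavior-Consequence) log with frequency and
# duration summaries. Entries below are made up for illustration.

abc_log = [
    {"antecedent": "asked to clean up toys", "behavior": "screaming",
     "consequence": "clean-up delayed", "duration_s": 90},
    {"antecedent": "sibling took tablet", "behavior": "screaming",
     "consequence": "tablet returned", "duration_s": 45},
    {"antecedent": "asked to clean up toys", "behavior": "screaming",
     "consequence": "clean-up delayed", "duration_s": 120},
]

target = "screaming"
episodes = [e for e in abc_log if e["behavior"] == target]

frequency = len(episodes)                                # how many times it occurred
total_duration = sum(e["duration_s"] for e in episodes)  # how long it lasted overall

print(f"Frequency of '{target}': {frequency}")
print(f"Total duration: {total_duration} seconds")

# Which antecedent/consequence pairs keep showing up? Repeated patterns can
# hint at the function (e.g., escape from demands vs. access to items).
patterns = Counter((e["antecedent"], e["consequence"]) for e in episodes)
for (antecedent, consequence), count in patterns.most_common():
    print(f"{count}x  A: {antecedent}  ->  C: {consequence}")
```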

Let's talk Functions of behavior. In Behavior Analysis, there are four common categories that make a simple framework to work with: Attention, Access (to something/someone), Escape (to get away from or avoid something), or Automatic Reinforcement (which is internal/invisible and mediated by the self). A pattern of behavior that occurs again and again, regardless of how they slept the night before, might lead us in the direction of one of these. Or more than one. A behavior can also be “multiply maintained”. We can either see this as a complication or as a better truth than a simple off-hand answer. Assuming that fatigue and tiredness are the leading factors only gives us the solution of a nap. That may delay the behavior's recurrence, but if you see it again and again, it's time to take the step and look deeper. The nap is not the answer, only a temporary respite from the behavior. The contingency and history of reinforcement haven't gone anywhere. Bottom line: it's more complicated than that, and probably isn't going away that easily.


Trade the Nap for some Differential Reinforcement

Now it's time to get serious. If we've gotten this far, tracked behavior as observably as possible, and ruled out our original assumption of an internal factor like “tiredness”, then we need an answer we can use in the world of the awake. Thankfully, behavior is like the dinosaurs: it can undergo extinction (that means go away), or it can get stronger if you feed it (reinforce it). The “bad behaviors”, the maladaptive ones that are not a help to the learner or their situation, can be extinguished by simply withholding the thing that reinforces them. What is the behavior after? Don't let it get that. What is it avoiding? Don't let it avoid that either.

Hard work, right?

But that's not the end of it. You can't just take away a behavior and leave a void. You need to replace it. So, when it comes to a maladaptive behavior that aims to get something, and has adapted to get that thing, you find a better behavior to replace it. The “bad behavior”? Doesn't get it. The “good behavior”? That gets it. That's differential reinforcement: reinforcing the good, useful stuff and not reinforcing the other stuff that isn't helpful or good. Here's a handful of techniques that follow that principle:

The ol' DRO (Differential Reinforcement of Other Behaviors): This technique is where you reinforce the “other” behaviors- everything except the thing you want to go away. If you're targeting a tantrum, you reinforce every other behavior that is not tantrum-related. Some people even fold in timed intervals (preplanned periods of time) and reinforce gaps of “other” behavior so long as the target behavior does not occur. Can they go 5 minutes without a tantrum? Great. How about 10? Progress. (There's a rough sketch of this interval logic after this list.)

“Not that, this instead!” DRI (Differential Reinforcement of Incompatible Behaviors):  This isn’t a large net like the DRO procedure. This one is where a set of behaviors are picked because they make the target “bad behavior” impossible. Let’s say our learner plays the bagpipes too loudly and is losing friends fast. What’s a good DRI for that? Anything that makes playing the bagpipes impossible. Try the flute. Or jump rope. Or fly a kite. Hold a microphone and sing. It’s all the same just so long as it’s physically impossible to do both the replacement and the original target (bagpipes, etc) that we aim to decrease.

“The right choice” DRA (Differential Reinforcement of Alternative Behavior): This is the laser-targeted, surgical-precision version of DRI. It follows a similar principle: get a behavior reinforced that is NOT the maladaptive one. With DRA, though, the replacement is a single target behavior, and it's most often one that is more effective and socially appropriate. DRI doesn't care whether the new behavior and the old target behavior share a function or purpose. DRA would, in most cases. You aim for a better alternative behavior to take the place of the old maladaptive one.
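For the interval-based DRO mentioned above, the scoring logic is simple enough to sketch out: split the session into intervals and deliver reinforcement only for intervals in which the target behavior never occurred. The session length, interval size, and event times below are made up for illustration.

```python
# Sketch of interval DRO scoring: divide a session into intervals and mark
# each interval as "deliver reinforcer" only if the target behavior never
# occurred in it. Times are in seconds from session start and are invented.

def score_dro(session_length_s, interval_s, target_behavior_times):
    """Return (interval_start, interval_end, earned_reinforcer) for each interval."""
    results = []
    start = 0
    while start < session_length_s:
        end = min(start + interval_s, session_length_s)
        occurred = any(start <= t < end for t in target_behavior_times)
        results.append((start, end, not occurred))
        start = end
    return results

# 30-minute session, 5-minute DRO intervals, tantrums observed at minutes 4 and 17.
schedule = score_dro(session_length_s=1800, interval_s=300,
                     target_behavior_times=[240, 1020])
for start, end, earned in schedule:
    status = "deliver reinforcer" if earned else "withhold (target behavior occurred)"
    print(f"minute {start // 60:2d}-{end // 60:2d}: {status}")
```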

 

The research on all three is varied, but they are tried-and-true ways to get one behavior to go away while getting other, better ones in its place. Some are easier to use in some situations than others. I invite you to explore the research. It's fascinating stuff. It's also a lot more effective long-term than assuming the explanatory fiction and hoping the behavior goes away. Why not take action? Why not take control of real factors that could be used for real good and change?

But not right now. You should take a nap. You look tired.

 

 

Just kidding.

 

References:

Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis. Upper Saddle River, NJ: Pearson/Merrill-Prentice Hall.

Image Credits:

http://www.pexels.com

Behavior Analysis and Personality Psychology


Applied Behavior Analysis and Personality Psychology at first glance have very little in common. Applied Behavior Analysis (ABA) comes from the behaviorist tradition of the purely observable, and Personality Psychology features variables that are often seen within the individual and outside of direct measurement. As time moves on in the field of psychology, and the behavioral fields specifically, there is a call for greater breadth and understanding from practitioners across more than one domain. Behaviorism as a field of psychology is alive and well, but sometimes practitioners can pigeonhole themselves (pardon the pun) into the strict traditionalist ideas of the early 20th century, leaving the cognitive revolution and relevant psychological progress aside.

Few people realize that this is not too large a gulf to bridge.

The topic of personality and temperament in individuals was touched on by B.F. Skinner himself in “Science and Human Behavior” (1953) and “Beyond Freedom and Dignity” (1971), but as many would suspect, the meaning of the word personality was operationalized into a series of observable concepts such as “response tendencies”. These tendencies of responding were used to explain how individuals varied in their sensitivity to stimuli. It stands to reason that everyone has come across another individual who was not impacted by a stimulus in the same way they were. This is a basic part of humanity. This is the reason we need to perform clinical preference assessments. Individual differences occur regardless of standardized stimuli. No matter how precisely we form a potential reinforcer, no matter how accurate the amount or intensity, or how carefully a schedule is arranged, one person may respond differently to it than another. And that is not even including motivating operation factors like deprivation and satiation. Sometimes people are affected by different things in different ways, and they respond to different things in different ways.

Personality Psychology concerns itself with these individual differences. It is a field that is interested in the unique differences in the thinking, behaving, and feeling of individuals. Personality Psychology studies traits or factors based on the similarities and differences of individuals. Some models feature traits such as Extraversion, Neuroticism, and Psychoticism (Eysenck's model); others use Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism (the Big Five); still others add the traits of Honesty and Humility (HEXACO). Although there are many different theories on how these personality traits are formed, measured, and made predictive, they still aim to explain something that strict observation of antecedent or consequence stimuli appears to miss. Behaviorists and practitioners of Applied Behavior Analysis may look at these things and pump the brakes. After all, it seems like a challenge to align the methods found in Personality Psychology with the dimensions of behavior analysis that Baer et al. constructed in 1968. How does personality fit into a strictly behavioral framework? What about making a personality framework conceptually systematic? Could an experimenter even demonstrate control in a way that would be analytic? Baer, Wolf, and Risley themselves said that self-reported verbal behavior could not be accepted as a measure unless it was substantiated independently. How do we do it, then?

First, we may want to take a step back and work on defining what we are looking at. Behaviorists and ABA practitioners are used to a functional analytic approach, which aims to identify exactly that: functional relationships between the environment and clinically targeted behaviors. Personality Psychology, on the other hand, is a little more topographical in how it defines traits. It classifies traits by how they present, how they appear, and by reports of how people act and think, with less emphasis on the environmental link. One of the great researchers to bridge these two ways of studying personality, tendency, and behavior was Jeffrey Gray, who looked at the personality inventories and questionnaires of Hans Jürgen Eysenck and developed a theoretical model relating these personality and temperament factors to behavioral inhibition (behaviors likely to be inhibited where cues of punishment or of withheld reinforcement are present) and behavioral activation (behaviors likely to be activated in the presence of possible reinforcement or cues of no punishment). Here, personality traits such as extraversion and introversion were related to dimensions of anxiety or impulsivity, which are easier to define and study behaviorally. Gray (1981) was interested in how these traits could explain “sensitivity” (higher responding) or “hypo-responsiveness” (lower responding) to punishment and reinforcement stimuli.

Would someone who was rated higher in extraversion/low-anxiety respond a certain way to social positive reinforcement?

Would someone who was rated higher in introversion/high-anxiety respond a certain way to social negative reinforcement?

These are the kinds of questions that might pique interest on both sides of the fence, Behavior Analytic and Personality Psychology alike. Take any one of the personality traits above and you may find similar ways to study it behaviorally. The literature on this type of work is impressive. Gray’s work, which began in the 1970s, went on for over 30 years. There is a wealth of literature on his theoretical models, including the Behavioral Inhibition System (BIS), which relates factors associated with a reduction in responding, and the Behavioral Activation System (BAS), which relates factors associated with an increase in response activation, both dating from Gray’s work in 1981. In 2000, Gray and McNaughton presented a third theoretical system, the FFFS (fight-flight-freeze system), to explain responses to unconditioned aversive stimuli, in which emotionally regulated states of “fear and panic” play a role in defensive aggression or avoidance behaviors. This work took neuropsychology into account and went even further to suggest links to conflict avoidance in everyday human life. The literature here is fascinating in how it brings behavior analytic concepts into a new arena.

Could we one day see Personality Psychologists talking about reinforcement and punishment sensitivity? How about Behavior Analysts talking about traits when considering consequence strategies? At the very least, it’s a conversation that neither field might otherwise have known to have. We can only hope to gain from stepping outside traditional boundaries and broadening our intellectual horizons.

Comments? Questions? Thoughts? Leave them below!

References:

Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1(1), 91-97.

Big Five personality traits. (2018, April 19). Retrieved from https://en.wikipedia.org/wiki/Big_Five_personality_traits
Farmer, R. F. (2005). Temperament, reward and punishment sensitivity, and clinical disorders: Implications for behavioral case formulation and therapy. International Journal of Behavioral Consultation and Therapy,1(1), 56-76. doi:10.1037/h0100735
Gray, J. A. (1981). A Critique of Eysenck’s Theory of Personality. A Model for Personality,246-276. doi:10.1007/978-3-642-67783-0_8
Gray, J. A., & McNaughton, N. (2000). The neuropsychology of anxiety: An enquiry into the functions of the septo-hippocampal system. Oxford: Oxford University Press.
Hans Eysenck. (2018, April 14). Retrieved from https://en.wikipedia.org/wiki/Hans_Eysenck

HEXACO model of personality structure. (2018, April 22). Retrieved from https://en.wikipedia.org/wiki/HEXACO_model_of_personality_structure

Skinner, B. F. (1953). Science and human behavior. New York: Macmillan.
Skinner, B. F. (1971). Beyond freedom and dignity. New York: Knopf.
Image Credits:

http://www.pexels.com

Symbols and Notation in Behavior Analysis


Symbols and notation in behavior analytic research are fascinating. I find myself thrilled coming across the diagrams in the professional literature and getting so much from so little. A few letters, an arrow, a nice Δ (delta); it’s beautiful. If you are familiar with journals like The Behavior Analyst, the Journal of Applied Behavior Analysis (JABA), or the Journal of the Experimental Analysis of Behavior, you might have encountered some of these symbols. What these symbols and notations do is take large concepts like a response, a stimulus, or reinforcement and punishment, and lay them out in an orderly system of presentation without the need for paragraphs of explanation. Let’s look at this one for example:

S → R

It shows some very common symbolic notation.

S stands for stimulus.

The arrow stands for “followed by” or “elicits”, depending on whether the relation is operant or respondent.

R stands for response.

These are the foundational pieces of behavior analytic symbol and notation. I’ve created a chart below to show you these and some of the other variations you might come across.

[Chart: Symbols]

We can see some interesting variations between the notation symbols, mainly when it comes to how we mark conditioned and unconditioned. When we are talking about stimuli and responses that are not reinforcers or punishers, we use the abbreviations S for stimulus, R for response, C for conditioned, and U for unconditioned. The status of the stimulus or response as conditioned or unconditioned always comes as the first letter of the initialism.

When we talk about reinforcement, punishment, discriminative, and delta, the S for stimulus always comes first as a capital letter, followed by the type of stimulus in superscript. Now, unlike the basic conditioned/unconditioned stimuli/responses above, these superscripts use capitalization to distinguish between a conditioned reinforcer/punisher, and an unconditioned reinforcer/punisher, so remember to keep an eye out for that. Unconditioned punishers and reinforcers use a capital letter in superscript, while conditioned punishers and reinforcers use a lower case letter in superscript. Following the conditioned/unconditioned formatting, we distinguish between “positive” and “negative” by using + for positive reinforcers and punishers, and – for negative reinforcers and punishers.
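Putting those rules together, here is a quick plain-text rendering of the chart, based only on the conventions described above (the caret marks a superscript):

S^R = unconditioned reinforcer; S^r = conditioned reinforcer
S^P = unconditioned punisher; S^p = conditioned punisher
S^R+ / S^R– = unconditioned positive / negative reinforcer
S^r+ / S^r– = conditioned positive / negative reinforcer
S^D = discriminative stimulus; S^Δ = stimulus delta
CS / US = conditioned / unconditioned stimulus; CR / UR = conditioned / unconditioned response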

This is very helpful when we want to nail down exactly what kind of contingency we are seeing. You may remember that reinforcement is a process where a behavior becomes more likely to occur in the presence of an antecedent because it has been reinforced in the past under those conditions. What kind of reinforcer it was is important. Was it unconditioned? Things like food, water, and so on: the basic things we as humans seek out naturally. They are very effective, but can become subject to satiation. Now what about a conditioned reinforcer? Something whose value has been learned through past experience. Money is a common one, tokens as well, or even art. The distinction between conditioned and unconditioned is no small gap, conceptually, so we want to be clear, when we read these symbols, about what we are actually talking about.
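To put that in the notation above: food delivered after a response would be marked S^R (an unconditioned reinforcer), while a token delivered after a response would be marked S^r (a conditioned reinforcer).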

Now that we have the symbols, let’s combine what we know to examine this example!

S → R → S^R+

We would read this as: a stimulus (S) is followed by a response (R), which is followed by the presentation of an unconditioned positive reinforcer (S^R+).
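Here is one more hypothetical reading of my own, using the same conventions: S → R → S^r– would be read as a stimulus followed by a response, which is followed by conditioned negative reinforcement, that is, the removal of a stimulus whose aversive value was learned through past experience.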

What kind of examples can you come up with? Leave them below!

 

 

 

Sources:

Cooper, J. O., Heron, T. E., & Heward, W. L. (2018). Applied behavior analysis. S.l.: Pearson.

Sundel, M., & Sundel, S. S. (2018). Behavior change in the human services: behavioral and cognitive principles and applications. Thousand Oaks, CA: SAGE Publications, Inc.

Photo Credits:
http://www.pexels.com

 

 

Extra Life Case Study: Massed vs. Spaced Trials in the Acquisition of Skilled Motor (Video Game) Tasks

For this article, we have a special purpose: to bring awareness to a fantastic non-profit organization called Extra Life, whose goal is to raise donations for the Children’s Miracle Network Hospitals, which provide much needed funding to families in need. (Donation links at the bottom of the page!)

Today, the topic is video games, the main focus of Extra Life’s audience. To bring some psychological expertise, and an applied behavior analytic focus, to this topic, we had two volunteers come up to test their mettle on (arguably) one of the most difficult video games to master and beat: “I Wanna Be The Boshy”. On the surface, it is a very simple looking game: move a character with a keyboard or analog stick along treacherous environments without touching obstacles, enemies, or projectiles. That is, until you realize how impressive the reaction time needs to be in order to progress through the levels: upwards of 2-5 responses per second. Each mistake brings a punishing restart to the beginning of the level or section, relying on the player’s skill not only to learn the pattern of motor responses needed to complete each section, but to enter them reliably with perfect timing and order.

I WANNA BE THE BOSHY!

In many cases, this game requires months to beat (rare cases excluded). With this time frame, we were able to watch recordings of our two players (etanPSI & LonestarF1) via a streaming service named Twitch, which provided gameplay footage that could be reliably studied and analyzed for the target behavior skills necessary to master and beat this game.

For this particular study, we chose the target behavior of successive correct responses, and used frequency data as our metric to gauge progress through the levels. For example, one correct response may navigate a particular jump, a second may require maneuvering for a landing, and a third for another jump to a moving obstacle, all within 1.5 seconds, totaling 3 successive correct responses for that particular challenge. On average, during our tracked trials, a particular level or challenge required a minimum of 43 successive correct responses in one minute of play in order to continue.
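For readers who like to see the bookkeeping, here is a minimal sketch, not taken from the study itself, of one plausible way to tally “correct responses per minute” from a response log. The log format, the function name, and the simple per-minute count are assumptions for illustration only.

```python
# Illustrative sketch only: tallies correct responses per one-minute window
# from a hypothetical log of (timestamp_seconds, was_correct) entries.

def correct_responses_per_minute(log, window_seconds=60):
    """Return {minute_index: count_of_correct_responses} for the log."""
    counts = {}
    for timestamp, was_correct in log:
        if was_correct:
            minute = int(timestamp // window_seconds)
            counts[minute] = counts.get(minute, 0) + 1
    return counts

# Example: three correct responses inside 1.5 seconds, then a miss.
sample_log = [(0.2, True), (0.8, True), (1.4, True), (2.0, False)]
print(correct_responses_per_minute(sample_log))  # {0: 3}
```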

Analyzing the Players’ Behavior

If we want to understand the game from a behavior analytic and psychological point of view, we need to discuss some terms:

Reinforcement: Think of reinforcement as the process by which a rewarding stimulus increases the target behavior in the future. A reward that is successful in making a response (game playing, etc.) happen more often is called a reinforcer.

  • In this specific case, success following a trial serves as a conditioned reinforcer for the player, where beating a section, or a boss, is reported as the goal and achievement to be earned.

Responses: This is what the person does. Any behavior that follows a specific target stimulus is considered a response.

Punishment: This is the opposite of reinforcement: consequences that decrease the likelihood of a behavior occurring in the future.

Frequency (and Rate): Frequency and rate refer to counts of behavior over a set amount of time. For example, if our general target is 43 correct responses in 1 minute of time, then we would want our rate of successive correct responses to near that amount to give us the greatest chance of success.

Discrete Trials: A discrete trial is often used in a clinical setting where a discriminative stimulus (SD) precedes a response, which is then reinforced when that response is the target behavior. The good thing about video games is that each level, or screen, can be considered a discrete trial: correct responses are reinforced by the game continuing, while failures (and punishing stimuli) cause it to be repeated.

Massed Trials: Massed trials refers to running discrete trials in close succession, so that no interrupting behavior occurs between them. In other words, repetition. For our gaming example, this would be restarting immediately after each failure, back at the original starting point of the previously failed trial.

Spaced Trials: Spaced trials refers to a training condition where each discrete trial is separated by a pause, during which various behaviors and stimuli unrelated to the next discrete trial may be engaged with. Think of this like a break condition. The player can take a breather, talk to the fans, take a drink of water. All of these things occur between trials, so that there is a gap between them. A small sketch of how this distinction might be coded follows below.
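To make the massed/spaced distinction more concrete, here is a minimal sketch of how trials might be labeled from the gap between one trial ending and the next beginning. The five-second cutoff and the data format are illustrative assumptions, not values from the experiment.

```python
# Illustrative sketch only: labels each inter-trial gap as "massed" or "spaced"
# based on how long the player paused between attempts.

def classify_intertrial_gaps(trials, spaced_gap_seconds=5.0):
    """trials: list of (start_time, end_time) tuples in seconds, in order."""
    labels = []
    for previous, current in zip(trials, trials[1:]):
        gap = current[0] - previous[1]
        labels.append("spaced" if gap >= spaced_gap_seconds else "massed")
    return labels

# Example: an immediate restart, then a roughly 20-second breather.
session = [(0, 30), (31, 55), (75, 110)]
print(classify_intertrial_gaps(session))  # ['massed', 'spaced']
```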

The Experiment

Our friendly experiment required our players, etanPSI and LonestarF1, to attempt 30 trials in each of two conditions. The first condition was Massed Trials, which involved 30 complete repeats without any interruption between trials. Successes could continue on to the next section, but repeats required the trial to begin again without any (controllable) pause or break. The second condition was Spaced Trials, where our players were required to take at least a few seconds between trials to chat, breathe, take a drink of water, or engage in any other free-operant behavior in that gap. We did not hold our players to a specific time limit on these pauses, but on average they ranged between 10 and 30 seconds. We would then compare the two to see which appeared to give the players the greater improvement.

Our players reported themselves to be motivated to beat the game, and the challenge of proceeding through it served as a conditioned reinforcer. This free-operant preference assessment appeared to have some validity, as these players put themselves through 60-270 trials per recorded play period, well above our 30 (60 with both conditions) trial requirement for the experiment. The players were free to agree to the conditions of the experiment or decline them as they felt appropriate. Tracked periods that did not meet the criterion for the experiment were discarded, and the next session that did was counted. We called it “Science Mode” when the players were playing under the experimental terms. Overall, 80% of tracked Massed Trials fit the experimental criterion, and 62% of tracked Spaced Trials fit the criterion. This provided us with a breadth of data for getting a general idea of the factors that may contribute to each player’s specific learning style and ability to complete the game itself. By the end of the tracked periods, both players had successfully completed the game and beaten the final boss.

During this period, both players went through high rates of failure, where successive failures within 10 responses were common when they hit enemy projectiles, environmental hazards, or incorrect landings. This was a common function of the game’s difficulty, which had a degree of punishing effect on responding. More often than not, these conditions did not cause either etanPSI or LonestarF1 to quit the game completely, but instead led to a naturally chosen pause between situations to breathe, react with a verbalization, or take a moment to process. In the conditions where Massed Trials were being tracked, these series of 30 responses were discarded, but when Spaced Trials were being tracked, these series were kept if they held to the same spaced pattern for the following responses.

Our target for this experiment was to see how well the players remained within the average number of successive correct responses per minute (43) that had been tracked from previously successful win conditions. Their responses were tracked within a range of 20 to 60 per minute on a Standard Celeration Chart. By averaging each set of 30 tracked responses (some rates as low as 1 per minute, others as high as 77), we were able to place the averages within these intervals on the chart and compare them to same-day, or close-proximity-day, responses from both conditions.

Previous research by Fadler et al., and others they referenced (Foos et al., 1974; Rea et al., 1987), suggests that spaced trials are the superior method of skill acquisition, but we noticed in etanPSI and LonestarF1’s play styles that massed trials were preferred. Cursory investigations of other players showed the same. Faster restarts appeared to give higher rates of reinforcement, which in turn led to success within a single day’s time that might not have been possible if play had been delayed or discontinued. It did appear that, during this period, higher rates of repetition of these pattern-based motor behaviors affected the end result of success.

In their article “The acquisition of skilled motor performance: Fast and slow experience-driven changes in primary motor cortex”, Karni et al. (1998) suggest that there are different stages of learning, and that experience-driven changes in the brain support two different kinds of learning in different ways: “We propose that skilled motor performance is acquired in several stages: ‘fast’ learning, an initial, within-session improvement phase, followed by a period of consolidation of several hours duration, and then ‘slow’ learning, consisting of delayed, incremental gains in performance emerging after continued practice. This time course may reflect basic mechanisms of neuronal plasticity in the adult brain that subserve the acquisition and retention of many different skills.” We will not go too deeply into biological factors in this article (since we did no MRIs on our players), but if you are interested, the article is cited below. However, this “fast learning” does appear to coincide with our conceptual massed trial format, and the within-session improvement phase may be a factor in what we are seeing in the results of etanPSI and LonestarF1.

The Results

The results from our experiment were astounding. We found a clear pattern in both players’ preferred style of trial and in how their skills improved with it. Both players showed similar ranges for failure (0-1) and win (~43) rates of successful responses per minute, and in the runs leading to wins against particularly difficult bosses, both exceeded these by going over 70 successive correct responses per minute!

With etanPSI we were also able to see some situations where spaced and massed trials, interspersed, had a greater degree of success than when they were split into 30 consecutive trials each. When he was working through repetitive environment/platform-based difficulties, Massed Trials were more successful, but when dealing with alternating projectile challenges from game bosses, Spaced Trials were useful to mitigate the punishing effects of failure conditions. Higher-volume vocalizations, high-intensity percussive maintenance applied to gaming instruments, and a broader vocabulary appeared to lend a restorative effect to attentiveness and responding rates in the following massed trial conditions.


A Dpmin-11EC Standard Celeration Chart from our experiment.

In both conditions we were able to see consistent acceleration of successive correct responses per minute gained from Massed Trials, which may also have been due in part to the increase in difficulty as the players progressed, requiring higher outputs of responses. Nevertheless, the players rose to the occasion and appeared to hold their improvements in responding and pattern recognition over the course of 30+ trials per day. Where many had failed and given up, these two players not only succeeded, but excelled at an incredibly difficult game.

The Fun!

Now that you know the story of our fun experiment, here’s where you can donate and thank our amazing players for their time and skill, as well as help the lives of countless children receiving medical services through the Children’s Miracle Network Hospitals! 100% of all donations go directly to charity and are tax deductible! Help our players’ team exceed their goal and change lives!

Donate to our amazing experiment volunteers!

etanPSI’s Extra Life Page

LonestarF1’s Extra Life Page

Like the science? Donate to the behaviorist!

Chris S’s Extra Life Page

References:

  1. Karni, A., Meyer, G., Rey-Hipolito, C., Jezzard, P., Adams, M. M., Turner, R., & Ungerleider, L. G. (1998). The acquisition of skilled motor performance: Fast and slow experience-driven changes in primary motor cortex. Proceedings of the National Academy of Sciences, 95(3)
  2. Wimmer, G. E., & Poldrack, R. A. (2017). Reinforcement learning over time: spaced versus massed training establishes stronger value associations
  3. The Precision Teaching Learning Center.- http://www.precisiontlc.com/ridiculus-lorem/

Photo Credits: etanPSI & Lonestar F1 http://www.twitch.tv

Love, Psychologically


There are some things that are just fun to study because of their vast importance. Love is one of them. There are as many theories about love as there are grains of sand on a shore, but if you’re a scientist, especially a behavioral scientist, you want to focus on the aspects that can be studied: things that we can at least see, hear, or touch, so that we can come to some kind of agreement on their existence. So it might not be so much an invisible force called “Love” we’re putting terminology to, but rather “loving”. The romantic relationship, the affiliation between people; what they do, how they do it, how it is maintained. What makes loving, and being loved, a unique experience and one that people tend to pursue for years (while others, sometimes, for much shorter).

As humans in general, we cannot see any invisible qualia of romantic “love”, but we can see how people respond to one another, how they draw selective attention, how that attention strengthens and becomes a bond, and how they share in that exchange of the affiliation, that relationship. If we think about “love” as magical, and inexplicable, then that makes it very hard to study, doesn’t it? But if we look at what it “looks like”, what people “do” or “exhibit”, then we get somewhere. Love. It happens so much that there surely have to be some common features, and since we are all human, after all, we must share aspects and patterns that over-arch large groups of us. Even entire populations must share some feature, some pattern, that we can call “loving”. How else would there be so much advice out there?

There has been psychological research on this. An abundance of it. Dorothy Tennov’s work on “Love and Limerence”, Keith Davis’ “Relationship Rating”, Beverly Fehr’s “Love and Commitment”, and even Marshall Dermer’s behavioral account of “Romantic Loving”. These are just a few of many (there are thousands) that will be used to explore some theoretical frameworks for what makes a working relationship work, what its features are, and the appeal of the specific patterns of behavior that make up a “loving” affiliation.

We have to assume a little with this. Everyone is different, so specifics are where this account of loving would lose its effectiveness. If we assume everyone likes brightly colored eyes, when in fact many find darker colored eyes reinforcing (rewarding/appealing), then we’ve assumed too much. If, on the other hand, we assume that every human on earth is subjectively polyamorous, and can come to no conclusions, then we assume too little. We have to find a middle ground that might not explain everything but explains enough. We want an account of “loving” that is stable, desired, and explains a fully functioning relationship.


What is Love & Loving? (and what’s not?)

Let’s lay out some ground rules for our interpretation of this framework. To best interpret this research, and create something that we can actually put into real testable practice, we need to make sure we keep it in the realm of reality. So when we talk about “Love” going forward, we are going to talk about events/behaviors from ourselves and others. Some may be private (inside our head), some may be public (an action we engage in with another person), but all of these things can be more or less concretely defined. Let’s call the process of experiencing and doing these things “loving”. You can engage in loving with another person, and they can engage in loving events/behaviors with you. Sounds fun. Now that we have an operational definition to work with, what might that exclude? Let’s talk about Limerence.

Dorothy Tennov developed this concept in 1979 to explain the experience of being “head over heels” for someone. It’s intense. It’s all consuming. Even a little obsessive. As she and another researcher, Lynn Willmott, describe it: “an involuntary potentially inspiring state of adoration and attachment to a limerent object (the target of infatuation) involving intrusive and obsessive thoughts, feelings and behaviors from euphoria to despair, contingent on perceived emotional reciprocation”. Let’s break this down and make it a little more “behavioral”. So limerence is like love, except people exhibit:

  • Intrusive and obsessive thoughts about the person (Private Events).
  • Attachment to a Limerent Object (the person they are obsessed with). Thoughts and interactions with this individual become highly reinforcing, and behaviors seeking them are thus highly reinforced.
  • Reciprocity determines “euphoria” or “despair”. If the Limerent Object (the person being obsessed over) gives a specific type of perceived behavior, it can be either incredibly reinforcing (rewarding) or incredibly aversive. These are two very extreme states.

According to Tennov, this is the type of “loving” we would hope to turn into a relationship or affiliation of “loving” behaviors between two people, but it could not maintain itself as it is. It’s not stable. It’s intense, a flash, but is based on perceptions and obsession (highly repeated private events or “thoughts” about that person). These behaviors do not operate in a healthy way to create or build a relationship. They seem to seek out the other person intensely, but you might notice, they do not seem to hold that person in a regard where a relationship could flourish. This type of limerence is what Tennov found to be dangerous.

It’s not a feeling so much as it is a pattern, and she found 3 ways that it subsides.

Consummation, where the feelings are reciprocated, and ideally, the limerence becomes a more healthy form of attachment. This is the best case scenario.

Starvation, or as behaviorists call it “Extinction”, where the behaviors of obsession/seeking are not reinforced; the other person doesn’t respond. The seeker gets nothing of what they were seeking, so the seeking undergoes behavioral extinction because it no longer serves its function. This is a painful process, the “despair” Tennov spoke of.

Then there’s Transference where the limerence stays, but the limerent object changes. The person they are focusing on gets replaced with another person, and the cycle of intense emotion, intrusive thoughts, etc continues in another direction. In behavioral terms, the response class remains, but the target of those behaviors changes. This type of seeking also seems incredibly unhealthy and hard to sustain a balanced life around.

According to our original operational definition of “loving”, this limerence is not going to work, conventionally. We cannot apply these patterns to a broad population and hope for good outcomes. This is where we need to turn to Keith Davis’ research and Marshall Dermer’s behavioral account of loving to help us out. These researchers took features of “loving” relationships and broke them down into components that most people tend to exhibit. On top of that, they also came up with strategies that might maintain them. Having a loving relationship is good, but maintaining it is also something worth looking into. You might have seen the word reinforcer or reinforcing used a few times. Humans rely on patterns. It’s a big part of how we operate. Think of reinforcers as “things” that keep a pattern going, and reinforcement as the process of strengthening that pattern. Let’s talk “loving” reinforcement and these components of caring.


Features of “Loving” and Reinforcing (Maintaining) Them

They (Davis and Todd, as well as Dermer) break down “loving” into three classes of features: Caring, Passion, and Friendship. These are behaviors and traits exhibited in regular, consistent patterns. They are a common part of a functioning relationship or affiliation with one another. I’ll present a few words from the researchers, and follow up with some actual behaviors that represent examples a person might engage in.

Features of Caring:

  1. The person “gives their utmost” to the other. Behaviorally speaking, we mean that the effort put in to engage with the other person, and to act for their benefit, is high. Some might say foregoing one’s own reinforcers (rewards) so that the other person is reinforced (rewarded). Here are some examples.
      • Engaging regularly.
      • Being present and focused during engaging.
      • Potentially putting maximum effort for that individual.
      • Potentially sacrificing their own rewarding opportunities, for the sake of the opportunities of the other.

     

  2. The person is “championing and advocating” for the other. This is not a quid pro quo situation based on measuring out little bits of effort and support; this is committing to the betterment of that person. It involves social reinforcement.
      • Socially praising that person for actions.
      • Socially praising and supporting efforts of that person.
      • Putting forward resources and social effort for the successes, or approximate successes of that other person.

     

Features of Passion:

  1. “Fascinating” about the other. By fascinating, they mean engaging in thinking or imagining about the other person even when that person is absent. (Think of this as a tempered version of the limerence we spoke about above). These events are what behavioral psychologists call “private events”. They are not observable to anyone else but the respondent.
      • Thinking about the other person regularly.
      • Imagining the other person regularly.

     

  2. Mutual “desiring and experiencing sexual intimacy”. This one is the more obvious “passion” feature. These are both overt and covert (private) behaviors, but most importantly, this behavior is shared between both simultaneously. The reinforcement (rewarding) from one to another is mutual or shared.
      • Engaging and reinforcing “desiring” behaviors between one another.
      • Engaging and reinforcing “sexual intimacy” behaviors between one another.

     

  3. “Desiring mutual exclusivity” with the other person. This is where behaviors are used specifically with one another. One person presents specific, and unique, behavior towards the other and does not engage in these specific behaviors broadly with others outside of the relationship.
      • Unique thoughts or feelings about the other.
      • Unique ways of speaking or responding to one another.
      • Unique patterns of daily behavior with one another.

     

Features of Friendship:

  1. “Enjoying one’s company”. At a very basic level, being around someone should be enjoyable if a relationship is to be maintained. This enjoyment could come from:
      • Enjoyment gained from a shared history and specific important events.
      • Enjoyment gained from conditioning, where shared desirable features have come to be attributed to one another.
      • Enjoying the repertoire of social behaviors, or activities that person engages in regularly.

     

  2. “Being able to confide” in the individual. Sharing information that has the risk of being exploited, or showing vulnerability. Being able to express specific thoughts or intents with the other person and not expecting a reprisal or betrayal on the part of the other.
      • Sharing secrets, hopes, dreams, aspirations that represent vulnerability.
      • Being able to speak frankly and honestly on topics.

     

  3. “Behaving spontaneously”. With strangers, predictability is the best bet at cooperation and interaction so that no one is put off. This feature represents a tolerance for spontaneity and surprise where there is the potential for the unexpected, and in a sense, a chance of the unknown or risk.
      • Engaging in behaviors that are novel towards the other, with the other in mind.
      • Engaging in novel activities with the other.

     

  4. “Understanding” the other. The verbal behavior (spoken words) makes sense to the other and is not misinterpreted.
      • Shared meanings of certain histories or features.
      • A shared understanding of tone of voice.
      • A shared understanding of facial expressions or other predictors others might not pick up on.

     

  5. “Respecting the other”. This is where the judgment, intents, and meaning of the other person are held in a regard that is not distrustful, or disingenuous.
      • Allowing one person to engage in an activity and having faith in that other person’s ability.
      • Engaging socially in terms that promote dignity and value the other.

     


Reinforcing the Relationship

This is a lot to juggle at one time. If all of these features are important for both people to engage in while in that state of loving, and the relationship is to be maintained for long periods, there must be some way for people to have the time and ability to do so, right? This is where we discuss how and when we can use the features above as practical behaviors, and how to make those practical behaviors reinforcers (rewards). Reinforcers aren’t just prizes or tangible objects; they can be ANY behavior or change in a stimulus that strengthens another behavior. It’s not just one direction, either. One person can reinforce another’s behavior, and have that person reinforce theirs right back. It becomes a cycle, an interaction where both sides are engaging in these loving features, those romantic behaviors, and being strengthened by one another’s. Here are some suggestions from the research.

Reinforcing a Relationship with Generic and Abundant Reinforcers-

Don’t let the word generic scare you off. This does not mean boring or unoriginal. It means using the common stuff, and using it often, to strengthen the romantic/loving behaviors in the other person. These are things you have a lot of, or behaviors that are low cost to you, that you can use repeatedly and consistently. This sort of behavioral framework is good for maintaining a relationship.

Given the opportunity, how many Friendship, Caring, and Passion behaviors could you exhibit abundantly an hour? How about per day? Or month? Try looking at these.

  • Smiling
  • Laughing
  • Engaging in a positive tone.
  • Taking the time to understand a point of view.
  • Physical closeness.
  • “Checking in”- frequent social interaction.

Just to name a few. These are easy, quick, require little effort, and can maintain a relationship, or a series of interactions, through those quick and abundant psychological reinforcement effects. Remember: a big surprise is great, but if you get absolutely nothing from the individual in between, not a smile, not a word, big surprises aren’t strong enough. The relationship gets frayed, thin. That’s why you use “generic and abundant” social reinforcers from your assumedly impressive romantic repertoire of skills.

Reinforcing a Relationship with Scarce and Idiosyncratic Reinforcers

Now the big surprises come in. These can’t maintain a long and complex relationship by themselves. They are, by definition, scarce, and therefore very interesting. These are things you cannot provide to another person very often, and they are varied enough that the other person probably would not be able to expect them. These are the high shock-value interactions or rewards, the things that provide a revitalization. Remember the spontaneity feature? This is where it comes in. These come in when the generic and abundant reinforcers lose efficacy. Sometimes when something is too predictable and common it loses its reinforcing features, so you need to throw a little “strange” out there to mix up the predictable delivery of these romantic reinforcers. You can’t expect the scarce, big reinforcers to maintain a relationship, but without them, the generic and abundant undergo habituation. The mixture of both is where the long-term maintenance of romantic behaviors on both sides meets a good equilibrium.

What about these? Are there any scarce or idiosyncratic reinforcers you could think up from the Caring, Friendship and Passion categories? Can you think of a few specific reinforcers you enjoy? Can you think of a few specific ones that another person might? Try them out and see if they work, or engage in some confiding features to request them. You might just learn something!

Comments? Questions? Leave them below!

References:

  1. Cooper, J. O., Heron, T. E., & Heward, W. L. (1987). Applied behavior analysis. Columbus: Merrill Pub. Co.
  2. Willmott, L., & Bentley, E. (2014). Love and limerence: harness the limbic brain. United States: Lathbury House Limited.
  3. Tennov, D. (1999). Love and limerence the experience of being in love. Lanham, MD: Scarborough House.
  4. Davis, K. E. (1999). What Attachment Styles and Love Styles Add to the Understanding of Relationship Commitment and Stability. Handbook of Interpersonal Commitment and Relationship Stability, 221-237. doi:10.1007/978-1-4615-4773-0_13
  5. Davis, K. E., & Todd, M. J. (1982). Friendship and love relationships. In K. E. Davis, and T. O. Mitchell (Eds.), Advances in descriptive psychology (Vol. 2, p. 79-112)
  6. Dermer, M. L. (2006). Towards understanding the meaning of affectionate verbal behavior; towards creating romantic loving. The Behavior Analyst Today, 7(4), 452-480.

 

Image Credits: http://www.pixabay.com