Performance Data Collection For All Professionals

No matter what you do, you will often find yourself in a position to teach a skill or train someone in a proficiency you have; for some of us, it happens constantly. One of the most essential parts of my line of work is collecting data on human behavior and performance. Over the years, hundreds of professionals and paraprofessionals have watched behavior analytic therapy and training delivered with daily data collection and measurement, and I am often asked, “Do you have a spare sheet I could use?” Workshops, after-school programs, camps, job training events, painting classes, apprenticeships, exam prep, clinical trainings, driving courses, and all kinds of other skill-based events have given me opportunities to show what data collection can do, and how it can be applied to any profession where one person needs to learn a new skill and their performance needs to be evaluated in a well-defined and stable way. If this is something you do, or have an interest in doing, I have just the form for you. In 15-30 minutes of reading and reviewing the instructions below, I aim to make sure you learn and can use the following tools from the world of applied behavior analysis:

  • How to track data on performance for a single day and across days.
  • What a “Cold Probe” is, and how you can use it to configure and adjust your training plan.
  • What “Discrete Trials” means, and how you can use them to work on a single or multiple skills in a single training session and deliver effective feedback for performance improvement.
  • How simple and effective percentage data is for performance.
  • How to practice a trained skill repeatedly without becoming repetitive.
  • When to deliver reinforcement (social praise) for success, and when to deliver prompts (correction).
  • How to compare today’s performance of your client to their future or past performance and use visual analysis of the data to make better decisions.
  • What “behavior coding” is and why defining our target performance goals matters.
  • How to do an analysis of component skills and break your trained skill down into pieces.

I am attaching the link to this performance data collection tool below. You can print it out and fill it in by hand, or use it digitally if you carry a tablet or similar device. The PDF has been formatted with text fields for easy typing, a spot to import your logo into the heading with no fuss, and data sections that export cleanly into the spreadsheet software of your choice. There is some very advanced software out there that can do more than this. This form is not the be-all and end-all; if linear regressions or reversal designs are your thing, it might not check all of your boxes. For the research-grade analysis you might use in a human operant lab, I suggest looking at subscription software, but if you want something practical, easy to use, and completely free of charge, by all means enjoy the form below.

Instructions:

Let’s talk about the top portion of the form for a moment where we have three fields:

  • Name:
  • Date:
  • Instructions:

When we are training an individual, or even a small group, we need a way to separate performance data so that we do not get confused during evaluation and analysis. Each individual’s data stays separate from everyone else’s, and each day’s performance is distinct from any other. The “Name” field applies to the individual you are training, not the trainer. We also need the date of the training so that we can review our data in order, and instructions in case multiple trainers deliver the same training at different times or dates. Every profession is different and every trainee will require different skills, so I cannot describe every form of instruction you might use here; I would suggest something concise and to the point. Your co-trainers likely understand the skills already and only need a structure for delivering the training. For example, if we had a client we wanted to train to high proficiency in jumping rope for their schoolyard double-dutch competition, we might want our trainers to know what to have ready.

Cold Probes:

In behavior analytic terminology, a “cold probe” is a test of a skill without prompting or incentives, to see where the client’s performance stands without assistance. Simply put, at the start of your training or teaching session, you ask them to perform the skill and see how they do. Can they do it completely independently, to your established level of competence? If so, you might mark “Y” for “Yes”. If not, you might mark “N” for “No”, which gives you an idea of where that day’s training targets might focus. Cold probes are useful when a client has mastered something, or is coming in for the first time, and you want to see if they can produce that specific target of performance on demand. A cold probe is not a final answer on whether a skill is in a person’s repertoire, but it does give you a sample of their unaided performance, on which you can exercise your training judgment about what might need to be taught, practiced, or folded into a long-term strategy for improvement. Cold probes are tools, not something to make or break a training plan on; performance can fluctuate. Use them to determine a focus for the day, but keep in mind that focus may be only a part of your overall goal for the client. You can also use cold probes to drop a planned part of that day’s training that might not be worth the extra time. If our imaginary jump-roping client can perform their three alternate-foot step jumps without aid, perhaps we gear the day’s training toward something a little more advanced to make the best use of our time.

A Component Skills Analysis and Discrete Trial Training (DTT):

We can use our cold probe data to figure out which skills to target for improvement. Often, when we come across a difficulty in competency with a trainee, the skill is made up of smaller, more basic skills, or has a precursor skill that needs to be strengthened before the trainee can move on to the original target. Discrete Trial Training (DTT) is a process by which a complex skill is broken down into smaller component behaviors, which are taught in order to meet the original target. Each trial is a “discrete,” singular training opportunity for one component skill, where we can follow a demonstration with either praise/reinforcement when it is performed correctly, or prompting/feedback when errors need our assistance. Each practice opportunity is a new chance to try again and build toward greater success. The number of trials you use is not set in stone, but this training sheet provides five opportunities for each component skill. Let’s return to our example jump-roper. What would happen if our trainee did not perform their alternate-foot jump to our criterion of success? Take a look at the sample data below.

In this example we had our trainee demonstrate the skill five times, with each component skill performed an equal number of times. What might this data suggest? Is our trainee having difficulty in all areas? Probably not. We can see they lift their left foot into a jump perfectly across all tracked trials, but errors appear with the right foot and with keeping the heels up during jump roping. A great feature of this type of trial is that you can compare performance on one component behavior to another. Look at the data above: the right-foot lift and the heels-up components share a trend of errors. That could lead us, as trainers, to suspect a relation between the two, and our training and corrective procedures can be tailored at this point to help the trainee improve. With this style of data collection we can pinpoint exactly where errors occur, which tailors our training time to the need and increases our efficiency.
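To make the scoring concrete, here is a minimal sketch of how the five-trial data might be tallied per component skill. The skill names and Y/N outcomes are illustrative, loosely echoing the jump-rope example, not values from the actual sheet:

```python
# Each component skill maps to its five trial outcomes ("Y" = success, "N" = error).
# Names and outcomes below are hypothetical, echoing the jump-rope example.
trials = {
    "Left Foot Up (Jump)":  ["Y", "Y", "Y", "Y", "Y"],
    "Right Foot Up (Jump)": ["Y", "N", "N", "Y", "N"],
    "Heels Up":             ["Y", "N", "N", "Y", "N"],
}

def percent_correct(outcomes):
    """Score a list of Y/N trial outcomes as a percentage of successes."""
    return 100 * outcomes.count("Y") / len(outcomes)

for skill, outcomes in trials.items():
    print(f"{skill}: {percent_correct(outcomes):.0f}%")
```

Laying the data out this way makes the shared error trend between the two weaker components jump out immediately, the same comparison the sheet supports visually.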

Do not forget about reinforcement in these stages. Reinforcement is whatever increases the future rate of the behavior it follows. We praise and reward as soon as a success, or an approximation to success (improvement), is seen. By praising and rewarding what goes right, we keep that level of performance high, and we can use reinforcement following prompts to maintain engagement and improvement. Do not focus on the errors alone; target the successes and reinforce them. A solid training procedure is heavy on reinforcement.

Percentage Data and Analysis:

In our trial data above we use percentage data as a measure of performance and success. With five trials, each trial counts as a distinct 20% of the final score. When we measure performance, we want a criterion by which we consider the skill mastered. Not every skill can humanly be performed at 100% every single time. In most cases, 80-90% is not a bad benchmark to have in mind: it is well above blind luck, and at those levels of proficiency it becomes easier to discover which environmental stimuli correlate with higher performance. Does our jump-roping trainee do better during individual training than in front of peer crowds on the playground? A variance of 20% or more between settings might tell us something, if we see that pattern emerge over time.
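As a quick illustration of the arithmetic, here is a hedged sketch of a mastery check against an 80% criterion, along with the 20-point cross-setting comparison described above; the criterion value and the scores are hypothetical placeholders, not prescriptions:

```python
MASTERY_CRITERION = 80  # percent; the 80-90% benchmark discussed above

def is_mastered(percent_score, criterion=MASTERY_CRITERION):
    """True if a session's percentage score meets the mastery criterion."""
    return percent_score >= criterion

# Hypothetical scores for the same skill in two settings:
solo_training_score = 90
peer_crowd_score = 60

print("Mastered in solo training?", is_mastered(solo_training_score))
if abs(solo_training_score - peer_crowd_score) >= 20:
    print("Variance of 20+ points between settings: watch for a pattern over time.")
```

The criterion is deliberately a named constant, since different skills (and professions) may warrant different benchmarks.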

The sheet above is structured so that you can export data from the probe and trial sections into a spreadsheet, where you can use a visual analysis (graph) of your choice. I, like many professionals, enjoy line graphs that show percentage of performance by date. By combining the results of multiple daily data sheets, you can create graphs and perform a visual analysis of progress in a way that is cleaner than raw data. By comparing the date of each data sheet with its final percentage score of success, you can see something like this.

Reviewing performance data with your client (or their caretaker) is key. Visual presentations like the one above are a tool in your toolbelt for making large trends easier to understand. Line graphs are an easy way to show trends and to break down where performance was compared to where it is now. Even a negative trend can be a great prompt to discuss what might be going on outside of the training and analysis, and you may even learn what is impacting the graph but missing from the training regimen. No data is ever wasted; it is all a resource.
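For readers who assemble their graphs programmatically rather than in spreadsheet software, here is a minimal sketch of combining several daily sheets into a chronological date-by-percentage series, ready to feed into any line-graphing tool; the dates and scores are made up for illustration:

```python
from datetime import date

# Hypothetical final percentage scores pulled from several daily data sheets.
daily_sheets = [
    (date(2021, 11, 5), 80),
    (date(2021, 11, 1), 40),
    (date(2021, 11, 3), 60),
]

def to_series(sheets):
    """Sort (date, percent) pairs chronologically for a percentage-by-date line graph."""
    return sorted(sheets, key=lambda pair: pair[0])

for day, pct in to_series(daily_sheets):
    print(day.isoformat(), f"{pct}%")
```

Once the series is sorted, plotting it is a one-liner in whatever charting library or spreadsheet you prefer; the visual analysis itself stays the same.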

Behavior Coding:

The final sections of the sheet involve spots where you can do what we call, in the field of behavior analysis and in research in general, behavior coding. Behavior coding is the process of operationally defining your target performance skills in observable and measurable ways. When you are working with a team, or with multiple trainers, your success can depend on whether everyone is measuring exactly the same things in exactly the same way. We want as much inter-observer agreement as possible, and coding makes that possible.

Let’s take an example from our jump-roping client above. One of the component skills we chose was “Left Foot Up (Jump)”. Without further explanation, that can be confusing; it could use an operationally defined and coded description. We can use the behavior coding section to write simple, quick definitions so that everyone measuring that skill in the future knows exactly what it looks like and what we consider success. The better our coding, the more sensitive our data. We want a middle ground: enough detail to be precise without so much wording that it obfuscates. There is a difference between precision and a code that makes tracking impractical. Our main goals are that the definition be observable, so anyone watching has the same opportunity to track the skill exactly as we would, and measurable, meaning that our coded target skill fits into the data tracking format.

For example: “Left Foot Up (Jump)”- The left foot is lifted up completely from the ground during a jump with enough space for the jump-rope to clear it underneath.

You may increase the precision of your measurement to match the distinct needs of the skill, but the goal is to be sure that everyone tracking data on that skill uses the same definition. The one above is what I would consider low to medium precision, but it will do for what we need. Match your definitions and coded behaviors to your specific profession and needs, but be sure they are not vague or subjectively unobservable (“a spirited and joyful jump” could mean just about anything to anyone). If you need to use what some would consider subjective language, try coding for that as well (“Joyful” is defined as smiling during a jump, etc.).
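If you keep your data sheet digitally, the coded definitions can live alongside the data as a simple lookup, so every observer scores against the same wording. A small sketch, where the second entry is a hypothetical example of coding otherwise-subjective language as suggested above:

```python
# A shared "codebook": each target skill maps to its operational definition.
behavior_codes = {
    "Left Foot Up (Jump)": (
        "The left foot is lifted up completely from the ground during a jump "
        "with enough space for the jump-rope to clear it underneath."
    ),
    # Hypothetical coding of subjective language, per the note above:
    "Joyful Jump": "The trainee is smiling during the jump.",
}

def definition_of(skill):
    """Look up the agreed definition before scoring, so every observer codes alike."""
    return behavior_codes.get(skill, "UNDEFINED: add a coded definition before tracking.")

print(definition_of("Left Foot Up (Jump)"))
```

The fallback string is a deliberate nudge: if a skill has no coded definition yet, it should not be tracked until one is agreed on.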

Keeping a Running List of Component Skills:

Component skills that become mastered, or that remain ongoing targets for future weeks, can be listed on the second page as well. This helps us record how we broke our probed larger skills down into their discrete components. Keeping a list of what we have worked on, and what we have yet to work on, gives us ideas for trials to run in the next training opportunity, a log of what was mastered or completed previously, and a section for note-taking on component skills that fits the needs of your professional training. If you use the component skill section to determine future training targets, I would suggest that less is more. Training ten skills within an hour or two makes sense, but cramming tens of skills into the same time frame might lead to weaker mastery across the entire list. Focus on the most important component skills that make up the larger cusp skills. You may find success in picking particular targets for each training session or week.

Further Training:

I hope you enjoyed the material here and the review. It would be impossible for me to include every potential usage of these sheets, and the more complex data analysis processes you might want to use them for, but if you have need of further training, consultation, or simply questions, you may reach me on this website or email at csawyer@behavioralinquiry.com. I would be happy to help you with further training on this data sheet, how to adapt and construct your own, and any further interest you might have in performance tracking or behavior analysis.

Comments? Questions? Leave them below.

Getting Back Up After Failure

Failure is a tough topic to bring up, but a necessary one. When we are in it, it is all we can think about. When we are past it, we often do not want any further reminders of it. Behaviorally and psychologically, failure is a variable in everything we do, and it factors into every future strategy we use. It is a part of our past that defines how we interact with the future. In a previous writing I discussed “Overcoming the Fear of Failure”; this one is about what to do when failure happens to us. How do we move on? How do we grow from it? How do we set our future expectancies to do better? To what do we attribute failure? All of these questions and more are necessary to making each failure a stepping stone to a future success; otherwise we might find ourselves in a loop, generating ever worse strategies. Instead, we need to learn to get back up. Let’s talk about some of the research we have on the topic and how we might navigate failure and find motivation in it.

Mastery Orientation vs. Learned Helplessness

When it comes to deriving motivation from failures big and small, the strategies we develop in childhood have a great deal of influence on our current behavior. You may have heard the term “learned helplessness,” which describes a pattern of low motivation and reduced output after repeated failures: the individual receives so little reinforcement following their actions that they simply stop trying. Diener and Dweck (1978) popularized these concepts in a study that split youths into two groups based on patterns and strategies the researchers observed, without those strategies being taught. They found that when faced with repeated challenges and varying degrees of failure, some children would consistently give up and reduce responding, while others would reassess and modify their responding based on the feedback from their failures. The researchers were very interested in the cognitive strategies both groups displayed, all without any coaching, and determined that even at a very young age there were clear distinctions between the two types in their ideas about their loci of control. A locus of control is a belief system people use to determine whether they control outcomes, or whether outside forces do. A person with an internal locus of control sees the results of their actions as largely under their own control; a person with an external locus of control sees those results as largely shaped by an outside force or their environment. Now, there is a part of this study that some consider a little unfair: no matter what answer the children gave at the start, they were told they were incorrect. How they responded afterwards correlated largely with how they viewed their loci of control.

Mastery oriented individuals appeared to generally attribute their failures to a lack of effort or something they’d missed. Even at that age, their first reaction focused on pivoting and reassessing.

Learned helpless individuals tended to attribute the failures to the situation as largely beyond their control (in this case, without knowing it, they were technically right as far as the experiment was concerned).

So what happened?

Mastery oriented individuals kept trying, kept changing their responses based on feedback, and largely kept at the task longer than the other individuals. They showed no decline and became more sophisticated in their strategy use (which was eventually validated).

Learned helpless individuals tended to show a progressive decline in the use of good problem-solving strategies and began to substitute less sophisticated, poorer ones, strategies even less likely to work.

This model of attribution is still used to this day, but with a few caveats. Unlike in this study, people in the real world are not always one type or the other. Many cases, and complex problems, require using multiple loci of control, and also understanding whether the factors we evaluate and learn from are stable (long-term) or unstable (temporary). The stability of an attribution is its relative permanence as a factor. If you know you are good at jumping rope, meaning you have high ability, you have a stable factor on which to base your next success. But if you attribute your jump roping to how much effort your legs can put out, then the source of success is unstable: effort can vary and has to be renewed on each occasion or it disappears. We’ll talk a little more about how effort and ability work in a moment. The important part is that when it comes to evaluating our role in the grand scheme, an internal locus of control tends to help us perform better. Let’s look at some examples.

It rained today and we got all wet. We hate that. What if it rains tomorrow and we don’t want to be rained on? Would a belief system around an internal locus of control make sense if we focus purely on ourselves and ignore the sky? Not very well. No matter how many strategies we attempt based on our own feedback, we are unlikely to change the weather. On the other hand, a person using an internal locus of control might decide to travel away from the storm, bring an umbrella, or wear a raincoat, which has some functionality for them, though the rain still happens where they once were. An internal locus of control works best when we weigh our own solutions without ignoring the immutable environmental factors.

What about using an external locus of control on task performance? Perhaps we’d like to pick up three items off of our room’s floor within ten minutes. We might begin to generate all the reasons why we cannot: how far the floor is from our fingers, how many other factors stand between the items and the trash can, leading to very low performance on the task within the time frame. It’s the room that’s messy. It’s been messy for days now. So messy. So much mess, too. What if we just pick up one thing and go back to bed? It’s still messy. Might as well not. Then we have effectively wasted time generating non-functional thoughts (poor strategy), and nothing was done (poor outcome). That isn’t helpful either.

Generally speaking, when it comes to our own behavior, within our own repertoires of ability, it is wiser to use an internal locus of control to conceptualize our potential impact on tasks and problems. When larger systems and unavoidable outside outcomes are involved, it does not hurt to consider what lies in an external locus of control. We, as individuals, cannot control everything. But, as we saw above, when faced with continual failure feedback, using an internal locus of control early on can help us come up with strategies that mitigate the external circumstances and perhaps land us in a better spot. There is no harm in generating increasingly sophisticated strategies that put us into better conditions, so that the external factors outside our control are managed from ever stronger positions on our part. Sometimes failure comes after we thought we had a great strategy focused on our own improvement, and it just did not work.

How do we do it? How do we take back some semblance of control when the waves of failures keep coming?

Consider that the concepts of a locus of control, and of how our actions impact our goals, are called attributions, and they affect our future behavior and how we respond to challenges. When we attribute too much to external causes, we may decrease our attempts. When we attribute too much to internal causes, it can sometimes lead to more sophisticated problem solving, but it can also blind us to factors that are outside our control and narrow our perspective too much.

Mediating these attributions, not just at the first failure we come across but at those that follow, can help us build a better perspective on our situation. We can also rely on our social circle, relaying our experiences to see if others can spot what we missed and help our future strategies find better success.

  • Evaluate your current attribution and locus of control of the problem.
  • What are some ways we can evaluate our own pattern of responding and improve it? (Internal Locus)
  • What are some environmental factors that impacted our failure and that our behavior did not change? (External Locus)
  • How do we refine our strategy so that our next attempt can put us in a better position against those environmental variables if they happen again? Can we mitigate what held us back?

Purposive Behaviorism and Re-Training our Attributions

As individuals we can create systems that help us maintain a level of reinforcement to offset failure, and as social creatures we can help create an environment of positive interactions that lets us both recognize our achievable goals and find strategies to access them. Thankfully, we have concepts and theories at our disposal to explain the hows and whys. Let’s talk about Purposive Behaviorism and how we can re-train our attributions.

If you’ve read my other works on this site, behaviorism itself is familiar to you. Purposive Behaviorism goes beyond the more mechanistic systems of reinforcement and punishment, stimulus and response, that you see in some of the more traditional theories. Yes, reinforcement is important to keep us moving forward. Yes, punishment (failure) can knock us back. But we are humans, complex beings, and a good analysis always takes that into account. From a purposive standpoint, we use goals and work hard to achieve them; that is an intrinsic part of being human. In his older theories, Edward Tolman developed the term cognitive map to describe how we do this. Our cognitive map is how we envision our path to our goal. We all hold unspoken beliefs that a specific action on our part will bring us closer to an intended consequence or goal; let’s call these expectancies. They cover both the behavior we intend to perform and the goal we intend to achieve with it. It’s a roadmap. Tolman also believed that we learn from our successes and failures largely through a latent process. There is an automaticity to reinforcement that helps us pick up what has worked and set aside what has not, and integrating more cognitive, conscious strategies with what we have learned latently is the best way to move forward. Keep in mind not just what you can consciously recall, but also what might have been learned latently from the experience.

When we map out our actions to meet a goal, we often give ourselves a time frame (hopefully realistic) in which to reach them. By giving our goals, or conceptual map of how we achieve them, a context in time we help judge how to act and what to expect. Generally speaking, acting now is always better than acting later unless you have a more advantageous use of time further along to position towards your goal. With our expectancies in mind, we have our actions, our goals, and our time frame. As adults, we also learn to discriminate effort from ability. Effort can be defined as the amount of energy or resources we must expend to progress towards the goal, while ability may be defined by our existing proficiency or skills that can achieve it. In most situations it is a combination of both effort and ability that help us reach complex goals.

Let’s reintroduce failure here. Let’s say that we mapped out our goal, we made our attempt to the best of our effort and ability, and we find that we simply did not meet success. Perhaps we even see repeated failure. It can be easy to get disheartened, and even travel down that path of learned helplessness, but we should do everything we can to avoid it. Let’s imagine that we did our best to conceptualize our locus/loci of control, and they were as accurate as they could be, but we still missed the mark. We tried, we failed. Let’s say our expectancy, our goal and plan to reach it, is still very important and we do not want to change the goal. How do we use our time most effectively now to get back up and try again? We need to re-train ourselves, and that means re-training our attributions.

Do we have the ability to achieve this next step in our goal? What did our failure show us?

Did we apply the necessary effort to achieve the next step in our goal? What did our failure show us?

Were our attributions on stability based around factors that were stable (ability) or unstable (effort)?

Evaluating our ability and effort, and attributing our failures and successes along these variables, is key to knowing when something can be achieved alone, when further training, resources, or help from others is needed, and how to adjust our plans going forward to include the more sophisticated, better-evaluated plans that came from the experience. Failure here is a teacher. It is not always easy to maintain effort after a failed attempt, even if the ability was there. To retrain ourselves to analyze our attributions of a failure correctly, we must take some time to evaluate the factors. Use the tool below from Dweck (2000), whom we saw in that earlier study too, to take a particular situation from your past and see where the attributions fall.

Plug some of your attributions in the grid above and see where they fall. Do you think anyone else evaluating your situation might have a different series of attributions for it?

We tend to get the best results out of ourselves, and out of planning ahead, by attributing a reasonable portion of our previous successes to internal and stable causes. What went right in the situation, within our ability, that we can consistently do again, even if there was an ultimate failure? Example: I might not have won the race, but this was close to my best personal time yet.

When analyzing our failures, we can go wrong by attributing things entirely to unstable and external causes: things we see as completely out of our control, which leave nothing for us to work on and grow from. Example: I was going to go in to work today, but then the roads were so busy, and you know I can’t drive on busy roads…

The take away:

  • Turning failures into successes takes analysis of what happened.
  • Sometimes we analyze the situation well and can think of some improvements for next time focusing on our internal factors.
    • “Stable Dimension” attributions help us reflect on our ability and how to improve it.
    • “Unstable Dimension” attributions help us reflect on our level of effort and if we can improve it next time.
  • If we see many attributions leaning in the unstable or external direction, maybe it could take an extra pair of eyes to help us get a new perspective.
    • Reaching out to a trusted friend, or experienced advisor on the topic.
    • Re-evaluating the attribution by considering internal factors.
  • Learned helplessness can arise from attributing too much to external factors and avoiding evaluation of internal factors, leading to poor problem solving and less sophisticated goal-directed behavior.

Getting back up after failure requires analysis of our actions, re-training our attributions to avoid learned helplessness, and consistent effort going forward.

What are some attributions you’ve thought about recently? Have the behaviors you’ve used to reach those goals been effective? Have they been ineffective? How has your belief system on the locus of control impacted the process? Have you utilized others to help you with alternate perspectives?

Comments? Questions? Feedback? Leave them below.

References:

Cooper, J. O., Heron, T. E., & Heward, W. L. (1987). Applied Behavior Analysis. Merrill.

Edward Chace Tolman. (2015). Introduction to Theories of Learning, 302–326. https://doi.org/10.4324/9781315664965-16

Hoose, N. A.-V. (n.d.). Educational psychology. Lumen. Retrieved November 11, 2021, from https://courses.lumenlearning.com/edpsy/chapter/attribution-theory/.

Molden, D. C., & Dweck, C. S. (2000). Meaning and motivation. Intrinsic and Extrinsic Motivation, 131–159. https://doi.org/10.1016/b978-012619070-0/50028-3

Schunk, D. H., Meece, J. L., & Pintrich, P. R. (2014). Motivation in education: Theory, research, and applications. Pearson Education Ltd.

Tolman, E. C. (1967). Purposive behavior in animals and men. Irvington.

Image Citations:

Title image: Getty Images/iStockphoto
Attribution Grid: Christian Sawyer, M.Ed., BCBA

Philosophic Doubt- When Scientific Inquiry Matters

There are important assumptions, or attitudes of science, which ground scientific study across all disciplines: Determinism, Empiricism, Experimentation, Replication, Parsimony, and Philosophic Doubt. The last holds a key role in how we deal with the information we gain from science and what we do with it in the future. Philosophic Doubt is the attitude of science that encourages us to continuously question and doubt the information, rules, and facts that govern our interpretation and understanding of the world (universe, etc.). Philosophic Doubt is what has practitioners of science question the underpinnings of their beliefs, and continue to do so, so that their understanding rests on consistently verifiable information. Philosophic Doubt cuts both ways: it can have a scientist test the truthfulness of what others regard as fact, but it also demands the same level of scrutiny and skepticism toward their own work. To some, Philosophic Doubt is a gift that has helped them expand their ideas and shape them beyond the first experimental steps. To others, it is a detrimental form of skepticism clawing at information or beliefs they hold dear. These views are not new; in fact, we can find traces of this disagreement going back to the 19th century. Here we will explore the assumption of Philosophic Doubt, including proponents and detractors both old and new.

Why do we need Philosophic Doubt anyway?

Philosophic Doubt is important to science because it shapes how scientific work progresses. It has scientists test their own assumptions, hypotheses, and underlying beliefs, even the ones they hold precious, against replicable evidence and new findings. Philosophic Doubt drives experimentation, and it precedes replication as well; it underlies the empirical drive for seeking evidence. Without it, science can go wrong. A hypothesis could be formed from inaccurate information and never be retested. Subjective experience could entrench anecdotes in a study as though they represented a broader reality. A scientist could start with what they want to find and cherry-pick only what fits their assumption. These are the risks of not taking Philosophic Doubt into account. Sometimes it boils down to the scientist wanting to be right, versus keeping an open mind that they might not be. Holding the assumption that there is a benefit to questioning findings or previously accepted beliefs is not a slight against past experience or belief, but a better way of interpreting future information should it challenge them. Questioning is a part of science, but not everyone thought so.

“A Defence of Philosophic Doubt”

Arthur James Balfour, a 19th-century philosopher, debater, scientist, and later Prime Minister, took this topic head on in “A Defence of Philosophic Doubt”. Unlike today, opponents of Philosophic Doubt at the time were more interested in comparing empirically grounded scientific beliefs to a more open series of metaphysical alternatives; that is, they compared science against non-scientific belief systems as accounts of the truth of reality. When it came to psychology, idealists, realists, and stoics were at each other's throats over concepts that could not be observed or proven. As you might already see, holding metaphysical constructs against an assumption that demands we continually question our own arguments makes metaphysical assertions all the harder to make. Scientific claims, however, withstand Philosophic Doubt a little more easily:

Under common conditions, water freezes at 32 degrees Fahrenheit

Employing Philosophic Doubt, we can circle back to this assertion and test it again and again. Pragmatically, there comes a point where we only question such basic and well-founded particulars when we have reason to do so, but the doubt is always present: sometimes for precision, sometimes to be sure that we are building on the knowledge correctly, and sometimes to support the replication and experimentation that grow science. Balfour was a strong proponent of the natural sciences and of this kind of questioning. Science founded on observation and experimentation was truly important to him. Keep in mind, the 19th century was shaped by scientific discovery at a pace never before seen. Balfour kept an even head about this and believed in the assumptions of science as the path to understanding the natural world. Propositions which stated laws, or which stated facts, had to be built on concrete science and not just personal belief or anecdote. Some of his points we would take as obvious today. For example, when using comparative probability, would we run an experiment or trial just once, or twice? Multiple times? If we ran it just once, it wouldn't be comparative probability; if we ran it twice and accepted that as the final answer, we would miss out on further replication and experimentation on the subject. The curiosity that Philosophic Doubt embodies would keep the experimentation and replication going. Without Philosophic Doubt, we fall into the trap of never questioning initial assumptions or findings.

Another interesting thing about Balfour's work is that it came at a time when there was a great deal of belief in a mechanical universe following strict Newtonian laws, which was then compared with more metaphysical alternatives. Balfour cautioned everyone to continually apply Philosophic Doubt and question both belief systems, even though the “mechanical universe” was winning by a landslide at the time. If we stretch Balfour's points into the future, we might see how he would have found some justification in later developments in physics: quantum mechanics, for example, where the Newtonian mechanical universe once seen as sufficient to explain everything falls a little short. Without that testing of the original tenets of physics, the use of Philosophic Doubt, we might not be where we are now. The analysis of Balfour's work could go on for entire chapters, but I would like to top it off with an excerpt on the evolution of beliefs and the reluctance to test our own personal beliefs:

“If any result of ‘observation and experiment’ is certain, this one is so- that many erroneous beliefs have existed, and do exist in the world; so that whatever causes there may be in operation by which true beliefs are promoted, they must be either limited in their operation, or be counteracted by other causes of an opposite tendency. Have we then any reason to suppose that fundamental beliefs are specially subject to these truth-producing influences, or specially exempt from causes of error? This question, I apprehend, must be answered in the negative. At first sight, indeed, it would seem as if those beliefs were specially protected from error which are the results of legitimate reasoning. But legitimate reasoning is only a protection against error if it proceeds from true premises, and it is clear that this particular protection the premises of all reasoning never can possess. Have they, then, any other? Except the tendency above mentioned, I must confess myself unable to see that they have; so that our position is this- from certain ultimate beliefs we infer that an order of things exists by which all belief, and therefore all ultimate beliefs, are produced, but according to which any particular ultimate belief must be doubtful. Now this is a position which is self-destructive.

The difficulty only arises, it may be observed, when we are considering our own beliefs. If I am considering the beliefs of some other person, there is no reason why I should regard them as anything but the result of his time and circumstances.” -Arthur James Balfour, “A Defence of Philosophic Doubt” (1879).

Back to Basics- Science and Philosophic Doubt

In “Applied Behavior Analysis”, Cooper, Heron, and Heward begin their first chapter with the basics of what science is, specifically behavioral science, and the assumptions and attitudes of science, including Philosophic Doubt. Cooper et al. treat these concepts as foundational to science as a whole and relate their importance to psychology and behavioral science. In their words:

“The attitude of philosophic doubt requires the scientist to continually question the truthfulness of what is regarded as fact. Scientific knowledge must always be viewed as tentative. Scientists must constantly be willing to set aside their most cherished beliefs and findings and replace them with the knowledge derived from new discoveries.

Good scientists maintain a healthy level of skepticism. Although being skeptical of others' research may be easy, a more difficult but critical characteristic of scientists is that they remain open to the possibility- as well as look for evidence- that their own findings and expectations are wrong.” -Cooper, Heron, Heward, “Applied Behavior Analysis” (2017).

Bonus! B. F. Skinner
“Regard no practice as immutable. Change and be ready to change again. Accept no eternal verity. Experiment.” -B. F. Skinner, 1979

The sentiment behind Philosophic Doubt and science is one of openness and humility. Not only is the scientific work we read subject to doubt, but our own as well. The latter is the most difficult part: constantly challenging our own beliefs, our most cherished propositions and reasoning. To some, this expands the horizon of future knowledge infinitely; to others, it is a hard trail to follow and no easy task. In either case, I hope this has brought up the importance of Philosophic Doubt, and how it ties in with the other assumptions of science as a challenging but inseparable part of the process.

Comments? Thoughts? Likes? Questions? Leave them below.

References:

1. Balfour, A. J. (1921). A defence of philosophic doubt: being an essay on the foundations of belief. London: Hodder & Stoughton.

2. Cooper, J. O., Heron, T. E., & Heward, W. L. (2017). Applied behavior analysis. Hoboken, NJ: Pearson.

3. Skinner, B. F. (1953). Science and human behavior. New York: Macmillan.

Token Economies: What Behavioral Science and Blockchain Technology Have In Common

“Token Economies”- two words springing up at Blockchain and Cryptocurrency summits and conferences with increasing regularity. Token Economies have been used by behavioral scientists and practitioners for decades, but recently the term has taken off in the field of Blockchain and Cryptocurrency technologies. Both applications use “Token Economy” to mean essentially the same thing; at technology conferences and summits, it is the original behavioral psychology definition that is used to describe the concept. The tech field has taken the original token economy concept and expanded it to apply to what some might call the future of commerce and currency. Exciting stuff. Here, I will break down the basic concepts of what a Token Economy is, how behavioral scientists and analysts use them, and the new application of the concept by Blockchain and Cryptocurrency developers.


The Token Economy

Let's break it all down. What is a token economy? A token economy is a system where tokens, or symbols, are used as conditioned reinforcers which can be traded in for a variety of other reinforcers later. It is not a bartering or prize system where objects, access, or services are given directly following a target behavior; instead, a conditioned stimulus (the token), without necessarily any intrinsic value, is agreed to accumulate toward the exchange or purchase of another reinforcing item. A common example most of us are used to is money. Paper money, specifically, can be considered part of a token economy in that it is “traded in” toward some terminal reinforcing stimulus (or “back-up reinforcer”, as it is called in behavior analysis). The paper money is a conditioned reinforcer because it has no necessary intrinsic value, but it has conditioned value for what it can eventually be used for within the token economy.

This was taken up originally by behavioral researchers in the 1960s as a form of contingency management for the reinforcement of “target behaviors”, or prosocial learning, in therapy situations. Reinforcers matter psychologically because, by definition, reinforcers change the rates of the behavior they follow. They can help teach life-changing skills, or alternatives to destructive or undesirable behavior, quickly. But reinforcers can be tricky too. People can become bored or satiated with tangible rewards, such as food; within a token economy, however, reinforcement can be delivered in the form of tokens and allow for a later exchange or choice among any number of possibilities desirable to that individual. By pairing these tokens with access to “primary reinforcers” (reinforcers that are biologically important) or other “secondary reinforcers” (stimuli that have learned value), the tokens themselves become rewarding and reinforcing, thereby creating a sustainable system of reinforcement that defies the satiation and boredom that researchers originally found to be barriers to progress. Alan Kazdin's “The Token Economy” is a fantastic resource on the origins and the research that began it all.
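As a rough illustration, the mechanics described above can be sketched in a few lines of Python. The behaviors, token prices, and back-up reinforcers below are hypothetical examples for illustration only, not a clinical protocol:

```python
# A minimal sketch of a token economy, assuming made-up back-up
# reinforcers and token prices.

class TokenEconomy:
    def __init__(self, exchange_menu):
        # exchange_menu maps each back-up reinforcer to its token price
        self.exchange_menu = exchange_menu
        self.balance = 0

    def deliver_token(self, count=1):
        # Tokens are delivered contingent on a target behavior; they have
        # no intrinsic value, only conditioned exchange value.
        self.balance += count

    def exchange(self, reinforcer):
        # Trade accumulated tokens for a back-up reinforcer, if affordable.
        price = self.exchange_menu[reinforcer]
        if self.balance < price:
            return False
        self.balance -= price
        return True

# Example: a learner earns one token per correct target response, then
# chooses among several back-up reinforcers -- which is what defies
# satiation on any single reward.
menu = {"sticker": 2, "extra recess": 5, "small toy": 10}
economy = TokenEconomy(menu)
for _ in range(6):          # six correct target responses
    economy.deliver_token()
print(economy.exchange("extra recess"))  # True: 6 tokens cover the price of 5
print(economy.balance)                   # 1 token saved toward a later choice
```

The key design point, mirroring the text, is that the token itself carries no value; the `exchange_menu` of back-up reinforcers is what conditions it.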

What can a token be? Nearly anything. But it has to be agreed upon as a token (given some value for exchange) in order to serve as one for the purpose of trading it in, or buying with it. Giving someone a high five after a great job at work, for example, is not a token. It is a reward, and possibly a reinforcer, but it was not conditioned to have value, and it cannot be saved or exchanged. Tokens also need not be physical or tangible. They can be symbols, or recorded ledger entries, so long as that information can be used for exchange in the corresponding token economy. This is where blockchain and cryptocurrency technologies tie in to the original behavioral science understanding of a token economy. Can data, or information, serve as a token in a token economy if it is agreed to have value and be worth exchanging? If you haven't heard of Bitcoin (a cryptocurrency), the answer is yes.


Blockchains and Cryptocurrencies

What is Blockchain, then? And what is a Cryptocurrency? Using our original definitions, for data or information to be considered tokens, they have to be exchangeable and have value that can be traded within the token economy. Blockchain technology solves this by creating units of data called “blocks”. These blocks, simply put, form a growing list of data records in which each block contains a “cryptographic hash” of the previous block. The linked blocks form a ledger that is resistant to duplication and tampering. In layman's terms, unlike most data that people manipulate and come into contact with day to day, a block within a blockchain cannot be altered or copied, and it maintains a faithful record of time and transactions. Resistance to copying and duplication means it cannot be forged, and resistance to alteration means the data (the record of information) can be treated as reliable. If we create a currency using this technology, we have the means to create units, or tokens, that are distinct, can be traded, and carry a consistent and (for the purposes of this introduction) unalterable record of transaction. Assigning value to this creates a digital currency: a cryptocurrency. Tokens. Transactions using these blockchains take place person to person (“peer to peer”, or P2P), meaning that once a unit of cryptocurrency passes from one person to another, the exchange closely resembles a physical handoff of any other currency. It requires no intermediary, such as a bank, unlike traditional online banking.
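The hash-linking idea described above can be sketched in a few lines of Python. This is a toy illustration of the chaining mechanism only, with made-up transaction strings; a real cryptocurrency adds proof-of-work, digital signatures, and network consensus on top of it:

```python
# A minimal sketch of a hash-linked ledger: each block stores the
# cryptographic hash of the previous block, so tampering with any
# earlier record breaks every link after it.
import hashlib
import json

def block_hash(data, prev_hash):
    # Hash the block's contents deterministically with SHA-256.
    payload = json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev_hash": prev_hash,
            "hash": block_hash(data, prev_hash)}

def chain_is_valid(chain):
    # Recompute every hash and check every link to the previous block.
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block["data"], block["prev_hash"]):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("Alice pays Bob 1 token", "0" * 64)
ledger = [genesis, make_block("Bob pays Carol 1 token", genesis["hash"])]
print(chain_is_valid(ledger))                    # True
ledger[0]["data"] = "Alice pays Bob 100 tokens"  # attempted tampering
print(chain_is_valid(ledger))                    # False: the stored hash no longer matches
```

Note that the "resistance to tampering" the text describes falls directly out of the recomputation in `chain_is_valid`: changing any recorded transaction changes its hash, which no longer matches what the next block recorded.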

Blockchain and Cryptocurrency developers, then, are looking to create a form of token currency that can be traded within this broader token economy: one reliable enough to be used by enough people to catch on or become commercially viable, while still maintaining the benefits of a cryptocurrency (security, privacy, etc.) over traditional currency. These cryptocurrencies, these units of data, these blocks, have no intrinsic value themselves. They are tokens in the very real sense that the original behavioral research intended, and their usage and effects appear to follow in the same vein. Currency can be reinforcing, reinforcement can alter behavior, and once a token takes on value through the conditioning process, it can be truly valuable in its own right as a “generalized reinforcer”: a reinforcer that is backed up by many other types of reinforcers. A dollar, for example, as a widely used currency, can be exchanged for a nearly countless number of goods, services, and transactions, which makes it a good generalized reinforcer. The more a token can be traded for, the better a generalized reinforcer it becomes.

Will a form of cryptocurrency, like Bitcoin, gain the same traction as a currency, or token, for accessing other reinforcers in trade? Many people say yes. That's where behavioral scientists and blockchain developers can both find excitement in each new development and innovation.

Likes? Comments? Questions? Did I get it wrong? Leave your comment below!

References:

  1. Kazdin, A. E. (1977). The Token Economy: A Review and Evaluation. New York, NY: Springer. doi:10.1007/978-1-4613-4121-5
  2. Blockchain. (2019, January 13). Retrieved from https://en.wikipedia.org/wiki/Blockchain
  3. Cooper, J. O., Heron, T. E., & Heward, W. L. (2018). Applied behavior analysis. Pearson.
  4. What is Simple Token (OST) [Audio blog post]. (2018, August 22). OST Live Podcast

Image Credits:

http://www.imgflip.com

http://www.smilemakers.com

Did Cognitivism Beat Behaviorism?


Some hold firm to the idea that the division between behaviorism and cognitivism is a vast divide, with a winning theory and a losing theory. You'll hear them- “Behaviorism died decades ago!” and “Thoughts about thoughts? That's just unprovable mentalism!”- shouted by entrenched believers until they are blue in the face. There are some salient historical details that may explain why they feel that way: behaviorism (arguably) replaced many of the mentalistic and introspective psychological methods well into the 20th century. Then, some would say, the behaviorist movement was halted by Chomsky's rebuttal of B. F. Skinner's “Verbal Behavior” and the rise of the 1960s “Cognitive Revolution”. The deep division could be argued as unbridgeable. As someone who was not practicing when these contrasting theories came to a head, I always wondered what it would have been like. Did everyone see it as a giant butting of heads? Did all the researchers and scientists find themselves marked on one side or the other? Are the loud entrenched voices of today just echoes of a past that hasn't been resolved? If so, how did cognitive behavioral therapies do so well blending the two perspectives? There had to be more than just a line in the sand. Enter Terry L. Smith and his book “Behavior and Its Causes”, relating the exact sentiment I was so curious about.

“I had (just like everyone else) read Kuhn (1970), and so almost reflexively interpreted cognitive psychology and behavioral psychology as competing paradigms (see Leahey, 1992, for a discussion of how common, and mistaken, this interpretation is). Cognitive psychology was clearly on the rise, so I inferred that the Skinnerian program must be on the decline. Indeed, I thought it must have disappeared by now… What I discovered was that during the 1960’s, the Skinnerian program had actually grown at an accelerating rate. This baffled me. How could operant psychology have survived and even prospered in the midst of “the cognitive revolution”?”

-Smith (2011).

How could that be? Terry L. Smith's book explores this topic, speculates on some great points, and comes to several strong conclusions. I won't spoil it for you aside from one: “operant psychology”, as Smith calls it, separated itself from being tied down to every philosophical tenet of Radical Behaviorism. It was Radical Behaviorism, in Smith's view, that had taken the beating, because it was too rigid about what it would allow to be studied and cut too much out of what could be considered the study of behavior. This was a fascinating point to me, since I had already studied what B. F. Skinner had done with Radical Behaviorism to broaden it from Methodological Behaviorism (i.e., private events). We've heard this one before, right?

“Radical Behaviorism does not insist upon truth by agreement and can therefore consider events taking place in the private world within the skin. It does not call these events unobservable”- Skinner, 1974

This was one of the larger distinctions B. F. Skinner made from Watson's methodological approach, which was strictly focused on observable stimuli and responses. If we take Smith's interpretation of what “operant psychology” is today, it goes even further from Radical Behaviorism by cutting across the divide and seeing itself within the broader breadth of psychology as a whole. This rings true for me when I speak to the behaviorists and practitioners I see in the field: there is still that aversion to “mentalism”, but the observational thrust that comes from Watson's strict view is retained mainly for practical reasons; data collection is best done when people can see and define what they track. The behaviorist tradition still lives on in the practice of Applied Behavior Analysis, for example, but Skinner's written word is not taken as biblical truth; the components of the philosophy and science that propelled behavioral psychology forward continue to be empirically validated. They are scientific findings. The ones that work and do the most good remain.

This is Smith's main point about “operant psychology” during the “cognitive revolution”: it continued on, stronger than before, on its own steam, because the findings were strong and reproducible. While Chomsky and other cognitivists made some compelling points about the limitations of Radical Behaviorism as an idea and philosophy, that did not undercut the behavioral science as a whole. The practices, techniques, and ideas of both Methodological and Radical Behaviorism that came through in the empirical work remained. The broader-reaching philosophy that might place limits on the science without empirical backing? Not so much.

Keep in mind that by the start of the “cognitive revolution” in the 1960s, research in brain mapping and neurobiology had come a long way from the days when Watson, Pavlov, Thorndike, and Skinner began their work. Behavioral theory had been running strong through the beginning of the 20th century and was now met with convergent findings. Both had their uses, and the theories overlapped more than they refuted one another. Internal processes were becoming more understandable through biological discoveries, which some strict behaviorists may have misinterpreted as just another form of mentalism. That's a hang-up that did not help them. On the other hand, some cognitivists still thought all of behaviorism reduced humanity to basic stimulus-response (S-R) machines. Another misunderstanding, another hang-up. My interpretation is that people fought over those illusory extremes. Those were the voices that screamed the loudest but were, at the same time, the most misguided about what was actually happening. I equate this to the kind of thing we see on the internet: “strawman arguments”, where someone constructs an exaggerated facsimile of their opponents' ideas and tears that down rather than confronting what was actually said. It creates an easy target, but it does not represent reality. Strict behaviorists get some things right. Strict cognitivists get some things right. Sometimes... just sometimes... both groups get things wrong too! Surprising, right? That is how anything based in theory and following the scientific method actually works.

Maybe Terry L. Smith is on to something. Maybe we consider ourselves all a part of Psychology with a capital P, and put our findings and theories out there. The right ones that can empirically and reliably help people will be the legacy.

To be fair though, I am not completely in the objective virtuous middle; I’ve read Noam Chomsky’s review of Verbal Behavior and believe he missed the point.

Thoughts? Likes? Comments? Questions? Leave them below.

References:

Chomsky, N. (n.d.). A Review of B. F. Skinner's Verbal Behavior. The Language and Thought Series. doi:10.4159/harvard.9780674594623.c6

Skinner, B. F. (1953). Science and human behavior. New York: Free Press.

Smith, T. L. (2011). Behavior and its causes: Philosophical foundations of operant psychology. Dordrecht: Springer.
Photo Credits: http://www.pexels.com

Why we don’t always prompt: Behavior Analysis meets Vygotsky.


In the early 20th century, a developmental psychologist named Lev Vygotsky was working on theories of learning and development in parallel to many of the behaviorist traditions. If you asked a graduate student taking behavior analytic courses who Vygotsky was, they would most likely shrug their shoulders and wonder why it mattered. He isn't Watson. He isn't Pavlov. He isn't Thorndike. He isn't Skinner. He isn't Lindsley. So why would a behaviorist ever care? Because his work ties in so closely to the behaviorist tradition that in some cases you could use his terminology and frameworks interchangeably and still see the same results. His work can help clarify why we, as behavior analysts, trainers, educators, and even parents, should not prompt every single time we see a child begin to struggle with a task.

To an educator or professional following the behaviorist tradition, it's not all that hard to describe. Prompts help the learner reach a reinforcement threshold that their response likely could not have reached on its own. Shaping describes a process in which an emergent behavior, similar in some way to a target behavior, is reinforced through successive approximations until it becomes the terminal target behavior. Basically, it's taking an “okay” attempt at a behavior and rewarding the attempts that look closer to improvement until it's “perfected” enough to reach more naturalistic reinforcement in the broader environment. To a behaviorist, that means looking at what the learner has in their repertoire, what they can do right now, and planning to reward the responses that move it toward some end-goal response. But wait, how exactly do we know when to intervene? And why don't we intervene every time we see the learner encounter difficulty?

The trouble is that sometimes a learner does not actually learn from being prompted too much. Sometimes reinforcement only contacts the effort the learner expends to receive prompting. Sometimes they become dependent on those prompts, and then it is the educator doing the behavior and the learner receiving the reinforcement. The learner doesn't improve because they have no need to improve; they get the prize every time their educator does it for them. The behavior the educator prompts might never transfer through modeling. Why should it, if the reinforcer comes anyway? This is where Vygotsky comes in. Vygotsky believed that there is a Zone of Proximal Development.

Lev Vygotsky was not a behaviorist. In many ways, he was against the methodological behaviorism popular at the time, which focused purely on observable stimulus-response relationships. Vygotsky also believed that learning drew not just from a present environment of contingencies but from a broader wealth of cultural and societal forces that accumulate through generations and have impacts not directly related to the behaviors at hand. However, when it comes to the Zone of Proximal Development, his theories coincide with what behaviorists would conceptualize as repertoires and the thresholds at which prompting becomes necessary. Vygotsky believed there was a level at which a learner could successfully accomplish tasks without assistance, and a level at the other end of their developmental range that they could not reach without considerable help in the form of prompting. Between the two, however, was a zone where a learner could accomplish tasks with some collaboration and prompting, and eventually surpass them to a level of independence. The zone differs from individual to individual, but within that Zone of Proximal Development, prompting (or collaboration, as he called it) is at its most effective.

Think of it like this:

Zone 1: The zone of the learner's “actual” development. These are responses the learner can perform, and tasks the learner can complete, without any assistance from others.
*Behaviorist footnote: Think of these as the responses already in the learner's repertoire. These are “easy”.

Zone 2: The Zone of Proximal Development. These are tasks and responses that the learner can accomplish with the assistance and prompting of others.
*Behaviorist footnote: Think of this as the area of “shapable” responses that are likely to lead to independent future responses. The “scaffolding” of the Vygotskian tradition is closely synonymous with the process of “shaping”.

Zone 3: The limit of the learner's current developmental ability. These are tasks and responses that are beyond the learner's ability to accomplish and can only be produced with considerable support and assistance.
*Behaviorist footnote: The learner can be prompted through these tasks, but is unlikely to be able to reproduce them, even with shaping procedures, at this time.

This framework delineates an interesting range where a learner needs, and could use, the help of an educator to prompt them, and where they do not. In the first zone, prompting is unnecessary and might actually hinder the learner from engaging in those responses in their most independent forms. Learners who engage in the “easy” responses and find reinforcement in the broader environment are more likely to produce those responses in the future; prompting too much here could stifle that. In the next zone, the Zone of Proximal Development, prompting could actually be of the most use! These are responses that are viable for occurring and reaching natural reinforcement, but they just need a little help at first to get there. Here, prompting in the form of modeling or shaping can help the learner take their initial responses and bring them to their terminal, most effective, independent forms. This is the exciting part. This zone is where the work put in by the educator can meet maximum return on what the learner can benefit from. Now, we have to be careful not to reach for the moon. The final zone is where, even with prompting, the learner is unlikely to shape their responses successfully. This is like trying to teach a learner to run before they can walk: they need the foundational responses before they can even be prompted toward a more advanced terminal response. An educator who comes across this scenario would be wise to dial the expectations back.
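To make the decision rule concrete, here is one hypothetical way to operationalize the three zones in Python. The numeric difficulty scale, the threshold names, and the strategy labels are my own illustrative assumptions, not terminology from Vygotsky or the behavior analytic literature:

```python
# An illustrative decision rule: compare a task's difficulty to the
# learner's independent and assisted performance levels, then suggest
# whether prompting is worth the effort.

def prompting_decision(task_difficulty, independent_level, assisted_level):
    """Suggest a strategy for one learner/task pair.

    independent_level: hardest task the learner completes unaided (Zone 1 ceiling).
    assisted_level: hardest task the learner completes with prompting (Zone 2 ceiling).
    All three values are on the same arbitrary difficulty scale.
    """
    if task_difficulty <= independent_level:
        # Zone 1: already in the repertoire; prompting may stifle independence.
        return "no prompt"
    if task_difficulty <= assisted_level:
        # Zone 2 (proximal development): prompting and shaping pay off here.
        return "prompt and shape"
    # Zone 3: beyond reach even with shaping; build prerequisites first.
    return "teach prerequisites"

# Hypothetical learner who works independently up to difficulty 5 and
# succeeds with prompting up to difficulty 8:
print(prompting_decision(3, 5, 8))  # "no prompt"
print(prompting_decision(7, 5, 8))  # "prompt and shape"
print(prompting_decision(9, 5, 8))  # "teach prerequisites"
```

The two thresholds are what an educator estimates from observation; the point of the sketch is simply that "always prompt" and "never prompt" are both wrong, and the payoff lives between the two ceilings.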

Between those two zones of “easy” and “unlikely”, we find the responses that can be prompted for the most good. We would not prompt too much and stifle the learner's ability to contact reinforcement on their own, but neither would we fail to prompt at all and miss those responses or behaviors that just need a little push. This is where a behaviorist, teacher, educator, or even parent can take a thing or two from Vygotsky's work. And if you're a tried-and-true behaviorist who can't believe a cognitivist would be mentioned here, I'd suggest an open mind. You might even be surprised by the similarities between Vygotsky and Skinner on private events and “inner speech”. We can touch on that later, but for now, think about the zone of proximal development in your life and practice; what could use a little help?

Likes? Comments? Questions? Leave them all below!

References:

Burkholder, E. O., & Peláez, M. (2000). A behavioral interpretation of Vygotsky's theory of thought, language, and culture. Behavioral Development Bulletin, 9(1), 7-9.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2018). Applied behavior analysis. Pearson.
Ormrod, J. E. (2019). Human learning. Pearson.

A Behaviorist’s Take on Far Cry 5


Forewarning to the regular readers: I'm talking about video games today. In particular, a fantastic action-adventure game I was turned on to by friends, called Far Cry 5. That's not entirely true; I've played its predecessors too, but this one stands out to me narratively because its story is built around social control. As a Board Certified Behavior Analyst, I'm drawn to these things. Imagine a world not so different from ours, where a doomsday religious cult takes control of a part of Montana and spreads a violent vision across the state, corrupting the citizens into a new lifestyle of brutalization and indoctrination. That calls for a hero, right? That's the game. What makes this interesting to a behaviorist is how the game uses those social forces to create fictional forms of coercion that in many ways match the existing psychological science of conditioning. I like this game. It's complex, it's fun, and I'll be testing myself in its new Infamous difficulty mode over the next two weeks during Extra Life to rack up donations for the local Children's Miracle Network hospital near me (link here and below). I'll try to keep spoilers beyond the psychological methodology to a minimum. Let's get on to the psychology.

In the game, there are several bosses who each control a section of the map. Each of them represents a different form of that control. Spoiler alert. But honestly, no large reveals here. Joseph Seed is the big boss. He’s a sort of preacher borrowing from several religious traditions to deliver his idea of a “collapse” of society and a vision for a simpler future. He relies on a group/mob mentality, social reinforcement (a semi-Bandura style of vicarious punishment), and a form of authority that borrows from his own charisma and the religious texts he cites. Not too out of the ordinary. His doomsday cult also employs sub-bosses. John, a former lawyer, is obsessed with having his devotees say YES and uses similar group and social coercion. Faith uses a toxic mix of drugs called Bliss to create hallucinogen-induced indoctrination. Believable to a degree. Then there’s my favorite and the reason for this post; Jacob. Jacob is a little different. He’s said to have a soldier’s background, but he uses a method of conditioning, which he describes as basic classical conditioning, with some drug-related assistance. This puts his subjects into murderous rages/trances when he plays the song “Only You” by The Platters. He tries to make his method sound simple. He tries to make you believe it’s just simple stimulus pairing through classical conditioning.

Jacob does abhorrent experiments with these methods on both animals and humans, causing devastation and treachery across that part of the story. It’s very tragic. The thing is… he’s not just using classical conditioning. A conditioned stimulus with a conditioned response? Not quite. There’s more to it. He tries to explain his method several times and even uses the standard definition of classical conditioning to describe how he creates these diabolical effects, but when we look at the practice there’s a sinister amount of complexity that he leaves out. This fictional boss might think that it’s simply food deprivation, a song, and practice in his chairs/training chambers that do it, but he’s selling himself short. He’s actually using both classical conditioning and operant conditioning. That fiend.

Jacob’s Classical Conditioning

It might surprise you, but Jacob didn’t invent this form of conditioning. It actually has its origins with a researcher named Ivan Pavlov and the well-known experiments with bells and salivation. There we see the pairing of a neutral stimulus with an unconditioned stimulus until the once-neutral stimulus alone elicits a conditioned response. Basic stimulus-response psychology. Now, in this fictional world of Far Cry 5, the bad guy Jacob references these things, and even Pavlov (“Pavlovian”) once or twice. I think narratively, it makes sense. He’s training killers. He sees his conditioned stimulus (a song) and their response (murderous rages) as synonymous with that process. Except… when we look at the training, it’s not that clean. There are parts that seem to follow this method; mainly that he is engaging in a stimulus pairing procedure that works on a learned behavior change for the individual. The environmental event (or stimulus) precedes the response he is looking for. That makes sense too. Even the cutscenes play out the process correctly! We assume the original neutral stimulus, “Only You” by The Platters, does not lead to murderous rages in an ordinary person. He needs to make that connection happen in his victims through pairing. Jacob pairs that neutral stimulus with an unconditioned stimulus (threat, delivered through some form of hallucinogenic and visual process) that elicits an unconditioned response (attack). Then, following this, he presents the newly conditioned stimulus (the “Only You” song) to elicit the newly conditioned response (attack). Makes sense, right? Somewhat. But look at the training methods a little deeper and we get some complexity. He has the stimuli he wants available. He has the song. He has the wolf pictures, and the predatory images of wolves killing deer, but he also adds something else in… reinforcement and punishment during his trials.

Operant Conditioning through Discrete Trial Training (DTT)

The reason I like the Jacob missions so much is that they do use real-world conditioning methods. They just undersell them a little. Jacob, the big bad guy I hated through two playthroughs of this game, uses both classical conditioning and operant conditioning to make his process work. Also, some fictional drugs and hallucinogenics, but let’s focus on what we know. Operant Conditioning differs from Classical Conditioning (or “Pavlovian Conditioning”) in one major way: it focuses on the subject emitting a specific response, followed by a reinforcer, in order to increase the frequency of that behavior or shape it toward a targeted goal. When someone mentions B.F. Skinner, or Skinner boxes, this is the type of conditioning they are talking about. Again, MINOR SPOILERS. Jacob does this to our character the first time he catches us. It’s not just the classical conditioning process of the song paired with the natural response of attacking when threatened. He trains our character to make that stimulus-response relationship stronger, and introduces faster and more vicious shaped behaviors to the character’s repertoire. It’s tragic. It’s sad. But his method is theoretically sound. You see, he uses what we behaviorists call Discrete Trials. The situation for each trial is exact. The Discriminative Stimulus (SD) that sets it off is the same each time. Here is where the operant part comes in. The character is tasked with eliminating all enemies using the provided weapons, within a timed interval, to complete the task and receive reinforcement for the chained behaviors. This follows the three-term contingency known as A-B-C. Antecedent. Behavior. Consequence. Let’s break it down.

(ANTECEDENT), aka Discriminative Stimulus- the “Only You” song and a visual presentation of threat-related stimuli.

(BEHAVIOR)- Eliminating targets.

(CONSEQUENCE)- Added time on the interval, allowing more time to complete the task for further reinforcement, plus verbal praise from Jacob in the form of “Good”, “Cull the Weak”, etc. This is Reinforcement.

Or… (CONSEQUENCE) in the form of Punishment- fail to complete the task, by either being killed by enemies or running out the interval, and you meet the punishment contingencies: starting over from the beginning, and verbal reprimands in the form of “No”, “You are weak”, “You are not a soldier”, etc.

In other words, Jacob is shaping repertoires. He’s not just pairing stimuli. He is creating a series of trained responses, operants if you will, to be completed in the presence of his conditioned stimuli in a way that he controls. These are the fundamental ingredients of all learning, but he has twisted them a little to make this heroic character fall right into a trap of uncontrollable lapses in judgment, responding in cruel ways that are either uncharacteristic or were part of the character from the start. Chilling, right? But like a rat in a maze, or a box, the character must follow these contingencies in order to progress. Press the lever, get the cheese. Shoot the opponents, get the praise and progress.
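The trial structure described above can be sketched in code. This is a minimal, hypothetical sketch; the bonus time, feedback strings, and success logic are my stand-ins for the game’s actual numbers, not anything pulled from it:

```python
def run_discrete_trial(respond, time_left):
    """One discrete trial of the three-term (A-B-C) contingency.

    Antecedent:  present the discriminative stimulus (SD).
    Behavior:    respond(sd) -> True if the target response occurred
                 within the interval.
    Consequence: reinforcement (bonus time, praise) on success;
                 punishment (clock reset, reprimand) on failure.
    """
    sd = "'Only You' + threat imagery"          # antecedent (SD)
    if respond(sd):                             # behavior
        return time_left + 10, "Good."          # consequence: reinforcement
    return 0, "You are weak. Start over."       # consequence: punishment

# A reinforced trial extends the clock; a punished one resets it.
time_after, feedback = run_discrete_trial(lambda sd: True, time_left=30)
```

The point of the sketch is the structure, not the values: every trial presents the same antecedent, and the consequence is contingent only on the response, which is exactly what makes it a discrete trial.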

Meta Game Talk: Conditioning The Players

Let’s talk a little about the big picture here. Yes, Jacob is fictional. Yes, this heroic character is fictional too. But when we look at the game through the lens of player reinforcement and punishment, we can actually see ourselves inside the box. We are also conditioned, if we choose to play the game and continue to play it, in a way that shapes and sharpens our behavioral repertoires. We participate in the same Discrete Trial Training that Jacob puts our character through, contacting that same reinforcement and punishment as though it were our own (broadly speaking). We want to succeed. We want to continue. We want to win.

So, we get faster. We get more accurate. We learn the patterns. This is why we train, as Jacob says so many times during these repeated trials. Each time, he gives us a little more of a challenge. Each time, he progresses us with different response repertoires to enact on the challenges in our way. It’s fun. In some ways, it can be a representation of the game as a whole. There are many reinforcers out there to get. Many contingencies to engage with. Even multiple endings (that’s the part that got me doing it twice).

I learned to shoot through both enemies in the revolver scene from the left. I learned to take the submachine gun in the next room and work from low to high, right, center, to left. For the shotgun, I turned corners with two lefts and one right at head level and tapped at the first sign of movement. For the rifle, I stayed low and aimed in short bursts, leading a clear line through the middle, and for the LMG… well, let’s not give it all away just yet. Your repertoires need honing too, and there are many variations that work.

That’s the fun.

The Behaviorist’s Take:

5/5 stars for me. This game has been a joy to relax with. It’s challenging, but can still be taken in small parts and missions as time allows. It’s not too much of a time sink for someone on a professional schedule, and not too steep a learning curve for putting half an hour a day in. The story is strong, and the emotional bond between the heroic character and the sympathetic (and often funny) people they meet makes for a great time too. They even let you make your own custom levels and challenges for your fellow players in an Arcade mode. I dig it.

As I mentioned above, this will be my game for the Extra Life 2018 Charity Event taking place the first week of November. I am, believe it or not, the weakest player on my team, but I love talking behaviorism and psychology and will be doing it all day to support the locals in Philadelphia, raising charity funds for the Children’s Hospital of Philadelphia (CHOP). I’m not just an outside fan of their great work with children; I often have direct contact with the children’s hospital in my day-to-day work with young populations and can’t speak highly enough about their commitment. Extra Life is a legitimate charity, and 100% of the funds go directly to the children’s hospital. I’m leaving my link below and will be overjoyed if readers could contribute in some part to my goal so I can hold my head up high this year. Any amount at all. I’ll be streaming and will be happy to respond to any comments. Have ideas that I missed? I love those. Send those too.

Extra Life Donation Link

Comments? Like? Questions? Leave them below!

References:

Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis. Upper Saddle River, NJ: Pearson/Merrill-Prentice Hall.

Far Cry 5 [Software]. (2018). Ubisoft Montreal, Ubisoft Toronto.

Image Credits:

Christian Sawyer, M.Ed., BCBA (original Photography/Screenshots)

Steam (http://www.steam.com)- Far Cry 5 logo

Why I Leave My Political Hat At Home

Opinion piece time. I leave my political hat at home. Or, at least I try to. I leave my belief systems about policy and voting to conversations with friends, Twitter (if I can’t help myself), and the local networking events where politicians from town hang out- that way it’s just contextual. I’m friends with the local school board. I’m on a first-name basis with the mayor of my town. I catch up and chat with the local councilmembers. I have a political life which is just as strong as my professional life. It’s not easy to split the two. More often than not, deliberating on a choice at work does touch on several pieces of what makes my moral compass orient the way it does. I believe in compassion. I am a behavior analyst- it comes from the behaviorist tradition. It is observational, data-driven, research-based. I don’t allow personal opinion to impact what happens with decisions about clients. Thankfully, data does that for me. Is this effective? Yes or no. Why? Well, the data suggests…

I can’t just put up a phase change line on a client’s progress graph because my opinion about a far-reaching political event somehow relates. It’s unfair. It’s my lens getting shifted, which impacts more than me if it’s not reined in. The clients are individuals, deserving of individual care. Outside of that, it also means that I have people working with that client who report to me: RBTs (Registered Behavior Technicians). They worked hard to get that credential. They’ve passed their tests and gone through their supervised hours. They are professionals. Would it be fair for me to walk into work with a political or ideological idea in my head and try to bring it up to them? Of course not. That’s not their job. Their responsibility is to the client, based on the real-world observable responses and data they see and collect. They depend on my unclouded experience and judgment. Even if they were to be outspoken about a political view (which happens), I can’t let that color my opinion of them or how I treat their judgment. It could. It easily could. But that’s my professional line drawn in the sand.

Here’s a common counter I’ve heard: Things are getting bad here. We need to speak out. We need to take a political stance in our personal and professional lives.

If it involves the vaccine pseudoscience? I’ll bite. I can justify that because the evidence is there and it relates to my work.

But here’s the pickle. The people who bring up that counterargument assume something. They assume that just because we share a job title, do the same thing, and care about the same pursuits, we have the same political opinion and I’d be an addition to their circle. Now, when those political views have already been expressed, I can be pretty sure whether I agree or not- and it’s a mixed bag, but surprising to some- I don’t share the expected viewpoints. Were they looking for differing viewpoints? I can’t be sure, but it doesn’t feel like it. Is it worth turning a workplace contentious? Is the workplace the place, and the time, to deal with these issues?

“But Chris, surely you don’t support _____.”
“You work with kids though. How could you ____?”
“If you’re not ____ then you’re ____.”
“_____ did something terrible. You can’t support ____ could you?”

I have nuanced viewpoints. They don’t follow a single ideology, or politician. That potentially makes it even worse. My political stance might not align with anyone who is unipolar in their support or views. The world is a big place. The United States is a big place. Pennsylvania is a big place. There are a lot of different people with valid but different views. In my personal life, I can vote with my conscience. I can even refuse to vote if it aligns with my conscience. I can protest who I want to protest. I can talk to local politicians from both parties. I can talk with local third-party candidates. I’m outspoken on education in these settings and with these people. But they don’t report to me. They aren’t my professional peers either. It’s the context that makes sense to me. If I meet someone from work, off the clock, and they want to talk about these issues, then I would be perfectly fine putting my thoughts out there. Discuss. Change my mind. Sure. I’d have to draw a line somewhere, though. It can’t get heated. Even the small stuff would have to be calm and rational and, most importantly, wouldn’t be evident at work the next day.

In my profession as a Board Certified Behavior Analyst, the board (BACB) that governs how supervisors treat supervisees is pretty clear in many respects. Dual relationships, abuses of power, conflicts of interest- they all have some clear delineation. Politics isn’t mentioned specifically, but imagine a case where an outspoken supervisor did espouse their views and acted on the perceived implications of those views at work. Would that affect the people directly reporting to them? How sure could we be that it wasn’t? I stepped into work on November 9th, 2016. I felt it. Whatever it was, it was there. Putting that into the supervisory relationship is a dangerous game, in my opinion. I’m not saying other people can’t do it, but it’s not something I’d feel comfortable with given the potential to go bitter.

I believe that if something needs changing, it can be done with every opportunity that a citizen has. That goes for maintaining a high held value or traditional ideal. People are free to do both. Bringing that explicitly to the workplace, with a position of influence and supervision responsibility, has risks. I’d much prefer to leave that particular hat at home.

 

References:
Just me.

Photo Credits: http://www.pexels.com

Tabletop Roleplaying with a Behavior Analyst

There is a vast array of opinions on role playing games. The stereotypes about them are prevalent in the popular culture of movies and television shows- mainly depicting socially inept cliches rolling dice and spouting an incomprehensible language of their own. That type of depiction does get laughs, but it’s also unlike anything I’ve seen in reality. I was influenced by those caricatures of role players too. For a long time I did not understand the appeal of piling up in a dark basement, playing a game about pretend people where nothing really mattered and there were so many rules to learn. Where’s the fun in that? It was the wrong outlook, but the right question. There was fun in it. It just took actually trying it out to find it for myself.

Tabletop role playing is just a form of collective storytelling. If you’ve ever seen a fictional movie and been engrossed in it, or had an idea for a novel, those are the same types of precursor behaviors to putting yourself in someone else’s shoes. There’s fun in that. Taking on a different personality for a moment, and seeing a viewpoint unlike our own. If we want to get psychological about it, there might be some aspects of Adlerian play theory, or Bandura’s social learning through vicarious reinforcement, in there. The gist of it is: one person sets the stage of the story and determines the rules of how the game is played, and the players take on roles and navigate that world for a collective goal (most of the time).

If you’re the type of person who likes making materials like token boards, graphs, or craft projects- this is right in your wheelhouse too.

It’s best to start off as a player before deciding to run your own game. You get to understand group dynamics and how collective storytelling works. I was in my 20s when I first started this type of role playing. I started late. I tried a little of everything I could get invited to. Some people like settings with dragons and elves, but that’s not my type of thing exactly. I gravitated towards more realistic settings where interpersonal relationships and psychology were more grounded in humanity. Fictional worlds not too different or fantastic from our own. What I learned quickly is that these games work on Skinnerian principles- many things do, but role playing had a specific feel of reinforcement schedules that was familiar to me. The person who runs the game, sometimes called a referee, sometimes called a DM, sets the scale of which actions are reinforced and which are not.

Sometimes these are fixed reinforcement schedules based on experience: points that are awarded and can be applied to the character’s skills and attributes to make them more proficient, or hardier, for tackling the adventures. A measure of how much the character grows.

Sometimes these reinforcement schedules are variable ratio items: like in-game money, armor for your character, and tools that they can use to tackle different obstacles. A measure of what the character has, or can spend.

The players themselves run into variability by natural consequence; every action they decide to have their character make, if it is a specific skill or difficulty, comes with rolling a die to see if they succeed or fail.

These can be run like any other Skinner box. Compound schedules appear to be the most interesting to players. A fixed ratio that can be expected- perhaps collecting something important for one of the protagonists in a decided location. Or maybe a variable ratio- deciding which foes give up which item or monetary reward for being bested. Some people run their games with combat in mind; every situation is a nail to be beaten down by a well-armed adventurer’s hammer. There’s a thrill to that kind of gameplay, but I find that it isn’t compelling enough for me. I prefer to create stories that have the opportunity for danger, but where the risk of engaging in combat is sparsely reinforcing and has a greater opportunity for punishment. A live by the sword, die by the sword style of reinforcement schedule. There may be rewards to a quick and brutal choice, but a player can lose their character just as easily. I like using social stories in therapy to develop more adaptive skills. I use that same mindset when designing a game too- why resort to violence when you can talk your way out of trouble?

Say there is a dark concrete room, dim lights, seven enemies outnumbering and surrounding a poorly armed player group. If they choose combat- they would most likely lose. It might work. I would allow it. Let the dice roll and see if they succeed. But more often than not, a clever player can decide to roll their die in a very different way; persuasion. I set the mark much lower for that if they have the right pitch. They make a deal even the most brutal enemy couldn’t refuse. The die is rolled- they win. Now there is one enemy less, and one more temporary friend to the adventure. The other enemies aren’t just going to stick to their hostility- maybe they overheard that, maybe they’re swayed too, maybe this causes division in the enemy group. The player group capitalizes. They play bluff rolls. They play intimidation rolls. They play oratory rolls to back their fellow players up with a rousing speech. The tables turn, and now they’re on the side with higher numbers and that piece of the game is won.
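That “lower mark” idea boils down to a simple contingency: the referee sets a different success threshold per approach, and the die decides. A small sketch, with the caveat that the d20 and the specific target numbers here are my own inventions rather than any particular system’s rules:

```python
import random

# Hypothetical difficulty targets; a persuasive pitch gets a lower bar
TARGET = {"combat": 18, "persuasion": 12}

def skill_check(approach, modifier=0, roll=None):
    """Succeed when the d20 roll plus the character's modifier meets
    or beats the referee's target for the chosen approach."""
    if roll is None:
        roll = random.randint(1, 20)  # the variable-ratio part of play
    return roll + modifier >= TARGET[approach]
```

With the same middling roll, persuasion succeeds where combat fails, which is exactly the contingency that steers players toward the talkative solution.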

That situation is harder for players to pull off. It takes more thought. More coordination. Turn taking. A minute or two to step away from the game, collect their ideas, then bring it back. I’m not trying to run a stressful table here- thinking is allowed. They devise a plan that works better than pulling a sword or pulling a trigger. I reinforce. Experience for “defeating” an entire room. They did, after all. “Tangible reinforcers” in game for the characters. They get a bartered deal that they’d never get anywhere else if they’d been violent to these bad guys. Negative reinforcement- they avoid the harm they only discover after persuading their enemies: the enemies had them outmatched in hidden weapons. The players used teamwork, not just some haphazard dice throwing about blood and guts. Group bonus. More experience for everyone. Why not? They played the game their way and they played it smart. These were not just four people sitting around a table making random guesses for a quick and easy win; they came together with ideas that I would never have thought up for the story and won it themselves. They changed the story. Now it’s my turn to adjust my ideas to their new role-played reality.

Now…It doesn’t always play out that way. Variable reinforcement is a necessity in a game of rolling dice. So is variable punishment. Sometimes the dice roll, and there’s a failure. Or worse- a critical failure! Not only is the prize not won, or the intended action not completed; it was actually a detriment to even try. Players have crashed a car. Blown up a usually harmless household item. Set a pacifist character in the game into a fit of rage and spoiled a whole quest line. That bank vault actually had a skunk in it. It happens. It’s something like a gamble, but when the reinforcement flows heavier than the punishment, it’s all worth it. It evens out. It takes a strong story, it takes a coherent direction and narrative, but the players do all the heavy lifting. They think. They plan. They roll the dice. Everyone has a great time.

You get to see patterns in that. Make it more challenging the next time. More engaging. Take the next story point in a way that you’d never have thought of before.

Let’s not forget that even when the game is done, there’s a friendship there now. People got to know each other a little better. They got to see the people they talk to in a different light- more creative, more inventive. Sometimes some playful rivalries come out of it. There’s also a community out there with shared experiences that goes beyond individual play groups and tables. Thousands of other people playing the same game their way. I personally love the community. I have ideas about how to run the game, and run them by others who play the same game but have done it better than me. I adapt. I improve. Sometimes, I even have an idea about how psychosis works in an imaginary world, and reach out to the internet with an interpretation of new rules… and the creator of the game itself (Maximum Mike Pondsmith) replies.

Talk about fun. Talk about reinforcement. I’ve learned never to underestimate what a good table top roleplaying game can be, or what it can bring to an otherwise ordinary afternoon. If you’ve never tried one? It’s never too late. Groups are out there with every age, every time commitment, and every skill level. Give it a shot. You might just like it.

 

Questions? Comments? Likes? Leave them below.

 

Remembering the Pre-Aversive Stimulus

There are some terms and concepts from behavioral psychology’s past that have found themselves buried in time. Tucked away in a journal here or there, but largely forgotten. The older research that tracked rates of behavior following “noxious stimuli”, for example- a phrase we don’t use anymore. Time has also dulled the fascination with respondent conditioning, and with the idea that just two (or more) paired stimuli somewhere along the line could change responding for a lifetime. Powerful principles, which with progress now seem so mundane. Somewhere in there, we have the pre-aversive stimulus.

The pre-aversive stimulus had a great role in early behavioral science animal research to describe responding patterns, but the concept easily applies to humans as well. A pre-aversive stimulus, simply put, is the stimulus that reliably precedes an aversive stimulus. Have you ever heard the term avoidance responding? Some people may call that “escape-maintained behavior” in the field but it is effectively just that- engaging in behavior (responding) to avoid a stimulus that was aversive in the past. Running away. Getting away. Dodging it. What signals that, then? The pre-aversive stimulus. It goes even further. Just through respondent conditioning, the pre-aversive stimulus can take on features of the aversive stimulus and become a conditioned aversive stimulus itself. Then there’s another pre-aversive stimulus that could reliably precede that, and with enough second-order conditioning, you could get messy (over)generalization and find all sorts of related stimuli as aversive. Generalized Anxiety Disorder theoretically works on this same principle. It’s not hard to see how this kind of thing can tangle up a person’s life- whether they are able to realize it and vocalize it or not.

 

Wait! Isn’t a pre-aversive stimulus just a kind of SD?

Let’s not jump to any conclusions and mistake a pre-aversive stimulus for an SD just yet. They have some things in common. They’re both stimuli (but so is almost everything else). They can both be considered antecedent stimuli when we look at the framework of the avoidance responding that sometimes follows them. They signal something. All good comparisons- but here’s a big distinction if you don’t remember: A discriminative stimulus (SD) signals reinforcer availability for a specific type of response.

The pre-aversive stimulus does not necessarily have to.

In some situations, you could conceptualize a case for negatively reinforced behavior, but that might muddy the definitions of both terms being used concurrently. They speak to different phenomena even though they could describe one particular stimulus. The big difference is that the cue for available reinforcement is not necessary for a pre-aversive stimulus. It is simply a stimulus that has commonly preceded something aversive, or bad.

Example: An individual has been stung by a wasp before. Maybe several times if they were unlucky. Prior to the stinging, they heard the buzzing around a wasp nest.

That buzzing could likely become a pre-aversive stimulus, and through respondent conditioning, a conditioned aversive stimulus itself in the future.

In the research, pre-aversive stimuli tended to evoke “anxiety” in respondents- which was quasi-operationalized to the term conditioned emotional response (CER), also called conditioned suppression. That’s an important distinction to keep in mind. Here, a pre-aversive stimulus appears to suppress or decrease responding- not signal reinforcement for a response like an SD would.

Like freezing near a wasp nest when buzzing is heard. The usual comfortable walking pace (response) is suppressed in the presence of the buzzing sound (pre-aversive antecedent stimulus).
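Conditioned suppression even has a conventional measure in the CER literature: a suppression ratio computed from response counts during the pre-aversive stimulus versus an equal period just before it. Here is a small sketch of that calculation; the walking-pace counts are invented for illustration:

```python
def suppression_ratio(during_cs, before_cs):
    """CER suppression ratio: B / (A + B), where B = responses during
    the pre-aversive stimulus and A = responses in an equal period
    just before it. A value of 0.5 means no suppression; 0.0 means
    responding stopped completely while the stimulus was present."""
    total = during_cs + before_cs
    if total == 0:
        return 0.5  # no responding in either period: no measurable change
    return during_cs / total

# Hypothetical counts: 40 strides in the minute before the buzzing,
# 0 strides during it- the walker froze.
ratio = suppression_ratio(during_cs=0, before_cs=40)
```

The ratio puts the distinction from the SD in numerical terms: the pre-aversive stimulus shows up as a drop in ongoing responding, not as a signal that a particular response will pay off.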

 

Anxiety! Conditioned Emotional Responses! Conditioned Suppression!

Respondent conditioning research has some fascinating lessons that are just as relevant today as they were decades ago. Sometimes, in the day-to-day practice of behavior analysis, things get oversimplified for the sake of ease of practice.

Behavior goes up? Reinforcement is at work.

Behavior goes down? Punishment is at work.

To a degree, those definitions work. Even with our wasp nest example earlier, those initial stings could absolutely punish some future walking behavior. But we can’t forget about the little things- the little preceding stimuli that have so much to do with the actual phenomenon. The buzzing didn’t punish the walking. Don’t forget the antecedents. Don’t forget the respondent conditioning. Taking the time to examine just one more step explains the process so much more clearly.

What conditioned pre-aversive stimuli appear to evoke conditioned emotional responses in your day to day life? Do you see conditioned suppression of behavior, as a result, that would have otherwise been there? What pre-aversive stimuli could be “tagging on” to the effects of an aversive stimulus you’re aware of? Does it evoke any avoidance behavior?

Too simple? Laurence Miller’s (1969) work on compounding pre-aversive stimuli might whet your broader research appetite. Citation below.

Thoughts? Comment! Question! Like!

 

References:

Coleman, D. A., Hemmes, N. S., & Brown, B. L. (1986). Relative durations of conditioned stimulus and intertrial interval in conditioned suppression. Journal of the Experimental Analysis of Behavior, 46(1), 51-66. doi:10.1901/jeab.1986.46-51
Cooper, J. O., Heron, T. E., & Heward, W. L. (2018). Applied behavior analysis. Pearson.
Miller, L. (1969). Compounding of pre-aversive stimuli. Journal of the Experimental Analysis of Behavior, 12(2), 293-299. doi:10.1901/jeab.1969.12-293
Ormrod, J. E. (2016). Human learning. Harlow, Essex, England: Pearson.
Image Credits:
http://www.pexels.com, photographer Hubert Mousseigne