Performance Data Collection For All Professionals

No matter what you do, you’ll often find yourself in a position to teach a skill or train someone in a proficiency you have, sometimes many times over. One of the most essential parts of my line of work is data collection on human behavior and performance. Over the years, hundreds of professionals and paraprofessionals have watched how behavior-analytic therapy and training are delivered using daily data collection and measurement, and I often get asked, “Do you have a spare sheet I could use?” Workshops, after-school programs, camps, job training events, painting classes, apprenticeships, exam prep, clinical trainings, driving courses, and other skill-based events have all given me opportunities to show what data collection can do, and how it can be applied to any profession where one person needs to learn a new skill and their performance needs to be evaluated in a well-defined and stable way. If this is something that you do, or have an interest in doing, I have just the form for you. In just 15-30 minutes of reading and reviewing the instructions below, I aim to make sure you learn and can use the following tools from the world of applied behavior analysis:

  • How to track data on performance for a single day and across days.
  • What a “Cold Probe” is, and how you can use it to configure and adjust your training plan.
  • What “Discrete Trials” means, and how you can use them to work on a single or multiple skills in a single training session and deliver effective feedback for performance improvement.
  • How simple and effective percentage data is for performance.
  • How to practice a trained skill repeatedly without becoming repetitive.
  • When to deliver reinforcement (social praise) for success, and when to deliver prompts (correction).
  • How to compare today’s performance of your client to their future or past performance and use visual analysis of the data to make better decisions.
  • What “behavior coding” is and why defining our target performance goals matters.
  • How to do an analysis of component skills and break your trained skill down into pieces.

I am attaching the link to this performance data collection tool below. You can either print it out and fill it in by hand, or use it digitally if you carry a tablet or similar device. This PDF has been formatted with text fields for easy typing, a spot to import your logo into the heading with no fuss, and data sections that can be cleanly exported into the spreadsheet software of your choice. There is some very advanced software out there that can do more than this. This is not the be-all and end-all, and if linear regressions or reversal designs are your thing, it might not check all of your boxes. I suggest subscription software for the research-level analysis you might use in a human operant lab, but if you want something practical, easy to use, and completely free of charge, by all means enjoy the form below.

Instructions:

Let’s talk about the top portion of the form for a moment where we have three fields:

  • Name:
  • Date:
  • Instructions:

When we are training an individual, or even a small group of individuals, we need a way to separate out performance data so that we do not get confused when it comes time to evaluate and analyze it. Each individual stays separate from the others, and each day’s performance stays distinct from the rest. The “Name” field here applies to the individual you are training, not the trainer. We will also need the date of the training so that we can review our data in order, and instructions if we have multiple trainers performing the same training across different times or dates. Every profession is different and every trainee is going to require different skills, so I cannot describe every form of instruction you might want to use here. I would suggest something concise and to the point. Your co-trainers on the topic would likely understand the skills and only need an instructive structure for delivering the training. For example, if we had a client we wanted to train to high proficiency in jump roping for their schoolyard double-dutch competition, we might want our trainers to know what to have ready.

Cold Probes:

In behavior-analytic terminology, a “cold probe” is a test of a skill without prompting or incentives, used to see where the client’s performance is without assistance. Simply put, at the start of your training or teaching session, you ask them to perform the skill and see how they do. Can they do it completely independently at your established level of competence? If so, you might mark a “Y” for “Yes”. If not, you might mark an “N” for “No”, and that gives you an idea of where that day’s training targets might focus. Cold probes are useful when you have a client who has mastered something, or is perhaps coming in for the first time, and you want to see if they can produce that specific target of performance on demand. A cold probe is not a final answer on whether a person has a skill in their repertoire, but it does give you a sample of their unaided performance to which you can apply your training judgement: what might they need taught, practiced, or addressed with a long-term strategy for performance improvement? Cold probes are tools, not something to make or break a training plan on. Performance can fluctuate. Use them to determine a focus for that day, but keep in mind that focus might only be a part of your overall goal for the client. You can also use cold probes to remove a planned part of that day’s training that might not be worth the extra time. If our imaginary jump-roping client can perform their three alternate foot step jumps without aid, perhaps we gear the day’s training topics toward something a little more advanced to make the best use of our time.
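As a rough sketch of how a day’s cold-probe results can drive that decision, here is a small Python example. The skill names and Y/N results are invented for illustration; they are not part of the form itself:

```python
# Hypothetical cold-probe results for one session with our jump-roping client.
# "Y" = performed independently to criterion, "N" = not yet independent.
cold_probe = {
    "Basic bounce jump": "Y",
    "Alternate foot step jump": "Y",
    "Heels up during jump": "N",
    "Backward jump": "N",
}

# Skills probed "N" become candidates for today's training focus;
# skills probed "Y" can be advanced or set aside to save session time.
todays_focus = [skill for skill, result in cold_probe.items() if result == "N"]
print(todays_focus)  # ['Heels up during jump', 'Backward jump']
```

The same idea works just as well on paper: circle the “N” rows and plan the session around them.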

A Component Skills Analysis and Discrete Trial Training (DTT):

We can use our cold probe data to figure out which skills to target for improvement. Often, when we come across a difficulty in competency with a trainee, the skill is made up of smaller, more basic skills, or has a precursor skill that needs to be strengthened before they can move on to the original target skill. Discrete Trial Training (DTT) is a process by which a complex skill is broken down into smaller component behaviors, which are taught in order to meet the original target. They are “discrete”, or singular, component skills set up as distinct training opportunities, where we can follow up a demonstration of a skill with either praise/reinforcement when performed correctly, or prompting/feedback when there are errors in need of our assistance. Each practice opportunity is a new chance to try again and build towards greater success. The number of trials you use is not set in stone, but for this training sheet I have provided five opportunities for each component skill. Let’s talk about our example jump-roper. What would happen if our trainee did not perform their alternate foot jump to our criterion of success? Take a look at the sample data below.

In this example we’ve had our trainee demonstrate the skill five times, with each component skill performed an equal number of times. What might this data suggest? Is our trainee having difficulty in all areas? Probably not. In this case we see that they are able to lift their left foot into a jump perfectly for all tracked trials, but when it comes to the right foot, and the heels being up during jump roping, we see errors. A strength of these trials is that you can compare performance in one component behavior against another. Look at the data above. You will see that the right foot lifting and the heels up components share a trend of errors. That could lead us as trainers to suspect a relation between the two, and our training and corrective procedures can be tailored at this point to help the trainee improve. With this style of data collection we can pinpoint exactly where errors occur, which tailors our training time to the need and increases our efficiency.
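As a hedged sketch of that comparison, five-trial data per component could be tallied like this in Python. The trial outcomes below are invented to mirror the pattern described (left foot clean, right foot and heels sharing errors); they are not the sample sheet’s actual data:

```python
# Invented five-trial results per component skill: True = success, False = error.
trials = {
    "Left Foot Up (Jump)":  [True, True, True, True, True],
    "Right Foot Up (Jump)": [True, False, False, True, False],
    "Heels Up":             [True, False, False, True, False],
}

# Tally errors per component to pinpoint where training time should go.
errors = {skill: results.count(False) for skill, results in trials.items()}
print(errors)
# {'Left Foot Up (Jump)': 0, 'Right Foot Up (Jump)': 3, 'Heels Up': 3}

# Components whose trial-by-trial patterns match may share a common cause
# worth investigating together, as with the right foot and heels here.
related = trials["Right Foot Up (Jump)"] == trials["Heels Up"]
print(related)  # True
```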

Do not forget about reinforcement in these stages. Reinforcement is what increases rates of the target behavior that it follows. We praise and reward as soon as a success, or approximation to success (improvement) is seen. By praising and rewarding what goes right, we can keep that level of performance high. We can use reinforcement following prompts to maintain a level of engagement and improvement. Do not simply focus on the errors alone. Target the successes and reinforce them. A solid training procedure is heavy on reinforcement.

Percentage Data and Analysis:

In our trial data above we use percentage data as a form of measuring performance and success. In this scenario, using five trials means that each trial counts as a distinct 20% of the final score. When we measure performance, we want a criterion by which we consider the skill mastered. Not every skill can realistically be performed at 100% every single time. In most cases, keeping to 80-90% as a goal is not a bad benchmark to have in mind. It is well above blind luck, and with proficiency at those levels it is often easier to discover which environmental stimuli correlate with higher performance than others. Does our jump-roping trainee do better during our individual training than they do in front of peer crowds on the playground? A difference of 20% or more, seen as a pattern over time, might tell us exactly that.
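A minimal sketch of that scoring, assuming five trials and an illustrative 80% mastery criterion (pick the criterion that suits your own skill and profession):

```python
def percent_correct(results):
    """Score a block of trials as a percentage; with 5 trials, each is worth 20%."""
    return 100 * results.count(True) / len(results)

MASTERY_CRITERION = 80  # percent; a common benchmark, not a universal rule

session = [True, True, False, True, True]  # 4 of 5 trials correct
score = percent_correct(session)
print(score)                       # 80.0
print(score >= MASTERY_CRITERION)  # True
```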

The sheet above is structured so that you can export data from the probe and trial sections into a spreadsheet, where you can use a visual analysis (graph) of your choice. I, like many professionals, enjoy line graphs showing percentage of performance by date. By combining the results of multiple daily data sheets, you can create graphs and perform a visual analysis of progress in a way that is cleaner than raw data. By pairing the date of each data sheet with its final percentage score of success, you can see something like this.
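One hedged sketch of that export step: the dates and scores below are invented, and the CSV this writes can be opened in any spreadsheet or graphing tool to draw the line graph of percentage by date described above:

```python
import csv

# Hypothetical final percentage scores pulled from several daily data sheets.
daily_scores = [
    ("2021-11-01", 60.0),
    ("2021-11-02", 70.0),
    ("2021-11-03", 65.0),
    ("2021-11-04", 85.0),
]

# Write date/percentage pairs; a line graph of this file shows the trend
# across sessions far more cleanly than raw trial-by-trial data.
with open("performance.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["date", "percent_correct"])
    writer.writerows(daily_scores)
```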

Reviewing performance data with your client (or their caretaker) is key. Visual data presentations like the one above can be a tool in your toolbelt for making large trends easier to understand. Line graphs are an easy way to show trends and to break down where their performance was compared to where it is now. Even if you see a negative trend, this can be a great tool to discuss what might be going on outside of the training and analysis that might be a factor. You can even learn what is impacting the graph but might be missing from the training regimen. No data is ever wasted. It is all a resource.

Behavior Coding:

The final sections of the sheet are spots for what we in the field of behavior analysis, and research in general, call behavior coding. Behavior coding is a process by which you operationally define your target performance skills in observable and measurable ways. When you are working with a team, or with multiple trainers, your success can depend on whether everyone is measuring the exact same things in the exact same way. We want as much inter-observer agreement as possible. Coding makes that possible.

Let’s take an example from our jump-roping client above. One of the component skills we chose was “Left Foot Up (Jump)”. That can be confusing without further explanation. It could use an operationally defined and coded skill. We can use our behavior coding section to write simple, quick definitions so that everyone measuring that skill in the future knows exactly what it looks like and what we consider success. The better our coding, the more sensitive our data. We want a middle ground: enough detail to be clear without so much wording that it obscures the definition. There is a difference between precision and a code that makes tracking impractical. Our main goals are a definition that is observable, letting anyone watching have the same opportunity to track the skill exactly as we would, and measurable, meaning that our behavior coding of the target skill fits into the data tracking format.

For example: “Left Foot Up (Jump)”- The left foot is lifted up completely from the ground during a jump with enough space for the jump-rope to clear it underneath.

You may increase the precision of your measurement to match the distinct needs of the skill, but the goal is to be sure that everyone tracking data on that skill is using the same definition. This one above is what I would consider low to medium in precision, but will do for what we need it for. Match your definitions and coded behavior to your specific profession and needs, but be sure it is not vague or subjectively unobservable (“a spirited and joyful jump” could mean just about anything to anyone). If you need to use what some would consider subjective language, try coding for that as well (“Joyful” is defined as smiling during a jump, etc.).

Keeping a Running List of Component Skills:

Component skills which become mastered, or which remain ongoing targets for future weeks, can be listed on the second page as well. This helps us record how we broke down our probed larger skills into their discrete and distinct components. Keeping a list of what we have worked on, and what we have yet to work on, can give us better ideas for trials to run in the next training opportunity, a log of what was mastered or completed in a previous training, and a section for note taking on the component skills that fits the needs of your professional training. If you use the component skill section to determine future training targets, I would suggest that less is more. Training ten skills within an hour or two makes sense, but cramming tens of skills into the same time frame might lead to weaker mastery across the entire list. Focus on the most important component skills that make up the larger cusp skills. You may find success in picking your particular targets for each training session or week.

Further Training:

I hope you enjoyed the material here and the review. It would be impossible for me to include every potential usage of these sheets, and the more complex data analysis processes you might want to use them for, but if you have need of further training, consultation, or simply questions, you may reach me on this website or email at csawyer@behavioralinquiry.com. I would be happy to help you with further training on this data sheet, how to adapt and construct your own, and any further interest you might have in performance tracking or behavior analysis.

Comments? Questions? Leave them below.

Getting Back Up After Failure

Failure is a tough topic to bring up, but a necessary one. When we are in it, it’s all we can think about. When we are past it, we often do not want any further reminders of it. Failure, behaviorally and psychologically, is a variable in everything we do, and it factors into every future strategy we use. It is a part of our past that defines how we interact with the future. In a previous writing I discussed “Overcoming the Fear of Failure”, but this one is about what to do when failure happens to us. How do we move on? How do we grow from it? How do we set our future expectancies to do better? To what do we attribute failure? All of these questions and more are necessary to making each failure a stepping stone to a future success, or else we might find ourselves in a loop generating ever worse strategies. Instead, we need to learn to get back up. Let’s talk now about some of the research we have on the topic and how we might navigate failure and find motivation in it.

Mastery Orientation vs. Learned Helplessness

When it comes to deriving motivation from failures, both big and small, the strategies we develop in childhood have a great deal of influence on our current behavior. You may have heard the term “learned helplessness” before, which describes a pattern of low motivation and output after repeated failures. The individual receives so little reinforcement following their actions that they simply stop trying. Diener and Dweck (1978) popularized these concepts in a study that split youths into two groups based on patterns and strategies the researchers observed without any teaching involved. They found that some children, when faced with repeated challenges and varying degrees of failure, would consistently give up and reduce responding, while others would re-assess and modify their responding based on the feedback from their failure. The researchers were very interested in the cognitive strategies both groups displayed, all without any coaching, and determined that even at a very young age there were clear distinctions between the two types in their ideas about their locus of control. A locus of control is a belief system people use to determine whether they control outcomes, or whether outside forces do. A person with an internal locus of control sees the results of their actions as largely determined by their own actions and future control. A person with an external locus of control sees the results of their actions as largely determined by an outside force or their environment. Now, there is a part of this study that some consider a little unfair. No matter what answer the children gave to their respective stimuli at the start, they were told they were incorrect. How they responded afterwards largely correlated with how they viewed their locus of control.

Mastery oriented individuals appeared to generally attribute their failures to a lack of effort or something they’d missed. Even at that age, their first reaction focused on pivoting and reassessing.

Learned helpless individuals tended to attribute the failures to the situation as largely beyond their control (in this case, without knowing it, they were technically right as far as the experiment was concerned).

So what happened?

Mastery oriented individuals kept trying, kept changing their responses based on feedback, and largely kept at the task longer than the other individuals. They showed no decline and became more sophisticated in their strategy use (which was eventually validated).

Learned helpless individuals tended to show a progressive decline in the use of good problem-solving strategies and began to include less sophisticated, poorer ones, strategies that would be even less likely to work.

This model of attribution is still used to this day, but it has a few caveats. Unlike this study, in the real world people are not always one type or the other. Many complex problems require using multiple loci of control, but also understanding whether the factors we evaluate and learn from are stable (long term) or unstable (temporary). The stability of an attribution is its relative permanence as a factor. If you know you are good at jumping rope, meaning you have high ability, you have a stable factor to weigh in your next success. But if you attribute jumping rope to how much effort your legs can put out, then the source of success is unstable: effort can vary and has to be renewed on each occasion or else it disappears. We’ll talk a little more about how effort and ability work in a second. The important part is that when it comes to evaluating our part in the grand scheme, an internal locus of control tends to help us perform better. Let’s look at some examples.

It rained today and we got all wet. We hate that. What if it rains tomorrow and we don’t want to be rained on? Would a belief system around an internal locus of control make sense if we focus purely on ourselves and ignore the sky? Not very well. No matter how many strategies we might attempt based on our own feedback, we are unlikely to change the weather. On the other hand, a person using an internal locus of control might decide to travel away from the storm, bring an umbrella, or wear a raincoat, which has some functionality for them, though the rain still happens where they once were. An internal locus of control works best when we account for our solutions without ignoring the immutable environmental factors.

What about using an external locus of control on task performance? Perhaps we’d like to pick up three items off of our room’s floor within ten minutes. We might begin to generate all the reasons why we cannot: how far the floor is from our fingers, how many other factors stand between the items and the trash can, leading to very low performance on this task within the time frame. It’s the room that’s messy. It’s been messy for days now. So messy. So much mess too. What if we just pick up one thing and then go back to bed? It’s still messy. Might as well not. Then we’ve effectively wasted time generating non-functional thoughts (poor strategy), and nothing was done (poor outcome). That isn’t helpful either.

Generally speaking, when it comes to our own behavior, within our own repertoires of ability, it is wiser to use an internal locus of control to conceptualize our potential impact on tasks and problems. When there are larger systems and unavoidable outcomes from the outside, it does not hurt to consider what lies in an external locus of control. We, as individuals, cannot control everything. But, as we see above, when faced with continual failure feedback, utilizing an internal locus of control early on can help us come up with strategies which mitigate the external circumstances and perhaps land us in a better spot. There is no harm in generating increasingly sophisticated strategies to put ourselves into better conditions and allow the external factors outside of our control to be managed from ever increasing positions of control and strategy on our part. Sometimes when failure comes, it comes after we thought we had a great strategy focusing on our own improvement and it just did not work.

How do we do it? How do we take back some semblance of control when the waves of failures keep coming?

Consider that the concepts of a locus of control, and of how our actions impact our goals, are called attributions, and they have an effect on our future behavior and how we respond to challenges. When we attribute too much to external causes, it can lead us to decrease our attempts. When we attribute too much to internal causes, it can sometimes lead to more sophisticated problem solving, but it can also blind us to factors that might be outside of our control and narrow our perspective too much.

Mediating these attributions, not just in the moment of the first failure we come across but in those that follow, can help us create a better perspective on our situation. We can also rely on our social circle, relaying our experiences, to see if others can help us spot what we might have missed and help our future strategies find better success.

  • Evaluate your current attribution and locus of control of the problem.
  • What are some ways we can evaluate our own pattern of responding and improve it? (Internal Locus)
  • What are some environmental factors that impacted our failure that our behavior did not change? (External Locus)
  • How do we refine our strategy so that our next attempt can put us in a better position against those environmental variables if they happen again? Can we mitigate what held us back?

Purposive Behaviorism and Re-Training our Attributions

As individuals we can create systems that help us maintain a level of reinforcement to offset failure, and as social creatures we can help create an environment of positive interactions that helps us both realize our achievable goals and find strategies to access them. Thankfully, we have concepts and theories at our disposal to explain the hows and whys. Let’s talk about Purposive Behaviorism and how we can re-train our attributions.

If you’ve read my other works on this site, behaviorism itself is familiar to you. Purposive Behaviorism goes beyond the more mechanistic systems of reinforcement and punishment, stimulus and response, that you see in some of the more traditional theories. Yes, reinforcement is important to keep us moving forward. Yes, punishment (failure) can knock us back. But we are human, complex beings, and a good analysis always takes that into account. From a purposive behaviorism standpoint, we use goals and work hard to achieve them. That is an intrinsic part of what it is to be human. In his theories, Edward Tolman developed the term “cognitive map” to describe how we do that. Our cognitive map is how we envision our path to our goal. We all have beliefs, unspoken ones, that a specific action on our part will get us closer to an intended consequence or goal. Let’s call these expectancies. They cover both the behavior we intend to perform and the goal we intend to achieve with it. It’s a roadmap. Tolman also believed that we learn from our successes and failures largely through a latent process. There is an automaticity to reinforcement that helps us pick up what has worked and set aside what has not, and integrating more cognitive and conscious strategies with what we have learned latently is the best way to move forward. Keep in mind not just what you can consciously recall, but also what might have been learned latently from the experience.

When we map out our actions to meet a goal, we often give ourselves a time frame (hopefully realistic) in which to reach them. By giving our goals, or conceptual map of how we achieve them, a context in time we help judge how to act and what to expect. Generally speaking, acting now is always better than acting later unless you have a more advantageous use of time further along to position towards your goal. With our expectancies in mind, we have our actions, our goals, and our time frame. As adults, we also learn to discriminate effort from ability. Effort can be defined as the amount of energy or resources we must expend to progress towards the goal, while ability may be defined by our existing proficiency or skills that can achieve it. In most situations it is a combination of both effort and ability that help us reach complex goals.

Let’s reintroduce failure here. Let’s say that we mapped out our goal, we made our attempt to the best of our effort and ability, and we find that we simply did not meet success. Perhaps we even see repeated failure. It can be easy to get disheartened, and even travel down that path of learned helplessness, but we should do everything we can to avoid it. Let’s imagine that we did our best to conceptualize our locus/loci of control, and they were as accurate as they could be, but we still missed the mark. We tried, we failed. Let’s say our expectancy, our goal and plan to reach it, is still very important and we do not want to change the goal. How do we use our time most effectively now to get back up and try again? We need to re-train ourselves, and that means re-training our attributions.

Do we have the ability to achieve this next step in our goal? What did our failure show us?

Did we apply the necessary effort to achieve the next step in our goal? What did our failure show us?

Were our attributions on stability based around factors that were stable (ability) or unstable (effort)?

The combination of evaluating our ability and effort, and attributing our failures and successes along these variables, is key to knowing when something can be achieved alone, when further training, resources, or additional help from others is needed, and how to adjust our plans going forward to include the more sophisticated, better evaluated plans that came from the experience. Failure here is a teacher. It’s not always easy to maintain effort after a failed attempt even if the ability was there. To retrain ourselves to analyze our attributions of the failure correctly, we must take some time to evaluate the factors. Use the tool below from Dweck (2000), whom we saw in that earlier study, to take a particular situation you might have been in in the past and see where the attributions fall.

Plug some of your attributions in the grid above and see where they fall. Do you think anyone else evaluating your situation might have a different series of attributions for it?

We tend to get the best results out of ourselves and our planning by attributing a reasonable portion of our previous successes to internal and stable causes. What went right in the situation, within our ability, that we can consistently do again, even if there was an ultimate failure? Example: I might not have won the race, but this was close to my best personal time yet.

When analyzing our failures, we can go wrong by attributing things entirely to unstable and external causes: things that we see as completely out of our control, leaving nothing for us to work on and grow from. Example: I was going to go in to work today but then the roads were so busy and you know I can’t drive on busy roads…

The take away:

  • Turning failures into successes takes analysis of what happened.
  • Sometimes we analyze the situation well and can think of some improvements for next time focusing on our internal factors.
    • “Stable Dimension” attributions help us reflect on our ability and how to improve it.
    • “Unstable Dimension” attributions help us reflect on our level of effort and if we can improve it next time.
  • If we see many attributions leaning in the unstable or external direction, maybe it could take an extra pair of eyes to help us get a new perspective.
    • Reaching out to a trusted friend, or experienced advisor on the topic.
    • Re-evaluating the attribution by considering internal factors.
  • Learned helplessness can arise from attributing too much to external factors while avoiding evaluation of internal factors, leading to poor problem solving and less sophisticated goal-directed behavior.

Getting back up after failure requires analysis of our actions, re-training our attributions to avoid learned helplessness, and consistent effort going forward.

What are some attributions you’ve thought about recently? Have the behaviors you’ve used to reach those goals been effective? Have they been ineffective? How has your belief system on the locus of control impacted the process? Have you utilized others to help you with alternate perspectives?

Comments? Questions? Feedback? Leave them below.

References:

Cooper, J. O., Heron, T. E., & Heward, W. L. (1987). Applied Behavior Analysis. Merrill.

Edward Chace Tolman. (2015). Introduction to Theories of Learning, 302–326. https://doi.org/10.4324/9781315664965-16

Hoose, N. A.-V. (n.d.). Educational psychology. Lumen. Retrieved November 11, 2021, from https://courses.lumenlearning.com/edpsy/chapter/attribution-theory/.

Molden, D. C., & Dweck, C. S. (2000). Meaning and motivation. Intrinsic and Extrinsic Motivation, 131–159. https://doi.org/10.1016/b978-012619070-0/50028-3

Schunk, D. H., Meece, J. L., & Pintrich, P. R. (2014). Motivation in education: Theory, research, and applications. Pearson Education Ltd.

Tolman, E. C. (1967). Purposive behavior in animals and men. Irvington.

Image Citations:

Title image: Getty Images/iStockphoto
Attribution Grid: Christian Sawyer, M.Ed., BCBA

Philosophic Doubt- When Scientific Inquiry Matters

There are important assumptions, or attitudes of science, which ground scientific study across all disciplines: Determinism, Empiricism, Experimentation, Replication, Parsimony, and Philosophic Doubt. The last holds a key role in how we deal with the information we gain from science and what we do with it in the future. Philosophic Doubt is the attitude of science which encourages us to continuously question and doubt the information, rules, and facts that govern our interpretation and understanding of the world (universe, etc.). Philosophic Doubt has practitioners of science question the underpinnings of their beliefs, and continually do so, so that their understanding rests on consistently verifiable information. Philosophic Doubt cuts both ways: it can have a scientist test the truthfulness of what others regard as fact, but it means they must also take on the same level of scrutiny and skepticism in their own work. To some, Philosophic Doubt is a gift that has helped them expand on their ideas and shape them beyond the initial first experimental steps. To others, Philosophic Doubt is a detrimental form of skepticism clawing at information or beliefs that they hold dear. These views are not new; in fact, we can find traces of this disagreement going back to the 19th century. Here we will explore the assumption of Philosophic Doubt, including proponents and detractors both old and new.

Why do we need Philosophic Doubt anyway?

Philosophic Doubt is important to science because it shapes how scientific work progresses. It has scientists test their own assumptions, hypotheses, and underlying beliefs, even the ones they hold precious, against replicable evidence and new findings. Philosophic Doubt drives experimentation, precedes replication, and underlies the empirical drive for seeking evidence. Without philosophic doubt, science can go wrong. A hypothesis could be formed on inaccurate information and never be retested. Subjective experience could entrench anecdotes in a study as a broader experience than they are. A scientist could start with what they want to find and cherry-pick only what fits their assumption. These are the risks of not taking Philosophic Doubt into account. Sometimes it boils down to the scientist wanting to be right, versus keeping an open mind that they might not be. Holding the assumption that there is a benefit to questioning findings or previously accepted beliefs is not a slight against past experience or belief, but rather a better way of interpreting future information that may challenge it. Questioning is a part of science, but not everyone thought so.

“A Defence of Philosophic Doubt”

Arthur James Balfour, a 19th-century philosopher, debater, former Prime Minister, and scientist, took this topic head on in “A Defence of Philosophic Doubt”. Unlike today, opponents of Philosophic Doubt at the time were more interested in comparing empirically grounded scientific beliefs to a more open metaphysical series of alternatives; that is, they compared science to non-scientific belief systems as accounts of the truth of reality. When it came to psychology, there were idealists, realists, and stoics at each other's throats over concepts that could not be observed or proven. As you might already be able to see, holding metaphysical constructs up against an assumption that demands we continually question our own arguments and points makes metaphysical assertions all the harder to make. Scientific claims, however, withstand Philosophic Doubt a little more easily:

Under common conditions, water freezes at 32 degrees Fahrenheit

Employing Philosophic Doubt, we can continually circle back to this assertion and test it again and again. Pragmatically, there comes a point where we only question such basic and well-founded particulars when we have reason to do so, but the doubt is always present: sometimes for precision, sometimes to be sure we are building on the knowledge correctly, and sometimes to serve the replication and experimentation that grow science. Balfour was a strong proponent of the natural sciences, and of this kind of questioning. Science founded on observation and experimentation was truly important to him. Keep in mind, the 19th century was shaped by scientific discovery at a pace never before seen. Balfour kept an even head about this, and believed in the assumptions of science as the path to understanding the natural world. Propositions which stated laws, or which stated facts, had to be built on concrete science and not just personal belief or anecdote. Some of his points we would take as obvious today. For example, when using comparative probability, would we run an experiment or trial just once, or twice? Multiple times? If we ran it just once, it wouldn't be comparative probability at all; if we ran it twice and accepted that as the final answer, we would miss out on further replication and experimentation on the subject. The curiosity that Philosophic Doubt embodies would keep the experimentation and replication going. Without Philosophic Doubt, we fall into the trap of never questioning initial assumptions or findings.

Another interesting thing about Balfour's work is that it came at a time when there was a great deal of belief in a mechanical universe that followed strict Newtonian laws. At the time, this was weighed against more metaphysical alternatives. Balfour cautioned everyone to continually use philosophic doubt and to question both belief systems, even though the “mechanical universe” was winning by a landslide at the time. If we stretch Balfour's points into the future, we might see how he would have found some justification in later developments in physics, quantum mechanics for example, where the Newtonian mechanical universe once seen as sufficient to explain everything falls a little short. Without that testing of the original tenets of physics, the use of Philosophic Doubt, we might not be where we are now. The analysis of Balfour's work could go on for entire chapters, but I would like to top it off with an excerpt on the evolution of beliefs, and the reluctance to test our own personal beliefs:

“If any result of ‘observation and experiment’ is certain, this one is so- that many erroneous beliefs have existed, and do exist in the world; so that whatever causes there may be in operation by which true beliefs are promoted, they must be either limited in their operation, or be counteracted by other causes of an opposite tendency. Have we then any reason to suppose that fundamental beliefs are specially subject to these truth-producing influences, or specially exempt from causes of error? This question, I apprehend, must be answered in the negative. At first sight, indeed, it would seem as if those beliefs were specially protected from error which are the results of legitimate reasoning. But legitimate reasoning is only a protection against error if it proceeds from true premises, and it is clear that this particular protection the premises of all reasoning never can possess. Have they, then, any other? Except the tendency above mentioned, I must confess myself unable to see that they have; so that our position is this- from certain ultimate beliefs we infer that an order of things exists by which all belief, and therefore all ultimate beliefs, are produced, but according to which any particular ultimate belief must be doubtful. Now this is a position which is self-destructive.

The difficulty only arises, it may be observed, when we are considering our own beliefs. If I am considering the beliefs of some other person, there is no reason why I should regard them as anything but the result of his time and circumstances.” -Arthur James Balfour, “A Defence of Philosophic Doubt” (1879).

Back to Basics- Science and Philosophic Doubt

In “Applied Behavior Analysis”, Cooper, Heron, and Heward begin their first chapter with the basics of what science is, specifically behavioral science, and the assumptions and attitudes of science, including Philosophic Doubt. Cooper et al. treat these concepts as foundational to science as a whole and relate their importance to psychology and behavioral science. In their words:

“The attitude of philosophic doubt requires the scientist to continually question the truthfulness of what is regarded as fact. Scientific knowledge must always be viewed as tentative. Scientists must constantly be willing to set aside their most cherished beliefs and findings and replace them with the knowledge derived from new discoveries.

Good scientists maintain a healthy level of skepticism. Although being skeptical of others’ research may be easy, a more difficult but critical characteristic of scientists is that they remain open to the possibility, as well as look for evidence, that their own findings and expectations are wrong.” -Cooper, Heron, & Heward, “Applied Behavior Analysis” (2017).

Bonus! B. F. Skinner
“Regard no practice as immutable. Change and be ready to change again. Accept no eternal verity. Experiment.” -B. F. Skinner, 1979

The sentiment behind Philosophic Doubt and science is one of openness and humility. Not only is the scientific work we read subject to doubt, but our own as well. The latter is the most difficult part: constantly challenging our own beliefs, our most cherished propositions and reasoning. To some, this expands the horizon of future knowledge infinitely; to others, it is a hard trail to follow and no easy task. In either case, I hope this has brought out the importance of Philosophic Doubt, and how it ties in with the other assumptions of science as a challenging but inseparable part of the process.

Comments? Thoughts? Likes? Questions? Leave them below.

References:

1. Balfour, A. J. (1921). A defence of philosophic doubt: Being an essay on the foundations of belief. London: Hodder & Stoughton.

2. Cooper, J. O., Heron, T. E., & Heward, W. L. (2017). Applied behavior analysis. Hoboken, NJ: Pearson.

3. Skinner, B. F. (1953). Science and human behavior. New York: Macmillan.

Token Economies: What Behavioral Science and Blockchain Technology Have In Common

“Token Economies”: two words springing up at Blockchain and Cryptocurrency summits and conferences with increasing regularity. Token economies have been used by behavioral scientists and practitioners for decades, but recently the term has taken off in the field of Blockchain and Cryptocurrency technologies. Both applications use “Token Economy” to mean much the same thing; at technology conferences and summits, it is the original behavioral psychology definition that describes the concept. The tech field has taken the original token economy concept and expanded it to apply to what some might call the future of commerce and currency. Exciting stuff. Here, I will break down the basic concepts of what a Token Economy is, how behavioral scientists and analysts use them, and the new application of the idea by Blockchain and Cryptocurrency developers.


The Token Economy

Let’s break it all down. What is a token economy? A token economy is a system where tokens, or symbols, serve as conditioned reinforcers which can be traded in for a variety of other reinforcers later. It is not a bartering or prize system, where objects, access, or services are given directly following a target behavior, but a system of conditioned stimuli (tokens), without necessarily any intrinsic value, that are agreed upon as currency to exchange for or buy another reinforcing item. A common example most of us are used to is money. Paper money, specifically, can be considered part of a token economy in that it is “traded in” toward some terminal reinforcing stimulus (or “backup reinforcer”, as it is called in behavior analysis). The paper money is a conditioned reinforcer because it has no necessary intrinsic value, but it has conditioned value for what it can eventually be used for within the token economy.

This was taken up originally by behavioral researchers in the 1960s as a form of contingency management for reinforcing “target behaviors”, or prosocial learning, in therapy situations. Reinforcers are important psychologically because, by definition, reinforcers change the rates of the behavior they follow. They can quickly help teach life-changing skills, or alternatives to destructive or undesirable behavior. But reinforcers can be tricky too. People can become bored or satiated with tangible rewards, such as food; within a token economy, however, reinforcement can be delivered in the form of tokens, allowing a later exchange or choice among any number of possibilities desirable to that individual. By pairing these tokens with access to “primary reinforcers” (reinforcers that are biologically important) or other “secondary reinforcers” (stimuli that have learned value), the tokens themselves become rewarding and reinforcing, thereby creating a sustainable system of reinforcement that defies the satiation and boredom variables researchers originally found to be barriers to progress. Alan Kazdin’s work “The Token Economy” is a fantastic resource on the origins and the research that began it all.
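To make the mechanics concrete, here is a minimal sketch of the token-earning and exchange contingency described above. The class, the backup reinforcers, and their token prices are all made-up illustrations, not taken from any clinical protocol.

```python
# Minimal sketch of a token economy: tokens are delivered contingent on a
# target behavior, accumulate, and are later exchanged for backup
# reinforcers. All names and "prices" here are hypothetical.

class TokenEconomy:
    def __init__(self, backup_reinforcers):
        # backup_reinforcers maps each backup reinforcer to its token price
        self.backup_reinforcers = dict(backup_reinforcers)
        self.balance = 0

    def deliver_token(self, target_behavior_observed=True):
        """Deliver one token, contingent on the target behavior occurring."""
        if target_behavior_observed:
            self.balance += 1

    def exchange(self, item):
        """Trade accumulated tokens for a backup reinforcer."""
        price = self.backup_reinforcers[item]
        if self.balance < price:
            return None  # not enough tokens saved up yet
        self.balance -= price
        return item

economy = TokenEconomy({"sticker": 2, "extra recess": 5})
for _ in range(5):
    economy.deliver_token()  # five correct responses, five tokens
print(economy.exchange("extra recess"))  # -> extra recess (balance back to 0)
```

The key property, as in the text, is that the token itself has no intrinsic value; its value comes entirely from the agreed-upon exchange rule.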

What can a token be? Nearly anything. But it has to be agreed upon as a token (given some value for exchange) in order to serve as one for the purpose of trading it in, or buying with it. Giving someone a high five after a great job at work, for example, is not a token. It is a reward, and possibly a reinforcer, but it was not conditioned to have exchange value, and it cannot be saved or traded. Tokens also need not be physical or tangible. They can be symbols, or recorded ledgers, so long as that information can be used for exchange in the corresponding token economy. This is where blockchain and cryptocurrency technologies tie into the original behavioral science understanding of a token economy. Can data, or information, serve as a token in a token economy if it is agreed to have value and be worth exchanging? If you haven’t heard of Bitcoin (a cryptocurrency), the answer is yes.


Blockchains and Cryptocurrencies

What is Blockchain then? And what is a Cryptocurrency? Using our original definitions of tokens and token economies, for data or information to be considered tokens, they must be exchangeable and hold value that can be traded within the token economy. Blockchain technology solves this by creating units of data called “blocks”. These blocks, simply put, form a growing list of data records in which each block contains a “cryptographic hash” of the previous block. The linked blocks form a ledger which is resistant to duplication and tampering. In layman’s terms, unlike most data that people manipulate and come into contact with day to day, a “block” within a blockchain cannot be altered or copied, and it maintains a faithful record of time and transactions. Resistance to copying and duplication means it cannot be forged, and resistance to alteration means this data (the record of information) can be treated as reliable. If we create a currency using this technology, then we have the means to create units, or tokens, that are individual, can be traded, and have a consistent and (for the purposes of this introduction) unalterable record of transaction. Assigning value to this creates a digital currency called cryptocurrency. Tokens. Transactions can take place using these blockchains, and they happen person to person (“peer to peer”, or P2P): once a unit of cryptocurrency passes from one person to another, it very much resembles a physical exchange of any other form of currency. The exchange does not require an intermediary, such as a bank, the way online banking with ordinary currency does.
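The hash-linking idea above can be shown in a few lines. This is a toy sketch of a hash-linked ledger, not a real cryptocurrency implementation: there is no mining, consensus, or cryptographic signing, just the tamper-evidence property the text describes.

```python
import hashlib
import json

# Toy hash-linked ledger: each block stores the hash of the previous block,
# so altering any earlier record changes every downstream hash and the
# tampering becomes detectable.

def block_hash(block):
    # Deterministic SHA-256 hash of the block's contents
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transaction):
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"transaction": transaction, "prev_hash": prev_hash})

def chain_is_valid(chain):
    # Recompute each link; a single altered record breaks the chain
    for prev, current in zip(chain, chain[1:]):
        if current["prev_hash"] != block_hash(prev):
            return False
    return True

ledger = []
add_block(ledger, "Alice pays Bob 1 token")
add_block(ledger, "Bob pays Carol 1 token")
print(chain_is_valid(ledger))   # True

ledger[0]["transaction"] = "Alice pays Bob 100 tokens"  # tampering
print(chain_is_valid(ledger))   # False: the altered block no longer matches
```

The design choice worth noticing is that reliability comes from the structure itself: no participant has to be trusted, because any edit to an old record is self-revealing.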

Blockchain and Cryptocurrency developers, then, are looking to create a form of token currency that can be traded within this broader token economy: one reliable enough to be used by enough people to catch on or become commercially viable, while still maintaining the benefits of a cryptocurrency (security, privacy, etc.) over traditional currency. These cryptocurrencies, these units of data, these blocks, have no intrinsic value themselves. They are tokens in the very real sense that the original behavioral research intended, and their usage and effects appear to follow in the same vein. Currency can be reinforcing, reinforcement can alter behavior, and once a token takes on value through the conditioning process, it can become truly valuable in its own right as a “generalized reinforcer”: a reinforcer that is backed up by many other types of reinforcers. A dollar, for example, as a widely used currency, can be exchanged for a nearly countless number of goods, services, and transactions. This makes it a good generalized reinforcer. The more a token can be traded for, the better a generalized reinforcer it becomes.

Will a form of cryptocurrency, like Bitcoin, gain this same traction as a currency, or token, for accessing other reinforcers in trade? Many people say yes. That’s where behavioral scientists and blockchain developers alike can find excitement in each new development and innovation.

Likes? Comments? Questions? Did I get it wrong? Leave your comment below!

References:

  1. Kazdin, A. E. (1977). The token economy: A review and evaluation. New York, NY: Springer. doi:10.1007/978-1-4613-4121-5
  2. Blockchain. (2019, January 13). Retrieved from https://en.wikipedia.org/wiki/Blockchain
  3. Cooper, J. O., Heron, T. E., & Heward, W. L. (2018). Applied behavior analysis. Pearson.
  4. What is Simple Token (OST)? [Audio podcast episode]. (2018, August 22). OST Live Podcast.

Image Credits:

http://www.imgflip.com

http://www.smilemakers.com

May I have your attention please? The Nominal Stimulus vs. The Functional Stimulus


Hm?

What’s that?

Sorry, I wasn’t paying attention.

You’ll see this happen in some case studies, research articles, classrooms, and even therapeutic practice. A situation laid out with everything in mind to elicit the predictable response. You ask “What’s two plus two?” and eagerly await the “four!”…but it doesn’t happen. You call out to someone who’s wandered off “Hey! Over here!”, and they keep on walking. You picked out your discriminative stimulus so well but the response had little or nothing to do with it. You were missing the big piece of responding to stimuli that is absolutely obvious on paper, but so easily overlooked: Attention.

Stimulus-Response contingencies are a good place to start with explaining why this is so important, because they’re often the simplest and easiest to explain. One thing happens, a response follows it. The in-between that goes unsaid is that the respondent was actually able to perceive the stimulus, otherwise the response was either coincidental or unrelated. The stimulus that is never perceived, or attended to, is called a Nominal Stimulus. It happened. It was presented purposefully. It’s not a discriminative stimulus. It plays no role in selection. The individual is unaware that it even occurred. Nominal stimuli are the “everything else” in a situation that the intended respondent is not attending to.

Imagine a teacher in a classroom helping a student write their name. They first prompt by demonstrating how the name is written. The student does not copy it. So the teacher takes the student’s hand and physically guides them through the name writing start to finish, then delivers some great descriptive praise to reinforce. Great! The student learned something, right? They’re more likely to at least approximate name writing in the future, right? How about the first letter?

Not if they were looking up at the ceiling the whole time. Nominal Stimulus.

The teacher may have set up a great visual demonstration, planned out a prompting strategy, and planned out a reinforcer to aid in learning the target behavior- but not one of those things was effective, or even met its intended definition, without the student’s attention. What the teacher was actually looking for, with any of those attempts, was a Functional Stimulus.

A functional stimulus is one that the individual actually attends to and that signals reinforcement for a specific behavior. That is the feature of the discriminative stimulus (SD) that evokes previously reinforced behavior: it is received by the respondent in a meaningful way.

The lesson in this distinction is that observers can sometimes assume stimulus-response relations, or failures in responding, when they are really working with situations that present Nominal Stimuli instead of Functional Stimuli. Without verifying the respondent’s attention, one could document that a discriminative stimulus occurred when it had not. That would lead to inaccurate data, and further, to intervention development based on those inaccuracies.

Check for attention. Always. It may not always be the easiest thing to discern. Auditory attending is not as easy to infer as visual attending is, but by keeping the nominal and functional stimuli in mind, you are in a better place to test for conditions that better facilitate both.

Let’s try one more example.

Take this guy in the car. He’s got his phone out. Just got a text. Now THAT was one sweet discriminative stimulus. Tons of reinforcement history signaling behind that one.


The street lights in front of him? Nominal stimuli.
The stop sign down the road? Nominal stimulus.
The cars on either side of him? Nominal stimuli.

Not all unattended stimuli are nominal stimuli exactly, but in a society, these signals (lights, signs, other people’s proximity) are delivered with the intended purpose of changing or governing the responses of people in order to make sure everyone drives in an orderly and safe(ish) way. Even when a person is attending, partially, to an array of stimuli around them; all supposedly “important” in one way or another, some don’t actually register without specific attention.

One more example. Last one, I promise.


An instructor is working with a non-verbal child to build communication. They are seated at a desk. The child is staring off at one of the walls and reciting some continuous vocal stereotypy to themselves. The instructor is guiding a communication board- a page with the alphabet on it.

They… rapidly… move the board’s position in front of the child’s finger, anticipating and…prompting… the words “I W A N T L U N C H”. They stand up with glee and reinforce this…method… with a “Great job! Let’s get lunch!”. The child continues to stare off at the wall, and continue the repetitive stereotypy until lunch is brought over.

What might that instructor infer from this process if they were not thinking about nominal stimuli? Well, they might infer that the procedure influenced the child’s responding in some way, or that the board and prompting were received in any way by the child. It could get a little confusing.

That’s the importance of nominal and functional stimuli.

Questions? Comments? Likes? Leave them all below!

References:

Healy, A. F., & Weiner, I. B. (2013). Experimental psychology. Hoboken, NJ: Wiley.

Ormrod, J. E. (2012). Human learning. Boston: Pearson.

Beer and Behavior Analysis


There’s been a shift in culture towards beer recently. Twenty years ago, if you saw the title “Beer and Behavior” you would absolutely expect a scathing speech about the abuses of the drink. This is not going to be that. I assume everyone reading this is responsible. I’m interested in the modern context. The beer industry has grown, become more varied, and those varieties have become more available. Craft brewing has taken off to previously unforeseen heights, and different styles and personal recipes of beer are becoming available to the public like never before. It’s amazing. People are demanding more beer, and craft brewers are making it.

Now, when there’s socially significant behavior out there, it can be studied. When people engage with their environment, their society, over something they want and will pay for, it’s worth knowing how that works. I wanted to see how we could apply some of the concepts we use in Applied Behavior Analysis (ABA) to get an idea of it: behavior on the consumer’s side, and behavior on the provider’s side. That’s where Midnight Oil Brewing Company came in, providing the setting for study and some insights on what the process is like on both sides of the bar. That night, in particular, they had nine of their craft beers on tap and a full house of people engaging in operant behaviors to gain access to them.

Now let’s talk behavior.

Beer can be a Reinforcer. Think of a reinforcer as a type of stimulus that resembles a reward. What makes a reinforcer special is that it maintains or increases the likelihood of the behavior that precedes it. Think of it like this-

A person walks up to the bar and asks for a beer, maybe a Serenity session ale, the bartender pours that beer and hands it to them.

Assuming that the beer is what they like, and they find it reinforcing, the consumer would be likely to return to that same bar and order again. That’s reinforcement. To break it down further- The consumer’s behavior (requesting) operates on the environment for access to that beer. Access to the beer is socially mediated by talking to the bartender and the eventual exchange of money, but if they get access to the beer and like it, the reinforcement acts on that requesting behavior’s presentation in the future. The requesting behavior happens again or might even happen more often. There was a big if in there though. The beer had to be enjoyable, or reinforcing, to the individual for it to work. People have different tastes, and as you may be aware, not all people like all types of beer.


Beer Flights can be a Preference Assessment for Reinforcers. A preference assessment is a tool used to figure out which stimuli are reinforcing at a given time. This is done by presenting a varied set of stimuli to an individual, who has access to them and engages with them, and eventually you get a hierarchy from that. By looking at what gets chosen more, you can tell which stimulus a person likes best at that given moment. Preferred stimuli make for great reinforcers for behavior. At a taproom or bar, we can use these preference assessments to determine our own hierarchies of the types of beer we enjoy. This can help us separate the types we do not like, so that we avoid selecting them in the future, from the types we do.

A person has a flight of nine beers in front of them. They try all nine, but only like and continue to drink the Stouts, Porters, and Saisons.

On the other side of the bar, a bartender can observe a person with a flight of beers and use the information from watching which beers were selected and consumed in higher amounts to make better suggestions for that person’s next order. A little rapport building goes a long way. (I know that I tend to order more of the suggestions of a bartender who understands my preferences. Personal opinion; data point of one.) On the business side of things, having consumers repeatedly choose a selection of beers they enjoy can have long-term reinforcing effects on their return visits and future consumption. Imagine a person mistakenly trying a few beers in a row of a style they dislike. This could punish beer seeking and buying behavior: the opposite of reinforcement. Knowing where to guide a consumer is useful information. The trend of behavior can go in both directions, and a preference assessment could be key in making the experience enjoyable for everyone.
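The hierarchy-from-choices idea can be sketched as a simple tally: record which options get selected across repeated presentations and rank them by count. The beer names and the selection data below are hypothetical, and a real assessment would control presentation order and repetitions more carefully.

```python
from collections import Counter

# Sketch of a choice-count preference assessment: tally selections across
# repeated presentations and rank the options from most- to least-chosen.
# Names and data are illustrative only.

def preference_hierarchy(selections):
    """Return options ordered from most-selected to least-selected."""
    counts = Counter(selections)
    return [item for item, _ in counts.most_common()]

# Hypothetical observed choices across a flight tasting
selections = ["stout", "porter", "stout", "saison",
              "stout", "porter", "stout"]
print(preference_hierarchy(selections))  # ['stout', 'porter', 'saison']
```

The top of the list is the best current candidate reinforcer, with the usual caveat from the text: preferences are momentary, so the assessment is worth repeating.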

Taprooms can employ J. R. Kantor’s Setting Events to create an environment that facilitates engagement from consumers, not only as paying customers but prosocially with one another. Some people call this ambiance. Some people call it the “feel” of a place. In early behavior analytic research, behaviorists like J. R. Kantor were interested in antecedent stimuli: “things” in the environment that could either prime behavior or discriminate (select) specific behaviors to occur. These are stimuli, variables in the environment, that may influence certain behaviors to occur over others.

Larger spaces with a higher number of tables could lead to a higher retention of served consumers, more bartenders responding to requests could lead to higher rates of (responsible) beer requests, larger tables could lead to groups forming, televisions playing a specific program could retain specific like-interested individuals, and play-oriented items like boardgames could provide alternative sources of reinforcement and retain consumers on the premises for longer.

The potential is endless, and many of these examples would have to be fine-tuned and tested for practicality, but these are all things that could be set in place before someone even steps foot in the door. Antecedents are powerful things. But Setting Events aren’t the only concepts that explore them; there are also Motivating Operations. We’ve talked about Reinforcers, and even Punishers. These are stimuli that have an effect on future behavior, but a great researcher named Jack Michael noticed that there are factors that can momentarily alter the value of those stimuli, and of the behaviors seeking them.

Thirst and Hunger can be Unconditioned Motivating Operations. When you see the word Motivating Operation, take the common well-known word of “Motivation” to guide your understanding of it. Unconditioned just means that it is something innate, or not learned. Unconditioned Motivating Operations (UMOs) are often based on natural biological drives, and in taprooms and bars, the most common ones we see are based on deprivation and satiation. Thirst is a great example of a UMO.

If a person is thirsty, a beer is more likely to be a strong reinforcer, and their behavior to seek it out is more likely. The same with hunger, as a UMO for food-seeking behavior, and food as a reinforcer.

The same, however, can go for satiation. If someone is full, that satiation acts as a UMO and abolishes the seeking behaviors and reinforcement value of food or drink.

Beer can involve Conditioned Motivating Operations too. Conditioned Motivating Operations (CMOs) are just like Unconditioned Motivating Operations; they momentarily alter the value of a reinforcer- like beer. The only difference is that these are conditioned, or learned. The research on these has been back and forth. Some say their effects are noteworthy, and others say these theories don’t hold much water. I think they can make a great way of conceptualizing how preferences, or reinforcement values, can be affected by a person’s learned history. To that end, I’m going to try and make a taproom, or beer example, for each type of CMO.

Surrogate Conditioned Motivating Operation (CMO-S)- A surrogate CMO is something that alters the value of a reinforcer because it has been paired with an Unconditioned Motivating Operation, and takes on its effects. Here’s a craft beer example:

Unconditioned Motivating Operation- Deprivation. The value of beer is going to be higher.

Surrogate Conditioned Motivating Operation- “Last Call”. The value of beer is going to be higher due to a paired deprivation scenario (UMO) in the past.

In these conditions, we can speculate that it would have a behavior-altering effect in the same way deprivation does, and a value-altering effect on the beer as a reinforcer for requesting right before time runs out. A deprivation (UMO) has been paired with the “Last Call” stimulus enough that it takes on some of that effect.

Reflexive Conditioned Motivating Operation (CMO-R)- A reflexive CMO alters the value of its own removal. Behaviorally, this is called “discriminated avoidance”. Learned avoidance to a specific thing. Basically- a person is presented with something, they’ve experienced it in the past as something aversive or bad, and they want to get away from it. Just the presentation is enough to cue behaviors to avoid it. Here is a personal Beer CMO-R I’ve experienced.

Conditioned Stimulus- A saison in the middle of a beer flight, which ruins the flavors of otherwise amazing beers tasted afterward.

Reflexive Conditioned Motivation Operation- Seeing the word Saison on a beer flight list. All behaviors that can get the bartender to NOT include it are altered (more likely).

Saisons (NS) are okay beers on their own but, again (personal data point of one), they ruin the palate for the tastes that follow when included in a beer flight (CMO-R). The presentation of a saison in a beer flight is enough for someone (me) to engage in behavior for its removal.

Transitive Conditioned Motivating Operation (CMO-T)- A transitive CMO is a little broader, and looser, conceptually. It involves altering the value of another stimulus, generally through improvement. Like the other CMOs, this is also based on a person’s learned history. Traditional examples like to go for the blocking of a behavior chain, which makes another stimulus that resolves the blockage more valuable. I much prefer the “My Friend Has That Beer And Now I Want It Too” conceptualization of the transitive conditioned motivating operation. For this to work, it requires a learned history of a friend who often selects delicious beer. That delicious-beer pairing history also has a discriminative quality of “being better” than the person’s own first choice. Their friend just picks the better beer every time. It’s not fair. Let’s play it out like this.

Person’s Requesting Behavior: “I’d like an Insomnia Stout”.

Friend’s Order Afterwards: “I’d like you to layer this Doc Brown Ale with the Dark Matter Stout on top.”

Transitive Conditioned Motivating Operation- This value-altering condition (Friend’s Order) may not have physically blocked the first response (Person’s First Request), but it is a stimulus presentation with a value-altering effect strong enough to create the need for a stimulus change.

Person’s Second Requesting Behavior: “NO WAIT! Cancel that first one. I also want that Doc Brown Ale with the Dark Matter Stout on top.”

What do you think? Has that happened to you before? Could it be explained by the transitive conditioned motivating operation? I think it just might.

So we’ve gone through some Behavior Analysis, and we’ve gone through some Beer. Do you have any other examples of common human behavior that could be explained by these terms, or others, behavior analytically?

Questions? Comments? Arguments? Leave them below!

References:

Cooper, J. O., Heron, T. E., & Heward, W. L. (2018). Applied behavior analysis. Pearson.
Wahler, R. G., & Fox, J. J. (1981). Setting events in applied behavior analysis: Toward a conceptual and methodological expansion. Journal of Applied Behavior Analysis,14(3), 327-338. doi:10.1901/jaba.1981.14-327
Big Thanks to Midnight Oil Brewing Company

“They’re Just Tired”- The Worst Scapegoat Explanation for Behavior


Why are they acting that way? “They’re just tired.” It’s one of those clichés that never goes away. It’s just so easy to use. You can use it in any situation at all to explain away patterns of maladaptive or cranky behavior. Screaming? Tired. Throwing things? Tired. Hitting their siblings? Tired. It’s the explanation that’s got it all… except that it’s not exactly true all the time. Exhaustion does exist, and sleeping poorly does affect behavior, but there’s a risk in assuming a cause without looking at the exact conditions surrounding the behavior. It’s more work to do so, but it’s worth it.

In Behavior Analysis, we call that kind of thing an “explanatory fiction”. It’s not directly untruthful, but it avoids reality through ease and circular reasoning. Why do they do that thing we don’t like? Oh! They’re tired. It’s not hard to see the practical ease in that either. Everyone has, at some point, been cranky or acted miserably when stretched too thin. The problem comes from the assumption. That assumption takes away the curiosity and the need to dig for a more sophisticated answer, and it also leads us to a bias of expectation. We’ll ask around post hoc to confirm the broad theory. Did they sleep well last night? Oh! Well, there was that one time when ____. Anything we get that conforms to our “theory of tiredness” will close the book. Open and shut case. We miss the real reason. We miss the real point. There’s risk in that. We miss out on catching the patterns that become habits that hurt further down the line. We blind ourselves to teachable moments.

The way to avoid all of these pitfalls and to explore the real reason behind these target behaviors is to begin the search right when we spot it. It would be even better if we could give context to what happened before the behaviors occur. A great psychologist named B. F. Skinner called this the Three-Term Contingency, and it is a great way to get an actual idea of the triggers, causes, and/or maintaining factors for behaviors that ought not to happen. These break down into three things to study: the Antecedent, which occurs before the behavior (“What exactly set this off?”), the Behavior, which is the exact thing we are looking at, and the Consequence, which happens after the behavior occurs (“What did this behavior get, or what did it let them escape?”). Now, it’s not enough just to ask the questions. We should document it too. Write it down. Take notes. Get numbers. How many times are you seeing this specific behavior? We call that Frequency. How long does that behavior last? We call that Duration. We can use this information to inform our conceptualization of the behavior’s function. Finding the function can lead us not only to adapt the environment to help decrease the behavior, but also to help the learner find a better way to get what they are after. Even if it is a nap.
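Those A-B-C notes, frequency counts, and durations can live in even the simplest log. Here is a minimal sketch in Python; the field names and example entries are hypothetical illustrations, not from any standard data sheet:

```python
from collections import Counter

# Each observation is one Antecedent-Behavior-Consequence entry,
# with how long the behavior lasted in seconds.
abc_log = [
    {"antecedent": "asked to clean up", "behavior": "scream",
     "consequence": "task delayed", "duration_s": 40},
    {"antecedent": "sibling took toy", "behavior": "hit",
     "consequence": "adult attention", "duration_s": 5},
    {"antecedent": "asked to clean up", "behavior": "scream",
     "consequence": "task delayed", "duration_s": 25},
]

# Frequency: how many times each behavior was observed.
frequency = Counter(entry["behavior"] for entry in abc_log)

# Duration: total seconds each behavior lasted.
duration = {}
for entry in abc_log:
    duration[entry["behavior"]] = duration.get(entry["behavior"], 0) + entry["duration_s"]

print(frequency["scream"])  # 2 occurrences
print(duration["scream"])   # 65 seconds total
```

Even this much structure makes the pattern visible: both screams followed the same antecedent and bought the same consequence, which is exactly the kind of regularity a “they’re just tired” answer would hide.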

Let’s talk Functions of behavior. In Behavior Analysis, there are four common categories that make this a simple framework to work with: Attention, Access (to something/someone), Escape (to get away from or avoid), or Automatic Reinforcement (which is internal/invisible and mediated by the self). A pattern of behavior that occurs again and again, regardless of how they slept the night before, might lead us in the direction of one of these. Or more than one. A behavior can also be “multiply maintained”. We can see this either as a complication or as a better truth than a simple off-hand answer. Assuming that fatigue and tiredness are the leading factors only gives us the solution of a nap. That may delay the behavior’s recurrence, but if you see it again and again, it’s time to take the step and look deeper. The nap is not the answer, only a temporary respite from the behavior. The contingency and history of reinforcement haven’t gone anywhere. Bottom line: it’s more complicated than that, and probably isn’t going away that easily.


Trade the Nap for some Differential Reinforcement

Now it’s time to get serious. If we’ve gotten this far, tracked behavior as observably as possible, and ruled out our original assumption of an internal factor like “tiredness”, then we need an answer we can use in the world of the awake. Thankfully, behavior is like dinosaurs: it can undergo extinction (that means go away), or it can get stronger if you feed it (reinforce it). The “bad behaviors”, the maladaptive ones that are no help to the learner or their situation, can be extinguished by simply withholding the thing that reinforces them. What is the behavior after? Don’t let it get that. What is it avoiding? Don’t let it avoid that either.

Hard work, right?

But that’s not the end of it. You can’t just take away a behavior and leave a void. You need to replace it. So, when it comes to a maladaptive behavior that aims to get something, and has adapted to get that thing, you find a better behavior to replace it. The “bad behavior”? Doesn’t get it. The “good behavior”? That gets it. That’s differential reinforcement: reinforcing the good, useful stuff and not reinforcing the other stuff that isn’t helpful. Here’s a handful of techniques that follow that principle:

The ol’ DRO (Differential Reinforcement of Other Behaviors): This technique is where you reinforce the “other” behaviors. Everything except the thing you want to go away. If you’re targeting a tantrum, you reinforce every other behavior that is not tantrum related. Some people even fold in some timed intervals (preplanned periods of time) and reward gaps of “other” behaviors so long as the target behavior does not occur. Can they go 5 minutes without a tantrum? Great. How about 10? Progress.

“Not that, this instead!” DRI (Differential Reinforcement of Incompatible Behaviors):  This isn’t a large net like the DRO procedure. This one is where a set of behaviors are picked because they make the target “bad behavior” impossible. Let’s say our learner plays the bagpipes too loudly and is losing friends fast. What’s a good DRI for that? Anything that makes playing the bagpipes impossible. Try the flute. Or jump rope. Or fly a kite. Hold a microphone and sing. It’s all the same just so long as it’s physically impossible to do both the replacement and the original target (bagpipes, etc) that we aim to decrease.

“The right choice” DRA (Differential Reinforcement of Alternative Behavior): This is the laser-targeted, surgical-precision version of the DRI. It follows a similar principle: get a behavior reinforced that is NOT the maladaptive one. Except in DRA, this behavior is a single target, and it’s most often one that is more effective and socially appropriate. DRI doesn’t care if the new behavior and the old target behavior share a function or purpose. DRA, in most cases, would. You aim for a better alternative behavior to take the place of the old maladaptive one.
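The interval-based DRO described above is mechanical enough to sketch out. This toy simulation is my own illustration, not a clinical protocol; it implements the common “resetting DRO” variant, where the interval timer restarts every time the target behavior occurs, and reinforcement is delivered each time a full interval passes without it:

```python
def dro_reinforcement_times(behavior_times, session_end, interval):
    """Return the times reinforcement is delivered under a resetting DRO.

    behavior_times: when the target behavior occurred (e.g., minutes into session)
    session_end:    total session length, same units
    interval:       behavior-free span required before each reinforcer
    """
    deliveries = []
    timer_start = 0.0
    # Walk through each behavior occurrence, then the end of the session.
    for event in sorted(behavior_times) + [session_end]:
        # Reinforce every full behavior-free interval completed before this event.
        while timer_start + interval <= event:
            timer_start += interval
            deliveries.append(timer_start)
        if event < session_end:
            # Target behavior occurred: reset the interval timer.
            timer_start = event
    return deliveries

# Tantrums at minutes 7 and 8 of a 20-minute session, 5-minute DRO interval:
print(dro_reinforcement_times([7, 8], 20, 5))  # [5, 13, 18]
```

In the example, the learner earns reinforcement at minute 5 (a clean first interval), then the tantrums at minutes 7 and 8 restart the clock, and two more behavior-free intervals complete at minutes 13 and 18. That “can they go 5 minutes? how about 10?” progression is just this loop with a growing `interval`.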

 

The research on all three is varied, but they are tried-and-true ways to get one behavior to go away while building better ones in its place. Some are easier to use in certain situations than others. I invite you to explore the research. It’s fascinating stuff. It’s also a lot more effective long-term than assuming the explanatory fiction and hoping the behavior goes away. Why not take action? Why not take control of real factors that could be used for real good and real change?

But not right now. You should take a nap. You look tired.

 

 

Just kidding.

 

References:

Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis. Upper Saddle River, NJ: Pearson/Merrill-Prentice Hall.

Image Credits:

http://www.pexels.com

Behavior Analysis and Personality Psychology


Applied Behavior Analysis and Personality Psychology at first glance have very little in common. Applied Behavior Analysis (ABA) comes from the behaviorist tradition of the purely observable, and Personality Psychology features variables that are often seen within the individual and outside of direct measurement. As time moves on in the field of psychology, and the behavioral fields specifically, there is a call for greater breadth and understanding from practitioners across more than one domain. Behaviorism as a field of psychology is alive and well, but sometimes practitioners can pigeonhole themselves (pardon the pun) into the strict traditionalist ideas of the early 20th century, leaving the cognitive revolution and relevant psychological progress aside.

Few people realize that this is not too large a gulf to bridge.

The topic of personality and temperament in individuals was touched on by B. F. Skinner himself in “Science and Human Behavior” (1953) and “Beyond Freedom and Dignity” (1971), but as many would suspect, the meaning of the word personality was operationalized into a series of observable concepts such as “response tendencies”. These tendencies of responding were used to explain how individuals varied in their sensitivity to stimuli. It stands to reason that everyone has come across another individual who was not impacted by a stimulus in the same way as themselves. This is a basic part of humanity. This is the reason we need to clinically perform preference assessments. Individual differences occur regardless of standardized stimuli. No matter how precisely we form a potential reinforcer, no matter how carefully we calibrate its amount or intensity, or even how carefully a schedule is arranged, one person may respond differently to it than another. And that is before including motivating operation factors like deprivation and satiation. Sometimes people are affected by different things in different ways, and they respond to different things in different ways.

Personality Psychology concerns itself with these individual differences. It is a field interested in the unique ways individuals think, behave, and feel. Personality Psychology studies traits or factors based on the similarities and differences among individuals. Some models feature traits such as Extraversion, Neuroticism, and Psychoticism (Eysenck Personality Inventory), or Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism (The Big Five). Others add an Honesty-Humility factor (HEXACO). Although there are many different theories on how these personality traits are formed, measured, and made predictive, they still aim to explain something that strict observation of antecedent or consequence stimuli appears to miss. Behaviorists and practitioners of Applied Behavior Analysis may look at these things and pump their brakes. After all, it seems like a challenge to align the methods of Personality Psychology with the dimensions of behavior analysis that Baer et al. constructed in 1968. How does personality fit into a strictly behavioral framework? What would make a personality framework conceptually systematic? Could an experimenter even demonstrate control in a way that is analytic? Baer, Wolf, and Risley themselves said that self-reported verbal behavior could not be accepted as measurable unless it was substantiated independently. How do we do it, then?

First, we may want to take a step back and work on defining what we are looking at. Behaviorists and ABA practitioners are used to a functional analytic approach, which aims to identify exactly that: functional relationships between the environment and clinically targeted behaviors. Personality Psychology, on the other hand, is a little more topographical in how traits are defined. It classifies traits by what they present as, how they appear, and reports of how people act and think, with less emphasis on that environmental link. One of the great researchers to bridge these two ways of studying personality, tendency, and behavior was Jeffrey Gray, who looked at the personality inventories and questionnaires of Hans Jürgen Eysenck and developed a theoretical model relating these personality and temperament factors to behavioral inhibition (behaviors likely to be inhibited where cues of punishment or lack of reinforcement are found) and behavioral activation (behaviors likely to be activated in the presence of possible reinforcement or cues of no punishment). Here, personality traits such as extraversion and introversion were related to dimensions of anxiety or impulsivity which could be easier to define and study behaviorally. Gray (1981) was interested in how these traits could explain “sensitivity” (higher responding) or “hypo-responsiveness” (lower responding) to punishment and reinforcement stimuli.

Would someone who was rated higher in extraversion/low-anxiety respond a certain way to social positive reinforcement?

Would someone who was rated higher in introversion/high-anxiety respond a certain way to social negative reinforcement?

These are questions that might pique interest on both sides of the fence, Behavior Analytic and Personality Psychology alike. Take any one of the personality traits above, and you may find similar ways to study it behaviorally. The literature on this type of work is impressive. Gray’s work, which began in the 1970s, went on for over 30 years. There is a wealth of literature on his theoretical models: the Behavioral Inhibition System (BIS), which relates factors that reduce responding, and the Behavioral Activation System (BAS), which relates factors that increase response activation, both from Gray’s work in 1981. In 2000, Gray & McNaughton presented a third theoretical system called the FFFS (fight-flight-freeze system) to explain responses to unconditioned aversive stimuli, in which emotionally regulated states of “fear and panic” play a role in defensive aggression or avoidance behaviors. These took neuropsychology into account and went even further to suggest links to conflict avoidance in day-to-day human life. The literature on this is absolutely fascinating in how it brings behavior analytic concepts into a new arena.

Could it be possible for one day to see Personality Psychologists talking about reinforcement and punishment sensitivity? How about Behavior Analysts talking about traits when considering consequence strategies? At the very least, it’s a conversation that neither field might have had without knowing. We can only hope to gain from stepping outside of traditional boundaries and broaden our intellectual horizons.

Comments? Questions? Thoughts? Leave them below!

References:

Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1(1), 91-97.

Big Five personality traits. (2018, April 19). Retrieved from https://en.wikipedia.org/wiki/Big_Five_personality_traits
Farmer, R. F. (2005). Temperament, reward and punishment sensitivity, and clinical disorders: Implications for behavioral case formulation and therapy. International Journal of Behavioral Consultation and Therapy,1(1), 56-76. doi:10.1037/h0100735
Gray, J. A. (1981). A Critique of Eysenck’s Theory of Personality. A Model for Personality,246-276. doi:10.1007/978-3-642-67783-0_8
Gray, J. A., & McNaughton, N. (2000). The neuropsychology of anxiety: An enquiry into the functions of the septo-hippocampal system. Oxford: Oxford University Press.
Hans Eysenck. (2018, April 14). Retrieved from https://en.wikipedia.org/wiki/Hans_Eysenck

HEXACO model of personality structure. (2018, April 22). Retrieved from https://en.wikipedia.org/wiki/HEXACO_model_of_personality_structure

Skinner, B. F. (1953). Science and human behavior. New York: Macmillan.
Skinner, B. F. (1971). Beyond freedom and dignity. New York: Knopf.
Image Credits:

http://www.pexels.com

Symbols and Notation in Behavior Analysis

pexels-photo-356079.jpeg

Symbols and notation in behavior analytic research are fascinating. I find myself thrilled coming across the diagrams in the professional literature and getting so much from so little. A few letters, an arrow, a nice Δ (delta); it’s beautiful. If you are familiar with journals like The Behavior Analyst, the Journal of Applied Behavior Analysis (JABA), or the Journal of the Experimental Analysis of Behavior, you might have encountered some of these symbols. What these symbols and notations do is take large concepts like a Response, or a Stimulus, or Reinforcement and Punishment, and lay them out in an orderly system of presentation without the need for paragraphs of explanation. Let’s look at this one for example:

S → R

It shows some very common symbolic notation.

S stands for stimulus.

The arrow (→) stands for “followed by” or “elicits”, depending on whether the relation is operant or respondent.

R stands for response.

These are the foundational pieces of behavior analytic symbol and notation. I’ve created a chart below to show you these and some of the other variations you might come across.

Symbols

We can see some interesting variations between the notation symbols, mainly in how we use them for conditioned and unconditioned terms. When we are talking about stimuli and responses that are not reinforcers/punishers, we use the abbreviations: S for Stimulus, R for Response, C for conditioned, and U for unconditioned. The status of the stimulus or response as conditioned/unconditioned always comes as the first letter of the initialism.

When we talk about reinforcement, punishment, discriminative, and delta, the S for stimulus always comes first as a capital letter, followed by the type of stimulus in superscript. Now, unlike the basic conditioned/unconditioned stimuli/responses above, these superscripts use capitalization to distinguish between a conditioned reinforcer/punisher, and an unconditioned reinforcer/punisher, so remember to keep an eye out for that. Unconditioned punishers and reinforcers use a capital letter in superscript, while conditioned punishers and reinforcers use a lower case letter in superscript. Following the conditioned/unconditioned formatting, we distinguish between “positive” and “negative” by using + for positive reinforcers and punishers, and – for negative reinforcers and punishers.
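The superscript rules above are mechanical enough to double-check with a tiny helper. This is my own illustration (not standard notation software), and I use `^` as a plain-text stand-in for superscript:

```python
def stimulus_notation(kind, conditioned, positive=None):
    """Build plain-text behavior-analytic stimulus notation, with '^' marking superscript.

    kind:        'R' for reinforcer, 'P' for punisher
    conditioned: True -> lowercase superscript; False (unconditioned) -> uppercase
    positive:    True -> '+', False -> '-', None -> no sign
    """
    letter = kind.lower() if conditioned else kind.upper()
    sign = "" if positive is None else ("+" if positive else "-")
    return f"S^{letter}{sign}"

print(stimulus_notation("R", conditioned=False, positive=True))   # S^R+  unconditioned positive reinforcer
print(stimulus_notation("R", conditioned=True, positive=False))   # S^r-  conditioned negative reinforcer
print(stimulus_notation("P", conditioned=True))                   # S^p   conditioned punisher
```

Reading it back: capital superscript means unconditioned, lowercase means conditioned, and the trailing sign distinguishes positive from negative, exactly the three decisions described above.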

This is very helpful when we want to nail down exactly what kind of contingencies we are seeing. You may remember that reinforcement is a process where a behavior becomes more likely to occur in the presence of an antecedent, because it has been reinforced in the past in those conditions. What kind of reinforcer that was is important. Was it unconditioned? Things like food, water, etc. The basic things we as humans seek out naturally. They are very effective, but can become subject to satiation. Now what about a conditioned reinforcer? Something trained, or taught, through past experience. Money is a common one, tokens as well, or even art. The distinction between conditioned and unconditioned is no small gap, conceptually, so we want to be clear, when we read these symbols, about what we are actually talking about.

Now that we have the symbols, let’s combine what we know to examine this example!

S → R → SR+

We would read this as: a Stimulus (S) is followed by a Response (R), which is followed by the presentation of an Unconditioned Positive Reinforcer (SR+).

What kind of examples can you come up with? Leave them below!

 

 

 

Sources:

Cooper, J. O., Heron, T. E., & Heward, W. L. (2018). Applied behavior analysis. Pearson.

Sundel, M., & Sundel, S. S. (2018). Behavior change in the human services: behavioral and cognitive principles and applications. Thousand Oaks, CA: SAGE Publications, Inc.

Photo Credits:
http://www.pexels.com

 

 

Extra Life Case Study: Massed vs. Spaced Trials in the Acquisition of Skilled Motor (Video Game) Tasks

For this article, we have a special purpose: to bring awareness to a fantastic non-profit organization called Extra Life, whose goal is to raise donations for the Children’s Miracle Network Hospitals, which provide much-needed funding to families. (Donation links at the bottom of the page!)

Today, the topic is video games, the main focus of Extra Life’s audience. To bring some psychological expertise and applied behavior analytic focus to this topic, we had two volunteers come up to test their mettle on (arguably) one of the most difficult video games to master and beat: “I Wanna Be The Boshy”. On the surface, a very simple looking game: move a character with a keyboard or analog stick along treacherous environments without touching obstacles, enemies, or projectiles. That is, until you realize how fast the reaction time needs to be in order to progress through the levels; upwards of 2-5 responses per second. Each mistake has a punishing restart to the beginning of the level or section, relying on the player’s skill to not only learn the pattern of motor responses to complete each section, but to enter them reliably with perfect timing and order.

I WANNA BE THE BOSHY!

In many cases, this game requires months to beat (rare cases excluded). With this time frame, we were able to watch recordings of our two players (etanPSI & LonestarF1) via a streaming service named Twitch, which provided the video of game-play that could be reliably studied and analyzed for the target behavior skills necessary to master and beat this game.

For this particular study, we chose the target behavior of successive correct responses, and used frequency data as our metric to gauge progress through the levels. For example, one correct response may navigate a particular jump, a second may require maneuvering for a landing, and a third another jump to a moving obstacle, all within 1.5 seconds, totaling 3 successive correct responses for that particular challenge. On average, during our tracked trials, a particular level or challenge required a minimum of 43 successive correct responses in one minute of play in order to continue.

Analyzing the Players’ Behavior

If we want to understand the game from a behavior analytic, and psychological point of view, we need to discuss some terms:

Reinforcement: Think of reinforcement as a rewarding stimulus that has the benefit of increasing the target behavior in the future. A reward which is successful in making a response (game playing, etc.) happen more often is called a reinforcer.

  • In this specific case, success following a trial serves as a conditioned reinforcer for the player, where beating a section, or a boss, is reported as the goal and achievement to be earned.

Responses: This is what the person does. Any behavior that follows a specific target stimulus is considered a response.

Punishment: This is the opposite of reinforcement: consequences that decrease the likelihood of a behavior occurring in the future.

Frequency (and Rate): Frequency and rate refer to how often behavior occurs over a set amount of time. For example, if our general target is 43 correct responses in 1 minute, then we would want our rate of successive correct responses to near that amount to give us the greatest chance of success.

Discrete Trials: A discrete trial is often used in a clinical condition where a discriminative stimulus (SD) precedes a response, which is then reinforced when that response is the target behavior. The good thing about video games is that each level, or screen, can be considered a discrete trial, as correct responses are reinforced with continuing the game, while failures (and punishing stimuli) cause it to be repeated.

Massed Trials: Massed trials refers to the use of discrete trials in close proximity to each other, so that no interrupting behavior occurs between them. In other words, repetition. For our gaming example, this would be restarting immediately after each failure and continuing from the original starting point of the previously failed trial.

Spaced Trials: Spaced trials refer to a training condition where each discrete trial is separated by a pause, where various behaviors and stimuli unrelated to the next discrete trial may be engaged with. Think of this like a break condition. The player can take a breather, talk to the fans, take a drink of water. All of these things occur between trials, so that there is a gap between them.
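The per-minute benchmark used throughout this study is just a unit conversion. A quick sketch, with illustrative numbers rather than the experiment’s raw data:

```python
def responses_per_minute(correct_responses, seconds_observed):
    """Convert a raw count of correct responses into a per-minute rate."""
    return correct_responses * 60.0 / seconds_observed

# 43 correct responses in one minute of play meets the benchmark exactly.
print(responses_per_minute(43, 60))  # 43.0

# A shorter observation window can still be expressed as a per-minute rate.
print(responses_per_minute(20, 30))  # 40.0
```

Normalizing every observation to the same per-minute unit is what lets short and long play sessions land on the same chart and be compared directly.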

The Experiment

Our friendly experiment required our players, etanPSI and LonestarF1, to engage in 30 trials in each condition. The first condition was Massed Trials, which involved 30 complete repetitions without any interruption between trials. Successes could continue on to the next section, but failures required the trial to begin again without any (controllable) pause or break. The second condition was Spaced Trials, where our players were required to take at least a few seconds between trials to chat, breathe, take a drink of water, or engage in any other free-operant behavior in that gap. We did not hold our players to a specific time limit on these, but on average they ranged between 10-30 seconds. We would then compare the two conditions to see which appeared to give the players the greatest improvement.

Our players reported themselves to be motivated to beat the game, and the challenge of proceeding through the game served as a conditioned reinforcer. This free-operant preference assessment appeared to have some validity, as these players put themselves through 60-270 trials per recorded play period, well above our 30 (60 with both conditions) trial requirement for the experiment. The players were free to agree to the conditions of the experiment or decline them as they felt appropriate. Tracked periods that did not meet the criterion for the experiment were discarded, and the next session which did was counted. We called it “Science Mode” when the players were agreeing to the experiment terms. Overall, 80% of tracked Massed Trials fit the experimental criterion, and 62% of tracked Spaced Trials fit the criterion. This provided us with a breadth of data for getting a general idea of the factors which may contribute to their specific learning and abilities in completing the game itself. By the end of the tracked periods, both players had successfully completed the game and beaten the final boss.

During this period, both players went through high rates of failure, where successive failures within 10 responses were common when they impacted enemy projectiles, environmental hazards, or incorrect landings. This was a common function of the game’s difficulty, which had a degree of punishing effect on responding. In more cases than not, these conditions did not cause either etanPSI or LonestarF1 to quit the game completely, but instead led to a naturally chosen pause to breathe, react with a verbalization, or take a moment to process. When Massed Trials were being tracked, these series of 30 responses were discarded, but when Spaced Trials were being tracked, these series were kept if they held to the same spaced pattern for the following responses.

Our target goal for this experiment was to see how well the players remained near the average number of successive responses (43) per minute that had been tracked from successful win conditions previously. Their responses were tracked within a range of 20 to 60 on a Standard Celeration Chart. By averaging 30 tracked responses (some as low as 1, others as high as 77 per minute), we were able to place the average within these intervals on the chart and compare same-day or close-proximity-day responses from both conditions.

Previous research by Fadler et al., and others they referenced (Foos et al., 1974; Rea et al., 1987), suggests that spaced trials are the superior method of skill acquisition, but we noticed in etanPSI’s and LonestarF1’s play styles that massed trials were preferred. Cursory investigations of other players showed the same. Faster restarts appeared to give higher rates of reinforcement, which in turn led to success within a single day’s time that might not have been possible if play had been delayed or discontinued. It did appear that, during this period, higher rates of repetition of these pattern-based motor behaviors affected the end result of success.

In their article “The acquisition of skilled motor performance: Fast and slow experience-driven changes in primary motor cortex”, Karni et al. (1998) suggest that there are different learning stages, and that experience-driven changes in the brain affect different types of learners in different ways: “We propose that skilled motor performance is acquired in several stages: “fast” learning, an initial, within-session improvement phase, followed by a period of consolidation of several hours duration, and then “slow” learning, consisting of delayed, incremental gains in performance emerging after continued practice. This time course may reflect basic mechanisms of neuronal plasticity in the adult brain that subserve the acquisition and retention of many different skills.” We will not go too deeply into biological factors in this article (since we did no MRIs on our players), but if you are interested, the article is cited below. However, this “fast learning” does appear to coincide with our conceptual Massed Trial format of learning, and the within-session improvement phase may be a factor in what we are seeing in the results of etanPSI and LonestarF1.

The Results

The results from our experiment were astounding. We found a clear favorite in both players’ preferred style of trial and in the ability of their skills to improve with it. Both players showed similar failure (0-1) and win (~43) successful responses per minute, and in the cases leading to successes against particularly difficult bosses, both exceeded these by going over 70 successive correct responses per minute!

With etanPSI, we were also able to see some situations where spaced and massed trials, interspersed, had a greater degree of success than when they were split into 30 consecutive trials each. When he was able to engage in repetitive environment/platform-based difficulties, massed trials were more successful, but when dealing with alternating projectile challenges from game bosses, spaced trials were useful in mitigating the punishing effects of failure conditions. Higher-volume vocalizations, high-intensity percussive maintenance to gaming instruments, and a broader vocabulary appeared to lend a restorative effect to attentiveness and response rates in the following massed trial conditions.


A Dpmin-11EC Standard Celeration Chart from our experiment.

In both conditions, we saw consistent acceleration of successive correct responses per minute from Massed Trials, which may also have been due in part to the increase in difficulty as the players progressed, requiring higher outputs of responses. Nevertheless, the players rose to the occasion and appeared to hold to improvement in responding and pattern recognition over the course of 30+ trials per day. Where many had failed and given up, these two players not only succeeded, but excelled at an incredibly difficult game.

The Fun!

Now that you know the story of our fun experiment, here’s where you can donate and thank our amazing players for their time and skill, as well as help the lives of countless children receiving medical services through the Children’s Miracle Network Hospitals! 100% of donations go directly to charity and are tax deductible! Help our players’ team exceed their goal and change lives!

Donate to our amazing experiment volunteers!

etanPSI’s Extra Life Page

LonestarF1’s Extra Life Page

Like the science? Donate to the behaviorist!

Chris S’s Extra Life Page

References:

  1. Karni, A., Meyer, G., Rey-Hipolito, C., Jezzard, P., Adams, M. M., Turner, R., & Ungerleider, L. G. (1998). The acquisition of skilled motor performance: Fast and slow experience-driven changes in primary motor cortex. Proceedings of the National Academy of Sciences, 95(3).
  2. Wimmer, G. E., & Poldrack, R. A. (2017). Reinforcement learning over time: Spaced versus massed training establishes stronger value associations.
  3. The Precision Teaching Learning Center. http://www.precisiontlc.com/ridiculus-lorem/

Photo Credits: etanPSI & Lonestar F1 http://www.twitch.tv