Performance Data Collection For All Professionals

No matter what you do, you’ll often find yourself in a position to teach a skill or train someone in a proficiency you have. For some of us, that happens many times over. One of the most essential parts of my line of work is data collection on human behavior and performance. Over the years I’ve met hundreds of professionals and paraprofessionals who have seen how behavior analytic therapy and training are delivered using daily data collection and measurement, and I’m often asked, “Do you have a spare sheet I could use?”. Workshops, after-school programs, camps, job training events, painting classes, apprenticeships, exam prep, clinical trainings, driving courses, and all sorts of other skill-based events have given me opportunities to show what data collection can be used for, and how it can be applied to any profession where one person needs to learn a new skill and their performance needs to be evaluated in a well-defined and stable way. If this is something you do, or have an interest in doing, I have just the form for you. In just 15-30 minutes of reading and reviewing the instructions below, I aim to make sure you learn and can use the following tools from the world of applied behavior analysis:

  • How to track data on performance for a single day and across days.
  • What a “Cold Probe” is, and how you can use it to configure and adjust your training plan.
  • What “Discrete Trials” means, and how you can use them to work on one or several skills in a single training session and deliver effective feedback for performance improvement.
  • How simple and effective percentage data is for performance.
  • How to practice a trained skill repeatedly without becoming repetitive.
  • When to deliver reinforcement (social praise) for success, and when to deliver prompts (correction).
  • How to compare today’s performance of your client to their future or past performance and use visual analysis of the data to make better decisions.
  • What “behavior coding” is and why defining our target performance goals matters.
  • How to do an analysis of component skills and break your trained skill down into pieces.

I am attaching the link to this performance data collection tool below. You can either print it out and fill it in by hand, or use it digitally if you carry a tablet or similar device. The PDF has been formatted with text fields for easy typing, a spot to drop your logo into the heading with no fuss, and data sections that export cleanly into the spreadsheet software of your choice. There is some very advanced software out there that can do more than this. This is not the be-all and end-all, and if linear regressions or reversal designs are your thing, this might not check all of your boxes. For the research level of analysis you might use in a human operant lab, I suggest looking into subscription software, but if you want something practical, easy to use, and completely free of charge, by all means enjoy the form below.

Instructions:

Let’s talk about the top portion of the form for a moment where we have three fields:

  • Name:
  • Date:
  • Instructions:

When we are training an individual, or even a small group of individuals, we need a way to separate out performance data so that we do not get confused when it comes time to evaluate and analyze it. Each individual’s data stays separate from everyone else’s, and each day’s performance is kept distinct from every other day’s. The “Name” field here refers to the individual you are training, not the trainer. We will also need the date of the training so that we can review our data in order, and instructions if we have multiple trainers delivering the same training across different times or dates. Every profession is different and every trainee is going to require different skills, so I cannot describe every form of instruction you might want to use here. I would suggest something concise and to the point. Your co-trainers on the topic will likely understand the skills already and only need guidance on how to structure and deliver the training. For example, if we had a client we wanted to train to high proficiency in jump roping for their schoolyard double-dutch competition, we might want our trainers to know what to have ready.

Cold Probes:

In behavior analytic terminology, a “cold probe” is a test of a skill, given without prompting or incentives, to see where the client’s performance is without assistance. Simply put, at the start of your training or teaching session, you ask them to perform the skill and see how they do. Can they do it completely independently, at your established level of competence? If so, you might mark a “Y” for “Yes”. If not, you might mark an “N” for “No”, and that gives you an idea of where that day’s training targets might focus. Cold probes are useful when you have a client who has mastered something, or maybe is coming in for the first time, and you want to see if they can produce that specific target performance on demand. A cold probe is not a final answer on whether a person has a skill in their repertoire, but it does give you a sample of their unaided performance, and you can use your training judgement to decide what they might need taught, practiced, or built into a long-term strategy for improvement. Cold probes are tools, not something to make or break a training plan on. Performance can fluctuate. Use them to determine a focus for that day, but keep in mind that focus might only be a part of your overall goal for the client. You can also use cold probes to drop a planned part of the training that day that might not be worth the extra time. If our imaginary jump roping client can perform their three alternate foot step jumps without aid, perhaps we gear that day’s training toward something a little more advanced to make the best use of our time.
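If you ever move your probe marks off the paper sheet and into a file, even a short script can preserve the same Y/N structure. Here is a minimal sketch in Python; the dates and jump rope skill names are my own placeholders, not part of the form:

```python
# A minimal sketch of digitized cold probe results. The skill names, dates,
# and structure here are illustrative only, not part of the PDF itself.
cold_probes = {
    "2024-05-01": {"Alternate foot step jump": "N", "Basic bounce": "Y"},
    "2024-05-02": {"Alternate foot step jump": "N", "Basic bounce": "Y"},
}

# Today's training focus: any skill probed "N" (not yet independent) today.
today = "2024-05-02"
focus = [skill for skill, result in cold_probes[today].items() if result == "N"]
print(focus if focus else "All probed skills were independent; pick a more advanced target.")
```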

A Component Skills Analysis and Discrete Trial Training (DTT):

We can use our cold probe data to figure out which skills to target for improvement. Often, when a trainee struggles with a competency, the skill is made up of smaller, more basic skills, or has a precursor skill that needs to be strengthened before they can move on to the original target. Discrete Trial Training (DTT) is a process by which a complex skill is broken down into smaller component behaviors, which are then taught in order to meet the original target. Each component is “discrete”, a singular skill presented in its own distinct training opportunity, where we can follow the demonstration of the skill with either praise/reinforcement when it is performed correctly, or prompting/feedback when there are errors that need our assistance. Each practice opportunity is a new chance to try again and build toward greater success. The number of trials you use is not set in stone, but for this training sheet I have provided five opportunities for each component skill. Let’s talk about our example jump-roper. What would happen if our trainee did not perform their alternate foot jump to our criterion of success? Take a look at the sample data below.

In this example we’ve had our trainee demonstrate the skill five times, with each component skill performed an equal number of times. What might this data suggest? Is our trainee having difficulty in all areas? Probably not. In this case we see that they are able to lift their left foot into a jump perfectly for all tracked trials, but when it comes to the right foot, and to the heels being up during jump roping, we see errors. One advantage of these kinds of trials is that you can compare performance in one component behavior to another. Look at the data above: the right foot lifting and heels up components share a trend of errors. That could lead us as trainers to suspect a relation between the two, and at this point our training and corrective procedures can be tailored to help the trainee improve. With this style of data collection we can pinpoint exactly where errors occur, which makes our training time tailor-fitted to the need and increases our efficiency.
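If you keep your trial marks digitally, the same comparison takes only a few lines. Here is a minimal Python sketch with invented results that follow the pattern described above; the skill names are placeholders for your own components:

```python
# A minimal sketch of five-trial data for three component skills.
# "+" marks a correct trial, "-" marks an error; results are made up.
trials = {
    "Left Foot Up (Jump)":  ["+", "+", "+", "+", "+"],
    "Right Foot Up (Jump)": ["+", "-", "-", "+", "-"],
    "Heels Up":             ["+", "-", "-", "+", "-"],
}

for skill, results in trials.items():
    correct = results.count("+")
    percent = 100 * correct / len(results)
    print(f"{skill}: {correct}/{len(results)} correct ({percent:.0f}%)")
```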

Do not forget about reinforcement in these stages. Reinforcement is what increases rates of the target behavior that it follows. We praise and reward as soon as a success, or approximation to success (improvement) is seen. By praising and rewarding what goes right, we can keep that level of performance high. We can use reinforcement following prompts to maintain a level of engagement and improvement. Do not simply focus on the errors alone. Target the successes and reinforce them. A solid training procedure is heavy on reinforcement.

Percentage Data and Analysis:

In our trial data above we use percentage data as a way of measuring performance and success. In this scenario, using five trials means that each trial counts as a distinct 20% of the final score. When we measure performance we want to have a criterion by which we consider a skill mastered. Not every skill can realistically be performed at 100% every single time. In most cases, keeping 80-90% as a goal is not a bad benchmark to have in mind. It is well above blind luck, and with proficiency at those levels, it is often easier to discover which environmental stimuli correlate with higher performance than others. Does our jump roping trainee do better during our individual training than they do in front of peer crowds on the playground? A difference of 20% or more, if we see that pattern emerge over time, might tell us exactly that.
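For anyone who likes to see the arithmetic spelled out, here is a minimal Python sketch of the same idea, using an assumed 80% criterion and made-up scores for two settings:

```python
# A minimal sketch of the arithmetic: five trials at 20% each, checked against
# an 80% mastery criterion, and a comparison across two hypothetical settings.
MASTERY_CRITERION = 80  # percent, the 80-90% benchmark discussed above

def percent_correct(correct: int, total: int) -> float:
    return 100 * correct / total

individual_training = percent_correct(4, 5)  # 4 of 5 trials correct -> 80%
playground_peers = percent_correct(2, 5)     # 2 of 5 trials correct -> 40%

print(individual_training >= MASTERY_CRITERION)      # True: at criterion
print(individual_training - playground_peers >= 20)  # True: a gap worth watching
```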

The sheet above is structured so that you can export data from the probe and trial sections into a spreadsheet, where you can use a visual analysis (graph) of your choice. Like many professionals, I favor line graphs showing percentage of performance by date. By combining the results of multiple daily data sheets, you can create graphs and perform a visual analysis of progress in a way that is cleaner than raw data. By plotting the final percentage scores of success against the date of each data sheet, you can see something like this.
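If you prefer to build that graph in code rather than in spreadsheet software, here is a minimal Python sketch using matplotlib; the dates and scores are placeholders standing in for the values exported from your own daily sheets:

```python
# A minimal sketch of the line graph described above: percent correct by date.
# Dates and scores are made up; substitute your own exported data.
import matplotlib.pyplot as plt

dates = ["05/01", "05/02", "05/03", "05/06", "05/07"]
percent_correct = [40, 60, 60, 80, 90]

plt.plot(dates, percent_correct, marker="o")
plt.ylim(0, 100)
plt.xlabel("Session date")
plt.ylabel("Percent correct")
plt.title("Alternate foot step jump: percent correct by date")
plt.show()
```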

Reviewing performance data with your client (or their caretaker) is key. Visual data presentations like the one above are a tool in your toolbelt for making large trends easier to understand. Line graphs are an easy way to show trends and to break down where performance was, compared to where it is now. Even a negative trend can be a great starting point for discussing factors outside the training and analysis that might be at play. You may even learn what is affecting the graph but is missing from the training regimen. No data is ever wasted. It is all a resource.

Behavior Coding:

The final sections of the sheet give you spots to do what we in the field of behavior analysis, and research in general, call behavior coding. Behavior coding is the process of operationally defining your target performance skills in observable and measurable ways. When you are working with a team, or with multiple trainers, your success can depend on whether everyone is measuring the exact same things the exact same way. We want as much inter-observer agreement as possible. Coding makes that possible.

Let’s take an example from our jump roping client above. One of the component skills we chose was “Left Foot Up (Jump)”. On its own, that label can be confusing; it needs an operationally defined, coded skill behind it. We can use the behavior coding section to write simple, quick definitions so that everyone measuring that skill in the future knows exactly what it looks like and what we consider success. The better our coding, the more sensitive our data. We want a middle ground: enough detail to be precise without burying the behavior in wording. There is a difference between precision and a code that makes tracking impractical. The main goals are that the definition be observable, so that anyone watching has the same opportunity to track it exactly as we would, and measurable, so that our coded target skill fits into the data tracking format.

For example: “Left Foot Up (Jump)”- The left foot is lifted up completely from the ground during a jump with enough space for the jump-rope to clear it underneath.

You may increase the precision of your measurement to match the distinct needs of the skill, but the goal is to be sure that everyone tracking data on that skill is using the same definition. The one above is what I would consider low to medium in precision, but it will do for what we need. Match your definitions and coded behaviors to your specific profession and needs, but be sure they are not vague or subjectively unobservable (“a spirited and joyful jump” could mean just about anything to anyone). If you need to use what some would consider subjective language, try coding for that as well (“joyful” is defined as smiling during a jump, etc.).
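If your team keeps its definitions digitally, a simple shared “codebook” structure can hold them. Here is a minimal Python sketch; the “Heels Up” wording is my own illustration and not part of the form:

```python
# A minimal sketch of a shared "codebook": each coded skill mapped to its
# operational definition so every trainer scores it the same way.
# The "Heels Up" definition below is illustrative only.
behavior_codes = {
    "Left Foot Up (Jump)": (
        "The left foot is lifted up completely from the ground during a jump "
        "with enough space for the jump-rope to clear it underneath."
    ),
    "Heels Up": (
        "Both heels stay off the ground while the rope passes under the feet."
    ),
}

for skill, definition in behavior_codes.items():
    print(f"{skill}: {definition}")
```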

Keeping a Running List of Component Skills:

Component skills that have been mastered, or that remain ongoing targets for future weeks, can be listed on the second page as well. This helps us keep track of how we broke our probed larger skills down into their discrete and distinct components. Keeping a list of what we have worked on, and what we have yet to work on, gives us better ideas for trials to run in the next training opportunity, a log of what was mastered or completed in a previous training, and a place for note taking on the component skills that fits the needs of your professional training. If you use the component skill section to determine future training targets, I would suggest that less is more. Training ten skills within an hour or two makes sense, but cramming tens of skills into the same time frame might lead to lesser mastery across the entire list. Focus on the most important component skills that make up the larger cusp skills. You may find success in picking particular targets for each training session or week.

Further Training:

I hope you enjoyed the material here and the review. It would be impossible for me to cover every potential use of these sheets, or the more complex data analysis processes you might want them for, but if you have need of further training, consultation, or simply have questions, you may reach me through this website or by email at csawyer@behavioralinquiry.com. I would be happy to help you with further training on this data sheet, with adapting and constructing your own, and with any further interest you might have in performance tracking or behavior analysis.

Comments? Questions? Leave them below.