Science History: Do elephants ever forget?

It’s time to go back into the annals, to a time when applied behavior analysis, a budding biological and philosophical branch of psychology and medicine, was in its relative youth. A time when the exploration of behavior was catching on at fever pitch, and the whole scope of life on earth could be put under the microscope of a growing analysis that wanted to peel away all excess explanatory fiction and get to the truth of one burning question: Do elephants ever forget? That’s right. It was 1975, and even now, we’ve all heard about elephants’ reputation. Intelligent creatures. Great memory. Potentially endless ability to recall. Was that true? Could it be tested? With three elephants and a few boxes of sugar cubes, it was a great day for science. It was the day we will all remember as the day we became sure that elephants are very, very good at memorization.


Published in the Journal of Applied Behavior Analysis in 1975 (volume 8, issue 3), four researchers took on the age-old question and became heroes of empiricism and elephants alike. We had Hal Markowitz, Michael Schmidt, Leonie Nadal, and Leslie Squier. In the far and distant land of the Portland Zoo, they had an idea. The idea was that they could investigate the memory of elephants by constructing a test based around a plywood, plexiglass, and slate operant panel, and this apparatus would create a light-dark simultaneous discrimination task. What’s an operant panel? Think about the word “operant” here as something which modifies behavior by reinforcing or inhibiting certain effects and consequences. “Operant” can also be used to describe spontaneous behaviors that come under the control of other stimuli, which is also accurate for what the researchers were trying to get the elephants to engage in. Basically, it’s a device where a stimulus (light or dark) is presented, the elephant reaches in with its trunk and presses a disk, and if the disk it presses matches the stimulus it was given, the response is recorded as correct. This was a miracle of technology at the time, mind you. It had a galvanized steel feeder. It had microswitches. It was incredible. Built specifically for the elephants to be able to comfortably reach in with their trunks, press the disk of their choosing, and then get a treat when they were right. They learned to be right very often, and as you will see, retained that knowledge over a long period of time when they were set up to do it again.
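To make the trial structure concrete, here is a toy sketch in Python of the match-the-stimulus logic described above. The function names, trial counts, and accuracy figures are my own illustrative assumptions, not details taken from the 1975 apparatus.

```python
# Toy simulation of a light/dark simultaneous discrimination task.
# Illustrative only -- not a model of the actual 1975 apparatus.
import random

def run_trial(stimulus: str, choice: str) -> bool:
    """Return True (deliver a sugar cube) if the pressed disk matches the stimulus."""
    return choice == stimulus

def simulate_session(n_trials: int = 20, accuracy: float = 0.9, seed: int = 0) -> int:
    """Simulate an elephant whose probability of a correct press is `accuracy`.

    Returns the number of reinforced (correct) trials in the session.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        stimulus = rng.choice(["light", "dark"])
        # With probability `accuracy`, the elephant presses the matching disk;
        # otherwise it presses the other one.
        if rng.random() < accuracy:
            choice = stimulus
        else:
            choice = "dark" if stimulus == "light" else "light"
        if run_trial(stimulus, choice):
            correct += 1
    return correct
```

In this sketch, a "retention test" eight years later would simply be another call to `simulate_session` with the same high `accuracy`, since the discrimination stays in the repertoire.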


Our subjects were three Indian elephants in the Portland Zoo, two originally from Thailand, and one from Saigon. Their names were Rosy, Tuy Hoa, and Belle. They were very willing participants in this experiment, uncoerced, and not even deprived of food to help their motivation. Why? Because all it took was sugar cubes. They loved sugar cubes. Once they got a taste of the sweet treat, wildly different from their usual diet, they were willing and able to learn this contraption and do the best operant chamber work an elephant had ever done before. By using the positive reinforcement of these sugar cubes for correct responses, learning came quickly. It would also turn out that with a great reinforcement system, and a willing elephant, they absolutely remember the best ways to get sugar cubes.


This was also the start of a great behavioral research program at the zoo. The program was unfortunately ended within a decade due to administrative policies, but over a stretch of more than eight years, we learned that elephants truly had a knack for memory. The raw data, unfortunately, was destroyed in a fire at Reed College later on, but the published work remains for us to enjoy.


Markowitz, Schmidt, Nadal, and Squier had a long term plan. When you break it down, this experiment was actually two experiments. First, to see if the elephants could master the original task of discriminating light from dark on the corresponding plexiglass and slate circles. We learned that within six minutes of reinforcing correct responses, Tuy Hoa was amazing. She was a fast learner and had a hunger for sugar cubes. The other two also did well with some modifications (we’ll talk about that in a second). The second experiment happened a little over eight years later, when a (mostly) salvaged duplicate of the apparatus was created, no extra training was given, and these three elephants tried again with stunningly accurate results. Tuy Hoa herself was able to reproduce mastery with only two mistakes. She still enjoyed sugar cubes eight years later too. Rosy and Belle were a little slower to catch up, but as it turned out, after some further medical investigating between experiments, consultants on vision were able to confirm retinal damage from vascular deficiency in these two. With Belle, they tried different colored lights to work around the damage, but at the time the literature on elephant color vision was fragmentary, and they were unsure whether elephants in general could see color. They couldn’t be absolutely certain, but with the colored lights, the researchers did see much better responding from these two pachyderm ladies under blue and green lamps.


This study also had the benefit of shining a light (get it?) on the limits of medical research on elephants, which were common in captivity at the time but had no matching depth of research. The dearth of specific biological information on what elephants could perceive turned this simple study of memory into something that raised questions about every other form of visual acuity and behavioral research potential. As people would learn later, elephants actually have dichromatic vision, and they can see reds and greens fairly well during the day. It also turns out they can see blues and violets better at night. They can distinguish color, just not as well as humans do. That is about as far as my understanding of elephant vision goes, but I imagine that if our researchers had known these details back then, they might have restructured their light/dark experiment around colored lamps in interesting ways. We can only speculate.


What we can take away is that, thanks to some fascinating research in 1975, we can be confident that elephants have the capability to remember a task they’d completed eight years prior and, with enough sugar cubes, reproduce their results without further training. It was in their repertoire, and studies since have only confirmed (and exceeded) this original eight-year span of memory.

Do elephants ever forget? Maybe.

Do they forget something that got them sugar cubes eight years ago? No way.




Comments? Questions? Leave them below.


References:

Markowitz, H., Schmidt, M., Nadal, L., & Squier, L. (1975). Do elephants ever forget? Journal of Applied Behavior Analysis, 8(3), 333–335. https://doi.org/10.1901/jaba.1975.8-333

Image Credit: 

Stock photo: pexels.com

Edits by: Christian Sawyer, M.Ed, BCBA

Getting Back Up After Failure

Failure is a tough topic to bring up, but a necessary one. When we are in it, it’s all we can think about. When we are past it, we often do not want any further reminders of it. Failure, behaviorally and psychologically, is a variable in everything we do, and it factors into every future strategy we use. It is a part of our past that defines how we interact with the future. In a previous writing I discussed “Overcoming the Fear of Failure,” but this one will be about what to do when it happens to us. How do we move on? How do we grow from it? How do we set our future expectancies to do better? To what do we attribute failure? Answering these questions and more is necessary to make each failure a stepping stone to a future success, or else we might find ourselves in a loop generating ever worse strategies. Instead, we need to learn to get back up. Let’s talk now about some of the research we have on the topic and how we might navigate failure and find motivation from it.

Mastery Orientation vs. Learned Helplessness

When it comes to deriving motivation from failures, both big and small, the strategies that we develop in childhood have a great deal of influence on our current behavior. You may have heard the term “learned helplessness” before, which describes a pattern of low motivation and low output after repeated failures. The individual receives so little reinforcement following their actions that they simply stop trying. Diener and Dweck (1978) popularized these concepts in a study of youths whom they split into two groups based on patterns and strategies the children displayed without being taught. They found that when faced with repeated challenges and varying degrees of failure, some children would consistently give up and reduce responding, while others would reassess and modify their responding based on the feedback from their failures. The researchers were very interested in the cognitive strategies that both of these groups displayed, all without any coaching, and determined that even at a very young age there were clear distinctions between the two types based on their beliefs about locus of control. A locus of control is a belief system that people use to determine whether they have control over outcomes, or whether outside forces do. A person with an internal locus of control would see outcomes as largely determined by their own actions and within their future control. An individual with an external locus of control would see outcomes as largely determined by outside forces or the environment. Now, there is a part of this study that some consider a little unfair. No matter what answer the children gave to their respective stimuli at the start, they were told they were incorrect. How they responded afterwards largely correlated with how they viewed their locus of control.

Mastery oriented individuals appeared to generally attribute their failures to a lack of effort or something they’d missed. Even at that age, their first reaction focused on pivoting and reassessing.

Learned helpless individuals tended to attribute the failures to the situation as largely beyond their control (in this case, without knowing it, they were technically right as far as the experiment was concerned).

So what happened?

Mastery oriented individuals kept trying, kept changing their responses based on feedback, and largely kept at the task longer than the other individuals. They showed no decline and became more sophisticated in their strategy use (which was eventually validated).

Learned helpless individuals tended to show a progressive decline in the use of good problem-solving strategies and began to substitute less sophisticated, poorer strategies, ones that would be even less likely to work.

This model of attribution is still used to this day, but it has a few caveats. Unlike in this study, in the real world people are not always one or the other. Many cases, especially complex problems, require using multiple loci of control, and also understanding whether the factors we evaluate and learn from are stable (long term) or unstable (temporary). The stability of an attribution is its relative permanence as a factor. If you know you are good at jumping rope, meaning you have high ability, you have a stable factor to consider your next success with. But if you attribute jumping rope to how much effort your legs can put out, then the source of success is unstable—effort can vary and has to be renewed on each occasion or else it disappears. We’ll talk a little more about how effort and ability work in a second. The important part is that when it comes to evaluating our part in the grand scheme, an internal locus of control tends to help us perform better. Let’s look at some examples.

It rained today and we got all wet. We hate that. What if it rains tomorrow and we don’t want to be rained on? Would a belief system around an internal locus of control make sense if we focus purely on ourselves and ignore the sky? Not very well. No matter how many strategies we might attempt based on our own feedback, we are unlikely to change the weather. On the other hand, a person using an internal locus of control might decide to travel away from the storm as a strategy, bring an umbrella, or wear a rain coat, which has some functionality for them even though the rain still happens where they once were. An internal locus of control works best when we account for our own solutions without ignoring the immutable environmental factors.

What about using an external locus of control on task performance? Perhaps we’d like to pick up three items off of our room’s floor within ten minutes. We might begin to generate all the reasons why we cannot, and how far the floor is from our fingers, and how many other factors there are between the items and the trash can, leading to very low performance on this task within the time frame. It’s the room that’s messy. It’s been messy for days now. So messy. So much mess too. What if we just pick up one thing then go back to bed? It’s still messy. Might as well not. Then we’ve effectively wasted time generating non-functional thoughts (poor strategy), and nothing was done (poor outcome). That isn’t helpful either.

Generally speaking, when it comes to our own behavior, within our own repertoires of ability, it is wiser to use an internal locus of control to conceptualize our potential impact on tasks and problems. When there are larger systems and unavoidable outcomes from the outside, it does not hurt to consider what lies in an external locus of control. We, as individuals, cannot control everything. But as we see above, when faced with continual failure feedback, utilizing an internal locus of control early on can help us come up with strategies that mitigate the external circumstances and perhaps land us in a better spot. There is no harm in generating increasingly sophisticated strategies that put us into better conditions, so that the factors outside of our control can be managed from an ever stronger position on our part. Sometimes, though, failure comes after we thought we had a great strategy focused on our own improvement, and it just did not work.

How do we do it? How do we take back some semblance of control when the waves of failures keep coming?

Consider that these beliefs about locus of control, and about how our actions impact our goals, are called attributions, and they have an effect on our future behavior and how we respond to challenges. When we attribute too much to external causes, we may decrease our attempts. When we attribute too much to internal causes, it can sometimes lead to more sophisticated problem solving, but it can also blind us to factors that might be outside of our control and narrow our perspective too much.

Mediating these attributions, not just in the moment of the first failure we come across but in those that follow, can help us build a better perspective on our situation. We can also rely on our social circle, relaying our experiences to see if others can spot what we might have missed and help our future strategies find better success.

  • Evaluate your current attribution and locus of control for the problem.
  • What are some ways we can evaluate our own pattern of responding and improve it? (Internal Locus)
  • What are some environmental factors that impacted our failure and that our behavior did not change? (External Locus)
  • How do we refine our strategy so that our next attempt puts us in a better position against those environmental variables if they happen again? Can we mitigate what held us back?

Purposive Behaviorism and Re-Training our Attributions

As individuals we can create systems that help us maintain a level of reinforcement to offset failure, and as social creatures, we can help create an environment of positive interactions that lets us both realize our achievable goals and find strategies to access them. Thankfully, we have concepts and theories at our disposal to explain the hows and whys. Let’s talk Purposive Behaviorism and how we can re-train our attributions.

If you’ve read my other works on this site, behaviorism itself is familiar to you. Purposive Behaviorism goes beyond the more mechanistic systems of reinforcement and punishment, stimulus and response, that you see in some of the more traditional theories. Yes, reinforcement is important to keep us moving forward. Yes, punishment (failure) can knock us back. But we are human, complex beings, and a good analysis always takes that into account. From a purposive behavior standpoint, we set goals and work hard to achieve them. That is an intrinsic part of what it is to be human. In Edward Tolman’s older theories, the term cognitive map was developed to describe how we do that. Our cognitive map is how we envision our path to our goal. We all have unspoken beliefs that a specific action on our part will get us closer to an intended consequence or goal. Let’s call these expectancies. They cover both the behavior we intend to perform and the goal we intend to achieve with it. It’s a roadmap. Tolman also believed that we learn from our successes and failures largely through a latent process. There is an automaticity to reinforcement that helps us pick up what has worked and set aside what has not, and integrating more cognitive, conscious strategies with what we have learned latently is the best way to move forward. Keep in mind not just what you can consciously recall, but also what might have been learned latently from the experience.

When we map out our actions to meet a goal, we often give ourselves a time frame (hopefully realistic) in which to reach it. By giving our goals, and our conceptual map of how to achieve them, a context in time, we can better judge how to act and what to expect. Generally speaking, acting now is better than acting later unless you have a more advantageous use of time further along to position yourself toward your goal. With our expectancies in mind, we have our actions, our goals, and our time frame. As adults, we also learn to discriminate effort from ability. Effort can be defined as the amount of energy or resources we must expend to progress towards the goal, while ability may be defined as our existing proficiency or skills that can achieve it. In most situations it is a combination of both effort and ability that helps us reach complex goals.

Let’s reintroduce failure here. Let’s say that we mapped out our goal, we made our attempt to the best of our effort and ability, and we find that we simply did not meet success. Perhaps we even see repeated failure. It can be easy to get disheartened, and even travel down that path of learned helplessness, but we should do everything we can to avoid it. Let’s imagine that we did our best to conceptualize our locus/loci of control, and they were as accurate as they could be, but we still missed the mark. We tried, we failed. Let’s say our expectancy, our goal and plan to reach it, is still very important and we do not want to change the goal. How do we use our time most effectively now to get back up and try again? We need to re-train ourselves, and that means re-training our attributions.

Do we have the ability to achieve this next step in our goal? What did our failure show us?

Did we apply the necessary effort to achieve the next step in our goal? What did our failure show us?

Were our attributions based on factors that were stable (ability) or unstable (effort)?

Evaluating our ability and effort, and attributing our failures and successes along these variables, is key to knowing when something can be achieved alone, when further training, resources, or additional help from others is needed, and how to adjust our plans going forward to include the more sophisticated, better-evaluated plans that come from the experience. Failure here is a teacher. It’s not always easy to maintain effort after a failed attempt, even if the ability was there. To retrain ourselves to analyze our attributions of the failure correctly, we must take some time to evaluate the factors. Use this tool from Dweck (2000), whom we saw in that earlier study too, below to take a particular situation you might have been in, and see where the attributions fall.
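The grid being referenced is the classic locus-of-control by stability layout. As a rough illustration, it can be sketched as a small lookup; the cell labels below are the standard textbook examples (ability, effort, task difficulty, luck), not reproduced from Dweck’s own tool.

```python
# A minimal sketch of the classic locus x stability attribution grid.
# Cell labels are the textbook examples, used for illustration only.
ATTRIBUTION_GRID = {
    ("internal", "stable"): "ability",
    ("internal", "unstable"): "effort",
    ("external", "stable"): "task difficulty",
    ("external", "unstable"): "luck",
}

def classify(locus: str, stability: str) -> str:
    """Map an attribution's locus and stability to its textbook cell label."""
    return ATTRIBUTION_GRID[(locus, stability)]
```

For example, “I didn’t practice enough” lands in the internal/unstable (effort) cell, the one most amenable to change next time.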

Plug some of your attributions into the grid above and see where they fall. Do you think anyone else evaluating your situation might have a different series of attributions for it?

We tend to get the best results out of ourselves and our planning by attributing a reasonable portion of our previous successes to internal and stable causes. What went right in the situation, within our ability, that we can consistently do again, even if there was an ultimate failure? Example: I might not have won the race, but this was close to my best personal time yet.

When analyzing our failures, we can go wrong by attributing things entirely to unstable and external causes. Things that we see as completely out of our control leave nothing for us to work on and grow from. Example: I was going to go in to work today but then the roads were so busy and you know I can’t drive on busy roads…

The take away:

  • Turning failures into successes takes analysis of what happened.
  • Sometimes we analyze the situation well and can think of some improvements for next time focusing on our internal factors.
    • “Stable Dimension” attributions help us reflect on our ability and how to improve it.
    • “Unstable Dimension” attributions help us reflect on our level of effort and if we can improve it next time.
  • If we see many attributions leaning in the unstable or external direction, maybe it could take an extra pair of eyes to help us get a new perspective.
    • Reaching out to a trusted friend, or experienced advisor on the topic.
    • Re-evaluating the attribution by considering internal factors.
  • Learned helplessness can arise from attributing too much to external factors and avoiding evaluation of internal ones, leading to poor problem solving and less sophisticated goal-directed behavior.

Getting back up after failure requires analysis of our actions, re-training our attributions to avoid learned helplessness, and consistent effort going forward.

What are some attributions you’ve thought about recently? Have the behaviors you’ve used to reach those goals been effective? Have they been ineffective? How has your belief system on the locus of control impacted the process? Have you utilized others to help you with alternate perspectives?

Comments? Questions? Feedback? Leave them below.

References:

Cooper, J. O., Heron, T. E., & Heward, W. L. (1987). Applied Behavior Analysis. Merrill.

Diener, C. I., & Dweck, C. S. (1978). An analysis of learned helplessness: Continuous changes in performance, strategy, and achievement cognitions following failure. Journal of Personality and Social Psychology, 36(5), 451–462.

Edward Chace Tolman. (2015). Introduction to Theories of Learning, 302–326. https://doi.org/10.4324/9781315664965-16

Hoose, N. A.-V. (n.d.). Educational psychology. Lumen. Retrieved November 11, 2021, from https://courses.lumenlearning.com/edpsy/chapter/attribution-theory/.

Molden, D. C., & Dweck, C. S. (2000). Meaning and motivation. Intrinsic and Extrinsic Motivation, 131–159. https://doi.org/10.1016/b978-012619070-0/50028-3

Schunk, D. H., Meece, J. L., & Pintrich, P. R. (2014). Motivation in education: Theory, research, and applications. Pearson Education Ltd.

Tolman, E. C. (1967). Purposive behavior in animals and men. Irvington.

Image Citations:

Title image: Getty Images/iStockphoto
Attribution Grid: Christian Sawyer, M.Ed., BCBA

Teaching Romantic Behavior

This is a topic that they do not teach in graduate courses in behavior analysis. It rarely makes it into the purview of supervisory topics for clinicians who are used to working with language and basic functional living skills assessments like the ABLLS-R, VB-MAPP, or PEAK, but it is a topic that eventually does come up. Sometimes you’ll have a teenager who has mastered a wide repertoire of individual and group friendship skills, has meaningful social experiences opening up ahead of them, and then the big question drops: “What if I want a boyfriend or girlfriend?” Like most behavior, including verbal and other social behavior, even romantic behavior can be broken down into components and taught, but unlike a friendly conversation or turn-taking game, the stakes are much higher.

With romantic relationships, many things appear simple, but as many of us know, without a strong foundation they can become a complex series of social and heavily emotion-evoking stimuli that alter judgement and lead people to make choices they might never have made before. In clinical practice, especially with teenagers on the autism spectrum, the “go to” resource I have used is the PEERS Treatment Manual, specifically for social skills in teenagers. As you can see from the image above (credited to the manual), it is an evidence based curriculum which focuses on breaking down the components of decision making when entering the world of dating, and it provides a great range of materials and practice strategies to be used before generalizing to the natural environment. For example, the first few steps listed above:

-Talk to mutual friends, if you have them.

-Flirt with your eyes.

-Give specific or general compliments, depending on how well you know them.

-Ask them if they’re dating anyone.

-Show interest in them by trading information and finding common interests.

-Laugh at their jokes when appropriate.

(Laugeson, 2015)

We can use very concrete behavioral examples of the types of social skills we’d like our clients to employ, while asking questions along the way that relate directly to the person they intend to engage with. It does not assume that romantic or dating success will follow, but it helps the learner pick up on cues as to whether their initial inquiries and engagement are heading in the right direction, while also having them be direct about their intentions so that the subject of their interest is also in the know. It helps with icebreakers, sourcing friends and using a network of others to help, and also (not captured above) brings up the topics of consent from the other person and parent involvement in early teenage dating scenarios. Everyone is informed, and the social behaviors are framed around the feedback each step of the way. From a behavior analytic perspective, it is easy to apply to real world scenarios while helping the learner recognize what lies ahead before they jump into the real situation. The next chapter, by the way, works on conflict within relationships, which I think is incredibly helpful, especially to learn early in life. I know I wish I had.

There are, of course, many schools of thought on romantic behavior which takes place pre-adulthood. Prior to a career in behavior analysis, I had a background in school counseling, focusing on adolescents in high school. The individual and group therapy work that goes on in a school was, to say the least, a little looser than what a behavior analyst might do. More often than not, in the school setting you were not teaching people to begin relationships, but rather coming in later when complications arose. So were the texts and resources given to us for dealing with both the formation of, and the conflict within, young relationships. In a school setting, assuming both parties attended the same school and had an interest in utilizing school counseling services, some of the resources were framed with a different outlook, around topics such as: Anger, Monopolizing, Attention Seeking, Resistance, Arguments and Fighting, “Contagious Exiting,” and Crying. As you might imagine, it was less of a skill building process and more of a crash course on dealing with tangled and heavily emotional topics through expression. I also want to be clear that I am not bashing this viewpoint; conflict resolution was often successful, but the framework was built around mental constructs, and it was usually a multiple step process led by the individuals themselves and the tangents of what led up to the issues that arose. In both the behavior analytic and the counseling frameworks, clarity was key, conflict resolution was taught, and communication was valued. I think both avenues held truths that were helpful in teaching and working through very high emotions, but the behavioral viewpoint was a little cleaner in its execution of what skills were being exhibited and what skills were not. This is only my opinion. Data point of one.

In my experience in both “worlds” of working with adolescents and teaching romantic behavior, it has become clear that the research out there and the effective curricula focus largely on the following skills:

  • Dating means different things at different ages. A 13-year-old dating is very, very different from an 18-year-old dating, both in definition and in the involvement of adults to mediate the wheres and hows.
  • What dating means, and what the expectations are, should be clear up front.
  • Identifying ahead of time what it might mean if the subject of interest is interested back, and also if they are not.
  • Framing what a relationship looks like for the age and interests of both involved.
  • Determining appropriate activities.
  • Setting and accepting boundaries.
  • Asking questions. Asking the right questions.
  • Individualizing the therapeutic training to take into account the interests and expectations of each party and what subjectivity means.
  • Conflict resolution skills ahead of time.
  • Respecting the privacy of the other person and not speaking out of turn even to the therapist on matters that are not free to discuss. Privacy skills.

This is sometimes just the start. There are layers that clinicians will be allowed to be privy to, and others that they will not. Respecting boundaries also applies to this therapeutic and educative approach. There are times when a client will ask you to back off, or to keep details to themselves.

In my experience, no two trainings have ever been alike, and when it comes to a behavior analytic or therapeutic approach, the client’s needs and interests come first. They may need some teaching on how to balance that with others’ needs and interests, but that is also a part of learning about what matters and what can develop.

Thoughts? Questions? Comments?

Understanding Control, and Hope for a Better Future

“The danger of the misuse of power is possibly greater than ever. It is not allayed by disguising the facts. We cannot make wise decisions if we continue to pretend that human behavior is not controlled, or if we refuse to engage in control when valuable results might be forthcoming. Such measures weaken only ourselves, leaving the strength of science to others. The first step in a defense against tyranny is the fullest possible exposure of controlling techniques. A second step has already been taken successfully in restricting the use of physical force. Slowly, and as yet imperfectly, strong man is not allowed to use the power deriving from his strength to control his fellow men. He is restrained by a superior force created for that purpose- the ethical pressure of the group, or more explicit religious and governmental measures. We tend to distrust superior forces, as we currently hesitate to relinquish sovereignty in order to set up an international police force. But it is only through such counter-control that we have achieved what we call peace- a condition in which men are not permitted to control each other through force. In other words, control itself must be controlled.” - B.F. Skinner, “Freedom and the Control of Men”

This quote, taken from B.F. Skinner’s “Freedom and the Control of Men,” stood out to me more strongly recently than it has before. It is both a warning and, in a sense, an expression of optimism for a future that can rein in force and better understand science. “Freedom and the Control of Men” is a title that at first comes across as antiquated and a little tone deaf in its word usage, but it was very much written as both a critique of its time and a message to readers of the future. “Men,” in this usage, refers to humankind, not a particular sex. In it, Skinner carefully takes into account the period in which it was written, the United States in 1955, while keeping an optimistic view of the future, and speaks to the reader about the topic of control and how “tyranny” can hide in an atmosphere of democracy. He speaks to future readers directly. Coercion, violence, and disproportionate uses of both were very much alive in 1955 as forms of control, and unfortunately for us readers of today, they are apparent still. Force and violence to control a population were spoken of as something to be left in the past, something to overcome as a society. Control that needed to be controlled itself. It is one of B.F. Skinner’s lesser known works, but it holds points that still underpin much of the behaviorist view of a better world built through science, not violence or ignorance. He did not shy away from the idea that there is purpose in aiming for perfection; perfection is not impossible, but it is not easy to attain either, even in a democracy. A message of humanitarianism driven by science. It is not ingrained; it is taught, shaped, and practiced. The painful realization for modern readers is that we have not come far at all from 1955.

In this piece, Skinner speaks to us in “Footnotes for the reader of the future”, which I found a helpful and insightful reminder that this was a piece of its time, but one not intended to stagnate there. It was meant to ground the reader in the period from which this early behavioral science came. B.F. Skinner believed in the improvement of the future through behavioral science, a belief that I think most people who study psychology, or are interested in it, share: things can be better if we just strive to understand them. “Freedom and the Control of Men” was not meant as a guidebook for stamping out freedom or forcing people down a path, but rather as a way to understand that control exists outside the connotations of coercion. There are good forms of control that bring order and progress and allow for “designing a new cultural pattern”, but there are also forms of control that hold all of that back and grasp at power for selfishness or indoctrination. If we do not understand how both the good and the more selfish forms of control work, designing a better future is a very difficult task. Only through science can an understanding of control be explored that is not skewed by propaganda or ideological misuse. Skinner poses a question that stands out and serves as the underlying point of his piece:

“The question is this: Are we to be controlled by accident, by tyrants, or by ourselves in effective cultural design?”

Effective cultural design is something that B.F. Skinner explores in many of his works, even in the fiction of Walden Two, and it always has an equitable and positive aim for humanity: behavior change that leads to a better world in which malevolent, violent, or selfish forms of control are not used on the populace. Misuse of power, described in the quote at the start of this essay, is something Skinner warns us about repeatedly, and something I believe many of us still see in abundance around us today. As Skinner puts it, it takes an ethical and well-thought-out process to effect this change through science. But even in 1955 there were opponents of the idea of science as the means to work out change. In “Freedom and the Control of Men”, Skinner references two works revolving around this point, Fyodor Dostoevsky’s “Notes from the Underground” and Aldous Huxley’s “Brave New World”, to describe the general idea of human “cussedness”: the idea that people would reject control, even an ethically guided and scientific implementation of it through effective cultural design. At the time of its writing, Skinner was responding to a notion of an innate human refusal of control, and to a newly forming fear of scientific dystopian futures arising from even the most basic forms of behavioral conditioning. Skinner’s position is that control exists regardless, whether by accident, by tyranny, or by a more scientific and ethical cultural process.

I recommend that everyone read “Freedom and the Control of Men”, “Notes from the Underground”, and “Brave New World”; Skinner’s ideas tie together nicely once the referenced works are understood as he intended. For brevity, I will put Dostoevsky’s “piano keys” passage below, which Skinner uses to illustrate the common idea that humans would innately refuse all control, an idea that would stand in the way of all efforts to improve human behavior:

“…out of sheer ingratitude, sheer spite, man would play you some nasty trick. He would even risk his cakes and would deliberately desire the most fatal rubbish, the most uneconomical absurdity, simply to introduce into all this positive good sense his fatal fantastic element. It is just his fantastic dreams, his vulgar folly that he will desire to retain, simply in order to prove to himself–as though that were so necessary– that men still are men and not the keys of a piano, which the laws of nature threaten to control so completely that soon one will be able to desire nothing but by the calendar. And that is not all: even if man really were nothing but a piano-key, even if this were proved to him by natural science and mathematics, even then he would not become reasonable, but would purposely do something perverse out of simple ingratitude, simply to gain his point. And if he does not find means he will contrive destruction and chaos, will contrive sufferings of all sorts, only to gain his point! He will launch a curse upon the world, and as only man can curse (it is his privilege, the primary distinction between him and other animals), may be by his curse alone he will attain his object–that is, convince himself that he is a man and not a piano-key! If you say that all this, too, can be calculated and tabulated–chaos and darkness and curses, so that the mere possibility of calculating it all beforehand would stop it all, and reason would reassert itself, then man would purposely go mad in order to be rid of reason and gain his point!”- Fyodor Dostoevsky, Notes from the Underground


We see this all the time: rebellion against any hint of new regulation or advice, no matter how noble or admirable the intent. This belief remains as popular in culture today as it was when Skinner referenced it in 1955. There is still a conviction that no matter how ethical the goal, be it wearing a face mask to reduce the risk of disease transmission or taking advice to better oneself, there is an innate need to rebel no matter the cost and damage it wreaks, and that the rebellion is the natural and right thing to do. Or take Aldous Huxley’s “Brave New World”, wherein social and environmental engineering lead only to a world where people serve as cogs in a machine with very little imagination or will of their own. It was not ethical; it was tyrannical. Rebellion, in a sense, is glorified as right no matter the cost, and behavioral science, and any control itself, cast as bad. Skinner believed there was more to science than that. Science was a tool: it could be used for negative ends just as easily as positive ones, but it could be used positively.

Skinner believed that behavioral science could be used to understand control, not as a form of “brain washing” or “fooling with the machinery in the human head”, but as a way to step forward and away from the very real and existing systems of the past that hold people back today. “Freedom and the Control of Men” was written with hope that the democratic philosophy that many of us know could either use science as a strength to move forward to a better future, or risk falling back into the very tyranny and violence that it was meant to overcome. In Skinner’s words:

“If Western democracy does not lose sight of the aims of humanitarian action, it will welcome the almost fabulous support of its own science of man and will strengthen itself and play an important role in the building of a better world for everyone. But if it cannot put its “democratic philosophy” into proper historical perspective- if, under the control of attitudes and emotions which it generated for other purposes, it now rejects the help of science- then it must be prepared for defeat. For if we continue to insist that science has nothing to offer but a new and more horrible form of tyranny, we may produce just such a result by allowing the strength of science to fall into the hands of despots.” – B.F. Skinner, “Freedom and the Control of Men”

In 1955, these words came a decade after World War II ended, amid rising cultural and governmental preoccupation with communism, and at the very spark of the civil rights movement. That historical context certainly needs to be applied when reading the piece, yet its meaning holds enduring hope and truth, in my opinion, about what science, especially behavioral science, can bring to the world. Ignorance of control, praise for violence as a form of control, or holding too tightly to the notion that rebelling against even the safest forms of control is human nature may only lead to a repeat of history in which no one benefits.

I hope you have a chance to read the works above, and take as much enjoyment and reflection from them as I did.


References:

1. Huxley, A. (1998). Brave new world. New York, NY: Spark Publishing.

2. Dostoyevsky, F. (1993). Notes from the underground. New York, NY: Vintage Classics.

3. Skinner, B. F. (1999). Cumulative record. Copley Pub. Excerpt: “Freedom and the Control of Men”.



Comments? Questions? Leave them below.

Philosophic Doubt- When Scientific Inquiry Matters

There are important assumptions, or attitudes, of science which ground scientific study across all disciplines: Determinism, Empiricism, Experimentation, Replication, Parsimony, and Philosophic Doubt. The last holds a key role in how we deal with the information we gain from science, and what we do with it in the future. Philosophic Doubt is the attitude of science which encourages us to continuously question the information, rules, and facts that govern our interpretation and understanding of the world (universe, etc.). It is what has practitioners of science question the underpinnings of their beliefs, and continue to do so, so that their understanding rests on consistently verifiable information. Philosophic Doubt cuts both ways: it has a scientist test the truthfulness of what others regard as fact, but it also means they must apply the same level of scrutiny and skepticism to their own work. To some, Philosophic Doubt is a gift that has helped them expand their ideas and shape them beyond the first experimental steps. To others, it is a detrimental form of skepticism clawing at information or beliefs that they hold dear. These views are not new; we can find traces of this disagreement going back to the 19th century. Here we will explore the assumption of Philosophic Doubt, including proponents and detractors both old and new.

Why do we need Philosophic Doubt anyway?

Philosophic Doubt is important to science because it shapes how scientific work progresses. It has scientists test their own assumptions, hypotheses, and underlying beliefs, even those they hold precious, against replicable evidence and new findings. Philosophic Doubt drives experimentation, and it precedes replication as well; it underlies the empirical drive for seeking evidence. Without it, science can go wrong. A hypothesis could be formed on inaccurate information and never retested. Subjective experience could entrench anecdotes in a study as a broader experience than they are. A scientist could start with what they want to find and cherry-pick only what fits their assumption. These are the risks of not taking Philosophic Doubt into account. Sometimes it boils down to the scientist wanting to be right, versus keeping an open mind that they might not be. Holding the assumption that there is a benefit to questioning findings or previously accepted beliefs is not a slight against past experience or belief, but rather a better way of interpreting future information that may challenge it. Questioning is a part of science, but not everyone thought so.

“A Defence of Philosophic Doubt”

Arthur James Balfour, a 19th-century philosopher, debater, and later Prime Minister, took this topic head on in “A Defence of Philosophic Doubt”. Unlike today, opponents of Philosophic Doubt at the time were more interested in comparing empirically grounded scientific beliefs to a more open series of metaphysical alternatives; that is, they were more interested in holding up non-scientific belief systems as descriptions of the truth of reality. When it came to psychology, there were idealists, realists, and stoics at each other’s throats over concepts that could not be observed or proven. As you might already see, holding metaphysical constructs up against an attitude that continually questions arguments and premises makes metaphysical assertions all the harder to sustain. Scientific claims, however, withstand Philosophic Doubt rather more easily:

Under common conditions, water freezes at 32 degrees Fahrenheit

Employing Philosophic Doubt, we can continually circle back to this assertion and test it again and again. Pragmatically, there comes a point where we only question such basic and well-founded particulars when we have reason to do so, but the doubt is always present: sometimes for precision, sometimes to be sure we are building on the knowledge correctly, and sometimes to support the replication and experimentation that grow science. Balfour was a strong proponent of the natural sciences and of this kind of questioning. Science founded on observation and experimentation was something truly important to him. Keep in mind, the 19th century was shaped by scientific discovery at a pace never before seen. Balfour kept an even head about this, and believed in the assumptions of science as the path to understanding the natural world. Propositions which stated laws, or which stated facts, had to be built on concrete science and not just personal belief or anecdote. Some of his points we would take as obvious today. For example, when estimating a probability, would we run an experiment or trial just once, or twice? Multiple times? If we ran something like this just once, it would tell us almost nothing about relative frequency; if we ran it twice and accepted that as the final answer, we would miss out on further replication and experimentation on the subject. The curiosity that Philosophic Doubt embodies keeps the experiment and replication going. Without it, we fall into the trap of never questioning initial assumptions or findings.
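The value of replication can be sketched in a few lines of code. This is a hypothetical illustration of my own (the simulated experiment, its success probability, and the trial counts are not Balfour’s): a single trial can only answer yes or no, while repeated trials converge on the underlying probability.

```python
import random

random.seed(1)  # fixed seed so the sketch is repeatable

def run_trial(p_success=0.6):
    """One experiment that succeeds with an (unknown) probability."""
    return random.random() < p_success

def estimate(n_trials):
    """Estimate the success probability from n_trials replications."""
    return sum(run_trial() for _ in range(n_trials)) / n_trials

# A single trial can only ever report 0.0 or 1.0.
# Replication lets the estimate settle toward the true value (0.6 here).
for n in (1, 10, 100, 10_000):
    print(f"{n:>6} trials -> estimated probability {estimate(n):.3f}")
```

The one-trial estimate is always all-or-nothing; only the repeated runs recover anything like the underlying regularity, which is the point of keeping the doubt, and the experiments, going.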

Another interesting thing about Balfour’s work is that it came at a time when there was a great deal of belief in a mechanical universe that followed strict Newtonian laws, and this view was compared with more metaphysical alternatives. Balfour cautioned everyone to continually apply philosophic doubt and question both belief systems, even though the “mechanical universe” was winning by a landslide at the time. If we stretch Balfour’s points into the future, we might see how he would have found some vindication in later developments in physics, quantum mechanics for example, where the Newtonian mechanical universe once seen as sufficient to explain everything falls a little short. Without that testing of the original tenets of physics, the use of Philosophic Doubt, we might not be where we are now. An analysis of Balfour’s work could go on for entire chapters, but I would like to top it off with an excerpt on the evolution of beliefs and the reluctance to test our own:

“If any result of ‘observation and experiment’ is certain, this one is so- that many erroneous beliefs have existed, and do exist in the world; so that whatever causes there may be in operation by which true beliefs are promoted, they must be either limited in their operation, or be counteracted by other causes of an opposite tendency. Have we then any reason to suppose that fundamental beliefs are specially subject to these truth-producing influences, or specially exempt from causes of error? This question, I apprehend, must be answered in the negative. At first sight, indeed, it would seem as if those beliefs were specially protected from error which are the results of legitimate reasoning. But legitimate reasoning is only a protection against error if it proceeds from true premises, and it is clear that this particular protection the premises of all reasoning never can possess. Have they, then, any other? Except the tendency above mentioned, I must confess myself unable to see that they have; so that our position is this- from certain ultimate beliefs we infer that an order of things exists by which all belief, and therefore all ultimate beliefs, are produced, but according to which any particular ultimate belief must be doubtful. Now this is a position which is self-destructive.

The difficulty only arises, it may be observed, when we are considering our own beliefs. If I am considering the beliefs of some other person, there is no reason why I should regard them as anything but the result of his time and circumstances.” -Arthur James Balfour, “A Defence of Philosophic Doubt” (1879).

Back to Basics- Science and Philosophic Doubt

In “Applied Behavior Analysis”, Cooper, Heron, and Heward begin their first chapter with the basics of what science is, specifically behavioral science, and with the assumptions and attitudes of science, including Philosophic Doubt. Cooper et al. consider these concepts foundational to science as a whole and relate their importance to psychology and behavioral science. In their words:

“The attitude of philosophic doubt requires the scientist to continually question the truthfulness of what is regarded as fact. Scientific knowledge must always be viewed as tentative. Scientists must constantly be willing to set aside their most cherished beliefs and findings and replace them with the knowledge derived from new discoveries.

Good scientists maintain a healthy level of skepticism. Although being skeptical of others’ research may be easy, a more difficult but critical characteristic of scientists is that they remain open to the possibility- as well as look for evidence- that their own findings and expectations are wrong.” -Cooper, Heron, & Heward, “Applied Behavior Analysis” (2017).

Bonus! B.F. Skinner
“Regard no practice as immutable. Change and be ready to change again. Accept no eternal verity. Experiment.”- B.F. Skinner, 1979

The sentiment behind Philosophic Doubt is one of openness and humility. Not only is the scientific work we read subject to doubt, but our own is as well. The latter is the most difficult part: constantly challenging our own beliefs, our most cherished propositions and reasoning. To some, this expands the horizon of future knowledge infinitely; to others, it is a hard trail to follow and no easy task. In either case, I hope this has brought up the importance of Philosophic Doubt, and how it ties in with the other assumptions of science as a challenging but inseparable part of the process.

Comments? Thoughts? Likes? Questions? Leave them below.

References:

1. Balfour, A. J. (1921). A defence of philosophic doubt: being an essay on the foundations of belief. London: Hodder & Stoughton.

2. Cooper, J. O., Heron, T. E., & Heward, W. L. (2017). Applied behavior analysis. Hoboken, NJ: Pearson.

3. Skinner, B. F. (1953). Science and human behavior. New York: Macmillan.

The Philosophy of Logical Behaviorism

How we use language in behavioral science and psychology is important.

If you’ve ever studied psychology, or behaviorism specifically, have you ever asked:
“Why do we have to use observable terms for behavior?”
“Why do we define things in operational and observational terms?”

These are the questions that “Logical Behaviorism” was chiefly concerned with. If we are to treat psychology as a natural science, proponents of logical behaviorism would argue, then the language, theory, and semantics used in that process should reflect it.

Logical behaviorism is perhaps one of the more obscure branches of behaviorism, but its history is closely tied to the more familiar methodological and radical behaviorist schools of thought. It originated in the early 20th century, when scientists and philosophers aimed to establish psychology as an independent and experimental natural science. Like methodological (classical) and radical behaviorism, logical behaviorism shared the focus on objectivity, the reliance on measurable techniques for observation and data collection, and the rejection of introspection-heavy mentalistic explanations.

Where logical behaviorism differed from the other behaviorist branches is that it was concerned primarily with the scientific usage of language and semantics in psychology. Its early proponents aimed to completely differentiate the objective, scientific behavioral psychology of the time from the popular Freudian and Jungian introspective and mentalistic psychological writings. Because of this, logical behaviorism is often seen as more a philosophical psychology than a directly empirical one. Many of its positions are hidden in works that modern behavioral practitioners know well, and they inform the attitudes toward, and in some cases suspicion of, mentalistic language in day-to-day practice.

“The Ghost In The Machine”

The philosopher primarily associated with the development of logical (or analytical) behaviorism was Gilbert Ryle. For most of his academic career, Ryle focused on dismantling the mind/body distinction (Cartesian dualism, or substance dualism) that was, and still is, extremely common in psychological and philosophical writing and thought. You likely come across dualistic language like this often in even the briefest psychological conversations: language which implies that mental states (thoughts, feelings, imagination, etc.) occur in a hidden, or non-physical, area or dimension of the mind, apart from physical and physiological processes. Ryle disagreed with this.

Belief, for example, would not be seen as an airy mental element of cognition, but as something completely within the explanatory reach of biology, according to Ryle. He believed these distinctions carried a risk: they led philosophy and science astray, chasing separations that did not actually exist.

Gilbert Ryle’s writings (“The Concept of Mind”) often took aim at these dualistic notions, taking the example of a mental effort of “will or volition” which then transforms into physical action (mental thought leads to physical action) as a widely held mistake. He calls this mistake the dogma of “the ghost in the machine”. The very study of the separation between mind and body was a waste of time and fruitless, according to Ryle, because it rested on category mistakes: separations drawn more by linguistic definitions than by any real qualities. Instead, Ryle proposed that all actions and behavior are physical in nature, and that there are propensities and dispositions that can be explained entirely by the behavioral actions of an individual in seeking or avoiding the stimuli involved.

For example, there would be no mind/body distinction between wanting breakfast and cooking breakfast. For Ryle, there was nothing in some immaterial mental state that spurred the cooking of breakfast. To speak or think in terms of that cause-effect relation leads to a principal misunderstanding of the event and the behaviors themselves. If we tried to study the event using those terms, we would have to chase down the immaterial mental state, or assume it existed outside of physical or observable evidence, as part of our study. This can lead to circular reasoning very quickly. Chasing this “ghost in the machine” bears no scientific fruit.

Ryle does not deny that there are physical processes of behavior and action which cannot be directly seen (which he calls propensities and dispositions), but he holds that these do not reside in some immaterial state, and that they can be discovered through observable behavioral action. This shares some similarity with the “private events” of B.F. Skinner’s radical behaviorism, but it does not go as far into the analysis of functional and environmental relations as Skinner did. Ryle did use a behaviorist theory of mind, but one focused on the language of behavioral processes.

It is fair to note, however, that Ryle’s work has received criticism because its focus on language tied to observable action may be too restrictive. Critics have often pointed out that there may be a greater distance between internal or “mental” states and verifiable behavioral actions. Most people can imagine a situation where someone is happy while showing no outward “behavioral actions” or signs of happiness; the reverse is also true. Movie actors, for example, act in ways that do not accurately represent their “mental state”, and an actor’s portrayal, in some cases, does not reflect the actual propensities and dispositions that Ryle would infer from their behavior using his methodology. It is historically more accurate to say that Gilbert Ryle’s work (especially “The Concept of Mind”) greatly influenced how behaviorists treat the mind/body distinction, dualism, and mentalistic language in their scientific writing, but that Ryle’s theories and positions were influential in part, not as a whole.

The Vienna Circle and Logical Positivism

Where might the “Logical” part of “Logical Behaviorism” come from? Why would it be called that? The answer lies in an earlier philosophical endeavor called logical positivism, developed by a group of early 20th-century philosophers known as the Vienna Circle. The connection between the philosophy of logical positivism and behaviorism is that behaviorists seek a framework of language that can accurately reflect the observable facts of behavior. Without such a framework, misconceptions, circular reasoning, and arguments about the linguistic minutiae of the scientific literature could bog down the whole study of behavioral psychology as a natural science. Dependable language leads to fewer misunderstandings in the scientific literature, and potentially better replication of what is being tested and studied.

Logical positivists, and the later logical behaviorists, wanted linguistic precision in the study of psychology and behavior; a precise language could lead to better verification of observable events. The philosophers of the Vienna Circle called this the “principle of verifiability”: there should be no statements in the literature that cannot be verified empirically, or that are not at least capable of verification at a future date. There are certain statements that cannot be verified immediately but allow, in their wording, a means to verify them later. For example: “Next Tuesday it is going to rain.” This statement cannot be verified right now, but it does allow for verification. This mattered to the logical positivist philosophers, and later to the logical behaviorists. It is a staple of most empirical behavioral research, and is often taught as a maxim without needing explanation, but it was not always the case. Without the “principle of verifiability”, any unverifiable statement could be used as a premise with impunity. Unverifiable statements (mentalistic or substance-dualistic statements, for example) cannot be disproven objectively, because they allow no empirical way to do so. To the logical behaviorists, such statements are hardly helpful to scientific literature.

The philosophers, scientists, and mathematicians of the Vienna Circle, chiefly Rudolf Carnap, Moritz Schlick, Herbert Feigl, Felix Kaufmann, and A. J. Ayer, developed this form of analysis using the “principle of verifiability”, drawing heavily from earlier philosophers like Ludwig Wittgenstein, to design a way for statements to be analyzed. You likely see these types of statements in empirical and scientific research all the time without realizing it. The early logical positivists differentiated between what they called “analytical statements” and “synthetic statements”. Analytical statements are true simply because their truth follows logically from their meaning.

Example (Analytical Statement): All circles are round.

Of course they are.

Synthetic statements, on the other hand, require some empirical verification in order to be confirmed or proven true. These are the statements to which the “principle of verifiability” can be applied.

Example (Synthetic Statement): “This cat has gray fur and is wearing clothing.”

Let’s take a look.

(Mr. Darcy)

Well, look at that. We can verify this statement with observation.

It is important to distinguish between these two kinds of statements, but they do not hold equal weight within logical positivism and logical behaviorism. To the philosophers of the Vienna Circle, and most logical positivists, synthetic statements are what matter first and foremost: they make claims about reality which can be tested, and that is incredibly important in the natural sciences. Analytical statements are more trivial, to logical positivists, because they bring no new information. Logical behaviorism shared the logical positivist belief that propositions and statements should be capable of scientific verification in order to be scientifically useful.

Logical behaviorism takes from this its focus on synthetic statements about behavior, which are observable and measurable. Even when dealing with “mental concepts” or “private events”, the importance is in using language to create propositions that can be verified.
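As a rough sketch of that idea, one could model a statement as a claim paired with a verification procedure; statements lacking any such procedure are the ones a logical behaviorist would flag. The `Statement` class, the lever-press example, and the observation record below are my own illustrative inventions, not anything drawn from the Vienna Circle’s writings.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Statement:
    """A claim, optionally paired with an empirical check."""
    text: str
    verify: Optional[Callable[[dict], bool]] = None  # None -> unverifiable

    def is_verifiable(self) -> bool:
        return self.verify is not None

# A synthetic statement: observation can confirm or refute it.
synthetic = Statement(
    "The subject pressed the lever at least 5 times this session.",
    verify=lambda obs: obs["lever_presses"] >= 5,
)

# A mentalistic statement with no verification procedure attached.
mentalistic = Statement("The subject inwardly wished to press the lever.")

observation = {"lever_presses": 7}
print(synthetic.is_verifiable())      # True
print(synthetic.verify(observation))  # True
print(mentalistic.is_verifiable())    # False
```

The point of the toy model is only this: a claim that carries its own means of verification can be checked against observation, while the mentalistic claim offers nothing to check.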

To Sum It Up: Logical Behaviorism Is About Language

To a logical behaviorist, concepts like the mind, thoughts, feelings, and imagination must all be described in ways that have an observable or verifiable attribute in order to be scientifically useful. Logical behaviorism developed in a time of strong mentalistic terminology, when circular reasoning about behavior was common, human action as a whole was sometimes treated as indescribable, and the mind was in some ways untouchable by science. To the logical behaviorist, the semantics, or language, of what we study and talk about when we try to describe behavior, even internal processes, must in some way be verifiable, or objective, in order to be useful in a scientific sense.

How we state things, and how we propose things, is important. To be too loose with language invites misunderstandings, as Gilbert Ryle pointed out, or produces claims that can never be verified, as the logical positivists warned. Making statements that are observable, measurable, and verifiable is what many logical behaviorists believed would bring psychology, and the new branch of behaviorism, closer to the goal of being a natural science.

I hope you enjoyed this brief look at the history and reasoning behind logical behaviorism and its many influences. This is by no means an exhaustive dig into a rich topic, but a broad touch on the very complex psychological and philosophical roots which came together to shape what we know about behaviorism and psychology, and on how logical behaviorism still shines through these many decades later.

Comments? Questions? Thoughts? Leave them below! Don’t forget to follow!


References:

Clark L. Hull. (2019, February 21). Retrieved from https://en.wikipedia.org/wiki/Clark_L._Hull

Fancher, R. E., & Rutherford, A. (2017). Pioneers of psychology. New York, NY: Norton & Company.

Hull, C. L. (1964). Principles of behavior. New York.

Ozmon, H. (2012).  Philosophical foundations of education. Upper Saddle River, NJ: Pearson.

Ryle, G. (1949). The concept of mind. New York: Barnes & Noble.

Skinner, B. F. (2015). Verbal behavior. Mansfield Centre: Martino Publ.

The new encyclopaedia Britannica. (1977). Chicago, IL: Encyclopaedia Britannica.


Image Credits:
Artwork and photography are originals by the author, Christian Sawyer.

Token Economies: What Behavioral Science and Blockchain Technology Have In Common

“Token Economies”: two words springing up at Blockchain and Cryptocurrency summits and conferences with increasing regularity. Token economies have been used by behavioral scientists and practitioners for decades, but recently the term has taken off in the field of Blockchain and Cryptocurrency technology. Both fields use the same term, and at technology conferences and summits it is the original behavioral psychology definition that describes the concept: the tech field has taken the original token economy idea and expanded it to apply to what some might call the future of commerce and currency. Exciting stuff. Here, I will break down the basic concepts of what a token economy is, how behavioral scientists and analysts use them, and the new application of the idea by Blockchain and Cryptocurrency developers.


The Token Economy

Let’s break it all down. What is a token economy? A token economy is a system in which tokens, or symbols, are used as conditioned reinforcers that can be traded in for a variety of other reinforcers later. It is not a bartering or prize system, where objects, access, or services are given directly following a target behavior; rather, the token is a conditioned stimulus, without any necessary intrinsic value, that is agreed to accumulate toward the exchange or purchase of another reinforcing item. A common example most of us are used to is money. Paper money, specifically, can be considered part of a token economy in that it is “traded in” toward some terminal reinforcing stimulus (or “backup reinforcer,” as it is called in behavior analysis). The paper money is a conditioned reinforcer because it has no necessary intrinsic value, but it has conditioned value for what it can eventually be used for within the token economy.

This was taken up originally by behavioral researchers in the 1960s as a form of contingency management for the reinforcement of “target behaviors,” or prosocial learning, in therapy situations. Reinforcers are important psychologically because, by definition, reinforcers change the rates of the behavior they follow. They can quickly help teach life-changing skills, or alternatives to some destructive or undesirable behavior. But reinforcers can be tricky too. People can become bored or satiated with tangible rewards, such as food; within a token economy, however, reinforcement can be delivered in the form of tokens, allowing for a later exchange or choice among any number of possibilities desirable to that individual. By pairing these tokens with access to “primary reinforcers” (reinforcers that are biologically important) or other “secondary reinforcers” (stimuli that have learned value), the tokens themselves become rewarding and reinforcing, thereby creating a sustainable system of reinforcement that sidesteps the satiation and boredom that researchers originally found to be barriers to progress. Alan Kazdin’s work “The Token Economy” is a fantastic resource on the origins and research that began it all.
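As a toy illustration of the loop described above (the class, behaviors, and token prices here are my own invention, not drawn from the research), the core mechanics of a token economy can be sketched in a few lines of Python:

```python
class TokenEconomy:
    """Toy token economy: tokens are conditioned reinforcers earned for
    target behaviors and exchanged later for backup reinforcers."""

    def __init__(self, menu):
        # menu maps each backup reinforcer to its token price
        self.menu = menu
        self.tokens = 0

    def reinforce(self, behavior, target):
        """Deliver one token when the observed behavior matches the target."""
        if behavior == target:
            self.tokens += 1

    def exchange(self, reinforcer):
        """Trade saved tokens for a backup reinforcer, if affordable."""
        price = self.menu[reinforcer]
        if self.tokens >= price:
            self.tokens -= price
            return reinforcer
        return None  # not enough tokens saved yet


economy = TokenEconomy({"extra recess": 5, "sticker": 2})
for observed in ["on task", "off task", "on task", "on task", "on task", "on task"]:
    economy.reinforce(observed, target="on task")

print(economy.tokens)                     # 5 tokens earned across the session
print(economy.exchange("extra recess"))   # extra recess
print(economy.tokens)                     # 0 tokens left after the exchange
```

Note that the token itself carries no intrinsic value; its worth comes entirely from the agreed-upon exchange menu, which is exactly what the original research means by a conditioned reinforcer.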

What can a token be? Nearly anything. But it has to be agreed upon as a token (given some value for exchange) in order to serve as one for the purpose of trading it in, or buying with it. Giving someone a high five after a great job at work, for example, is not a token. It is a reward, and possibly a reinforcer, but it was not conditioned to have value, and it cannot be saved or exchanged. Tokens also need not be physical or tangible. They can be symbols, or recorded ledger entries, so long as that information can be used for exchange in the corresponding token economy. This is where blockchain and cryptocurrency technologies tie in to the original behavioral science understanding of a token economy. Can data, or information, serve as a token in a token economy if it is agreed to have value and be worth exchanging? If you have heard of Bitcoin (a cryptocurrency), you know the answer is yes.


Blockchains and Cryptocurrencies

What is blockchain, then? And what is a cryptocurrency? Using our original definitions of tokens and token economies, for data or information to be considered tokens, they have to be exchangeable and carry value that can be traded within the token economy. Blockchain technology solves this by creating units of data called “blocks.” These blocks, simply put, form a growing list of data records, each containing a “cryptographic hash” of the previous block. The linked blocks form a ledger which is resistant to duplication and tampering. In layman’s terms, unlike most data that people come into contact with and can manipulate day to day, a block within a blockchain cannot be altered or copied, and it maintains a faithful record of time and transactions. Resistance to copying and duplication means that it cannot be forged, and resistance to alteration means that the data (the record of information) can be treated as reliable. If we create a currency using this technology, then we have the means to create units, or tokens, that are individual, can be traded, and carry a consistent and (for the purposes of this introduction) unalterable record of transaction. Assigning value to this creates a digital currency: a cryptocurrency. Tokens. Transactions using these blockchains take place person to person (“peer to peer,” or P2P), meaning that once a unit of cryptocurrency passes from one person to another, the exchange closely resembles a physical handoff of any other form of currency. Unlike online banking, the exchange requires no intermediary, such as a bank.

Blockchain and cryptocurrency developers, then, are looking to create a form of token currency that can be traded within this broader token economy: one reliable enough to be used by enough people to catch on or become commercially viable, while still maintaining the benefits of a cryptocurrency (security, privacy, etc.) over traditional currency. These cryptocurrencies, these units of data, these blocks, have no intrinsic value themselves. They are tokens in the very real sense that the original behavioral research intended. Their usage and effects, then, appear to follow in the same vein. Currency can be reinforcing, reinforcement can alter behavior, and once a token takes on value through the conditioning process, it can become truly valuable in its own right as a “generalized reinforcer”: a reinforcer that is backed up by many other types of reinforcers. A dollar, for example, as a widely used currency, can be exchanged for a nearly countless number of goods, services, and transactions. This makes it a good generalized reinforcer. The more a token can be traded for, the better a generalized reinforcer it becomes.

Will a form of cryptocurrency, like Bitcoin, gain this same traction as a currency, or token, for accessing other reinforcers in trade? Many people say yes. That’s where behavioral scientists and blockchain developers alike can find excitement in each new development and innovation.

Likes? Comments? Questions? Did I get it wrong? Leave your comment below!

References:

  1. Kazdin, A. E. (1977). The token economy: A review and evaluation. New York, NY: Springer. doi:10.1007/978-1-4613-4121-5
  2. Blockchain. (2019, January 13). Retrieved from https://en.wikipedia.org/wiki/Blockchain
  3. Cooper, J. O., Heron, T. E., & Heward, W. L. (2018). Applied behavior analysis. Pearson.
  4. What is Simple Token (OST)? [Audio blog post]. (2018, August 22). OST Live Podcast.

Image Credits:

http://www.imgflip.com

http://www.smilemakers.com

Did Cognitivism Beat Behaviorism?


Some hold firm to the idea that the division between behaviorism and cognitivism is a vast divide, with a winning theory and a losing theory. You’ll hear them- “Behaviorism died decades ago!” and “Thoughts about thoughts? That’s just unprovable mentalism!”- shouted by entrenched believers until they are blue in the face. There may be some salient historical details that explain why they feel that way; behaviorism (arguably) replaced many of the mentalist and introspective psychological methods well into the 20th century. Then, some would say, the behaviorist movement was halted by Chomsky’s rebuttal of B. F. Skinner’s “Verbal Behavior” and the rise of the 1960s “cognitive revolution.” The deep division could be argued to be unbridgeable. As someone who was not practicing when these contrasting theories came to a head, I always wondered what it would have been like. Did everyone see it as a giant butting of heads? Did all the researchers and scientists find themselves marked on either side? Are the loud, entrenched voices of today just echoes of a past that hasn’t been resolved? If so, how did cognitive behavioral therapies do so well blending the two perspectives? There had to be more than just a line in the sand. Enter Terry L. Smith and his book “Behavior and Its Causes,” which relates the exact sentiment I was so curious about.

“I had (just like everyone else) read Kuhn (1970), and so almost reflexively interpreted cognitive psychology and behavioral psychology as competing paradigms (see Leahey, 1992, for a discussion of how common, and mistaken, this interpretation is). Cognitive psychology was clearly on the rise, so I inferred that the Skinnerian program must be on the decline. Indeed, I thought it must have disappeared by now… What I discovered was that during the 1960’s, the Skinnerian program had actually grown at an accelerating rate. This baffled me. How could operant psychology have survived and even prospered in the midst of “the cognitive revolution”?”

-Smith (2011).

How could that be? Terry L. Smith’s book explores this topic, speculates on some great points, and comes to several strong conclusions. I won’t spoil it for you aside from one: “operant psychology,” as Smith calls it, separated itself from being tied down to every philosophical tenet of Radical Behaviorism. It was Radical Behaviorism, in Smith’s view, that had taken the beating, because it was too rigid about what it would allow to be studied and cut too much out of what could be considered the study of behavior. This was a fascinating point to me, since I had already studied what B. F. Skinner had done with Radical Behaviorism to broaden it beyond Methodological Behaviorism (i.e., private events). We’ve heard this one before, right?

“Radical Behaviorism does not insist upon truth by agreement and can therefore consider events taking place in the private world within the skin. It does not call these events unobservable”- Skinner, 1974

This was one of the larger distinctions B. F. Skinner made from Watson’s methodological approach, which was strictly focused on observable stimuli and responses. If we take Smith’s interpretation of what “operant psychology” is today, it goes even further than radical behaviorism by cutting across the divide and seeing itself within the broader breadth of psychology as a whole. This rings true for me when I speak to the behaviorists and practitioners I see in the field. There is still that aversion to “mentalism,” but the observational focus that comes from Watson’s strict view is mainly practical: data collection is best done when people can see and define what they track. The behaviorist tradition still lives on in the practice of Applied Behavior Analysis, for example, but Skinner’s written word is not taken as biblical truth; the components of the philosophy and science that propelled behavioral psychology continue to be empirically validated. They are scientific findings. The ones that work and do the most good remain.

This is Smith’s main point about “operant psychology” during the “cognitive revolution”: it continued on, stronger than before, on its own steam, because the findings were strong and reproducible. While Chomsky and other cognitivists had made some compelling points about the limitations of Radical Behaviorism as an idea and philosophy, they did not undercut the behavioral science as a whole. The practices, techniques, and ideas of both Methodological and Radical Behaviorism that came through in the empirical work remained. The broader-reaching philosophy that put limits on the science with no empirical backing? Not so much.

Keep in mind that by the start of the “cognitive revolution” in the 1960s, research in brain mapping and neurobiology had come a long way from the days when Watson, Pavlov, Thorndike, and Skinner began their work. Behavioral theory had been running strong since the beginning of the 20th century and was now met with convergent findings. Both had their uses, and the theories overlapped more than they refuted one another. Internal processes were becoming more understandable through biological discoveries, which some strict behaviorists may have misinterpreted as just another form of mentalism. That’s a hang-up that did not help them. On the other hand, some cognitivists still thought all of behaviorism amounted to comparing humanity to basic stimulus-response (S-R) machines. Another misunderstanding, another hang-up. My interpretation is that people fought over those illusory extremes. Those were the voices that screamed the loudest while being the most misguided about what was actually happening. I equate this to the kind of thing we see on the internet: “strawman arguments,” where someone constructs an exaggerated facsimile of their opponents’ ideas and tears that down rather than confronting what is actually said. It creates an easy target, but it does not represent reality. Strict behaviorists get some things right. Strict cognitivists get some things right. Sometimes… just sometimes… both groups get things wrong too! Surprising, right? That is how anything based in theory and following the scientific method actually works.

Maybe Terry L. Smith is on to something. Maybe we should consider ourselves all part of Psychology with a capital P, and put our findings and theories out there. The ones that can empirically and reliably help people will be the legacy.

To be fair though, I am not completely in the objective virtuous middle; I’ve read Noam Chomsky’s review of Verbal Behavior and believe he missed the point.

Thoughts? Likes? Comments? Questions? Leave them below.

References:

Chomsky, N. (n.d.). A review of B. F. Skinner’s Verbal behavior. The Language and Thought Series. doi:10.4159/harvard.9780674594623.c6

Skinner, B. F. (1957). Science and human behavior. Riverside: Free Press.

Smith, T. L. (2011). Behavior and its causes: Philosophical foundations of operant psychology. Dordrecht: Springer.
Photo Credits: http://www.pexels.com

Happy ABA Halloween!


Halloween is coming up soon, and as a treat, I’ve created some silly and fun ABA-style printouts. UPDATE: For the 2019 Halloween holiday, all-new printouts will be added as we get closer to the day!

  1. Spooky IOA Data!

Link to the full printout here: https://behavioralinquirydotcom.files.wordpress.com/2018/10/abahalloween12.pdf

2. The Horror of Subjective ABC Data!


Link to the full printout here: ABAHalloween2

3. The Terror of Incomplete Data!



Link to the full printout here: https://behavioralinquirydotcom.files.wordpress.com/2018/10/abahalloween31.pdf

4.  The Dread of Corrupted and Lost Graphed Data!


Link to the full printout here: https://behavioralinquirydotcom.files.wordpress.com/2018/10/abahalloween41.pdf

5.  The Sheer Fright of Finding Ineffective and Non-Student-Centered Goals!!


Link to the full printout here: https://behavioralinquirydotcom.files.wordpress.com/2018/10/abahalloween5.pdf

6. The Shrieking Terror of Unnecessary Most to Least Prompting!!



Link to the full printout here: ABAHalloween6

7. The Dread of Pseudoscience for “Behaviors”!



Link to the full printout here: ABAHalloween7


8. The Panic of Misused Terms

Link to the full printout here: ABAHalloween8

OH NO! I hope I didn’t scare you too badly.

Have some candy and remember how safe and relevant all your data and interventions are… Whew.

Like them? Take them! No fee, but please be kind with artistic credit.

Why we don’t always prompt: Behavior Analysis meets Vygotsky.


In the early 20th century, a developmental psychologist named Lev Vygotsky was working on theories of learning and development in parallel with many of the behaviorist traditions. If you were to ask a graduate student taking behavior-analytic courses who Vygotsky was, they would most likely shrug their shoulders and wonder why it mattered. He isn’t Watson. He isn’t Pavlov. He isn’t Thorndike. He isn’t Skinner. He isn’t Lindsley. So why would a behaviorist ever care? Because his work ties in so closely to the behaviorist tradition that you could, in some cases, use his terminology and frameworks interchangeably and still see the same results. His work can help clarify why we, as behavior analysts, trainers, educators, and even parents, should not prompt every single time we see a child begin to struggle with an endeavor or task.

To an educator or professional following the behaviorist tradition, it’s not all that hard to describe. Prompts help the learner reach a reinforcement threshold that their response likely could not have reached on its own. Shaping describes a process by which an emergent behavior, similar in some way to a target behavior, is reinforced through successive approximations until it becomes the terminal target behavior. Basically, it’s taking an “okay” attempt and rewarding the behaviors that look closer to improvement until the response is “perfected” enough to reach more naturalistic reinforcement in the broader environment. To a behaviorist, that means looking at what the learner has in their repertoire, what they can do right now, and planning to reward the responses that move toward some end-goal response. But wait, how exactly do we know when to intervene? And why don’t we intervene every time we see the learner encounter difficulty?

The trouble is that sometimes a learner does not actually learn from being prompted too much. Sometimes the reinforcement only contacts the effort the learner expends to receive prompting. Sometimes they become dependent on those prompts, and then it is the educator doing the behavior and the learner receiving the reinforcement. They don’t improve because they have no need to improve. They get the prize every time their educator does it for them. The behavior that the educator prompts might never transfer through modeling. Why should it, if the reinforcer comes anyway? This is where Vygotsky comes in. Vygotsky believed that there is a Zone of Proximal Development.

Lev Vygotsky was not a behaviorist. In many ways, he was against the methodological behaviorism popular at the time, which focused purely on observable stimulus-response relationships. Vygotsky also believed that learning drew not just from a present environment of contingencies but from a broader wealth of cultural and societal forces that accumulate through generations and have impacts not directly related to the behaviors at hand. When it comes to the Zone of Proximal Development, however, his theories coincide with what behaviorists would conceptualize as repertoires and the thresholds at which prompting is warranted. Vygotsky believed there was a level at which a learner could successfully accomplish tasks without assistance, and a level at the other end of their developmental range that they could not reach without considerable help in the form of prompting. Between the two, however, was a zone where a learner could accomplish tasks with some collaboration and prompting, and eventually surpass them to a level of independence. That zone differs in many ways from individual to individual, but within the zone of proximal development, prompting (or collaboration, as he called it) is at its most effective.

Think of it like this:

Zone of the learner’s “actual” development: responses that the learner can perform, and tasks that the learner can complete, without any assistance from others.
*Behaviorist footnote: Think of these as responses already in the learner’s repertoire. These are “easy.”

Zone of Proximal Development: tasks and responses that the learner can accomplish with the assistance and prompting of others.
*Behaviorist footnote: Think of this as the area of “shapable” responses that are likely to lead to independent future responses. Vygotsky called the support provided here “scaffolding,” and the process of “shaping” is synonymous.

The limit of their current developmental ability: tasks and responses that are beyond the learner’s ability and can only be produced with considerable support and assistance.
*Behaviorist footnote: The learner can be prompted through these tasks but is unlikely to be able to reproduce them, even with shaping procedures, at this time.

This framework delineates when a learner needs and could use the help of an educator’s prompting, and when not. In the initial range, prompting is unnecessary and might actually hinder the learner from engaging in those responses in their most independent forms. The “easy” responses already contact reinforcement in the broader environment and would become more likely to occur in the future on their own. Prompting too much here could stifle that. In the next range, the Zone of Proximal Development, as Vygotsky calls it, prompting could actually be of the most use! These are responses that are viable for occurring and reaching natural reinforcement, but they just need a little help at first to get there. Here, prompting in the form of modeling or shaping could help the learner take their initial responses and bring them to their terminal and most effective independent forms. This is the exciting part. This zone is where the work put in by the educator could meet maximum return on what the learner can benefit from. Now, we have to be careful not to reach for the moon. The final zone is where, even with prompting, the learner is unlikely to be able to shape their responses successfully. This, for example, is trying to teach a learner to run before they can walk. They need the foundational responses before they can even be prompted toward a more advanced terminal response. An educator who comes across this scenario would be wise to dial the expectations back.
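The three-zone decision above can be caricatured in a few lines of Python (the thresholds, function name, and success-rate framing are my own illustrative assumptions, not anything prescribed by Vygotsky or the behavioral literature):

```python
def prompting_strategy(independent_success, prompted_success):
    """Suggest a teaching response from two observed success rates
    (proportions between 0 and 1): performance without help, and
    performance with full prompting."""
    if independent_success >= 0.8:
        # Already in the learner's repertoire: prompting risks dependence.
        return "no prompt: let natural reinforcement take over"
    if prompted_success >= 0.8:
        # Zone of proximal development: succeeds with help,
        # so shape and fade toward independence.
        return "prompt and fade: shape toward independent responding"
    # Fails even with help: the prerequisite responses are missing.
    return "step back: build foundational responses first"


print(prompting_strategy(0.9, 1.0))  # no prompt: let natural reinforcement take over
print(prompting_strategy(0.2, 0.9))  # prompt and fade: shape toward independent responding
print(prompting_strategy(0.1, 0.3))  # step back: build foundational responses first
```

The middle branch is the whole point of the post: effort spent prompting pays off only where assisted performance is high but independent performance is still low.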

Between those two ranges of “easy” and “unlikely,” we find the responses that can be prompted for the most good. We would not prompt too much and stifle the learner’s ability to contact reinforcement on their own, but nor would we fail to prompt at all and miss those responses or behaviors that just need a little push. This is where a behaviorist, teacher, educator, or even parent can take a thing or two from Vygotsky’s work. And if you’re a tried-and-true behaviorist who can’t believe a cognitivist would be mentioned here, I’d suggest an open mind. You might even be surprised by the similarities between Vygotsky and Skinner on private events and “inner speech.” We can touch on that later, but for now, think about the zone of proximal development in your life and practice; what could use a little help?

Likes? Comments? Questions? Leave them all below!

References:

Burkholder, E. O., & Peláez, M. (2000). A behavioral interpretation of Vygotsky’s theory of thought, language, and culture. Behavioral Development Bulletin, 9(1), 7-9.

Cooper, J. O., Heron, T. E., & Heward, W. L. (2018). Applied behavior analysis. Pearson.

Ormrod, J. E. (2019). Human learning. Pearson.
Image Credits: