Understanding Control, and Hope for a Better Future

“The danger of the misuse of power is possibly greater than ever. It is not allayed by disguising the facts. We cannot make wise decisions if we continue to pretend that human behavior is not controlled, or if we refuse to engage in control when valuable results might be forthcoming. Such measures weaken only ourselves, leaving the strength of science to others. The first step in a defense against tyranny is the fullest possible exposure of controlling techniques. A second step has already been taken successfully in restricting the use of physical force. Slowly, and as yet imperfectly, the strong man is not allowed to use the power deriving from his strength to control his fellow men. He is restrained by a superior force created for that purpose- the ethical pressure of the group, or more explicit religious and governmental measures. We tend to distrust superior forces, as we currently hesitate to relinquish sovereignty in order to set up an international police force. But it is only through such counter-control that we have achieved what we call peace- a condition in which men are not permitted to control each other through force. In other words, control itself must be controlled.” -B.F. Skinner, “Freedom and the Control of Men”

This quote from B.F. Skinner’s “Freedom and the Control of Men” stood out to me more strongly on a recent reading than it ever has before. It is both a warning and, in a sense, an optimism for a future that reins in force and better understands science. “Freedom and the Control of Men” is a title that at first comes across as antiquated and a little tone-deaf in its word usage, but the essay was very much written as both a critique of its time and a message to readers of the future. “Men”, in this usage, refers to humankind, not a particular sex. In it, Skinner carefully accounts for the period in which it was written, the United States of 1955, while holding an optimistic view of the future, and speaks to the reader about control and how “tyranny” can hide in an atmosphere of democracy. He addresses future readers directly. Coercion, violence, and their disproportionate use were very much alive in 1955 as forms of control, and unfortunately for us readers of today, they are apparent still. Force and violence as means of controlling a population were spoken of as something to be left in the past, something to overcome as a society: control that needed to be controlled itself. It is one of Skinner’s lesser-known works, but it holds points that still underpin much of the behaviorist view of a better world built through science, not violence or ignorance. He did not shy away from the idea that there is purpose in aiming for perfection; perfection is not impossible, but it is not easy to attain either, even in a democracy. It is a message of humanitarianism driven by science, and that message is not ingrained: it is taught, shaped, and practiced. The painful realization for modern readers is how little distance we have traveled since 1955.

In this piece, Skinner speaks to us in “footnotes for the reader of the future”, which I found to be a helpful and insightful signal that this was a piece of its time but was never intended to stagnate there. The footnotes ground the reader in the period from which this early behavioral science came. Skinner believed in improving the future through behavioral science, a belief I think most people who study psychology, or are simply interested in it, share: things can be better if we strive to understand them. “Freedom and the Control of Men” was not meant as a guidebook for stamping out freedom or forcing people down a path, but rather as help in understanding that control exists outside the connotations of coercion. There are good forms of control that bring order, progress, and the “designing of a new cultural pattern”, but there are also forms of control that hold all of that back and grasp at power for selfishness or indoctrination. If we do not understand how both the good and the more selfish forms of control work, designing a better future is a very difficult task. Only through science can an understanding of control be explored that is not skewed by propaganda or ideological misuse. Skinner poses a question that stands out and serves as the underlying point of his piece:

“The question is this: Are we to be controlled by accident, by tyrants, or by ourselves in effective cultural design?”

Effective cultural design is something B.F. Skinner explores in many of his works, even in the fiction of Walden Two, and always with an equitable and positive aim for humanity: behavior change that leads to a better world, one in which malevolent, violent, or selfish forms of control are not used on the populace. Misuse of power, described in the quote at the start of this essay, is something Skinner warns us about repeatedly, and something I believe many of us still see in abundance today. As Skinner puts it, it takes an ethical and well-thought-out process to effect this change through science. But even in 1955 there were opponents to the idea of science as the means of working out change. In “Freedom and the Control of Men”, Skinner references two works that revolve around this point, Fyodor Dostoevsky’s “Notes from the Underground” and Aldous Huxley’s “Brave New World”, to describe the general idea of human “cussedness”: the idea that people would reject control, even an ethically guided and scientific implementation of it through effective cultural design. At the time of its writing, Skinner was answering both a supposed innate human refusal of control and a newly forming fear of scientific dystopian futures arising from even the most basic forms of behavioral conditioning. His reply is that control exists regardless, whether by accident, by tyranny, or by a more scientific and ethical cultural process.

I recommend that everyone read “Freedom and the Control of Men”, “Notes from the Underground”, and “Brave New World”; Skinner’s ideas tie together nicely once the referenced works are understood as he intended. For brevity, I will include below the “piano keys” passage from Dostoevsky, which Skinner uses to illustrate the common idea that humans would innately refuse all control, an idea that would stand in the way of all efforts to improve human behavior:

“…out of sheer ingratitude, sheer spite, man would play you some nasty trick. He would even risk his cakes and would deliberately desire the most fatal rubbish, the most uneconomical absurdity, simply to introduce into all this positive good sense his fatal fantastic element. It is just his fantastic dreams, his vulgar folly that he will desire to retain, simply in order to prove to himself–as though that were so necessary– that men still are men and not the keys of a piano, which the laws of nature threaten to control so completely that soon one will be able to desire nothing but by the calendar. And that is not all: even if man really were nothing but a piano-key, even if this were proved to him by natural science and mathematics, even then he would not become reasonable, but would purposely do something perverse out of simple ingratitude, simply to gain his point. And if he does not find means he will contrive destruction and chaos, will contrive sufferings of all sorts, only to gain his point! He will launch a curse upon the world, and as only man can curse (it is his privilege, the primary distinction between him and other animals), may be by his curse alone he will attain his object–that is, convince himself that he is a man and not a piano-key! If you say that all this, too, can be calculated and tabulated–chaos and darkness and curses, so that the mere possibility of calculating it all beforehand would stop it all, and reason would reassert itself, then man would purposely go mad in order to be rid of reason and gain his point!”- Fyodor Dostoevsky, Notes from the Underground

We see this all the time: rebellion against any hint of new regulation or advice, no matter how noble or admirable the intent. This belief remains as popular in culture today as it was when Skinner referenced it in 1955. There is still a conviction that no matter how ethical the goal, be it wearing a face mask to reduce the risk of disease transmission or taking advice to better oneself, there is an innate need to rebel no matter the cost and damage it wreaks, and that the rebellion is the natural and right thing to do. Or take Aldous Huxley’s “Brave New World”, wherein social and environmental engineering lead only to a world where people serve as cogs in a machine with very little imagination or will of their own. That control was not ethical; it was tyrannical. Rebellion, in a sense, is glorified as right no matter the cost, and behavioral science, indeed any control at all, is cast as bad. Skinner believed there was more to science than that. Science was a tool: it could be used for negative ends just as for positive ends, but it could be used positively.

Skinner believed that behavioral science could be used to understand control, not as a form of “brain washing” or “fooling with the machinery in the human head”, but as a way to step forward, away from the very real systems of the past that still hold people back today. “Freedom and the Control of Men” was written in the hope that the democratic philosophy many of us know could use science as a strength to move toward a better future, or else risk falling back into the very tyranny and violence it was meant to overcome. In Skinner’s words:

“If Western democracy does not lose sight of the aims of humanitarian action, it will welcome the almost fabulous support of its own science of man and will strengthen itself and play an important role in the building of a better world for everyone. But if it cannot put its “democratic philosophy” into proper historical perspective- if, under the control of attitudes and emotions which it generated for other purposes, it now rejects the help of science- then it must be prepared for defeat. For if we continue to insist that science has nothing to offer but a new and more horrible form of tyranny, we may produce just such a result by allowing the strength of science to fall into the hands of despots.” -B.F. Skinner, “Freedom and the Control of Men”

In 1955, these words came a decade after the end of World War II, amid hardening cultural and governmental attitudes toward communism, and at the very spark of the civil rights movement. There is certainly historical context to apply when reading the essay, yet its meaning holds enduring hope and truth, in my opinion, about what science, and especially behavioral science, can bring to the world. Ignorance of control, praise of violence as a form of control, or clinging too tightly to the notion that rebelling against even the safest forms of control is human nature may only lead to a repeat of history in which no one benefits.

I hope you have a chance to read the works above, and that you find as much enjoyment and reflection in them as I did.


1. Huxley, A. (1998). Brave new world. New York, NY: Spark Publishing.

2. Dostoyevsky, F. (1993). Notes from the underground. New York, NY: Vintage Classics.

3. Skinner, B. F. (1999). Cumulative record. Copley Publishing Group. Excerpt: “Freedom and the Control of Men”.

Comments? Questions? Leave them below.

Philosophic Doubt- When Scientific Inquiry Matters

There are important assumptions, or attitudes, of science that ground scientific study across all disciplines: Determinism, Empiricism, Experimentation, Replication, Parsimony, and Philosophic Doubt. The last of these holds a key role in how we deal with the information we gain from science and what we do with it in the future. Philosophic Doubt is the attitude of science that encourages us to continuously question the information, rules, and facts that govern our interpretation and understanding of the world. It is what has practitioners of science question the underpinnings of their beliefs, and continue to do so, so that their understanding rests on consistently verifiable information. Philosophic Doubt cuts both ways: it has a scientist test the truthfulness of what others regard as fact, but it also means taking on the same level of scrutiny and skepticism toward their own work. To some, Philosophic Doubt is a gift that has helped them expand their ideas and shape them beyond the first experimental steps. To others, it is a detrimental form of skepticism clawing at information or beliefs they hold dear. These views are not new; we can find traces of this disagreement going back to the 19th century. Here we will explore the assumption of Philosophic Doubt, including proponents and detractors both old and new.

Why do we need Philosophic Doubt anyway?

Philosophic Doubt is important to science because it shapes how scientific work progresses. It has scientists test their own assumptions, hypotheses, and underlying beliefs, even those they hold precious, against replicable evidence and future findings. Philosophic Doubt drives experimentation and precedes replication; it underlies the empirical drive to seek evidence. Without it, science can go wrong. A hypothesis could be formed on inaccurate information and never be retested. Subjective experience could entrench anecdotes in a study as a broader experience than they are. A scientist could start with what they want to find and cherry-pick only what fits their assumption. These are the risks of not taking Philosophic Doubt into account. Sometimes it boils down to the scientist wanting to be right, set against keeping an open mind that they might not be. Holding the assumption that there is benefit in questioning findings or previously accepted beliefs is not a slight against past experience or belief, but a better way of interpreting future information should it challenge them. Questioning is a part of science, but not everyone thought so.

“A Defence of Philosophic Doubt”

Arthur James Balfour, a 19th-century philosopher, debater, and statesman, took this topic head on in “A Defence of Philosophic Doubt”. Unlike today, opponents of Philosophic Doubt at the time were more interested in weighing the empirically heavy beliefs of science against a more open metaphysical set of alternatives; that is, they compared science to non-scientific belief systems as accounts of the truth of reality. When it came to psychology, there were idealists, realists, and stoics at each other’s throats over concepts that could not be observed or proven. As you might already see, holding metaphysical constructs up against an attitude that demands continual questioning of one’s own arguments makes metaphysical assertions all the harder to sustain. Scientific claims, however, withstand Philosophic Doubt a little more easily:

Under common conditions, water freezes at 32 degrees Fahrenheit

Employing Philosophic Doubt, we can continually circle back to this assertion and test it again and again. Pragmatically, there comes a point where we only question such basic and well-founded particulars when we have reason to do so, but the doubt is always present: sometimes for precision, sometimes to be sure we are building on the knowledge correctly, and sometimes to aid the replication and experimentation that grow science. Balfour was a strong proponent of the natural sciences and of this kind of questioning. Science founded on observation and experimentation was truly important to him. Keep in mind, the 19th century was shaped by scientific discovery at a pace never before seen. Balfour kept an even head about this and believed in the assumptions of science as the path to understanding the natural world. Propositions that state laws or facts had to be built on concrete science, not on personal belief or anecdote. Some of his points we would take as obvious today. For example, when estimating a probability from trials, would we run an experiment just once, or twice, or many times? A single run gives us nothing to compare, and stopping after two runs and accepting that as the final answer would forgo further replication and experimentation on the subject. The curiosity that Philosophic Doubt embodies keeps the experiment and replication going. Without it, we fall into the trap of never questioning initial assumptions or findings.
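The point about single trials can be made concrete with a quick simulation. This is a modern sketch of my own, not anything Balfour wrote; the probability value and the function name are purely illustrative:

```python
import random

random.seed(42)  # fixed seed so the sketch is repeatable

def estimate(p, n_trials):
    """Estimate an event's probability from n_trials yes/no observations."""
    successes = sum(random.random() < p for _ in range(n_trials))
    return successes / n_trials

true_p = 0.6  # the "real" probability, unknown to the experimenter

# A single trial can only ever yield 0.0 or 1.0 -- there is nothing to compare.
print(estimate(true_p, 1))

# Repetition converges toward the true value, and re-running the whole
# experiment lets us doubt, re-check, and refine the earlier estimate.
print(estimate(true_p, 10_000))
```

The single-run estimate is useless on its own; only replication, and the willingness to re-test the earlier answer, recovers the underlying fact.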

Another interesting thing about Balfour’s work is that it came at a time when there was a great deal of belief in a mechanical universe following strict Newtonian laws, a view then set against more metaphysical alternatives. Balfour cautioned everyone to apply Philosophic Doubt continually and to question both belief systems, even though the “mechanical universe” was winning by a landslide at the time. If we stretch Balfour’s points into the future, we might see how he would have found some vindication in later developments in physics, quantum mechanics for example, where the Newtonian mechanical universe once seen as sufficient to explain everything falls a little short. Without that testing of the original tenets of physics, the very use of Philosophic Doubt, we might not be where we are now. An analysis of Balfour’s work could go on for entire chapters, but I would like to close with an excerpt on the evolution of beliefs and our reluctance to test our own:

“If any result of ‘observation and experiment’ is certain, this one is so- that many erroneous beliefs have existed, and do exist in the world; so that whatever causes there may be in operation by which true beliefs are promoted, they must be either limited in their operation, or be counteracted by other causes of an opposite tendency. Have we then any reason to suppose that fundamental beliefs are specially subject to these truth-producing influences, or specially exempt from causes of error? This question, I apprehend, must be answered in the negative. At first sight, indeed, it would seem as if those beliefs were specially protected from error which are the results of legitimate reasoning. But legitimate reasoning is only a protection against error if it proceeds from true premises, and it is clear that this particular protection the premises of all reasoning never can possess. Have they, then, any other? Except the tendency above mentioned, I must confess myself unable to see that they have; so that our position is this- from certain ultimate beliefs we infer that an order of things exists by which all belief, and therefore all ultimate beliefs, are produced, but according to which any particular ultimate belief must be doubtful. Now this is a position which is self-destructive.

“The difficulty only arises, it may be observed, when we are considering our own beliefs. If I am considering the beliefs of some other person, there is no reason why I should regard them as anything but the result of his time and circumstances.” -Arthur James Balfour, “A Defence of Philosophic Doubt” (1879)

Back to Basics- Science and Philosophic Doubt

In “Applied Behavior Analysis”, Cooper, Heron, and Heward begin their first chapter with the basics of what science is, specifically behavioral science, and the assumptions and attitudes of science, including Philosophic Doubt. Cooper et al. treat these concepts as foundational to science as a whole and relate their importance to psychology and behavioral science. In their words:

“The attitude of philosophic doubt requires the scientist to continually question the truthfulness of what is regarded as fact. Scientific knowledge must always be viewed as tentative. Scientists must constantly be willing to set aside their most cherished beliefs and findings and replace them with the knowledge derived from new discoveries.

Good scientists maintain a healthy level of skepticism. Although being skeptical of others’ research may be easy, a more difficult but critical characteristic of scientists is that they remain open to the possibility- as well as look for evidence- that their own findings and expectations are wrong.” -Cooper, Heron, and Heward, “Applied Behavior Analysis” (2017)

Bonus! B.F. Skinner
“Regard no practice as immutable. Change and be ready to change again. Accept no eternal verity. Experiment.” -B.F. Skinner, 1979

The sentiment behind Philosophic Doubt and science is one of openness and humility. Not only is the scientific work we read subject to doubt, but our own as well. The latter is the most difficult part: constantly challenging our own beliefs, our most cherished propositions and reasoning. To some, this expands the horizon of future knowledge infinitely; to others, it is a hard trail to follow and no easy task. In either case, I hope this has brought up the importance of Philosophic Doubt, and how it ties in with the other assumptions of science as a challenging but inseparable part of the process.

Comments? Thoughts? Likes? Questions? Leave them below.


1. Balfour, A. J. (1921). A defence of philosophic doubt: being an essay on the foundations of belief. London: Hodder & Stoughton.

2. Cooper, J. O., Heron, T. E., & Heward, W. L. (2017). Applied behavior analysis. Hoboken, NJ: Pearson.

3. Skinner, B. F. (1953). Science and human behavior. New York, NY: Macmillan.

The Philosophy of Logical Behaviorism

How we use language in behavioral science and psychology is important.

If you’ve ever studied psychology, or behaviorism specifically, have you ever asked:
“Why do we have to use observable terms for behavior?”
“Why do we define things in operational and observational terms?”

These are also the questions that “Logical Behaviorism” was chiefly concerned with. If we are to treat psychology as a natural science, then proponents of logical behaviorism would suggest that the language, theory, and semantics used in that science should reflect it.

Logical behaviorism is perhaps one of the more obscure branches of behaviorism, but its history is closely tied to the more familiar methodological and radical behaviorist schools of thought. It originated in the early 20th century, when scientists and philosophers aimed to establish psychology as an independent, experimental natural science. Like methodological (classical) and radical behaviorism, logical behaviorism shared a focus on objectivity, a reliance on measurable techniques for observation and data collection, and a rejection of introspection-heavy mentalistic explanations.

Where logical behaviorism differed from the other behaviorist branches is that it was concerned primarily with the scientific use of language and semantics in psychology. Its early proponents aimed to completely differentiate the objective, scientific behavioral psychology of the time from the popular Freudian and Jungian introspective and mentalistic psychological writings. Because of this, logical behaviorism is often seen as more a philosophical psychology than a directly empirical one. Many of its positions are hidden in works that modern behavioral practitioners know well, and they inform the attitudes toward, and in some cases suspicion of, mentalistic language in day-to-day practice.

“The Ghost In The Machine”

The philosopher primarily associated with the development of logical (or analytical) behaviorism was Gilbert Ryle. For much of his academic career, Ryle focused on dismantling the mind/body distinction (Cartesian dualism, or substance dualism) that was, and still is, extremely common in psychological and philosophical writing and thought. You likely come across dualistic language of this kind in even the briefest psychological conversations: language implying that mental states (thoughts, feelings, imagination, etc.) occur in a hidden or non-physical area of the mind, apart from physical and physiological processes. Ryle disagreed.

Belief, for example, would not be seen as an airy mental element of cognition, but as something entirely within the explanatory reach of biology, according to Ryle. He believed these distinctions carried a risk: they lead philosophy and science astray, chasing separations that do not actually exist.

Gilbert Ryle’s writings (chiefly “The Concept of Mind”) took aim at these dualistic notions, treating as a widely held mistake the example of a mental effort of “will or volition” that is then transformed into physical action (mental thought leading to physical act). He called this mistake the “dogma of the ghost in the machine”. The very study of a separation between mind and body was, according to Ryle, fruitless and a waste of time, because the separation rests on category mistakes: the two are separated more by their linguistic definitions than by any real qualities. Instead, Ryle proposed that all actions and behavior are physical in nature, and that there are propensities and dispositions that can be explained entirely by an individual’s behavioral actions in seeking or avoiding the stimuli involved.

For example, there would be no mind/body distinction between wanting breakfast and cooking breakfast. For Ryle, there was nothing in some immaterial mental state that spurred the cooking of breakfast. To speak or think in that cause-and-effect way leads to a fundamental misunderstanding of the event and the behaviors themselves. If we tried to study the event in those terms, we would have to chase down the immaterial mental state, or presuppose that it existed outside of physical or observable evidence, as part of our study. This can lead to circular reasoning very quickly. Chasing this “ghost in the machine” bears no scientific fruit.

Ryle did not deny that there are physical processes of behavior and action which cannot be seen (his propensities and dispositions), but he held that these do not reside in some immaterial state, and that they can be discovered through observable behavioral action. This shares some similarity with the “private events” of B.F. Skinner’s radical behaviorism, but it does not go as far into the functional and environmental relations that Skinner explored. Ryle did use a behaviorist theory of mind, but one focused on the language of behavioral processes.

It is fair to note, however, that Ryle’s work has received criticism because its focus on language tied to observable action may be too restrictive. Critics have often pointed out that there may be a greater distance between internal or “mental” states and verifiable behavioral actions than Ryle allowed. Most people can imagine someone who is happy while showing no outward “behavioral actions” or signs of happiness, and the reverse is also true. Movie actors, for example, act in ways that do not accurately represent their “mental state”: an actor’s portrayal, in some cases, does not reflect the actual propensities and dispositions Ryle would infer from their behavior using his methodology. It is historically more accurate to say that Gilbert Ryle’s work (especially “The Concept of Mind”) has greatly influenced how behaviorists treat the mind/body distinction, dualism, and mentalistic language in their scientific writing, but that Ryle’s theories and positions were influential in part, not as a whole.

The Vienna Circle and Logical Positivism

Where might the “Logical” part of “Logical Behaviorism” come from? Why would it be called that? The answer lies in an earlier philosophical movement called logical positivism, developed by a group of early 20th-century philosophers known as the Vienna Circle. The connection between logical positivism and behaviorism is that behaviorists seek a framework of language that can accurately reflect the observable facts of behavior. Without such a framework, misconceptions, circular reasoning, and arguments over the linguistic minutiae of the scientific literature could bog down the whole study of behavioral psychology as a natural science. Dependable language leads to fewer misunderstandings in the literature, and potentially to better replication of what is being tested and studied.

Logical positivists, and the later logical behaviorists, wanted linguistic precision in the study of psychology and behavior: a precise language could lead to better verification of observable events. The philosophers of the Vienna Circle called this the “principle of verifiability”, holding that the literature should contain no statement that could not be verified empirically, or at the very least be capable of verification at a future date. Certain statements cannot be verified immediately but allow, in their wording, a means of verifying them later. For example: “Next Tuesday it is going to rain.” This statement cannot be verified right now, but it does allow for verification. This mattered to the logical positivist philosophers and later to the logical behaviorists. It is a staple of most empirical behavioral research today, often taught as a maxim without need of explanation, but it was not always the case. Without the principle of verifiability, any unverifiable statement could be used as a premise with impunity. Unverifiable statements (mentalistic or substance-dualistic statements, for example) cannot be disproven objectively, because they allow no empirical way to do so. To the logical behaviorists, such statements are hardly helpful to scientific literature.

The philosophers, scientists, and mathematicians of the Vienna Circle, chiefly Rudolf Carnap, Moritz Schlick, Herbert Feigl, and Felix Kaufmann, along with close associates such as A. J. Ayer, developed this form of analysis using the “principle of verifiability”, drawing heavily on Ludwig Wittgenstein, to design a way for statements to be analyzed. You likely see these types of statements in empirical and scientific research all the time without realizing it. The early logical positivists differentiated between what they called “analytical statements” and “synthetic statements”. An analytical statement is true simply because its truth follows logically from the meaning of its words.

Example (Analytical Statement): All circles are round.

Of course they are.

Synthetic statements, on the other hand, require some empirical verification in order to be confirmed or proven true. These statements when using the “principle of verifiability” can be verified.

Example (Synthetic Statement): “This cat has gray fur and is wearing clothing.”

Let’s take a look.

(Mr. Darcy)

Well, look at that. We can verify this statement with observation.

It is important to distinguish between these two kinds of statement, but they do not hold equal weight within logical positivism and logical behaviorism. To the philosophers of the Vienna Circle, and to most logical positivists, synthetic statements are what matter first and foremost: they make claims about reality which can be tested, and that is incredibly important in the natural sciences. Analytical statements are more trivial, to logical positivists, because they bring no new information. Logical behaviorism shared the logical positivist belief that propositions and statements should be capable of scientific verification in order to be scientifically useful.

Logical Behaviorism takes from this the focus of synthetic statements about behavior, which are observable and measurable. Even when dealing with “mental concepts” or “private events”, the importance is in the language being used to create propositions that can be verified.

To Sum It Up: Logical Behaviorism Is About Language

To a logical behaviorist, concepts like the mind, and thoughts, and feelings, and imagination, all must be described in ways that have an observable or verifiable attribute to them in order to be useful scientifically. Logical behaviorism was developed in a time of strong mentalistic terminology, where circular reasoning towards behavior was common, and human action as a whole was sometimes treated as indescribable, and the mind in some ways untouchable by science. To the logical behaviorist, the semantics, or language of what we study and talk about when we try to describe behavior, even internal processes, must in some way be verifiable, or objective, in order to be useful in a scientific sense.

How we state things, and how we propose things, is important. To be too loose with language in this area invites misunderstandings, as Gilbert Ryle pointed out, or lacks the ability to be later verified, as the logical positivists believed. To make a statement that is observable, measurable, and can be verified is what many logical behaviorists believed would bring psychology, and the new branch of behaviorism, closer to that goal of being a natural science.

I hope you enjoyed this brief look at the history and reasoning behind the theories of Logical Behaviorism and their many influences. This is by no means an exhaustive dig into this rich topic, but a broad touch on the very complex psychological and philosophical roots which came together to bring about what we know of behaviorism and psychology, and on how logical behaviorism still shines through these many decades later.

Comments? Questions? Thoughts? Leave them below! Don’t forget to follow!


Clark L. Hull. (2019, February 21). Retrieved from https://en.wikipedia.org/wiki/Clark_L._Hull

Fancher, R. E., & Rutherford, A. (2017). Pioneers of psychology. New York, NY: Norton & Company.

Hull, C. L. (1964). Principles of behavior. New York.

Ozmon, H. (2012). Philosophical foundations of education. Upper Saddle River, NJ: Pearson.

Ryle, G. (1949). The concept of mind. New York: Barnes & Noble.

Skinner, B. F. (2015). Verbal behavior. Mansfield Centre, CT: Martino Publishing.

The new encyclopaedia Britannica. (1977). Chicago, IL: Encyclopaedia Britannica.

Image Credits:
Artwork and photography are originals by the author, Christian Sawyer.

Token Economies: What Behavioral Science and Blockchain Technology Have In Common

“Token Economies”- two words springing up at Blockchain and Cryptocurrency summits and conferences with increasing regularity. Token economies have been used by behavioral scientists and practitioners for decades, but recently the term has taken off in the field of Blockchain and Cryptocurrency technologies. Both fields use “Token Economy” to mean the same thing: at technology conferences and summits, it is the original behavioral psychology definition that describes the concept. The tech field has taken the original token economy concept and expanded it to apply to what some might call the future of commerce and currency. Exciting stuff. Here, I will break down the basic concepts of what a Token Economy is, how behavioral scientists and analysts use them, and the new application of the concept by Blockchain and Cryptocurrency developers.


The Token Economy

Let’s break it all down. What is a token economy? A token economy is a system where tokens, or symbols, are used as conditioned reinforcers which can be traded in for a variety of other reinforcers later. It is not a bartering system or prize system where objects/access/services are given directly following a target behavior, but a conditioned stimulus (token) without necessarily any intrinsic value that is agreed upon to add up to exchange or buy another reinforcing item. A common example that most of us are used to is money. Paper money, specifically, can be considered a part of a token economy in that it is “traded in” towards some terminal reinforcing stimulus (or “back up reinforcer” as it is called in behavior analysis). The paper money is a conditioned reinforcer because it has no necessary intrinsic value but has conditioned value for what it can eventually be used for within the token economy.

Token economies were originally taken up by behavioral researchers in the 1960s as a form of contingency management for the reinforcement of “target behaviors”, or prosocial learning, in therapy situations. Reinforcers are important psychologically because, by definition, reinforcers change the rates of the behavior they follow. They can help teach life-changing skills, or alternatives to some destructive or undesirable behavior, quickly. But reinforcers can be tricky too. People can become bored or satiated with tangible rewards, such as food, but within a token economy, reinforcement can be delivered in the form of tokens, allowing a later exchange or choice from any number of possibilities desirable to that individual. By pairing these tokens with access to “primary reinforcers” (reinforcers that are biologically important) or other “secondary reinforcers” (stimuli that have learned value), the tokens themselves become rewarding and reinforcing- thereby creating a sustainable system of reinforcement that defies the satiation and boredom variables that researchers originally found to be barriers to progress. Alan Kazdin’s work “The Token Economy” is a fantastic resource on the origins and the research that began it all.
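To make the exchange logic concrete, here is a minimal sketch in Python. The reinforcer names and token prices are invented for illustration; a real token economy is tailored to the individual learner and their backup reinforcers.

```python
# A toy sketch of a token economy. Tokens are conditioned reinforcers:
# they carry no intrinsic value, only exchange value toward "backup
# reinforcers" chosen later by the learner. All names/prices are invented.

class TokenEconomy:
    def __init__(self, backup_reinforcers):
        # backup_reinforcers: mapping of reinforcer name -> token price
        self.backup_reinforcers = backup_reinforcers
        self.balance = 0

    def deliver_token(self, target_behavior_observed):
        """Deliver one token contingent on the target behavior occurring."""
        if target_behavior_observed:
            self.balance += 1

    def exchange(self, reinforcer):
        """Trade accumulated tokens for a backup reinforcer, if affordable."""
        price = self.backup_reinforcers[reinforcer]
        if self.balance >= price:
            self.balance -= price
            return reinforcer  # access to the backup reinforcer is granted
        return None  # not enough tokens saved yet

economy = TokenEconomy({"extra break": 3, "preferred activity": 5})
for _ in range(5):
    economy.deliver_token(target_behavior_observed=True)
print(economy.exchange("preferred activity"))  # five tokens traded in
print(economy.balance)
```

Notice that the token delivery and the exchange are separate events: that separation is exactly what lets tokens bridge the delay between a target behavior and the reinforcer it eventually buys.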

What can a token be? Nearly anything. But it has to be agreed upon as a token (given some value for exchange) in order to serve as one for the purpose of trading it in, or buying with it. Giving someone a high five after a great job at work, for example, is not a token. It is a reward, and possibly a reinforcer, but it was not conditioned to have exchange value, and it cannot be saved or traded in. Tokens also need not be physical or tangible. They can be symbols, or recorded ledger entries, so long as that information can be used for exchange in the corresponding token economy. This is where blockchain and cryptocurrency technologies tie in to the original behavioral science understanding of a token economy. Can data, or information, serve as a token in a token economy if it is agreed to have value and be worth exchanging? If you haven’t heard of Bitcoin (a cryptocurrency), the answer is yes.


Blockchains and Cryptocurrencies

What is Blockchain, then? And what is a Cryptocurrency? Using our original definitions of tokens and token economies, for data or information to be considered tokens, they have to be exchangeable and hold value that can be traded within the token economy. Blockchain technology solves this by creating units of data called “blocks”. These blocks, simply put, form a growing list of data records in which each block contains a “cryptographic hash” of the previous block. The linked blocks form a ledger that is resistant to duplication and tampering. In layman’s terms, unlike most data that people manipulate and come into contact with day to day, a “block” within a Blockchain cannot be altered or copied, and it maintains a faithful record of times and transactions. Resistance to copying/duplication means that it cannot be forged, and resistance to alteration means that this data (the record of information) can be treated as reliable. If we create a currency using this technology, then we have the means to create units, or tokens, that are individual, can be traded, and carry a consistent and (for the purposes of this introduction) unalterable record of transaction. Assigning value to this creates a digital currency: a Cryptocurrency. Tokens. Transactions can take place using these blockchains, and they take place person to person (“peer to peer”, or P2P). Once a unit of cryptocurrency passes from one person to another, it very much resembles a physical hand-to-hand exchange of any other currency. Unlike online banking, the exchange does not require an intermediary such as a bank.
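The hash-linking described above can be sketched in a few lines. This is a toy illustration of the linkage only- real blockchains add consensus mechanisms, proof-of-work, and peer-to-peer networking- and the ledger records are invented.

```python
# A minimal sketch of the hash-chain idea behind a blockchain ledger.
# Each block stores a cryptographic hash of the previous block, so
# altering any earlier record breaks every link that follows it.
import hashlib
import json

def block_hash(block):
    # Hash the block's contents deterministically.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, record):
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"record": record, "prev_hash": prev_hash})

def chain_is_valid(chain):
    # Verify every block still points at the true hash of its predecessor.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

ledger = []
add_block(ledger, "Alice pays Bob 5 tokens")
add_block(ledger, "Bob pays Carol 2 tokens")
print(chain_is_valid(ledger))                       # True: ledger intact
ledger[0]["record"] = "Alice pays Bob 500 tokens"   # tamper with history
print(chain_is_valid(ledger))                       # False: tampering detected
```

The point for our purposes is the last two lines: the ledger itself advertises when it has been forged, which is what lets a unit of pure data hold agreed-upon exchange value.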

Blockchain and Cryptocurrency developers, then, are looking to create a form of token currency that can be traded within this broader token economy- one that is reliable enough to be used by enough people to catch on or become commercially viable, while still maintaining the benefits of a cryptocurrency (security, privacy, etc.) over traditional currency. These cryptocurrencies, these units of data, these blocks, have no intrinsic value themselves. They are tokens in the very real sense that the original behavioral research intended. Their usage and effects, then, appear to follow in the same vein. Currency can be reinforcing, reinforcement can alter behavior, and once a token takes on value through the conditioning process, it can become truly valuable in its own right as a “generalized reinforcer”- a reinforcer that is backed up by many other types of reinforcers. A dollar, for example, as a widely used currency, can be exchanged for a nearly countless number of goods, services, and transactions. This makes it a good generalized reinforcer. The more a token can be traded for, the better a generalized reinforcer it becomes.

Will a form of cryptocurrency, like Bitcoin, gain this same traction as a currency, or token, used to access other reinforcers in trade? Many people say yes. That’s where both behavioral scientists and blockchain developers can find excitement in each new development and innovation.

Likes? Comments? Questions? Did I get it wrong? Leave your comment below!


  1. Kazdin, A. E. (1977). The token economy: A review and evaluation. New York, NY: Springer. doi:10.1007/978-1-4613-4121-5
  2. Blockchain. (2019, January 13). Retrieved from https://en.wikipedia.org/wiki/Blockchain
  3. Cooper, J. O., Heron, T. E., & Heward, W. L. (2018). Applied behavior analysis. Place of publication not identified: Pearson.
  4. What is Simple Token (OST) [Audio blog post]. (2018, August 22). OST Live Podcast.

Image Credits:



Did Cognitivism Beat Behaviorism?


Some hold firm to the idea that the division between behaviorism and cognitivism is a vast divide, with a winning theory and a losing theory. You’ll hear them- “Behaviorism died decades ago!” and “Thoughts about thoughts? That’s just unprovable mentalism!”- shouted by entrenched believers until they are blue in the face. There are some salient historical details that explain why they feel that way: behaviorism (arguably) replaced many of the mentalistic and introspective psychological methods well into the 20th century. Then, some would say, the behaviorist movement was halted by Chomsky’s rebuttal of B.F Skinner’s “Verbal Behavior” and the rise of the 1960s “Cognitive Revolution”. The deep division could be argued as unbridgeable. As someone who was not practicing when these contrasting theories came to a head, I always wondered what it was like. Did everyone see it as a giant butting of heads? Did all the researchers and scientists find themselves marked on either side? Are the loud entrenched voices of today just echoes of a past that hasn’t been resolved? If so, how did cognitive behavioral therapies do so well blending the two perspectives? There had to be more than just a line in the sand. Enter Terry L. Smith and his book “Behavior and Its Causes”, relating the exact sentiment I was so curious about.

“I had (just like everyone else) read Kuhn (1970), and so almost reflexively interpreted cognitive psychology and behavioral psychology as competing paradigms (see Leahey, 1992, for a discussion of how common, and mistaken, this interpretation is). Cognitive psychology was clearly on the rise, so I inferred that the Skinnerian program must be on the decline. Indeed, I thought it must have disappeared by now… What I discovered was that during the 1960’s, the Skinnerian program had actually grown at an accelerating rate. This baffled me. How could operant psychology have survived and even prospered in the midst of “the cognitive revolution”?”

-Smith (2011).

How could that be? Terry L. Smith’s book explores this topic, speculates on some great points, and comes to several strong conclusions. I won’t spoil it for you aside from one: “operant psychology”, as Smith calls it, separated itself from being tied down to every philosophical tenet of Radical Behaviorism. It was Radical Behaviorism, in Smith’s view, that had taken the beating, because it was too rigid about what it would allow to be studied and cut too much out of what could be considered the study of behavior. This was a fascinating point to me, since I had already studied how B.F Skinner had broadened Radical Behaviorism beyond Methodological Behaviorism (i.e., admitting private events). We’ve heard this one before, right?

“Radical Behaviorism does not insist upon truth by agreement and can therefore consider events taking place in the private world within the skin. It does not call these events unobservable”- Skinner, 1974

This was one of the larger distinctions B.F Skinner made from Watson’s methodological approach, which was strictly focused on observable stimuli and responses. If we take Smith’s interpretation of what “operant psychology” is today, it goes even further from radical behaviorism by cutting across the divide and seeing itself within the broader breadth of psychology as a whole. This rings true for me when I speak to the behaviorists and practitioners I see in the field- there is still that aversion to “mentalism”, but the observational thrust that comes from Watson’s strict view is retained for mainly practical reasons: data collection is best done when people can see and define what they track. The behaviorist tradition still lives on in the practice of Applied Behavior Analysis, for example, but Skinner’s written word is not taken as biblical truth; the components of the philosophy and science that propelled behavioral psychology forward continue to be empirically validated. They are scientific findings. The ones that work and do the most good remain.

This is Smith’s main point about “operant psychology” during the “cognitive revolution”: it continued on, stronger than before, on its own steam, because the findings were strong and reproducible. While Chomsky and other cognitivists had made some compelling points about the limitations of Radical Behaviorism as an idea and philosophy, they did not undercut the behavioral science as a whole. The practices, techniques, and ideas of both Methodological and Radical behaviorism that came through in the empirical work remained. The broader-reaching philosophy that put limits on the science with no empirical backing? Not so much.

Keep in mind that by the time the “cognitive revolution” began in the 1960s, research in brain mapping and neurobiology had come a long way from the days when Watson, Pavlov, Thorndike, and Skinner began their work. Behavioral theory had run strong through the beginning of the 20th century, and was now met with convergent findings. Both had their uses, and their theories often overlapped rather than refuting one another. Internal processes were becoming more understandable through biological discoveries, which some strict behaviorists may have misinterpreted as just another form of mentalism. That’s a hang-up that did not help them. On the other hand, some cognitivists still thought all of behaviorism compared humanity to basic stimulus-response (S-R) machines. Another misunderstanding, another hang-up. My interpretation is that people fought over those illusory extremes. Those were the voices that screamed the loudest but were, at the same time, the most misguided about what was actually happening. I equate this to the kind of thing we see on the internet- the “strawman argument”, where someone constructs an exaggerated facsimile of their opponents’ ideas and tears that down rather than confronting what is actually said. It creates an easy target, but does not represent reality. Strict behaviorists get some things right. Strict cognitivists get some things right. Sometimes… just sometimes… both groups get things wrong too! Surprising, right? That is how anything based in theory and following the scientific method actually works.

Maybe Terry L. Smith is on to something. Maybe we consider ourselves all a part of Psychology with a capital P, and put our findings and theories out there. The right ones that can empirically and reliably help people will be the legacy.

To be fair, though, I am not completely in the objective, virtuous middle; I’ve read Noam Chomsky’s review of Verbal Behavior and believe he missed the point.

Thoughts? Likes? Comments? Questions? Leave them below.


Chomsky, N. (n.d.). A review of B. F. Skinner’s Verbal Behavior. The Language and Thought Series. doi:10.4159/harvard.9780674594623.c6

Skinner, B. F. (1957). Science and human behavior. Riverside: Free Press.

Smith, T. L. (2011). Behavior and its causes: Philosophical foundations of operant psychology. Dordrecht: Springer.

Photo Credits: http://www.pexels.com

Happy ABA Halloween!


Halloween is coming up soon, and as a treat, I’ve created some silly and fun ABA-style printouts. UPDATE: For the 2019 Halloween holiday fun, all new printouts will be added as we get closer to the holiday!

1. Spooky IOA Data!

Link to the full printout here: https://behavioralinquirydotcom.files.wordpress.com/2018/10/abahalloween12.pdf

2. The Horror of Subjective ABC Data!


Link to the full printout here: ABAHalloween2

3. The Terror of Incomplete Data!


Link to the full printout here: https://behavioralinquirydotcom.files.wordpress.com/2018/10/abahalloween31.pdf

4. The Dread of Corrupted and Lost Graphed Data!


Link to the full printout here: https://behavioralinquirydotcom.files.wordpress.com/2018/10/abahalloween41.pdf

5. The Sheer Fright of Finding Ineffective and Non-Student-Centered Goals!!


Link to the full printout here: https://behavioralinquirydotcom.files.wordpress.com/2018/10/abahalloween5.pdf

6. The Shrieking Terror of Unnecessary Most to Least Prompting!!


Link to the full printout here: ABAHalloween6

7. The Dread of Pseudoscience for “Behaviors”!


Link to the full printout here: ABAHalloween7

8. The Panic of Misused Terms

Link to the full printout here: ABAHalloween8

OH NO! I hope I didn’t scare you too badly.

Have some candy and remember how safe and relevant all your data and interventions are… Whew.

Like them? Take them! No fee, but please be kind with artistic credit.

Why we don’t always prompt: Behavior Analysis meets Vygotsky.


In the early 20th century, a developmental psychologist named Lev Vygotsky was working on theories of learning and development in parallel to many of the behaviorist traditions. If you were to ask a graduate student taking behavior analytic courses who Vygotsky was, they would most likely shrug their shoulders and wonder why that was important. He isn’t Watson. He isn’t Pavlov. He isn’t Thorndike. He isn’t Skinner. He isn’t Lindsley. So why would a behaviorist ever care? Well, it’s because his work ties in so closely to the behaviorist tradition that you could, in some cases, use his terminology and frameworks interchangeably and still see the same results. His work can help clarify why we, as behavior analysts, trainers, educators, and even parents, should not prompt every single time we see a child begin to struggle with an endeavor or task.

To an educator or professional following the behaviorist tradition, it’s not all that hard to describe. Prompts help the learner reach a reinforcement threshold that their response likely could not have reached on its own. Shaping describes a process in which an emergent behavior, similar in some way to a target behavior, is reinforced through successive approximations until it becomes the terminal target behavior. Basically, it’s taking an “okay” behavior attempt and rewarding the attempts that look closer to improvement until the behavior is “perfected” enough to reach more naturalistic reinforcement in the broader environment. To a behaviorist, that means looking at what the learner has in their repertoire, what they can do right now, and planning to reward the responses that move it toward some end-goal response. But wait, how exactly do we know when to intervene? And why don’t we intervene every time we see the learner encounter difficulty?

The trouble is that sometimes a learner does not actually learn from being prompted too much. Sometimes the reinforcement only contacts the effort the learner expends to receive prompting. Sometimes they become dependent on those prompts, and then it is the educator doing the behavior and the learner receiving the reinforcement. They don’t improve because they have no need to improve. They get the prize every time their educator does it for them. The behavior that the educator prompts might never transfer through modeling. Why should it, if the reinforcer comes anyway? This is where Vygotsky comes in. Vygotsky believed that there is a Zone of Proximal Development.

Lev Vygotsky was not a behaviorist. In many ways, he was against the methodological behaviorism popular at the time, which focused on purely observable stimulus-response relationships. Vygotsky also believed that learning drew not just from a present environment of contingencies, but from a broader wealth of cultural and societal forces that accumulate through generations and have impacts not directly related to the behaviors at hand. However, when it comes to the Zone of Proximal Development, his theories coincide with what behaviorists would conceptualize as repertoires and the thresholds at which prompting is warranted. Vygotsky believed that there was a level at which a learner could successfully accomplish tasks without assistance, and a level at the other end of their developmental range at which tasks could not be accomplished without considerable help in the form of prompting. Between the two, however, was a zone where a learner could accomplish tasks with some collaboration and prompting, and eventually surpass them to a level of independence. It’s a zone that differs from individual to individual, but within that zone of proximal development, prompting (or collaboration, as he called it) was at its most effective.

Think of it like this:

Zone of the learner’s “actual” development:
These are responses that the learner can perform, and tasks that the learner can complete, without any assistance from others.
*Behaviorist footnote: Think of these as responses already in the learner’s repertoire. These are “easy”.

Zone of Proximal Development:
These are tasks and responses that the learner can accomplish with the assistance and prompting of others.
*Behaviorist footnote: Think of this as the area of “shapable” responses that are likely to lead to independent future responses. Vygotsky called this “scaffolding”, but the process of “shaping” is synonymous.

The limit of their current developmental ability:
These are tasks and responses that are beyond the learner’s ability to accomplish, and can only be produced with considerable support and assistance.
*Behaviorist footnote: The learner can be prompted through these tasks, but is unlikely to reproduce them, even with shaping procedures, at this time.

This framework delineates when a learner needs and could use the help of an educator to prompt them, and when they do not. In the initial range, prompting is unnecessary and might actually hinder the learner from engaging in those responses in their most independent forms. A learner who can engage in the “easy” responses and find reinforcement in the broader environment is more likely to produce those responses in the future; prompting too much here could stifle that. In the next range, the Zone of Proximal Development as Vygotsky calls it, prompting could actually be of the most use! These are responses that are viable for occurring and reaching natural reinforcement, but they just need a little help at first to get there. Here, prompting in the form of modeling or shaping could help the learner take their initial responses and bring them to their terminal and most effective independent forms. This is the exciting part. This zone is where the work put in by the educator could meet maximum return on what the learner can gain. Now, we have to be careful not to reach for the moon here. The final zone is where, even with prompting, the learner is unlikely to be able to shape their responses successfully. This, for example, is trying to teach a learner to run before they can walk. They need those foundational responses before they can even be prompted toward a more advanced terminal response. An educator who comes across this scenario would be wise to dial the expectations back.
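If it helps, the decision rule the three zones imply can be sketched as a tiny function. The numeric difficulty scale and the learner’s two “levels” below are invented purely for illustration; neither Vygotsky nor behavior analysis specifies such measures.

```python
# An illustrative sketch of the prompting decision the ZPD implies.
# "difficulty", "independent_level", and "assisted_level" are hypothetical
# scales invented for this example, not real clinical measures.

def prompting_decision(difficulty, independent_level, assisted_level):
    if difficulty <= independent_level:
        # Already in the repertoire: prompting risks prompt dependence.
        return "do not prompt"
    if difficulty <= assisted_level:
        # Zone of Proximal Development: prompting and shaping pay off here.
        return "prompt and fade"
    # Beyond current ability: prerequisite responses must come first.
    return "teach prerequisites"

print(prompting_decision(2, independent_level=3, assisted_level=6))  # do not prompt
print(prompting_decision(5, independent_level=3, assisted_level=6))  # prompt and fade
print(prompting_decision(9, independent_level=3, assisted_level=6))  # teach prerequisites
```

The middle branch is the whole point of the framework: effort spent there, and only there, buys independent responding later.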

Between those two ranges of “easy” and “unlikely”, we find the responses that can be prompted for the most good. We would not prompt too much, and stifle the learner’s ability to contact reinforcement on their own, but nor would we fail to prompt at all, and miss those responses or behaviors that just need a little push. This is where a behaviorist, teacher, educator, or even parent, can take a thing or two from Vygotsky’s work. And if you’re a tried and true behaviorist who can’t believe that a cognitivist would be mentioned here, I’d suggest an open mind. You might even be surprised about the similarities between Vygotsky and Skinner on private events and “inner speech”. We can touch on that later, but for now, think about the zone of proximal development in your life and practice; what could use a little help?

Likes? Comments? Questions? Leave them all below!


Burkholder, E. O., & Peláez, M. (2000). A behavioral interpretation of Vygotsky’s theory of thought, language, and culture. Behavioral Development Bulletin, 9(1), 7-9.
Image Credits:

A Behaviorist’s Take on Far Cry 5


Forewarning to the regular readers: I’m talking about video games today. In particular, a fantastic action-adventure game I was turned on to by friends called Far Cry 5. That’s not the entire truth; I’ve played its predecessors too, but this one stands out to me narratively because it has a story based around social control. As a Board Certified Behavior Analyst, I’m drawn to these things. Imagine a world not so different from ours, where a doomsday religious cult takes control of a part of Montana and spreads a violent vision across the state, corrupting the citizens into a new lifestyle of brutalization and indoctrination. That calls for a hero, right? That’s the game. The thing that makes this interesting to a behaviorist is how the game uses those social forces to create fictional forms of coercion that in many ways match the existing psychological science of conditioning. I like this game. It’s complex, it’s fun, and I’m going to be testing myself in its new Infamous difficulty mode over the next two weeks during Extra Life to rack up some more donations for the local Children’s Miracle Network hospital near me (link here and below). I’ll also try to keep spoilers beyond the psychological methodology to a minimum. Let’s get on to the psychology.

In the game, there are several bosses who each control a section of the map. Each of them represents a different form of that control. Spoiler alert- but honestly, no large reveals here. Joseph Seed is the big boss. He’s a sort of preacher, borrowing from several religious traditions to deliver his idea of a “collapse” of society and a vision for a simpler future. He relies on group/mob mentality, social reinforcement (a semi-Bandura style of vicarious punishment), and a form of authority that borrows from his own charisma and the religious texts he cites. Not too out of the ordinary. His doomsday cult also employs sub-bosses. John, a former lawyer, is obsessed with having his devotees say YES, and uses similar group and social coercion. Faith uses a toxic mix of drugs called Bliss to create hallucinogen-induced indoctrination. Believable, to a degree. Then there’s my favorite, and the reason for this post: Jacob. Jacob is a little different. He’s said to have a soldier’s background, but he uses a method of conditioning, which he refers to as basic classical conditioning, with some substance (drug) related assistance. This puts his subjects into murderous rages/trances when he plays the song “Only You” by The Platters. He tries to make his method sound simple. He tries to make you believe it’s just simple stimulus pairing through classical conditioning.

Jacob does abhorrent experiments with these methods on both animals and humans, causing devastation and treachery across that part of the story. It’s very tragic. The thing is… he’s not just using classical conditioning. A conditioned stimulus with a conditioned response? Not quite. There’s more to it. He tries to explain his method several times and even uses the standard definition of classical conditioning to describe how he creates these diabolical effects, but when we look at the practice, there’s a sinister amount of complexity that he leaves out. This fictional boss Jacob might think that it’s simply food deprivation, a song, and practice in his chairs/training chambers that do it, but he’s selling himself short. He’s actually using both classical conditioning and operant conditioning. That fiend.


Jacob’s Classical Conditioning

It might surprise you, but Jacob didn’t invent this form of conditioning. It actually has its origins with a researcher named Ivan Pavlov (and also Edward Thorndike), in the well-known experiment with bells and salivation. There we see the pairing of a neutral stimulus with an unconditioned stimulus until the once-neutral stimulus alone elicits the response. Basic stimulus-response psychology. Now, in this fictional world of Far Cry 5, the bad guy Jacob references these things, and even Pavlov (“Pavlovian”), once or twice. I think, narratively, it makes sense. He’s training killers. He sees his conditioned stimulus (a song) and their response (murderous rages) as synonymous with that process. Except… when we look at the training, it’s not that clean. There are parts that seem to follow this method, mainly that he is engaging in a stimulus pairing procedure that works on a learned behavior change for the individual. The environmental event (or stimulus) precedes the response he is looking for. That makes sense too. Even the cutscenes play out the process correctly! We assume the originally neutral stimulus, “Only You” by The Platters, does not lead to murderous rages in an ordinary person. He needs to make that connection happen in his victims by pairing stimuli. Jacob pairs that neutral stimulus with an unconditioned stimulus (threat, through some form of hallucinogenic and visual process) to elicit an unconditioned response (attack). Then, following this, he presents the newly paired conditioned stimulus (the “Only You” song) to elicit the newly conditioned response (attack). Makes sense, right? Somewhat. But look at the training methods a little deeper and we get some complexity. He has the stimuli he wants available. He has the song. He has the wolf pictures, and the predatory images of wolves killing deer, but he also adds something else in… reinforcement and punishment during his trials.

Far Cry® 5

Operant Conditioning through Discrete Trial Training (DTT)

The reason I like the Jacob missions so much is that they do use real-world conditioning methods. They just undersell them a little. Jacob, the big bad guy I hated through two playthroughs of this game, uses both classical conditioning and operant conditioning to make his process work. Also some fictional drugs and hallucinogenics, but let’s focus on what we know. Operant Conditioning differs from Classical Conditioning (or “Pavlovian Conditioning”) in one major way: it focuses on the subject emitting a specific response, which is then followed by a reinforcer in order to increase the frequency of that behavior or shape it toward a targeted goal. When someone mentions B.F Skinner, or Skinner Boxes, this is the type of conditioning they are talking about. Again, MINOR SPOILERS. Jacob does this to our character the first time he catches us. It’s not just the classical conditioning of the song to the natural response of attacking when threatened. He trains our character to make that stimulus-response relationship stronger, and to introduce faster and more vicious shaped behaviors to the character’s repertoire. It’s tragic. It’s sad. But his method is theoretically sound. You see, he uses what we behaviorists call Discrete Trials. The situation for each trial is exact. The Discriminative Stimulus (SD) that sets it off is the same each time. Here is where the operant part comes in. The character is tasked with eliminating all enemies using the provided weapons, within a timed interval, to complete the task and receive reinforcement for the chained behaviors. This follows the three-term contingency known as A-B-C: Antecedent, Behavior, Consequence. Let’s break it down.

(ANTECEDENT) aka Discriminative Stimulus- “Only You” Song, and visual presentation of threat-related stimuli.

(RESPONSE) – Eliminating targets.

(CONSEQUENCE)- Added time to the interval, allowing more opportunity to complete the task for further reinforcement, plus verbal praise from Jacob in the form of “Good”, “Cull The Weak”, etc. This is Reinforcement.

Or… (CONSEQUENCE)- in the form of Punishment. Fail to complete the task, by either being killed by enemies or running out the time interval, and you meet the punishment contingencies: starting over from the beginning, along with verbal reprimands in the form of “No”, “You are weak”, “You are not a soldier”, etc.

In other words, Jacob is shaping repertoires. He’s not just pairing stimuli. He is creating a series of trained responses, operants if you will, emitted upon the presentation of his conditioned stimuli and completed in a way that he controls. These are the fundamental ingredients of all learning, but he has twisted them a little to make this heroic character fall right into a trap of uncontrollable lapses in judgment and cruel responding that is uncharacteristic, or was never part of the character from the start. Chilling, right? But like a rat in a maze, or a box, the character must follow these contingencies in order to progress. Press the lever, get the cheese. Shoot the opponents, get the praise and progress.
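The A-B-C loop above can be sketched as a toy function. Everything here, the function name, the feedback strings, the ten-second bonus, is an illustrative assumption about how a trial like this might be structured, not a transcription of the game’s actual logic:

```python
# One discrete trial following the three-term contingency:
# Antecedent (SD) -> Behavior -> Consequence.

def run_trial(sd_presented, targets_eliminated, within_interval):
    """Return the consequence for one trial, or None if no SD was presented."""
    if not sd_presented:
        return None  # no discriminative stimulus, no trial
    if targets_eliminated and within_interval:
        # Reinforcement: praise plus added interval time strengthen the chain.
        return {"consequence": "reinforcement",
                "feedback": "Good.",
                "added_time": 10}
    # Punishment: a reprimand and a restart follow failure.
    return {"consequence": "punishment",
            "feedback": "You are weak.",
            "restart": True}
```

Run enough of these trials and the reinforced chain gets faster and more fluent, which is exactly what Jacob is counting on.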


Meta Game Talk: Conditioning The Players

Let’s talk a little about the big picture here. Yes, Jacob is fictional. Yes, this heroic character is fictional too. But when we look at the game from the lens of how it works on player reinforcement and punishment, we can actually see ourselves in the picture of this box. We are also conditioned, if we choose to play the game and continue to play the game, in a way that shapes and sharpens our behavioral repertoires. The same Discrete Trial Training that Jacob puts our character through, we are also participating in, and are contacting that same reinforcement and punishment as though it were our own (broadly speaking). We want to succeed. We want to continue. We want to win.

So, we get faster. We get more accurate. We learn the patterns. “This is why we train,” as Jacob says so many times during these repeated trials. Each time, he gives us a little more of a challenge. Each time, he progresses us to different response repertoires to enact on the challenges in our way. It’s fun. In some ways, it’s a representation of the game as a whole. There are many reinforcers out there to earn. Many contingencies to engage with. Even multiple endings (that’s the part that got me doing it twice).

I learned to shoot through both enemies in the revolver scene from the left. I learned to take the submachine gun in the next room and work from low to high, right, center, to left. For the shotgun, I turned corners with two lefts and one right at head level and tapped at the first sign of movement. For the rifle, I stayed low and aimed in short bursts, leading a clear line through the middle, and for the LMG… well, let’s not give it all away just yet. Your repertoires need honing too, and there are many variations that work.

That’s the fun.


The Behaviorist’s Take:

5/5 Stars for me. This game has been a joy to relax with. It’s challenging, but still can be taken in small parts and missions as time allows. It’s not too much of a time sink for someone on a professional schedule, and not too much of a learning curve for putting half an hour a day in. The story is strong, the emotional bond between the heroic character and the sympathetic (and often funny) people they meet is also a great time. They even let you make your own custom levels and challenges for your fellow players in an Arcade mode. I dig it.

As I mentioned above, this will be my game for the Extra Life 2018 Charity Event taking place the first week of November. I am, believe it or not, the weakest player on my team, but I love talking behaviorism and psychology and will be doing it all day to support the locals in Philadelphia, raising charity funds for the Children’s Hospital of Philadelphia (CHOP). I’m not only a fan of their great work with children from the outside; I often have direct contact with the children’s hospital in my day-to-day work with young populations and can’t speak highly enough about their commitment. Extra Life is a legitimate charity, and 100% of the funds go directly to the children’s hospital. I’m leaving my link below and will be overjoyed if readers could contribute in some part to my goal so I can hold my head up high this year. Any amount at all. I’ll be streaming and will be happy to respond to any comments. Have ideas that I missed? I love those. Send those too.

Extra Life Donation Link

Comments? Like? Questions? Leave them below!


Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied Behavior Analysis. Upper Saddle River, NJ: Pearson/Merrill-Prentice Hall.

Far Cry 5 [Software]. (2018). Ubisoft Montreal, Ubisoft Toronto.

Image Credits:

Christian Sawyer, M.Ed., BCBA (original Photography/Screenshots)

Steam. http://www.steam.com- Far Cry 5 Logo

What Cats Taught Thorndike About Learning


If you’ve heard the name Edward Thorndike, you are probably aware of the importance this psychologist had for early behavioral science. He was the one who coined the term law of effect, a theoretical precursor to the process of reinforcement. Thorndike was interested in psychology as an observable natural science, which at the time flew in the face of introspective methods. His work inspired many of the ideas and theories of B.F Skinner, and behaviorism as a whole, but what you might not know is that his big break came from what he learned from… cats.

People who are familiar with Thorndike’s law of effect are aware that his theory underwent several revisions, and his research came under criticism; but few would dispute that his dissertation on the associative processes of animals, and the puzzle box experiment within it, raised the right questions that would lead to many of the processes within operant conditioning that we see used today. Thorndike owes much of that to the cats he worked with during that animal research. Thorndike was interested in animal learning. Could animals learn? Were all their behaviors governed solely by reflex? If they could learn, what could they learn? Could they learn by observing others? To us modern readers, generally familiar with animal intelligence, this might seem like a no-brainer, but in the late 19th century, when Thorndike was doing his work, a sizable number of academics still held on to the old Cartesian view of animals as unconscious automatons. These cats, and Thorndike, would call that into question. The cats would demonstrate that they could not only learn, but overcome an obstacle that could not possibly be part of their reflex system: a puzzle box. Cats solving puzzles?! Thorndike must be mad! (I’m not entirely sure his critics would have been that dramatic, but the skepticism was definitely there.)

His experiment was simple: place hungry cats within a box that required a simple action to open, in order to access food outside of the box. The puzzle box itself had a door held shut by a weighted string, and that string was attached to a lever or switch; by operating these, the door would open. Later experiments involved buttons which worked in a similar fashion, but the single, non-reflexive response was consistent. At first the cats wandered around the cage, meowing and circling, until they incidentally stepped on or pushed the lever, opened the door, and gained access to the food. This was not learning. This was incidentally triggering the device. BUT… when placed within the cage again, these cats reduced their time wandering and meowing before they found the trigger and let themselves out. Thorndike tracked these times, noticing not only that the cats found their way out faster each time, but also the rate at which this learning took place. From this, Thorndike constructed a learning curve. The cats struggled at first, but got faster with each new trial until their rates of responding became efficient enough to level off. Thorndike believed that performing even this type of learning required some intelligence intrinsic to the cats: a kind of intelligence that did not rely on language or introspection.
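The shape of that curve is easy to sketch. The latencies below are invented numbers, not Thorndike’s actual data; they just reproduce the pattern he reported, big early drops that level off near an asymptote:

```python
# Hypothetical escape latencies (seconds) across successive puzzle box trials.
# Invented for illustration; the pattern, not the values, is the point.
latencies = [160, 95, 60, 35, 20, 12, 8, 7, 7, 6]

def trial_to_trial_improvement(times):
    """Change in escape time between consecutive trials (negative = faster)."""
    return [later - earlier for earlier, later in zip(times, times[1:])]

improvements = trial_to_trial_improvement(latencies)
# Early trials show large drops; later trials hover near zero as responding
# levels off. Thorndike read this gradual curve as learning by consequences,
# not sudden insight: an "aha" moment would show one abrupt drop instead.
```

Plot those latencies against trial number and you get the classic negatively accelerated learning curve.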

“From the lowest animals of which we can affirm intelligence up to man this type of intellect is found.”- Edward Thorndike

“Meow.”- Edward Thorndike’s Cat

Thorndike’s initial hypotheses were not always correct or confirmed, however. Learning through observation, for example, was something he could not capture with these puzzle box trials. During the initial trials, he was not able to observe a difference in the rates of responding between cats who learned through their own trial and error and cats who had observed others escaping by pressing the lever or switch. (Later studies with other animal subjects would, of course, demonstrate that learning through observation can in fact occur in certain animals.) He also believed there might be some level of insight helping the cats learn these tasks, but that too was not confirmed by his initial experiment; the cats appeared to be gradual learners from experience. This type of learning, again, appeared not to rely on language or introspective thought. Thorndike noticed that when he first put cats inside the puzzle box, their behavior seemed “erratic” or “chaotic”, but after successive trials they became more focused on finding the trigger that opened the door and engaged in fewer responses which did not align with the task. The cats were no longer circling and meowing; they were approximating responses that had previously been successful and allowed them access to food. Thorndike concluded that this was responding based on the law of effect: it happened due to past consequences. Behaviorists would later call this reinforcement, and document it within the three-term contingency.

“There is no reasoning, no process of inference or comparison; there is no thinking about things, no putting two and two together; there are no ideas – the animal does not think of the box or of the food or of the act he is to perform.”- Edward Thorndike

“…”- Edward Thorndike’s Cat operating a puzzle box trigger.

That’s not all. Thorndike also theorized that cats could discriminate between human vocalizations, and behave differently in situations after being spoken to. He noticed that when he approached cats behind wire netting before feeding, they would leap up onto the netting and meow.

(this author’s cat demonstrating exactly that)

To test this, he made a loud proclamation preceding each condition:

“I MUST FEED THOSE CATS!” (emphasis not present in original text)

Preceding conditions where he fed the cats, and

“I will not feed them.” (lack of textual enthusiasm probably accurate)

preceding conditions where he did not feed the cats.

He tracked these presentations and trials using frequency data collection, and he found that in the conditions where he spoke “I must feed those cats” and then fed the cats, the cats would leap up more readily in the future than they did following the phrase after which he did not feed them. This would later be referred to as responding to a discriminative stimulus. The cats would leap up and approach Thorndike (up to 60 times in the original research!) in the first condition, but reduced their leaping when he voiced that he would not feed them. Thorndike was well aware that these cats were not spontaneously learning the English language, but they were discriminating between two very similar vocal stimuli and responding based on their previous experience and reinforcement. These ideas were not commonplace, or as well established as they are today. In many ways, these advances opened unheard-of avenues for the theory of learning in both animals and humans.
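Here is a toy model of that discrimination learning. The two phrases come from the source, but the update rule, the threshold, and the trial counts are entirely my own assumptions for illustration:

```python
# Toy discrimination model: response strength for each phrase tracks how
# often that phrase has actually been followed by food.

reinforcement_history = {}   # phrase -> [times fed, times presented]

def present(phrase, fed):
    """Record one presentation of a phrase and whether food followed it."""
    record = reinforcement_history.setdefault(phrase, [0, 0])
    record[0] += int(fed)
    record[1] += 1

def leaps_up(phrase):
    """The cat approaches only if this phrase has reliably preceded feeding."""
    fed, total = reinforcement_history.get(phrase, [0, 0])
    return total > 0 and fed / total > 0.5

# Training: the "feed" phrase is always followed by food, the other never.
for _ in range(10):
    present("I must feed those cats", fed=True)
    present("I will not feed them", fed=False)
```

After training, the model responds to one phrase and not the other, with no comprehension of English involved: just two similar stimuli with very different reinforcement histories.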

The theoretical implications of these experiments would shape later behavioral research into principles of operant conditioning well into not only the 20th century boom of behavioral thoughts and ideas, but even our time now in the 21st century.

Pretty impressive for cats, isn’t it?

Questions? Comments? Likes? Other?

Leave them below!


[1] Chance, P. (1999). Thorndike’s puzzle boxes and the origins of the experimental analysis of behavior. Journal of the Experimental Analysis of Behavior, 72(3), 433-440. doi:10.1901/jeab.1999.72-433
[3] Famous Quotes at BrainyQuote. (n.d.). Retrieved from https://www.brainyquote.com/

Picture Credits: http://www.pexels.com, Christian Sawyer (Photo)

Why I Leave My Political Hat At Home


Opinion piece time. I leave my political hat at home. Or at least I try to. I keep my belief systems about policy and voting to conversations with friends, Twitter (if I can’t help myself), and the local networking events where politicians from town hang out; that way it stays contextual. I’m friends with the local school board. I’m on a first-name basis with the mayor of my town. I catch up and chat with the local councilmembers. I have a political life which is just as strong as my professional life. It’s not easy to split the two. More often than not, deliberating on a choice at work does touch on several pieces of what makes my moral compass orient the way it does. I believe in compassion. I am a behavior analyst; it’s from the behaviorist tradition. It is observational, data-driven, research-based. I don’t allow personal opinion to impact decisions with clients. Thankfully, data does that for me. Is this effective? Yes or no. Why? Well, the data suggests…

I can’t just put up a phase change line on a client’s progress graph because my opinion about a far-reaching political event somehow relates. It’s unfair. It’s my lens getting shifted, which impacts more than me if it’s not reined in. The clients are individuals, deserving of individual care. Outside of that, it also means that I have people working with that client who report to me: RBTs (Registered Behavior Technicians). They worked hard to get that credential. They’ve passed their tests and gone through their supervised hours. They are professionals. Would it be fair for me to walk into work with a political or ideological idea in my head and try to bring it up to them? Of course not. That’s not their job. Their responsibility is to the client, based on the real-world observable responses and data they see and collect. They depend on my unclouded experience and judgment. Even if they were outspoken about a political view (which happens), I can’t let that color my opinion of them or how I treat their judgment. It could. It easily could. But that’s my professional line drawn in the sand.

Here’s a common counter I’ve heard: Things are getting bad here. We need to speak out. We need to take a political stance in our personal and professional lives.

If it involves the vaccine pseudoscience? I’ll bite. I can justify that because the evidence is there and it relates to my work.

But here’s the pickle. The people who bring up that counterargument assume something. They assume that because we share a job title, do the same thing, and care about the same pursuits, we must have the same political opinions, and that I’d be an addition to their circle. Now, when those political views have already been expressed, I can be pretty sure whether I agree or not; it’s a mixed bag, but surprisingly to some, I don’t share the expected viewpoints. Were they looking for differing viewpoints? I can’t be sure, but it doesn’t feel like it. Is it worth turning a workplace contentious? Is the workplace the place, and the time, to deal with these issues?

“But Chris, surely you don’t support _____.”
“You work with kids though. How could you ____?”
“If you’re not ____ then you’re ____.”
“_____ did something terrible. You can’t support ____ could you?”

I have nuanced viewpoints. They don’t follow a single ideology, or politician. That potentially makes it even worse. My political stance might not align with anyone who is unipolar in their support or views. The world is a big place. The United States is a big place. Pennsylvania is a big place. There are a lot of different people with valid but different views. In my personal life, I can vote with my conscience. I can even refuse to vote if it aligns with my conscience. I can protest who I want to protest. I can talk to local politicians from both parties. I can talk with local third-party candidates. I’m outspoken on education in these settings and with these people. But they don’t report to me. They aren’t my professional peers either. It’s the context that makes sense to me. If I meet someone from work, off the clock, and they want to talk about these issues; then I would be perfectly fine putting my thoughts out there. Discuss. Change my mind. Sure. I’d have to draw a line somewhere though. It can’t get heated. Even the small stuff would have to be calm and rational and most importantly; wouldn’t be evident at work the next day.

In my profession as a Board Certified Behavior Analyst, the board (BACB) that governs how supervisors treat supervisees is pretty clear in many respects. Dual relationships, abuses of power, conflicts of interest: they all have clear delineation. Politics isn’t mentioned specifically, but imagine a case where an outspoken supervisor did espouse their views and acted on the perceived implications of those views at work. Would that affect the people directly reporting to them? How sure could we be that it wasn’t? I stepped into work on November 9th, 2016. I felt it. Whatever it was, it was there. Putting that into the supervisory relationship is a dangerous game, in my opinion. I’m not saying other people can’t do it, but it’s not something I’d feel comfortable with, given the potential to go bitter.

I believe that if something needs changing, it can be done through every opportunity that a citizen has. The same goes for maintaining a highly held value or traditional ideal. People are free to do both. Bringing that explicitly into the workplace, from a position of influence and supervisory responsibility, has risks. I’d much prefer to leave that particular hat at home.


Just me.

Photo Credits: http://www.pexels.com