Philosophic Doubt- When Scientific Inquiry Matters

There are important assumptions, or attitudes of science, which ground scientific study across all disciplines: Determinism, Empiricism, Experimentation, Replication, Parsimony, and Philosophic Doubt. The last holds a key role in how we treat the information we gain from science, and what we do with it in the future. Philosophic Doubt is the attitude of science which encourages us to continuously question and doubt the information, rules, and facts that govern our interpretation and understanding of the world (universe, etc.). Philosophic Doubt is what has practitioners of science question the underpinnings of their beliefs, and continue to do so, so that their understanding rests on consistently verifiable information. Philosophic Doubt cuts both ways- it can have a scientist test the truthfulness of what others regard as fact, but it also means they must bring the same level of scrutiny and skepticism to their own work. To some, Philosophic Doubt is a gift that has helped them expand their ideas and shape them beyond the initial experimental steps. To others, Philosophic Doubt is a detrimental form of skepticism clawing at information or beliefs that they hold dear. These views are not new; in fact, we can find traces of this disagreement going back to the 19th century. Here we will explore the assumption of Philosophic Doubt, including proponents and detractors both old and new.

Why do we need Philosophic Doubt anyway?

Philosophic Doubt is important to science because it affects how scientific work progresses. It has scientists test their own assumptions, hypotheses, and underlying beliefs- even those they hold precious- against replicable evidence and new findings. Philosophic Doubt drives experimentation, and it precedes replication as well; it is what underlies the empirical drive for seeking evidence. Without philosophic doubt, science can go wrong. A hypothesis could be formed on inaccurate information and never be retested. Subjective experience could entrench anecdotes in a study as a broader experience than they are. A scientist could start with what they want to find, and cherry-pick only what fits their assumption. These are the risks of not taking Philosophic Doubt into account. Sometimes it boils down to the scientist wanting to be right, rather than keeping an open mind that they might not be. Holding the assumption that there is a benefit to questioning findings or previously accepted beliefs is not a slight against past experience or belief, but rather a better way of interpreting future information that may challenge it. Questioning is a part of science, but not everyone thought so.

“A Defence of Philosophic Doubt”

Arthur James Balfour, a 19th-century philosopher, debater, and statesman, took this topic head on in “A Defence of Philosophic Doubt”. Unlike today, opponents of Philosophic Doubt at the time were more interested in comparing the empirically heavy beliefs of science with a more open series of metaphysical alternatives- that is, they were more interested in weighing science against non-scientific belief systems as accounts of the truth of reality. When it came to psychology, there were idealists, realists, and stoics at each other’s throats over concepts that could not be observed or proven. As you might already be able to see, holding metaphysical constructs up against an assumption that has one continually question one’s own arguments and points makes metaphysical assertions all the harder to sustain. Scientific claims, however, withstand Philosophic Doubt a little more easily:

Under common conditions, water freezes at 32 degrees Fahrenheit

Employing Philosophic Doubt, we can continually circle back to this assertion and test it again, and again. Pragmatically, there comes a point where we only question such basic and well-founded particulars when we have reason to do so, but the doubt is always present- sometimes for precision, sometimes to be sure that we are building on the knowledge correctly, and sometimes to aid the replication and experimentation that grow science. Balfour was a strong proponent of the natural sciences, and of this kind of questioning. Science founded on observation and experimentation was truly important to him. Keep in mind, the 19th century was shaped by scientific discovery at a pace never before seen. Balfour kept an even head about this, and believed in the assumptions of science as the path to understanding the natural world. Propositions which state laws, or which state facts, had to be built on concrete science and not just personal belief or anecdote. Some of his points we would take as obvious today- for example, when using comparative probability, would we run an experiment or trial just once, or twice? Multiple times? If we ran something like this just once, it wouldn’t be comparative probability at all; if we ran it twice and accepted that as the final answer to the question, we would miss out on further replication and experimentation on the subject. The curiosity that Philosophic Doubt embodies would keep the experimentation and replication going. Without Philosophic Doubt, we fall into the trap of never questioning initial assumptions or findings.

Another interesting thing about Balfour’s work is that it came at a time when there was a great deal of belief in a mechanical universe that followed strict Newtonian laws. At the time, this was set against more metaphysical alternatives. Balfour cautioned everyone to continually apply philosophic doubt and question both belief systems- even though the “mechanical universe” was winning by a landslide at the time. If we stretch Balfour’s points into the future, we can see how he might have found some vindication in later developments in physics- quantum mechanics, for example, where the Newtonian mechanical universe once seen as sufficient to explain everything falls a little short. Without that testing of the original tenets of physics- the use of Philosophic Doubt- we might not be where we are now. The analysis of Balfour’s work could go on for entire chapters, but I would like to top it off with an excerpt on the evolution of beliefs, and the reluctance to test our own personal beliefs:

“If any result of ‘observation and experiment’ is certain, this one is so- that many erroneous beliefs have existed, and do exist in the world; so that whatever causes there may be in operation by which true beliefs are promoted, they must be either limited in their operation, or be counteracted by other causes of an opposite tendency. Have we then any reason to suppose that fundamental beliefs are specially subject to these truth-producing influences, or specially exempt from causes of error? This question, I apprehend, must be answered in the negative. At first sight, indeed, it would seem as if those beliefs were specially protected from error which are the results of legitimate reasoning. But legitimate reasoning is only a protection against error if it proceeds from true premises, and it is clear that this particular protection the premises of all reasoning never can possess. Have they, then, any other? Except the tendency above mentioned, I must confess myself unable to see that they have; so that our position is this- from certain ultimate beliefs we infer that an order of things exists by which all belief, and therefore all ultimate beliefs, are produced, but according to which any particular ultimate belief must be doubtful. Now this is a position which is self-destructive.

The difficulty only arises, it may be observed, when we are considering our own beliefs. If I am considering the beliefs of some other person, there is no reason why I should regard them as anything but the result of his time and circumstances.” -Arthur James Balfour, “A Defence of Philosophic Doubt” (1879).

Back to Basics- Science and Philosophic Doubt

In “Applied Behavior Analysis”, Cooper, Heron, and Heward begin their first chapter with the basics of what science is- specifically behavioral science- and the assumptions and attitudes of science, including Philosophic Doubt. Cooper et al. treat these concepts as foundational to science as a whole and relate their importance to psychology and behavioral science. In their words:

“The attitude of philosophic doubt requires the scientist to continually question the truthfulness of what is regarded as fact. Scientific knowledge must always be viewed as tentative. Scientists must constantly be willing to set aside their most cherished beliefs and findings and replace them with the knowledge derived from new discoveries.

Good scientists maintain a healthy level of skepticism. Although being skeptical of others’ research may be easy, a more difficult but critical characteristic of scientists is that they remain open to the possibility- as well as look for evidence- that their own findings and expectations are wrong.” -Cooper, Heron, & Heward, “Applied Behavior Analysis” (2017).

Bonus! B.F. Skinner
“Regard no practice as immutable. Change and be ready to change again. Accept no eternal verity. Experiment.”- B.F. Skinner, 1979

The sentiment behind Philosophic Doubt and science is one of openness and humility. Not only is the scientific work we read subject to doubt, but our own as well. The latter is the most difficult part- constantly challenging our own beliefs, our most cherished propositions and reasoning. To some, this expands the horizon of future knowledge infinitely; to others, it is a hard trail to follow and no easy task. In either case, I hope this has brought up the importance of Philosophic Doubt, and how it ties in with the other assumptions of science as a challenging but inseparable part of the process.

Comments? Thoughts? Likes? Questions? Leave them below.

References:

1. Balfour, A. J. (1921). A defence of philosophic doubt: being an essay on the foundations of belief. London: Hodder & Stoughton.

2. Cooper, J. O., Heron, T. E., & Heward, W. L. (2017). Applied behavior analysis. Hoboken, NJ: Pearson.

3. Skinner, B. F. (1953). Science and human behavior. New York: Macmillan.

The Philosophy of Logical Behaviorism

How we use language in behavioral science and psychology is important.

If you’ve ever studied psychology, or behaviorism specifically, you may have asked:
“Why do we have to use observable terms for behavior?”
“Why do we define things in operational and observational terms?”

These are also the questions that “Logical Behaviorism” was chiefly concerned with. If we are to treat psychology as a natural science, its proponents suggested, then the language, theory, and semantics used in that endeavor should reflect it.

Logical behaviorism is perhaps one of the more obscure branches of behaviorism as a whole, but its history is closely tied to the more familiar methodological and radical behaviorist schools of behavioral thought. Logical behaviorism originated in the early 20th century, when scientists and philosophers aimed to establish psychology as an independent, experimental natural science. Like methodological (classical) and radical behaviorism, logical behaviorism shared the focus on objectivity, the reliance on measurable techniques for observation and data collection, and the rejection of introspection-heavy mentalistic explanations.

Where logical behaviorism differed from the other behaviorist branches is that it was concerned primarily with the scientific usage of language and semantics in psychology. Its early proponents aimed to completely differentiate the objective, scientific behavioral psychology of the time from the popular Freudian and Jungian introspective and mentalistic psychological writings. Because of this, logical behaviorism is often seen as more of a philosophical psychology than a directly empirical one. Many of its positions are hidden in works that modern behavioral practitioners know well; they inform the attitudes toward, and in some cases suspicion of, mentalistic language in day-to-day practice.

“The Ghost In The Machine”

The philosopher primarily associated with the development of logical (or analytical) behaviorism was Gilbert Ryle. For most of Ryle’s academic career, he focused on dismantling the mind/body distinction (Cartesian dualism, or substance dualism) that was, and still is, extremely common in psychological and philosophical writing and thought. You likely come across dualistic language often, in even the briefest psychological conversations: language implying that mental states (thoughts, feelings, imagination, etc.) occur in a hidden or non-physical area or dimension of the mind, apart from physical and physiological processes. Ryle disagreed with this.

Belief, for example, would not be seen as an airy mental element of cognition, but as something completely within the explanatory reach of biology, according to Ryle. He believed these distinctions carried a risk: they led philosophy and science astray, chasing separations that did not actually exist.

Gilbert Ryle’s writings (most famously “The Concept of Mind”) often took aim at these dualistic notions, citing the example of a mental effort of “will or volition” transforming into physical action (mental thought leads to physical action) as a widely held mistake. He called these mistakes the dogmas of “the ghost in the machine”. The very study of the separation between mind and body was fruitless and a waste of time, according to Ryle, because the separation was a category mistake: mind and body are divided more by their linguistic definitions than by any real qualities. Instead, Ryle proposed that all actions and behavior are physical in nature, and that propensities and dispositions can be explained entirely by the behavioral actions of an individual seeking or avoiding the stimuli involved.

For example, there would be no mind/body distinction between wanting breakfast and cooking breakfast. For Ryle, there was nothing in some immaterial mental state that spurred the cooking of breakfast. To speak of, or imagine, that cause-effect relation leads to a fundamental misunderstanding of the event and behaviors themselves. If we tried to study the event using those terms, we would have to chase down the immaterial mental state, or presume it existed outside of physical or observable evidence, as part of our study. This can lead to circular reasoning very quickly. Chasing this “ghost in the machine” bears no scientific fruit.

Ryle did not preclude physical processes of behavior and action which cannot be seen (which he called propensities and dispositions), but he held that these do not reside in some immaterial state, and that they can be discovered through observable behavioral action. This shares some similarity with the “private events” of B.F. Skinner’s radical behaviorism, but it does not go as far into the speculation on functional and environmental relations that Skinner did. Ryle did use a behaviorist theory of mind, but one focused on the language of behavioral processes.

It is fair to note, however, that Ryle’s work has received criticism, because language restricted to observable action may be too limiting. Critics have often pointed out that there may be a larger gap between internal or “mental” states and verifiable behavioral actions. Most people can imagine a situation where someone is happy while showing no outward “behavioral actions” or signs of happiness. The reverse is also true: movie actors, for example, act in ways that do not accurately represent their “mental state”. An actor’s portrayal, in some cases, does not reflect the actual propensities and dispositions that Ryle would infer from their behavior using his methodology. It is historically more accurate to say that Gilbert Ryle’s work (especially “The Concept of Mind”) greatly influenced how behaviorists treat the mind/body distinction, dualism, and mentalistic language in their scientific writing, but that Ryle’s theories and positions were influential in part, not as a whole.

The Vienna Circle and Logical Positivism

Where might the “Logical” part of “Logical Behaviorism” come from? Why would it be called that? The answer lies in an earlier philosophical endeavor called logical positivism, developed by a group of early 20th-century philosophers known as the Vienna Circle. The connection between the philosophy of logical positivism and behaviorism is that behaviorists seek a framework of language that can accurately reflect the observable facts of behavior. Without such a framework, misconceptions, circular reasoning, and arguments about the linguistic minutiae of the scientific literature could bog down the whole study of behavioral psychology as a natural science. Dependable language leads to fewer misunderstandings in the scientific literature, and potentially better replication of what is being tested and studied.

Logical positivists, and the later logical behaviorists, wanted linguistic precision in the study of psychology and behavior. A precise language could lead to better verification of observable events. The philosophers of the Vienna Circle called this the “principle of verifiability”: there should be no statements in the literature that cannot be verified empirically, or that are not at the very least capable of verification at a future date. Certain statements cannot be verified immediately but allow, in their wording, a means to verify them later. For example: “Next Tuesday it is going to rain.” This statement cannot be verified right now, but it does allow for verification. This was important to the logical positivist philosophers, and later to the logical behaviorists. It is a staple of most empirical behavioral research, and is often taught as a maxim without explanation, but it was not always the case. Without the “principle of verifiability”, any unverifiable statement could be used as a premise with impunity. Unverifiable statements (mentalistic or substance-dualistic statements, for example) cannot be disproven objectively, because they allow no empirical way to do so. To the logical behaviorists, such statements are hardly helpful to scientific literature.

The philosophers, scientists, and mathematicians of the Vienna Circle, chiefly Rudolf Carnap, Moritz Schlick, Herbert Feigl, Felix Kaufmann, and A. J. Ayer, developed this form of analysis using the “principle of verifiability”, drawing heavily on earlier philosophers like Ludwig Wittgenstein, to design a way for statements to be analyzed. You likely see these types of statements in empirical and scientific research all the time without realizing it. The early logical positivists differentiated between what they called “analytical statements” and “synthetic statements”. Analytical statements are true simply because their truth follows logically from their meaning.

Example (Analytical Statement): All circles are round.

Of course they are.

Synthetic statements, on the other hand, require some empirical verification in order to be confirmed or proven true. Under the “principle of verifiability”, these are the statements that can be verified.

Example (Synthetic Statement): “This cat has gray fur and is wearing clothing.”

Let’s take a look.

(Pictured: Mr. Darcy)

Well, look at that. We can verify this statement with observation.

Both kinds of statement are important to distinguish, but they do not hold equal weight within logical positivism and logical behaviorism. To the philosophers of the Vienna Circle, and most logical positivists, synthetic statements are what matter first and foremost. Synthetic statements make claims about reality which can be tested, and that is incredibly important when it comes to the natural sciences. Analytical statements are more trivial, to logical positivists, because they bring no new information. Logical behaviorism shared the logical positivist belief that propositions and statements should be capable of scientific verification in order to be scientifically useful.

Logical behaviorism takes from this its focus on synthetic statements about behavior: statements that are observable and measurable. Even when dealing with “mental concepts” or “private events”, the importance lies in using language to create propositions that can be verified.

To Sum It Up: Logical Behaviorism Is About Language

To a logical behaviorist, concepts like the mind, thoughts, feelings, and imagination must all be described in ways that have an observable or verifiable attribute in order to be scientifically useful. Logical behaviorism developed in a time of strong mentalistic terminology, when circular reasoning about behavior was common, human action as a whole was sometimes treated as indescribable, and the mind was in some ways treated as untouchable by science. To the logical behaviorist, the semantics, or language, of what we study and talk about when we try to describe behavior, even internal processes, must in some way be verifiable, or objective, in order to be useful in a scientific sense.

How we state things, and how we propose things, is important. Being too loose with language in this area invites misunderstandings, as Gilbert Ryle pointed out, or produces statements that cannot later be verified, as the logical positivists warned. Making statements that are observable, measurable, and verifiable is what many logical behaviorists believed would bring psychology, and the new branch of behaviorism, closer to the goal of being a natural science.

I hope you enjoyed this brief look at the history and reasoning behind logical behaviorism and its many influences. This is by no means an exhaustive dig into this rich topic, but a broad touch on the very complex psychological and philosophical roots which came together in what we know about behaviorism and psychology, and in how logical behaviorism still shines through these many decades later.

Comments? Questions? Thoughts? Leave them below! Don’t forget to follow!


References:

Clark L. Hull. (2019, February 21). Retrieved from https://en.wikipedia.org/wiki/Clark_L._Hull

Fancher, R. E., & Rutherford, A. (2017). Pioneers of psychology. New York, NY: Norton & Company.

Hull, C. L. (1964). Principles of behavior. New York.

Ozmon, H. (2012).  Philosophical foundations of education. Upper Saddle River, NJ: Pearson.

Ryle, G. (1949). The concept of mind. New York: Barnes & Noble.

Skinner, B. F. (2015). Verbal behavior. Mansfield Centre, CT: Martino Publishing.

The new encyclopaedia Britannica. (1977). Chicago, IL: Encyclopaedia Britannica.


Image Credits:
Artwork, and photography are originals by the author Christian Sawyer.

Token Economies: What Behavioral Science and Blockchain Technology Have In Common

“Token Economies”- two words springing up at Blockchain and Cryptocurrency summits and conferences with increasing regularity. Token economies have been used by behavioral scientists and practitioners for decades, but recently the term has taken off in the field of blockchain and cryptocurrency technologies. Both fields use the term “token economy” to mean much the same thing; in technology conferences and summits, it is the original behavioral psychology definition that is used to describe the concept. The tech field has taken the original token economy concept and expanded it to apply to what some might call the future of commerce and currency. Exciting stuff. Here, I will break down the basic concepts of what a token economy is, how behavioral scientists and analysts use them, and the new application in technology by blockchain and cryptocurrency developers.


The Token Economy

Let’s break it all down. What is a token economy? A token economy is a system where tokens, or symbols, serve as conditioned reinforcers which can be traded in for a variety of other reinforcers later. It is not a bartering or prize system where objects, access, or services are given directly following a target behavior; rather, a conditioned stimulus (the token), without necessarily any intrinsic value, is agreed upon to add up toward the exchange or purchase of another reinforcing item. A common example most of us are used to is money. Paper money, specifically, can be considered part of a token economy in that it is “traded in” toward some terminal reinforcing stimulus (or “backup reinforcer”, as it is called in behavior analysis). The paper money is a conditioned reinforcer: it has no necessary intrinsic value, but it has conditioned value for what it can eventually be used for within the token economy.

This was taken up originally by behavioral researchers in the 1960s, as a form of contingency management for the reinforcement of “target behaviors”, or prosocial learning, in therapy situations. Reinforcers are important psychologically because, by definition, reinforcers increase the rates of the behavior they follow. They can help teach life-changing skills, or alternatives to destructive or undesirable behavior, quickly. But reinforcers can be tricky too. People can become bored with or satiated on tangible rewards, such as food; within a token economy, however, reinforcement can be delivered in the form of tokens, allowing a later exchange or choice among any number of possibilities desirable to that individual. By pairing these tokens with access to “primary reinforcers” (reinforcers that are biologically important) or other “secondary reinforcers” (stimuli that have learned value), the tokens themselves become rewarding and reinforcing- thereby creating a sustainable system of reinforcement that defies the satiation and boredom variables that researchers originally found to be barriers to progress. Alan Kazdin’s work “The Token Economy” is a fantastic resource on the origins and research that began it all.
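To make those mechanics concrete, here is a minimal sketch of a token economy in Python. Everything in it- the behaviors, the token “prices”, the backup reinforcers- is a hypothetical illustration, not a clinical protocol:

```python
# A minimal sketch of a token economy: tokens are delivered contingent
# on a target behavior and later exchanged for backup reinforcers.
# All names and "prices" below are hypothetical, for illustration only.

class TokenEconomy:
    def __init__(self, backup_reinforcers):
        # Maps each backup reinforcer to its token price.
        self.backup_reinforcers = backup_reinforcers
        self.balance = 0

    def deliver_token(self, target_behavior_observed):
        # Tokens are delivered only when the target behavior occurs.
        if target_behavior_observed:
            self.balance += 1

    def exchange(self, item):
        # The token has no intrinsic value; its value comes entirely
        # from what it can be traded for later.
        price = self.backup_reinforcers[item]
        if self.balance >= price:
            self.balance -= price
            return item
        return None  # not enough tokens yet

# Usage: earn three tokens, then trade them in.
economy = TokenEconomy({"extra recess": 3, "sticker": 1})
for _ in range(3):
    economy.deliver_token(target_behavior_observed=True)
print(economy.exchange("extra recess"))  # -> extra recess
```

Notice that the token itself is just a counter; the reinforcing power lives in the exchange menu, which is exactly why a varied menu guards against satiation.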

What can a token be? Nearly anything. But it has to be agreed upon as a token (given some value for exchange) in order to serve as one for the purpose of trading it in, or buying with it. Giving someone a high five after a great job at work, for example, is not a token. It is a reward, and possibly a reinforcer, but it was not conditioned to have value, and it cannot be saved or exchanged. Tokens also need not be physical or tangible. They can be symbols, or recorded ledger entries, so long as that information can be used for exchange in the corresponding token economy. This is where blockchain and cryptocurrency technologies tie in to the original behavioral science understanding of a token economy. Can data, or information, serve as a token in a token economy if it is agreed to have value and be worth exchanging? If you haven’t heard of Bitcoin (a cryptocurrency), the answer is yes.


Blockchains and Cryptocurrencies

What is blockchain, then? And what is a cryptocurrency? Using our original definitions of tokens and token economies, for data or information to be considered tokens, they have to be exchangeable and hold value that can be traded within the token economy. Blockchain technology solves this by creating units of data called “blocks”. These blocks, simply put, form a growing list of data records in which each block contains a “cryptographic hash” of the previous block. The linked blocks form a ledger which is resistant to duplication and tampering. In layman’s terms, unlike most data that people come into contact with and can manipulate day to day, a “block” within a blockchain cannot be altered or copied, and it maintains a faithful record of time and transactions. Resistance to copying/duplication means it cannot be forged, and resistance to alteration means the data (the record of information) can be treated as reliable. If we create a currency using this technology, we have the means to create units, or tokens, that are individual, can be traded, and carry a consistent and (for the purposes of this introduction) unalterable record of transaction. Assigning value to this creates a digital currency: cryptocurrency. Tokens. Transactions can take place using these blockchains, person to person (“peer to peer”, or P2P), meaning that once a unit of cryptocurrency passes from one person to another, the exchange closely resembles the physical exchange of any other currency. It does not require an intermediary, such as a bank, the way physical currency does in online banking, for example.
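To make the “hash of the previous block” idea concrete, here is a toy hash-linked ledger in Python- a deliberately stripped-down sketch, not how any production blockchain actually works (real systems add consensus, proof-of-work or proof-of-stake, and peer-to-peer distribution on top of this structure):

```python
import hashlib
import json
import time

# A toy hash-linked ledger. Each block stores the hash of the previous
# block, so altering any earlier record changes every hash after it,
# which makes tampering detectable.

def make_block(transactions, prev_hash):
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    serialized = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(serialized).hexdigest()
    return block

genesis = make_block(["Alice pays Bob 1 token"], prev_hash="0" * 64)
second = make_block(["Bob pays Carol 1 token"], genesis["hash"])

# Tampering with the first block breaks its link to the second.
genesis["transactions"] = ["Alice pays Bob 100 tokens"]
tampered = json.dumps(
    {k: genesis[k] for k in ("timestamp", "transactions", "prev_hash")},
    sort_keys=True,
).encode()
print(hashlib.sha256(tampered).hexdigest() == second["prev_hash"])  # False
```

The False at the end is the whole point: once block two has recorded block one’s hash, block one cannot be quietly rewritten.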

Blockchain and cryptocurrency developers, then, are looking to create a form of token currency that can be traded within this broader token economy- one that is reliable enough to be used by enough people to catch on and become commercially viable, while still maintaining the benefits of a cryptocurrency (security, privacy, etc.) over traditional currency. These cryptocurrencies, these units of data, these blocks, have no intrinsic value themselves. They are tokens in the very real sense that the original behavioral research intended, and their usage and effects appear to follow in the same vein. Currency can be reinforcing, reinforcement can alter behavior, and once a token takes on value through the conditioning process, it can become truly valuable in its own right as a “generalized reinforcer”- a reinforcer that is backed up by many other types of reinforcers. A dollar, for example, as a widely used currency, can be exchanged for a nearly countless number of goods, services, and transactions. This makes it a good generalized reinforcer. The more a token can be traded for, the better a generalized reinforcer it becomes.

Will a form of cryptocurrency, like Bitcoin, gain the same traction as a currency, or token, for accessing other reinforcers in trade? Many people say yes. That’s where both behavioral scientists and blockchain developers can find excitement in each new development and innovation.

Likes? Comments? Questions? Did I get it wrong? Leave your comment below!

References:

  1. Kazdin, A. E. (1977). The token economy: A review and evaluation. New York, NY: Springer. doi:10.1007/978-1-4613-4121-5
  2. Blockchain. (2019, January 13). Retrieved from https://en.wikipedia.org/wiki/Blockchain
  3. Cooper, J. O., Heron, T. E., & Heward, W. L. (2018). Applied behavior analysis. Place of publication not identified: Pearson.
  4. What is Simple Token (OST) [Audio blog post]. (2018, August 22). OST Live Podcast.

Image Credits:

http://www.imgflip.com

http://www.smilemakers.com

Did Cognitivism Beat Behaviorism?


Some hold firm to the idea that the division between behaviorism and cognitivism is a vast divide, with a winning theory and a losing theory. You’ll hear them- “Behaviorism died decades ago!” and “Thoughts about thoughts? That’s just unprovable mentalism!”- shouted by entrenched believers until they are blue in the face. There may be some salient historical details that explain why they feel that way: behaviorism (arguably) replaced many of the mentalistic and introspective psychological methods well into the 20th century, and some would say the behaviorist movement was halted by Chomsky’s rebuttal of B.F. Skinner’s “Verbal Behavior” and the rise of the 1960s “cognitive revolution”. The deep division could be argued as unbridgeable. As someone who was not practicing when these contrasting theories came to a head, I always wondered what it was like. Did everyone see it as a giant butting of heads? Did all the researchers and scientists find themselves marked on either side? Are the loud entrenched voices of today just echoes of a past that hasn’t been resolved? If so, how did cognitive behavioral therapies do so well blending the two perspectives? There had to be more than just a line in the sand. Enter Terry L. Smith and his book “Behavior and Its Causes”, relating the exact sentiment I was so curious about.

“I had (just like everyone else) read Kuhn (1970), and so almost reflexively interpreted cognitive psychology and behavioral psychology as competing paradigms (see Leahey, 1992, for a discussion of how common, and mistaken, this interpretation is). Cognitive psychology was clearly on the rise, so I inferred that the Skinnerian program must be on the decline. Indeed, I thought it must have disappeared by now… What I discovered was that during the 1960’s, the Skinnerian program had actually grown at an accelerating rate. This baffled me. How could operant psychology have survived and even prospered in the midst of “the cognitive revolution”?”

-Smith (2011).

How could that be? Terry L. Smith’s book explores this topic, speculates on some great points, and comes to several strong conclusions. I won’t spoil it for you aside from one: “operant psychology”, as Smith calls it, separated itself from being tied down to every philosophical tenet of Radical Behaviorism. It was Radical Behaviorism, in Smith’s view, that had taken the beating, because it was too rigid about what it would allow to be studied and cut too much out of what could be considered the study of behavior. This was a fascinating point to me, since I had already studied what B.F. Skinner had done with Radical Behaviorism to broaden it beyond Methodological Behaviorism (i.e., private events). We’ve heard this one before, right?

“Radical Behaviorism does not insist upon truth by agreement and can therefore consider events taking place in the private world within the skin. It does not call these events unobservable”- Skinner, 1974

This was one of the larger distinctions B.F. Skinner made from Watson’s methodological approach, which was strictly focused on observable stimuli and responses. If we take Smith’s interpretation of what “operant psychology” is today, it goes even further from radical behaviorism by cutting the divide and seeing itself within the broader breadth of psychology as a whole. This rings true for me when I speak to behaviorists and practitioners in the field: there is still that aversion to “mentalism”, but the observational thrust that comes from Watson’s strict view is now mainly practical- data collection is best done when people can see and define what they track. The behaviorist tradition lives on in the practice of Applied Behavior Analysis, for example, but Skinner’s written word is not taken as biblical truth; the components of the philosophy and science that propelled behavioral psychology forward continue to be empirically validated. They are scientific findings. The ones that work and do the most good remain.

This is Smith’s main point about “operant psychology” during the “cognitive revolution”: it continued on, stronger than before, on its own steam, because the findings were strong and reproducible. While Chomsky and other cognitivists had made some compelling points about the limitations of Radical Behaviorism as an idea and philosophy, they did not undercut the behavioral science as a whole. The practices, techniques, and ideas of both Methodological and Radical Behaviorism that came through in the empirical work remained. The broader-reaching parts of the philosophy that set limits on the science with no empirical backing? Not so much.

Keep in mind that by the start of the “cognitive revolution” in the 1960s, research in brain mapping and neurobiology had come a long way from the days when Watson, Pavlov, Thorndike, and Skinner began their work. Behavioral theory had run strong through the beginning of the 20th century, and was now met with convergent findings. Both had their uses, and their theories overlapped more than they refuted one another. Internal processes were becoming more understandable through biological discoveries, which some strict behaviorists may have misinterpreted as just another form of mentalism. That’s a hang-up that did not help them. On the other hand, some cognitivists still thought all of behaviorism compared humanity to basic stimulus-response (S-R) machines. Another misunderstanding, another hang-up. My interpretation is that people fought over those illusory extremes. Those were the voices that screamed the loudest but were, at the same time, the most misguided about what was actually happening. I equate this to the kind of thing we see on the internet- “strawman arguments”, where someone constructs an exaggerated facsimile of their opponent’s ideas and tears that down rather than confronting what was actually said. It creates an easy target, but it does not represent reality. Strict behaviorists get some things right. Strict cognitivists get some things right. Sometimes… just sometimes… both groups get things wrong too! Surprising, right? That is how anything based in theory and following the scientific method actually works.

Maybe Terry L. Smith is on to something. Maybe we consider ourselves all a part of Psychology with a capital P, and put our findings and theories out there. The right ones that can empirically and reliably help people will be the legacy.

To be fair, though, I am not completely in the objective, virtuous middle: I’ve read Noam Chomsky’s review of Verbal Behavior and believe he missed the point.

Thoughts? Likes? Comments? Questions? Leave them below.

References:

Chomsky, N. (n.d.). A review of B. F. Skinner’s Verbal Behavior. The Language and Thought Series. doi:10.4159/harvard.9780674594623.c6

Skinner, B. F. (1957). Science and human behavior. Riverside: Free Press.

Smith, T. L. (2011). Behavior and its causes: Philosophical foundations of operant psychology. Dordrecht: Springer.
Photo Credits: http://www.pexels.com

Happy ABA Halloween!


Halloween is coming up soon, and as a treat, I’ve created some silly and fun ABA-style printouts. UPDATE: For the 2019 Halloween holiday, all-new printouts will be added as we get closer to the holiday!

  1. Spooky IOA Data!

Link to the full printout here: https://behavioralinquirydotcom.files.wordpress.com/2018/10/abahalloween12.pdf

2. The Horror of Subjective ABC Data!

Link to the full printout here: ABAHalloween2

3. The Terror of Incomplete Data!

Link to the full printout here: https://behavioralinquirydotcom.files.wordpress.com/2018/10/abahalloween31.pdf

4.  The Dread of Corrupted and Lost Graphed Data!

Link to the full printout here: https://behavioralinquirydotcom.files.wordpress.com/2018/10/abahalloween41.pdf

5.  The Sheer Fright of Finding Ineffective and Non-Student-Centered Goals!!

Link to the full printout here: https://behavioralinquirydotcom.files.wordpress.com/2018/10/abahalloween5.pdf

6. The Shrieking Terror of Unnecessary Most to Least Prompting!!

Link to the full printout here: ABAHalloween6

7. The Dread of Pseudoscience for “Behaviors”!

Link to the full printout here: ABAHalloween7


8. The Panic of Misused Terms

Link to the full printout here: ABAHalloween8

OH NO! I hope I didn’t scare you too badly.

Have some candy and remember how safe and relevant all your data and interventions are… Whew.

Like them? Take them! No fee, but please be kind with artistic credit.

Why we don’t always prompt: Behavior Analysis meets Vygotsky.


In the early 20th century, a developmental psychologist named Lev Vygotsky was working on theories of learning and development in parallel to many of the behaviorist traditions. If you were to ask a graduate student taking behavior-analytic courses who Vygotsky was, they would most likely shrug their shoulders and wonder why that matters. He isn’t Watson. He isn’t Pavlov. He isn’t Thorndike. He isn’t Skinner. He isn’t Lindsley. So why would a behaviorist ever care? Because his work ties in so closely with the behaviorist tradition that in some cases you could use his terminology and frameworks interchangeably and still see the same results. His work can help clarify why we, as behavior analysts, trainers, educators, and even parents, should not prompt every single time we see a child begin to struggle with an endeavor or task.

To an educator or professional following the behaviorist tradition, it’s not all that hard to describe. Prompts help the learner reach a reinforcement threshold that their response likely could not have reached on its own. Shaping describes a process by which an emergent behavior, similar in some way to a target behavior, is reinforced through successive approximations until it becomes the terminal target behavior. Basically, it’s taking an “okay” behavior attempt and rewarding the attempts that look closer to improvement until the behavior is “perfected” enough to reach more naturalistic reinforcement in the broader environment. To a behaviorist, that means looking at what the learner has in their repertoire, what they can do right now, and planning to reward the responses that improve toward some end-goal response. But wait- how exactly do we know when to intervene? And why don’t we intervene every time we see the learner encounter difficulty?

The trouble is that sometimes a learner does not actually learn from being prompted too much. Sometimes the reinforcement only contacts the amount of effort the learner expends to receive prompting. Sometimes they become dependent on those prompts, and then it is the educator doing the behavior and the learner receiving the reinforcement. They don’t improve because they have no need to improve: they get the prize every time their educator does it for them. The behavior that the educator prompts might never transfer through modeling. Why should it, if the reinforcer comes anyway? This is where Vygotsky comes in. Vygotsky believed that there is a Zone of Proximal Development.

Lev Vygotsky was not a behaviorist. In many ways, he was against the methodological behaviorism popular at the time, which focused on purely observable stimulus-response relationships. Vygotsky also believed that learning drew not just from a present environment of contingencies, but from a broader wealth of cultural and societal forces that accumulate through generations and have impacts not directly related to the behaviors at hand. However, when it comes to the Zone of Proximal Development, his theories coincide with what behaviorists would conceptualize as repertoires and the thresholds at which prompting is warranted. Vygotsky believed there was a level at which a learner could successfully accomplish tasks without assistance, and a level at the other end of their developmental range that they could not reach without considerable help in the form of prompting. Between the two, however, was a zone where a learner could accomplish tasks with some collaboration and prompting, and eventually surpass them to a level of independence. The zone differs in many ways from individual to individual, but within that zone of proximal development, prompting (or collaboration, as he called it) is at its most effective.

Think of it like this:

Zone of the learner’s “actual” development: these are responses that the learner can perform, and tasks that the learner can complete, without any assistance from others.
*Behaviorist footnote: think of these as the responses already in the learner’s repertoire. These are “easy”.

Zone of Proximal Development: these are tasks and responses that the learner can accomplish with the assistance and prompting of others.
*Behaviorist footnote: think of this as the area of “shapable” responses that are likely to lead to independent future responses. Vygotsky called this “scaffolding”, but the process of “shaping” is synonymous.

The limit of their current developmental ability: these are tasks and responses that are beyond the learner’s ability to accomplish and can only be produced with considerable support and assistance.
*Behaviorist footnote: the learner can be prompted through these tasks, but is unlikely to be able to reproduce them, even with shaping procedures, at this time.
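Translated into a behaviorist’s decision about when to prompt, a toy sketch might look like this (the numeric “difficulty” scores and ceilings are hypothetical placeholders; real practice relies on skill assessment, not a single number):

```python
# A toy sketch of Vygotsky's zones in behaviorist terms. The numeric
# difficulty scores and ceilings are hypothetical placeholders.

def zone_for(task_difficulty, independent_ceiling, assisted_ceiling):
    if task_difficulty <= independent_ceiling:
        # Already in the repertoire: prompting here risks prompt dependence.
        return "independent- do not prompt"
    if task_difficulty <= assisted_ceiling:
        # Zone of Proximal Development: prompt, shape, then fade.
        return "ZPD- prompt and fade"
    # Beyond current ability even with help: teach prerequisites first.
    return "beyond reach- build foundational responses first"

# Usage: a learner who manages difficulty-3 tasks alone and
# difficulty-6 tasks with help.
for difficulty in (2, 5, 8):
    print(difficulty, "->", zone_for(difficulty, 3, 6))
```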

This framework delineates an interesting range where a learner needs, and could use, the help of an educator to prompt them- and where they do not. In the initial range, prompting is unnecessary and might actually hinder the learner from engaging in those responses in their most independent forms. Learners who can engage in the “easy” responses and find reinforcement in the broader environment would be more likely to produce them in the future; prompting too much here could stifle that. In the next range, the Zone of Proximal Development, as Vygotsky calls it, prompting can actually be of the most use! These are responses that are viable for occurring and reaching natural reinforcement; they just need a little help at first to get there. Here, prompting in the form of modeling or shaping can help the learner take their initial responses and bring them to their terminal and most effective independent forms. This is the exciting part. This zone is where the work put in by the educator can meet its maximum return for the learner. Now, we have to be careful not to reach for the moon here. The final zone is where, even with prompting, the learner is unlikely to shape their responses successfully. This, for example, is trying to teach a learner to run before they can walk. They need those foundational responses before they can even be prompted toward a more advanced terminal response. An educator who comes across this scenario would be wise to dial the expectations back.

Between those two ranges of “easy” and “unlikely”, we find the responses that can be prompted for the most good. We would not prompt too much and stifle the learner’s ability to contact reinforcement on their own, nor would we fail to prompt at all and miss those responses or behaviors that just need a little push. This is where a behaviorist, teacher, educator, or even parent can take a thing or two from Vygotsky’s work. And if you’re a tried-and-true behaviorist who can’t believe a cognitivist would be mentioned here, I’d suggest an open mind. You might even be surprised by the similarities between Vygotsky and Skinner on private events and “inner speech”. We can touch on that later, but for now, think about the zone of proximal development in your life and practice: what could use a little help?

Likes? Comments? Questions? Leave them all below!

References:

Burkholder, E. O., & Peláez, M. (2000). A behavioral interpretation of Vygotsky’s theory of thought, language, and culture. Behavioral Development Bulletin, 9(1), 7-9.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2018). Applied behavior analysis. S.l.: Pearson.
Ormrod, J. E. (2019). Human learning. S.l.: Pearson.

A Behaviorist’s Take on Far Cry 5


Forewarning to the regular readers: I’m talking about video games today. In particular, a fantastic action-adventure game I was turned on to by friends, Far Cry 5. That’s not the entire truth- I’ve played the predecessors too- but this one stands out to me narratively because its story is based around social control. As a Board Certified Behavior Analyst, I’m drawn to these things. Imagine a world not so different from ours, where a doomsday religious cult takes control of part of Montana and spreads a violent vision across the state, corrupting the citizens into a new lifestyle of brutalization and indoctrination. That calls for a hero, right? That’s the game. The thing that makes this interesting to a behaviorist is how the game uses those social forces to create fictional forms of coercion that in many ways match the existing psychological science of conditioning. I like this game. It’s complex, it’s fun, and I’m going to be testing myself in its new Infamous difficulty mode over the next two weeks and during Extra Life, to rack up some more donations for the local Children’s Miracle Network hospital near me (link below). I’ll also try to keep spoilers beyond the psychological methodology to a minimum. Let’s get on to the psychology.

In the game, there are several bosses who control sections of the map. Each of them represents a different form of that control. Spoiler alert- but honestly, no large reveals here. Joseph Seed is the big boss. He’s a sort of preacher, borrowing from several religious traditions to deliver his idea of a “collapse” of society and a vision for a simpler future. He relies on group/mob mentality, social reinforcement (a semi-Bandura style of vicarious punishment), and a form of authority that borrows from his own charisma and the religious texts he cites. Not too out of the ordinary. His doomsday cult also employs sub-bosses. John, a former lawyer, is obsessed with having his devotees say YES, and uses similar group and social coercion. Faith uses a toxic mix of drugs called Bliss to create hallucinogen-induced indoctrination. Believable, to a degree. Then there’s my favorite, and the reason for this post: Jacob. Jacob is a little different. He’s said to have a soldier’s background, but he uses a method of conditioning, which he refers to as basic classical conditioning, with some substance (drug) related assistance. This puts his subjects into murderous rages/trances when he plays the song “Only You” by The Platters. He tries to make his method sound simple. He tries to make you believe it’s just simple stimulus pairing through classical conditioning.

Jacob does abhorrent experiments with these methods on both animals and humans, causing devastation and treachery across that part of the story. It’s very tragic. The thing is… he’s not just using classical conditioning. A conditioned stimulus with a conditioned response? Not quite. There’s more to it. He tries to explain his method several times, and even uses the standard definition of classical conditioning to describe how he creates these diabolical effects, but when we look at the practice, there’s a sinister amount of complexity that he leaves out. The fictional boss Jacob might think it’s simply food deprivation, a song, and practice in his chairs/training chambers that do it, but he’s selling himself short. He’s actually using both classical conditioning and operant conditioning. That fiend.


Jacob’s Classical Conditioning

It might surprise you, but Jacob didn’t invent this form of conditioning. It actually has its origins with a researcher named Ivan Pavlov (and also Edward Thorndike) and the well-known experiments with bells and salivation, where a neutral stimulus is paired with an unconditioned stimulus that elicits an unconditioned response. Basic stimulus-response psychology. Now, in the fictional world of Far Cry 5, the bad guy Jacob references these things, and even Pavlov (“Pavlovian”), once or twice. I think, narratively, it makes sense. He’s training killers. He sees his conditioned stimulus (a song) and their response (murderous rages) as synonymous with that process. Except… when we look at the training, it’s not that clean. There are parts that follow this method: mainly, he is engaging in a stimulus-pairing procedure that works a learned behavior change in the individual. The environmental event (or stimulus) precedes the response he is looking for. That makes sense too. Even the cutscenes play out the process correctly! We assume the originally neutral stimulus, “Only You” by The Platters, does not lead an ordinary person into murderous rages. He needs to make that connection happen in his victims by pairing stimuli. Jacob pairs the neutral stimulus with an unconditioned stimulus (threat, through some form of hallucinogenic and visual process) that elicits an unconditioned response (attack). Then, following this pairing, he presents the newly conditioned stimulus (the song “Only You”) to elicit the newly conditioned response (attack). Makes sense, right? Somewhat. But look at the training methods a little deeper and we get some complexity. He has the stimuli he wants available. He has the song. He has the wolf pictures, and the predatory images of wolves killing deer, but he also adds something else in… reinforcement and punishment during his trials.
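As a toy model of that pairing process- with made-up numbers, only loosely in the spirit of simple associative-learning models, and ignoring the timing, salience, and deprivation variables that real conditioning depends on- repeated song-threat pairings might be sketched like this:

```python
# A toy simulation of classical conditioning: a neutral stimulus (the
# song) is repeatedly paired with an unconditioned stimulus (threat),
# and its associative strength grows until the song alone elicits the
# response. Learning rate and threshold are hypothetical values.

def condition(pairings, learning_rate=0.3, max_strength=1.0):
    strength = 0.0  # associative strength of the song (CS)
    for _ in range(pairings):
        # Each pairing closes part of the remaining gap, giving the
        # classic negatively accelerating learning curve.
        strength += learning_rate * (max_strength - strength)
    return strength

THRESHOLD = 0.8  # response elicited once strength crosses this line

for n in (1, 5, 10):
    s = condition(n)
    print(f"{n:2d} pairings -> strength {s:.2f}, song elicits attack: {s >= THRESHOLD}")
```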


Operant Conditioning through Discrete Trial Training (DTT)

The reason I like the Jacob missions so much is that they use real-world conditioning methods; they just undersell them a little. Jacob, the big bad guy I hated through two playthroughs of this game, uses both classical conditioning and operant conditioning to make his process work. Also some fictional drugs and hallucinogens, but let’s focus on what we know. Operant conditioning differs from classical conditioning (or “Pavlovian conditioning”) in one major way: it focuses on the subject responding in a specific way, with the response followed by a reinforcer, in order to increase the frequency of that behavior or shape it toward a targeted goal. When someone mentions B.F. Skinner, or Skinner boxes, this is the type of conditioning they are talking about. Again, MINOR SPOILERS. Jacob does this to our character the first time he catches us. It’s not just the classical conditioning of the song to the natural response of attacking when threatened. He trains our character to make that stimulus-response relationship stronger, and to introduce faster and more vicious shaped behaviors into the character’s repertoire. It’s tragic. It’s sad. But his method is theoretically sound. You see, he uses what we behaviorists call discrete trials. The situation for each trial is exact. The discriminative stimulus (SD) that sets it off is the same each time. Here is where the operant part comes in. The character is tasked with eliminating all enemies using the provided weapons, within an interval time frame, to complete the task and receive reinforcement for the chained behaviors. This follows the three-term contingency known as A-B-C: Antecedent, Behavior, Consequence. Let’s break it down.

(ANTECEDENT) aka Discriminative Stimulus- “Only You” Song, and visual presentation of threat-related stimuli.

(RESPONSE) – Eliminating targets.

(CONSEQUENCE)- Added time to the interval, allowing more time to complete the task for further reinforcement, plus verbal praise from Jacob in the form of “Good”, “Cull the weak”, etc. This is reinforcement.

Or… (CONSEQUENCE)- in the form of punishment. Fail to complete the task, by either being killed by enemies or exceeding the time interval, and you meet the punishment contingencies: starting over from the beginning, and verbal reprimands in the form of “No”, “You are weak”, “You are not a soldier”, etc.
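Laid out schematically, one of Jacob’s trials follows the A-B-C loop below. To be clear, this is not the game’s actual code; the names, probabilities, and consequence values are illustrative assumptions only:

```python
import random

# A schematic of one discrete trial under the three-term contingency
# (Antecedent -> Behavior -> Consequence). Purely illustrative.

def run_trial(skill=0.5):
    # ANTECEDENT: present the discriminative stimulus (SD).
    print("SD: 'Only You' plays; threat imagery is shown")

    # BEHAVIOR: the trainee responds (success is stochastic here).
    succeeded = random.random() < skill

    # CONSEQUENCE: reinforcement for success, punishment for failure.
    if succeeded:
        return {"time_bonus_seconds": 15, "feedback": "Good."}
    return {"restart_trial": True, "feedback": "You are weak."}

# Repeated trials, with criteria tightened over time, are how the
# repertoire gets shaped.
for trial in range(3):
    print(trial, run_trial())
```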

In other words, Jacob is shaping repertoires. He’s not just pairing stimuli. He is creating a series of trained responses- operants, if you will- emitted in the presence of his conditioned stimuli and completed in a way that he controls. These are the fundamental ingredients of all learning, but he has twisted them a little to make this heroic character fall into a trap of uncontrollable lapses in judgment, responding in cruel ways that are uncharacteristic of, or were never a part of, the character from the start. Chilling, right? But like a rat in a maze, or a box, the character must follow these contingencies in order to progress. Press the lever, get the cheese. Shoot the opponents, get the praise and progress.


Meta Game Talk: Conditioning The Players

Let’s talk a little about the big picture here. Yes, Jacob is fictional. Yes, the heroic character is fictional too. But when we look at the game through the lens of how it works on player reinforcement and punishment, we can actually see ourselves inside this box. We too are conditioned, if we choose to play the game and continue to play it, in a way that shapes and sharpens our behavioral repertoires. The same discrete trial training that Jacob puts our character through, we also participate in, contacting the same reinforcement and punishment as though it were our own (broadly speaking). We want to succeed. We want to continue. We want to win.

So, we get faster. We get more accurate. We learn the patterns. “This is why we train,” as Jacob has said so many times during these repeated trials- each time giving us a little more of a challenge, each time progressing us with different response repertoires to enact on the challenges in our way. It’s fun. In some ways, it’s a representation of the game as a whole. There are many reinforcers out there to get. Many contingencies to engage with. Even multiple endings (that’s the part that got me playing it twice).

I learned to shoot through both enemies in the revolver scene from the left. I learned to take the submachine gun in the next room and work from low to high, right, center, to left. For the shotgun, I turned corners with two lefts and one right at head level and tapped at the first sign of movement. For the rifle, I stayed low and aimed in short bursts, leading a clear line through the middle, and for the LMG… well, let’s not give it all away just yet. Your repertoires need honing too, and there are many variations that work.

That’s the fun.


The Behaviorist’s Take:

5/5 Stars for me. This game has been a joy to relax with. It’s challenging, but can still be taken in small parts and missions as time allows. It’s not too much of a time sink for someone on a professional schedule, and the learning curve is gentle enough to handle with half an hour a day. The story is strong, and the emotional bond between the heroic character and the sympathetic (and often funny) people they meet is also a great time. They even let you make your own custom levels and challenges for your fellow players in an Arcade mode. I dig it.

As I mentioned above, this will be my game for the Extra Life 2018 Charity Event taking place the first week of November. I am, believe it or not, the weakest player on my team, but I love talking behaviorism and psychology and will be doing it all day to support the locals in Philadelphia, raising charity funds for the Children’s Hospital of Philadelphia (CHOP). I’m not only an outside fan of their great work with children; I often have direct contact with the children’s hospital in my day-to-day work with young populations, and I can’t speak highly enough about their commitment. Extra Life is a legitimate charity, and 100% of the funds go directly to the children’s hospital. I’m leaving my link below and would be overjoyed if readers could contribute in some part to my goal so I can hold my head up high this year. Any amount at all. I’ll be streaming and will be happy to respond to any comments. Have ideas that I missed? I love those. Send those too.

Extra Life Donation Link

Comments? Like? Questions? Leave them below!

References:

Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis. Upper Saddle River, NJ: Pearson/Merrill-Prentice Hall.

Far Cry 5 [Software]. (2018). Ubisoft Montreal, Ubisoft Toronto.

Image Credits:

Christian Sawyer, M.Ed., BCBA (original Photography/Screenshots)

Steam (http://www.steam.com) – Far Cry 5 logo

What Cats Taught Thorndike About Learning


If you’ve heard the name Edward Thorndike, you are probably aware of the influence this psychologist had on early behavioral science. He was the one who coined the term law of effect, a theoretical precursor to the process of reinforcement. Thorndike was interested in psychology as an observable natural science, which at the time flew in the face of introspective methods. His work inspired many of the ideas and theories of B.F. Skinner, and behaviorism as a whole, but what you might not know is that his big break came from what he learned from…cats.

People who are familiar with Thorndike’s law of effect are aware that his theory underwent several revisions, and his research came under criticism; but few would dispute that his dissertation on the associative processes of animals, and the puzzle box experiment, raised the right questions that would lead to many of the processes within operant conditioning that we see used today. Thorndike owes much of that to the cats he worked with during his animal research. Thorndike was interested in animal learning. Could animals learn? Were all their behaviors governed solely by reflex? If they could learn, what could they learn? Could they learn by observing others? To us modern readers, generally familiar with animal intelligence, this might seem like a no-brainer, but in the late 19th century, when Thorndike was doing his work, a sizable number of academics still held on to the old Cartesian view of animals as unconscious automatons. These cats, and Thorndike, would call that into question. The cats would demonstrate that they could not only learn, but overcome an obstacle that could not possibly be part of their reflex system: a puzzle box. Cats solving puzzles?! Thorndike must be mad! (I’m not entirely sure his critics would have been that dramatic, but skepticism was definitely there.)

His experiment was simple. Place hungry cats within a box that required a simple action to open, in order to access food outside of the box. The puzzle box itself had a door held shut by a weighted string, and that string was attached to a lever or switch; by operating these, the door would open. There were later experiments involving buttons which worked in a similar fashion, but the single, non-reflexive response was consistent. At first the cats wandered around the cage, meowing and circling, until they incidentally stepped on or pushed the lever, opened the door, and gained access to the food. This was not learning. This was incidentally triggering the device. BUT… when placed within the cage again, these cats were able to reduce their time wandering and meowing before they found the trigger and let themselves out. Thorndike tracked these times, noticing not only that the cats found their way out faster each time, but also the rate at which this learning took place. Thorndike constructed a learning curve. The cats struggled at first, but got faster with each new trial until their rates of responding became efficient enough to level off. Thorndike believed that even performing this type of learning required some intelligence intrinsic to the cats, obviously a kind of intelligence that did not rely on language or introspection.
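If you want a feel for the shape of Thorndike’s latency data, here is a minimal sketch in Python. The numbers are mine, not his measurements; the only claim it encodes is the curve he reported- long, noisy escape times early on that drop across trials and level off at a floor.

```python
import math
import random

# A toy model of escape latency across puzzle-box trials (illustrative only):
# expected latency decays toward a floor, with trial-to-trial noise on top.

def escape_latency(trial, start=160.0, floor=8.0, rate=0.35):
    """Hypothetical latency (seconds) on a given trial."""
    expected = floor + (start - floor) * math.exp(-rate * trial)
    return max(floor, random.gauss(expected, 0.15 * expected))

for trial in range(1, 13):
    seconds = escape_latency(trial)
    print(f"Trial {trial:2d}: {seconds:6.1f} s  " + "#" * int(seconds // 10))
```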

“From the lowest animals of which we can affirm intelligence up to man this type of intellect is found.”- Edward Thorndike

“Meow.”- Edward Thorndike’s Cat

Thorndike’s initial hypotheses were not always correct or confirmed, however. Learning through observation, for example, was something he could not capture with these puzzle box trials. During the initial trials, he was not able to observe a difference between the response rates of cats who learned through their own trial and error and cats who had observed others escaping by pressing the lever or switch. (Later studies with other animal subjects would, of course, demonstrate that learning through observation can in fact occur in certain animals.) He also believed there might be some level of insight from the cats which helped them learn these tasks, but that too was not confirmed by his initial experiment; the cats seemed to be gradual learners from experience. This type of learning, again, appeared not to rely on language or introspective thought. Thorndike noticed that when he first put cats inside the puzzle box, their behavior seemed “erratic” or “chaotic”, but after successive trials they became more focused on finding the trigger that opened the door and engaged in fewer responses which did not align with the task. The cats were no longer circling and meowing; they were approximating responses that were previously successful and had allowed them access to food. Thorndike concluded that this was responding based on the law of effect: it happened due to past consequences. Behaviorists would later call this reinforcement, and document it through the three-term contingency.

“There is no reasoning, no process of inference or comparison; there is no thinking about things, no putting two and two together; there are no ideas – the animal does not think of the box or of the food or of the act he is to perform.”- Edward Thorndike

“…”- Edward Thorndike’s Cat operating a puzzle box trigger.

That’s not all. Thorndike also theorized that cats could discriminate between human vocalizations, and behave differently after being spoken to. Thorndike noticed that when he approached cats behind wire netting before feeding, they would leap up onto the netting and meow.

[Photo: this author’s cat demonstrating exactly that]

To test this, he made a loud proclamation in each condition:

“I MUST FEED THOSE CATS!” (emphasis not present in original text)

Preceding conditions where he fed the cats, and

“I will not feed them.” (lack of textual enthusiasm probably accurate)

preceding conditions where he did not feed the cats.

He tracked these presentations and trials using frequency data collection, and he found that in the conditions where he spoke “I must feed those cats” and then fed the cats, the cats would leap up more readily in the future than they did for the phrase where he did not feed them. This concept would later be referred to as responding to a discriminative stimulus. The cats would leap up and approach Thorndike (up to 60 times in the original research!) in the first condition, but also reduced leaping up when he voiced that he would not feed them. Thorndike was well aware that these cats were not spontaneously learning the English language, but they were discriminating between two very similar vocalized stimuli, and responding based on their previous experience and reinforcement. These ideas were not commonplace, or as well established as they are today. In many ways, these advances opened unheard-of avenues for the theory of learning in both animals and humans.
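Frequency data collection of this kind is easy to picture as a tally per condition. Here is a minimal sketch, with an invented trial log standing in for Thorndike’s tallies:

```python
from collections import defaultdict

# Invented log of (phrase spoken, did the cat leap up?) pairs. Illustrative only.
trials = [
    ("I must feed those cats", True),  ("I must feed those cats", True),
    ("I will not feed them",   False), ("I must feed those cats", True),
    ("I will not feed them",   True),  ("I will not feed them",   False),
]

tallies = defaultdict(lambda: [0, 0])  # phrase -> [leaps, presentations]
for phrase, leapt in trials:
    tallies[phrase][0] += int(leapt)
    tallies[phrase][1] += 1

for phrase, (leaps, total) in tallies.items():
    print(f"{phrase!r}: leapt on {leaps}/{total} presentations")
```

With enough presentations, diverging leap rates across the two phrases are the evidence of discrimination.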

The theoretical implications of these experiments would shape later behavioral research into the principles of operant conditioning, not only through the 20th-century boom of behavioral thought and ideas, but well into our own time in the 21st century.

Pretty impressive for cats, isn’t it?

Questions? Comments? Likes? Other?

Leave them below!

References:

1] Chance, P. (1999). Thorndike’s puzzle boxes and the origins of the experimental analysis of behavior. Journal of the Experimental Analysis of Behavior, 72(3), 433-440. doi:10.1901/jeab.1999.72-433
2] Cooper, J. O., Heron, T. E., & Heward, W. L. (2018). Applied behavior analysis. S.l.: Pearson.
3] Famous Quotes at BrainyQuote. (n.d.). Retrieved from https://www.brainyquote.com/

Picture Credits: http://www.pexels.com, Christian Sawyer (Photo)

May I have your attention please? The Nominal Stimulus vs. The Functional Stimulus


Hm?

What’s that?

Sorry, I wasn’t paying attention.

You’ll see this happen in some case studies, research articles, classrooms, and even therapeutic practice. A situation laid out with everything in mind to evoke the predictable response. You ask “What’s two plus two?” and eagerly await the “four!”…but it doesn’t happen. You call out to someone who’s wandered off, “Hey! Over here!”, and they keep on walking. You picked out your discriminative stimulus so well, but the response had little or nothing to do with it. You were missing the big piece of responding to stimuli that is absolutely obvious on paper, but so easily overlooked: Attention.

Stimulus-response contingencies are a good place to start in explaining why this is so important, because they’re the simplest case. One thing happens; a response follows it. The in-between that goes unsaid is that the respondent was actually able to perceive the stimulus; otherwise the response was either coincidental or unrelated. The stimulus that is never perceived, or attended to, is called a Nominal Stimulus. It happened. It was presented purposefully. It’s not a discriminative stimulus. It plays no role in selection. The individual is unaware that it even occurred. Nominal stimuli are the “everything else” in a situation that the intended respondent is not attending to.

Imagine a teacher in a classroom helping a student write their name. They first prompt by demonstrating how the name is written. The student does not copy it. So they take the student’s hand and physically guide them through the name writing start to finish, then follow up with some great descriptive praise to reinforce. Great! The student learned something, right? They’re more likely to at least approximate name writing in the future, right? How about the first letter?

Not if they were looking up at the ceiling the whole time. Nominal Stimulus.

The teacher may have set up a great visual demonstration, planned out a prompting strategy, and planned out a reinforcer to aid in learning the target behavior- but not one of those things was effective, or even met its intended definition, without the student’s attention. What the teacher was actually looking for, with any of their attempts, was a Functional Stimulus.

A functional stimulus is one the individual actually attends to. When it also signals reinforcement for a specific behavior, it can do the job of a discriminative stimulus (SD) and evoke previously reinforced behavior. It’s received by the respondent in a meaningful way.

The lesson in this distinction is that observers can sometimes assume stimulus-response relations, or failures in responding, when they are actually working with situations that present Nominal Stimuli instead of Functional Stimuli. Without checking whether the respondent was attending, one could document that a discriminative stimulus occurred when it had not. That would lead to inaccurate data, and further, to intervention development built on those inaccuracies.
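If it helps to see the distinction stated mechanically, here is a minimal sketch. The function and its conditions are my own toy construction; it encodes only the point above- presentation alone doesn’t make a stimulus functional, attention does.

```python
# A stimulus enters the stimulus-response relation only if it is attended to.
# This is an illustrative toy, not a clinical model.

def classify_stimulus(presented: bool, attended: bool) -> str:
    """Label a purposefully presented stimulus as nominal or functional."""
    if not presented:
        return "no stimulus"
    return "functional stimulus" if attended else "nominal stimulus"

print(classify_stimulus(presented=True, attended=False))  # -> nominal stimulus
print(classify_stimulus(presented=True, attended=True))   # -> functional stimulus
```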

Check for attention. Always. It may not always be the easiest thing to discern- auditory attending is not as easy to infer as visual attending- but by keeping nominal and functional stimuli in mind, you are in a better place to test for conditions that facilitate both.

Let’s try one more example.

Take this guy in the car. He’s got his phone out. Just got a text. Now THAT was one sweet discriminative stimulus. Tons of reinforcement history signaling behind that one.

[Photo: a driver behind the wheel, looking at his phone]

The street lights in front of him? Nominal stimuli.
The stop sign down the road? Nominal stimulus.
The cars on either side of him? Nominal stimuli.

Not all unattended stimuli are nominal stimuli exactly, but in a society, these signals (lights, signs, other people’s proximity) are delivered with the intended purpose of changing or governing people’s responses, to make sure everyone drives in an orderly and safe(ish) way. Yet even when a person is partially attending to an array of stimuli around them, all supposedly “important” in one way or another, some don’t actually register without specific attention.

One more example. Last one, I promise.


An instructor is working with a non-verbal child to build communication. They are seated at a desk. The child is staring off at one of the walls and reciting some continuous vocal stereotypy to themselves. The instructor is guiding a communication board- a page with the alphabet on it.

They… rapidly… move the board’s position in front of the child’s finger, anticipating and…prompting… the words “I W A N T L U N C H”. They stand up with glee and reinforce this…method… with a “Great job! Let’s get lunch!”. The child continues to stare off at the wall, and continues the repetitive stereotypy until lunch is brought over.

What might that instructor infer from this process if they were not thinking about nominal stimuli? Well, they might infer that the process had any impact at all on the child’s responding. Or that the board and prompting were received in any way by the child. It could get a little confusing.

That’s the importance of nominal and functional stimuli.

Questions? Comments? Likes? Leave them all below!

References:

Healy, A. F., & Weiner, I. B. (2013). Experimental psychology. Hoboken, NJ: Wiley.

Ormrod, J. E. (2012). Human learning. Boston: Pearson.

Tabletop Roleplaying with a Behavior Analyst


There is a vast array of opinions on role playing games. The stereotypes about them are prevalent in the popular culture of movies and television shows- mainly depicting socially inept clichés rolling dice and spouting an incomprehensible language of their own. That type of depiction does get laughs, but it is also unlike anything I’ve seen in reality. I was influenced by those caricatures of role players too. For a long time I did not understand the appeal of piling up in a dark basement, playing a game about pretend people where nothing really mattered and there were so many rules to learn. Where’s the fun in that? It was the wrong outlook, but the right question. There was fun in it. It just took actually trying it out to find it for myself.

Tabletop role playing is just a form of collective storytelling. If you’ve ever seen a fictional movie and been engrossed in it, or had an idea for a novel, those are the same types of precursor behaviors to putting yourself in someone else’s shoes. There’s a fun to that: taking on a different personality for a moment, and seeing a viewpoint unlike our own. If we want to get psychological about it, there might be some aspects of Adlerian play theory, or Bandura’s social learning through vicarious reinforcement, in there. The gist of it is: one person sets the stage of the story and determines the rules of how the game is played, and the players take on roles and navigate that world toward a collective goal (most of the time).

If you’re the type of person who likes making materials like token boards, graphs, or craft projects- this is right in your wheelhouse too.

It’s best to start off as a player before deciding to run your own game. You get to understand group dynamics and how collective storytelling works. I was in my 20s when I first started this type of role playing. I started late. I tried a little of everything I could get invited to. Some people like settings with dragons and elves, but that’s not exactly my type of thing. I gravitated towards more realistic settings where interpersonal relationships and psychology were more grounded in humanity- fictional worlds not too different or fantastic from our own. What I learned quickly is that these games work on Skinnerian principles- many things do, but role playing had a specific feel of reinforcement schedules that was familiar to me. The person who runs the game, sometimes called a referee, sometimes called a DM, sets the scale of what actions are reinforced and what are not.

Sometimes these are fixed reinforcement schedules based on experience: points that are awarded and can be applied to the character’s skills and attributes to make them more proficient, or more hardy, for tackling the adventures. A measure of how much the character grows.

Sometimes these reinforcement schedules deliver variable-ratio items: in-game money, armor for your character, and tools they can use to tackle different obstacles. A measure of what the character has, or can spend.

The players themselves run into variability by natural consequence; every action they decide to have their character attempt, if it involves a specific skill or difficulty, comes with rolling a die to see whether they succeed or fail.
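To make the schedule talk concrete, here is a minimal sketch of the two schedule types above. The ratio and payout odds are made up for illustration, not taken from any rulebook.

```python
import random

# Fixed-ratio vs variable-ratio payouts over a run of responses.

def fixed_ratio_payout(response_count, ratio=5):
    """FR-5: reinforcement after every 5th response (e.g., experience milestones)."""
    return response_count % ratio == 0

def variable_ratio_payout(p=0.2):
    """VR-5 on average: each response has a 1-in-5 chance of paying out (e.g., loot)."""
    return random.random() < p

xp_payouts, loot_payouts = 0, 0
for response in range(1, 51):
    xp_payouts += fixed_ratio_payout(response)
    loot_payouts += variable_ratio_payout()
print(f"Over 50 responses: {xp_payouts} FR payouts (predictable), "
      f"{loot_payouts} VR payouts (varies run to run)")
```

The FR tally is identical every run; the VR tally isn’t. That unpredictability is a big part of why variable schedules keep players rolling.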

These can be run like any other Skinner box. Compound schedules appear to be the most interesting to players. A fixed ratio that can be expected- perhaps collecting something important for one of the protagonists in a decided location. Or maybe a variable ratio- deciding which foes give up which item or monetary reward for being bested. Some people run their games with combat in mind; every situation is a nail to be beaten down by a well-armed adventurer’s hammer. There’s a thrill to that kind of gameplay, but I find that it isn’t compelling enough for me. I prefer to create stories that have the opportunity for danger, but where engaging in combat is sparsely reinforced and carries a greater opportunity for punishment. A live-by-the-sword, die-by-the-sword style of reinforcement schedule. There may be rewards to a quick and brutal choice, but a player can lose their character just as easily. I like using social stories in therapy to develop more adaptive skills. I use that same mindset when designing a game too- why resort to violence when you can talk your way out of trouble?

Say there is a dark concrete room, dim lights, seven enemies outnumbering and surrounding a poorly armed player group. If they choose combat, they will most likely lose. It might work. I would allow it. Let the dice roll and see if they succeed. But more often than not, a clever player can decide to roll their die in a very different way: persuasion. I set the mark much lower for that if they have the right pitch. They make a deal even the most brutal enemy couldn’t refuse. The die is rolled- they win. Now there is one enemy fewer, and one more temporary friend on the adventure. The other enemies aren’t just going to stick to their hostility- maybe they overheard that, maybe they’re swayed too, maybe this causes division in the enemy group. The player group capitalizes. They make bluff rolls. They make intimidation rolls. They make oratory rolls to back their fellow players up with a rousing speech. The tables turn, and now they’re on the side with higher numbers, and that piece of the game is won.
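Here is what “setting the mark much lower” looks like in dice terms- a minimal sketch using a generic d20-style check, with difficulty numbers I made up rather than any particular system’s rules:

```python
import random

# A generic roll-over check: succeed on a die result at or above the mark.

def check(mark, sides=20):
    return random.randint(1, sides) >= mark

COMBAT_MARK, PERSUASION_MARK = 17, 9  # persuasion gets the lower mark
trials = 10_000
combat = sum(check(COMBAT_MARK) for _ in range(trials))
persuasion = sum(check(PERSUASION_MARK) for _ in range(trials))
print(f"Combat success rate:     {combat / trials:.0%}")
print(f"Persuasion success rate: {persuasion / trials:.0%}")
```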

That situation is harder for players to pull off. It takes more thought. More coordination. Turn taking. A minute or two to step away from the game, collect their ideas, then bring it back. I’m not trying to run a stressful table here- thinking is allowed. They devise a plan that works better than drawing a sword or pulling a trigger. I reinforce. Experience for “defeating” an entire room- they did, after all. “Tangible reinforcers” in game for the characters: they get a bartered deal they’d never have gotten anywhere else if they’d been violent to these bad guys. Negative reinforcement: they avoid the aversive harm revealed when, after persuading their enemies, they learn that those enemies outmatched them in hidden weapons. The players used teamwork, not just some haphazard dice throwing about blood and guts. Group bonus. More experience for everyone. Why not? They played the game their way and they played it smart. These were not just four people sitting around a table making their own random guesses for a quick and easy win; they came together with ideas that I would never have thought up for the story and won it themselves. They changed the story. Now it’s my turn to adjust my ideas to their new role-played reality.

Now… it doesn’t always play out that way. Variable reinforcement is a necessity in a game of rolling dice. So is variable punishment. Sometimes the dice roll, and there’s a failure. Or worse- a critical failure! Not only is the prize not won, or the intended action not completed; it was actually a detriment to even try. Players have crashed a car. Blown up a usually harmless household item. Set a pacifist character in the game into a fit of rage and spoiled a whole quest line. That bank vault actually had a skunk in it. It happens. It’s something like a gamble, but when the reinforcement flows heavier than the punishment, it’s all worth it. It evens out. It takes a strong story, it takes a coherent direction and narrative, but the players do all the heavy lifting. They think. They plan. They roll the dice. Everyone has a great time.

You get to see patterns in that. Make it more challenging the next time. More engaging. Take the next story point in a way that you’d never have thought of before.

Let’s not forget that even when the game is done, there’s a friendship there now. People got to know each other a little better. They got to see people they talk to in a different light: more creative, more inventive. Sometimes some playful rivalries come out of it. There’s also a community out there with shared experiences that goes beyond individual play groups and tables- thousands of other people playing the same game their way. I personally love the community. I have ideas about how to run the game, and run them by others who play the same game but have done it better than me. I adapt. I improve. Sometimes, I even have an idea about how psychosis works in this imaginary world, and reach out to the internet with an interpretation of new rules… and the creator of the game itself (Maximum Mike Pondsmith) replies.

[Screenshot: Mike Pondsmith’s reply]

Talk about fun. Talk about reinforcement. I’ve learned never to underestimate what a good tabletop roleplaying game can be, or what it can bring to an otherwise ordinary afternoon. If you’ve never tried one? It’s never too late. Groups are out there for every age, every time commitment, and every skill level. Give it a shot. You might just like it.

 

Questions? Comments? Likes? Leave them below.

 
