Behaviour Analysis has been around for approximately 50 years. It all started with a paper by Baer, Wolf, and Risley (1968) titled "Some Current Dimensions of Applied Behavior Analysis".
Formally, one could say it actually started with the work of Skinner who, in the early part of the 20th century, made it something of a mission to drag Psychology up from its pre-scientific roots and make it something that could stand proudly shoulder to shoulder with the rest of the sciences. However, Applied Behaviour Analysis (ABA) as we know it today is really a product of work done after Baer, Wolf and Risley published their article.
In its time ABA has tackled a number of problems: environmental behaviours, autism, reading, education, business management and so on. It can be successfully applied to any area its wielder chooses; the emerging fields of Behavioural Economics and Behavioural Public Policy stand testament to that. Yet there is a note of discord in the ranks, subtle though it may be. The flagship journals for ABA (the Journal of Applied Behavior Analysis and the Journal of the Experimental Analysis of Behavior) are largely concerned, these days, with Autism and other intellectual and developmental disabilities.
Much can be said about Autism and the negative impact it has on people's lives. Similarly, much can be said about the amazing work Behaviour Analysts do around the world helping children overcome the difficulties associated with Autism and lead vastly improved lives. The problem, however, is that we are now type-cast.
A typical conversation may go like this:
Person A: "I'm a behaviour analyst"
Person B: "Oh isn't that like a therapist?"
Person A: "Well not necessarily..."
Person B: "No it is, my friend's sister had a child with Autism and they had to see a Behaviour Analyst, but she said..."
And so on, and so forth. Behaviour Analysis is now synonymous with treatment for Autism.
If we pull back slightly and take another look at what Behaviour Analysis actually is, we might gain some perspective on this development.
Behaviour Analysis is the application of behavioural principles to real-life settings. Behavioural principles, in turn, are derived from experimental data. Radical Behaviourism, the philosophy of the science, is induced from these principles and acts as a guiding framework within which we understand and interpret human behaviour. To claim that ABA is just about Autism is to grossly underestimate the scope of Behaviour Analysis as a tool.
It is my opinion that if ABA wishes to have a future in Psychology, and more broadly in the culture at large, then it needs to expand its remit to cover other things. As I've said before, it DOES cover these things, and has done in the past, but few have dared to extend it to the same extent that Skinner did.
How could we apply it further then? Some have suggested a Cultural Analysis of Behaviour is needed. I concur. We can, as it were, analyse and help people on an individual level. We help people with financial difficulties by looking at what they do with their money. We help people with educational difficulties by revising the way they are taught. We help businesses with poor management cultures to see things a different way, but we miss a trick here because rarely is there any engagement on the meta-level.
Whoa. Jargon-y word. I apologise, but it is necessary. By meta-level I mean the level above. We can say that person X is acting under an ABC contingency, but what created that contingency in the first place? What is systematically causing people to have trouble with their weight? Or to remain unable to save for retirement? Or to struggle to manage a group of people successfully?
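For readers unfamiliar with the jargon, the "ABC" in question is the three-term contingency of behaviour analysis: Antecedent, Behaviour, Consequence. A minimal sketch (the names and the example are my own illustrations, not from any behaviour-analytic library) makes the individual-level versus meta-level distinction concrete:

```python
# A minimal sketch of the three-term "ABC" contingency:
# Antecedent -> Behaviour -> Consequence.
from dataclasses import dataclass

@dataclass
class Contingency:
    antecedent: str   # the setting or cue that precedes the behaviour
    behaviour: str    # what the person actually does
    consequence: str  # what follows, strengthening or weakening the behaviour

# Individual-level analysis: one person's contingency.
overspending = Contingency(
    antecedent="payday and a one-click checkout",
    behaviour="impulse purchase",
    consequence="immediate reward, delayed financial strain",
)

# The meta-level question asks what produces contingencies like this
# across a whole population, e.g. the design of the checkout itself.
print(overspending.antecedent)
```

The individual analysis describes one instance; the meta-level (cultural) analysis asks what arranged that antecedent and consequence for millions of people at once.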
We might say that many meta-level contingencies are the product of political engagement, but I say this is only indirectly true. Skinner was not a big fan of politics. He argued that real change would be grass-roots, and I agree. Politics tends to do more harm than good, even when well-intentioned. We need a culture that is fluid and capable of adapting to new problems quickly and efficiently without harming the interests of the individuals acting within it. Skinner understood that you couldn't do this from a top-down perspective. It had to be from the ground up.
Unfortunately we cannot do this just yet. We haven't the understanding of how cultures work - for lack of a cultural analysis - and so we cannot make changes on that meta-level. If we aren't careful we will forever remain hacking away at the leaves of a weed, when really we need to uproot the whole thing.
The future of Behaviour Analysis then is a difficult thing to predict, but certainly something possible to guide. I for one will be working towards a future where Behaviour Analysts are the biggest proponents of evidence-based culture change that relegates the blunt-edge of government to the historical dust-pile.
Monday, 30 December 2013
Sunday, 1 December 2013
Mindfulness in the Morning!
As a Behavioural Psychologist I am often expected, by laypersons and by fellow (and much more senior) academics, to be a little bit... well... cold. By cold I mean hard-headed, rational, down-to-earth (why we associate these exemplary traits with some sort of negative overtone is a topic for another time) and altogether very grounded.
I'm not expected to give much quarter to the (perceived) spiritual side of life; meditation, Buddhism, inner peace and so on are simply not conducive to scientific inquiry, nor appreciation. As a Behavioural Psychologist, I am type-cast as being anti- anything that smacks of mentalism. Indeed, my job as a Behavioural Psychologist involves me focusing quite relentlessly on Behaviour.
So enters Mindfulness.
What is Mindfulness, you ask? Well, Mindfulness is a meditative practice born out of Jon Kabat-Zinn's definitive work with clinical patients in the 1980s. Its non-clinical origins lie in ancient Buddhist thought, and it is based, very broadly, on the idea that through practice and focused attention one can develop a number of positive traits, such as resilience, and can, in general, learn to resist things like depression and anxiety. For Buddhists it was (and is) seen as a way of achieving Nirvana and is mentioned specifically in the Eightfold Path of traditional Buddhism.
Mindfulness as we know it is actually a secularized version, stripped of its mystical origins and revised in a number of ways to deal with problems such as eating disorders (MB-EAT), cognitive disorders (MBCT), and behavioural problems (ACT, DBT). Outside of its clinical application, however, it has a number of uses that make it a wonderful thing to practice even in the absence of some psychological problem. The most mainstream application is Mindfulness-Based Stress Reduction which, whilst clinical, is nonetheless something we can all benefit from (after all, who isn't a little stressed?).
Now, Mindfulness as we know it is born, in part, out of Humanistic psychological philosophies, inspired by such people as Erich Fromm and Abraham Maslow. Humanism is not something that traditionally sits well with Behavioural theories, given its emphasis (in fact, its zeal) on denouncing any sort of deterministic thinking in regard to human beings. So why am I talking about it here? Well, Mindfulness is not as mystical or cognitive as some of its proponents would have us believe, and furthermore I believe that relinquishing Mindfulness to the Cognitive psychologists on the grounds of it being "out of our expertise" would be both wrong and foolish.
Wrong because Mindfulness clearly involves a number of observable (and private) behavioural patterns and produces a number of behavioural results (such as changes in biological markers). Foolish because to concede that a subject is outside of behavioural psychology's remit is to accept a terminal failure of our philosophy, something I am not prepared to do.
Here, though, is where it gets tricky. There has been little (if any) work exploring the relationship between Mindfulness proper and Behavioural psychology, so I can't say exactly HOW behavioural psychology fits in with Mindfulness (yet!).
However I can say that it works.
And so I come to the main crux of this post. I have been practicing Mindfulness-based stress reduction for two years now and I can honestly say the benefits are innumerable. Reduced stress levels, better mental acuity, better sleeping patterns, more control over impulsive behaviour and so on... I can't honestly overstate how much better I am for practicing it.
Now I have to be honest: I don't practice every day, and I have gone long stretches of time without doing it (and suffered for it!), but I try to do at least 30-45 minutes of Mindfulness daily. Personally, I prefer to do it early in the morning. So I wake up, go to work, and at around 8am, sat at my desk in my quiet office, I spend a bit of time gathering my thoughts and focusing on my mind (rather than everything else!), and it really sets me up for the day. If you can, I suggest you get hold of a book or attend a course on Mindfulness and get practicing. You really won't regret it!
If you want to know more about Mindfulness I recommend this site; http://www.bangor.ac.uk/mindfulness/
Sunday, 24 November 2013
A Positive Outlook
We are used to thinking about Psychology as a healing science, a branch of medicine and a sort of ancillary subject that deals with a particular set of earthly woes.
We talk of understanding psychosis, and neurosis, intellectual disability, and destructive habits. More and more popular psychology books take a sort of gleeful pride in showing how we can't be trusted to make our own decisions, or to think rationally, or even to know ourselves in any sort of meaningful way.
I am reminded, in a way, of the T. S. Eliot lines from the Waste Land;
Psychology it seems, has taken on the role of showing us that "something"; the something that is beyond the obvious. It has taken a number of forms; the rampant, destructive Id, the impossible allure of the Race Consciousness, the inescapable grip of childhood, the innate incompetence of our own neurology. We are taught that Psychology is the way we understand our own inability to understand.
This, however, is only half the story.
We have our failings, and no one would actually claim that humans were perfect in any way. The problem is that we are more than just a bundle of neuroses and irrational decision-making. We are also courageous, productive, principled, thoughtful, amazing, and ultimately we have managed to build a pretty good world around ourselves as a whole.
And yet psychology has, historically, downplayed and even written off these positive aspects of human psychology. Martin Seligman, the celebrated positive psychologist, lamented these facts in his 1998 presidential address to the American Psychological Association:
Yet we have scant knowledge of what makes life worth living. For although psychology has come to understand quite a bit about how people survive and endure under conditions of adversity, we know very little about how normal people flourish under more benign conditions.
This is because since World War II, psychology has become a science largely about healing. It concentrates on repairing damage within a disease model of human functioning. Such almost exclusive attention to pathology neglects the flourishing individual and the thriving community.
In fact, that speech was seen as the genesis of Positive Psychology, a perspective rather than a school, that seeks to explore and properly understand the positive side of life.
To a lot of people this seems like a trivial point, but when you think about it we are so attached to this pathological model of society as a whole that we completely ignore success, or anything good, in favour of the bad. It's seen as indulgent, or even immoral, to want to understand the positive traits exhibited by successful businessmen, artists and craftsmen when a child has trouble reading, or a person with Autism can't interact with his local authority.
I believe, however, that it's worth studying why Bill Gates, or The Koch Brothers, or Usain Bolt, or J. K. Rowling succeeded despite others having similar (or less) opportunity.
Unfortunately, modern psychology teaches us to ignore the individual as an anomaly. Consider a typical distribution of scores with one conspicuous peak far from the mean. A psychology student would usually be taught to exclude that obvious peak as anomalous, since the goal of statistical analysis is primarily to find a pattern or average. Such a peak represents an aberration, something not typical of the population being studied, and is therefore meaningless (in a Logical Positivist sense).
To me, however, that peak is the most interesting thing on the graph. Maybe it's nothing, just a fluke, but maybe it's something else entirely. Maybe it's an example of some exemplary trait or behaviour that warrants further study and understanding. If we simply scrub it out as not fitting the model, have we learned anything? Worse, have we robbed ourselves of a chance to learn something truly exciting about the human condition?
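The contrast can be sketched in a few lines. This is a toy illustration (the data and the two-standard-deviation cutoff are my own invented example, not from any particular study): the textbook move discards the outlier, whereas the alternative keeps it in view for a closer look.

```python
# Toy example: one conspicuous outlier among otherwise ordinary scores.
import statistics

scores = [48, 50, 51, 49, 52, 50, 47, 51, 95]

mean = statistics.mean(scores)
sd = statistics.pstdev(scores)

# The textbook move: drop anything beyond 2 standard deviations of the mean.
kept = [x for x in scores if abs(x - mean) <= 2 * sd]

# The alternative argued for here: keep the outliers in view and ask why.
flagged = [x for x in scores if abs(x - mean) > 2 * sd]

print("analysed:", kept)
print("worth a closer look:", flagged)
```

Both lists come from the same data; the only difference is whether the anomaly is treated as noise to be scrubbed or as a finding to be explained.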
So, consider this: psychology has done a remarkable job in understanding what can go wrong with a person, and we are the wiser for the century of hard work that dedicated psychological research has engendered. However, as Seligman says, perhaps it is time to look at what is positive about humanity?
Perhaps we, as a culture, could use it as a springboard to get away from this obsession with failure, poverty, illness, and general nihilistic concerns that have plagued us in this post-war century? Perhaps we can see that the best way to help people is not to heal what ails them, but to show them how they can make themselves better than before?
Just some food for thought!
Thursday, 3 October 2013
A PhD
On the 1st of October I officially began my PhD!
This is a big deal for me because at times I have questioned my academic ability and it's nice to have some validation for it.
My PhD is by research in Psychology and (without going into too much detail!) I will be exploring behaviour change theory and hopefully extending it beyond current theoretical boundaries.
I know I haven't posted much recently because I have been super busy but, as well as starting my PhD on the first of October, I also handed in my MSc thesis and therefore finished all the requirements for an MSc.
So for the next three years call me Philip Nelson, MSc.
Hopefully in the next few weeks I will get back into the habit of regular postings and I will start to explore some of the issues surrounding PhD work and the unique challenges it throws up.
Saturday, 7 September 2013
Nudge confusion?!
The more I read about Nudge and Choice Architecture the more I get annoyed.
I find the concepts of Nudge / Choice Architecture extremely useful. They provide a simple, jargon-free introduction to the ideas of behavioural psychology. Indeed, Cass Sunstein and Richard Thaler present a wonderful argument for using Nudges in a number of situations.
To go back to first principles, a Nudge is (according to Thaler and Sunstein):

“[...] any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives. To count as a mere nudge, the intervention must be easy and cheap to avoid.”

Before I get into my main point I want you to re-read that definition and then consider some of the things being discussed as Nudges. Mayor Bloomberg has been called a Nudger; The Future Economist rightfully points out that a Nudger may not know what is best, but nonetheless fails to acknowledge that a Nudge should leave room for alternative behaviour; New Scientist misses the mark completely; and The Guardian merely uses it as a platform to bash the Conservative party.
What these various sources all miss is that a) a Nudge is not about forcing people to do what you want them to do, b) it's not about banning or even regulation, and c) it isn't the exclusive domain of governments.
Let's look at each of these issues in turn.
First of all, a Nudge is not about forcing people to do what you want. Mayor Bloomberg banning soda or slowing lifts is just not a good example of a Nudge; it's an example of paternalism. No, in fact the definition states categorically that the "opt out" cost of a nudge should be very low. Believe it or not, smoking restrictions would not count as Nudges in any sense of the word (more like a great big, well-meaning shove).
Secondly, Nudging isn't about banning or regulating; if there is one thing the twentieth century put to rest, it's that out-and-out state control just doesn't work, not just in a broader survival-of-the-country sense but in terms of individual behaviour. Note that 60 years on from the start of the crusade against smoking, people still smoke and people still start smoking. As horrible as those pictures on the side of the packets are, as expensive as smoking has now become, and as very real as lung cancer is, those things alone just don't have the impact we want. So no, a Nudge is not about banning or regulating something (arguments for banning and regulating should be sustained outside of the context of Nudge). A Nudge should only be used to redirect behaviour in a way that is a net positive for the individual.
Thirdly and finally, I think Nudge is done a great disservice by being tied ideologically to governments. Yes, governments can (and should) make use of the benefits associated with Nudges, but they are hardly the only beneficiaries. My own work shows how businesses and private individuals can benefit monumentally from Nudge theory.
To be honest, I get incredibly annoyed with ill-educated commentators assuming that anything they dislike being proposed by government can be labelled a Nudge. However, a second issue I want to address also riles me up no end. This is a more theoretical point, but it is very important to consider.
Few people these days make any effort to distinguish between political power and economic power. A private business can use economic power to appeal to individuals and Nudge them appropriately. Soft drinks manufacturers, for example, can advertise and promote the refreshing nature of their product, or the variety, or the stimulating effect, or whatever they wish. They cannot, however, ban you from buying other drinks, nor make it more difficult for you to obtain them, nor in any way force you to do anything (after all, you are still free not to purchase their product). This is what we call economic power. Political power, on the other hand, is the ability to force or coerce. A government can simply ban a product it doesn't want you to use; similarly, it can mandate the use of another product (for example, driving insurance), or it can avoid an outright ban and simply tax something to control consumption (for example fuel, alcohol or tobacco). If the government wishes you to do something it can, at the extreme end, force you to do it on pain of incarceration. We may quip that we are, for example, free not to pay taxes, but try not paying them one time and see what happens.
More often than not, especially in the context of Nudge, these two forms of power are used interchangeably with little consideration. For example, one author (whose article I have unfortunately lost...) commented on how unfair it was that Pepsi could marshal a vast advertising budget to "make" people buy their product; to level the playing field, this author suggested that the government should simply tax and restrict the sale of soft drinks. A fair trade, right? Wrong.
The author clearly equates the power of Pepsi to advertise with the power of the government to restrict. But note that Pepsi cannot do anything but make Pepsi seem appealing, by making it tasty, cheap, widely available and so on, whereas the government has to make no such consideration; it simply has to mandate a certain behaviour and, through the threat of punishment, use the power of negative reinforcement to get the behavioural result it wants.
Why am I banging on about this? To clarify a point. If you want to talk about Nudge, you need to get ideas like "banning", "taxing" and "restricting" out of your head. These are not compatible with Nudge theory. Remember that a private business can only offer you something you want, and so Nudge can unlock the potential of a business to provide better service and higher-quality staff care, and to be generally better for the economy and the consumer base. On the other hand, if you are not careful, you can easily start promoting ideas like restrictive policy and rationalising it away as merely a Nudge when in actuality it's nothing of the sort.
Wednesday, 21 August 2013
Evidence-based policy?
As a scientist I am pro-evidence.
More broadly I am pro-objective reality. I believe that, fundamentally, reality is founded on three axioms: Existence, Identity and Consciousness. Existence, because something exists. Identity, because something exists as something. Consciousness, because something exists perceiving those things. These are the axioms which, explicitly or implicitly, guide all rational people.
I admit this isn’t exactly a comfortable lexicon for someone into behavioural science; unfortunately, behavioural theory hasn’t yet properly extended itself to metaphysics, and so there isn’t a clear way to describe these concepts.
Yet I nonetheless believe them to be true. Existence, Identity and Consciousness: the triumvirate of science. From this point of view we know that reality is knowable. That is to say, it’s lawful (now we’re back in behaviourist territory), and so understandable and predictable. Contra Immanuel Kant, knowledge is contextual. Not subjective, but based on the context of the individual thinking it.
Epistemology aside, I want to focus on one particular application of this knowledge. If reality is knowable, and above all predictable, then so are humans, and so is human consciousness. Our awareness, call it a complex verbal repertoire a la Skinner, or an extended behavioural pattern through temporal dimensions a la Rachlin, is nonetheless knowable and predictable.
So where am I going with this extended diatribe? If human behaviour is knowable and predictable, that is to say a function of reality, not some mystical higher quality alluded to by theologians and amateur Platonists of millennia past, then it stands to reason that we should do something with this knowledge to improve our condition here on Earth.
This, then, is where evidence-based policy comes into being. Evidence-based policy ostensibly began with the Blair government and is, ostensibly, being continued by the Coalition. Both governments agreed, formally, to put ideology aside and instead to obey the evidence, meaning obey reality.
Yet this is where it gets sticky. As I’ve already mentioned, knowledge is contextual. We aren’t omniscient. So we have a problem. A fact is a fact, yet it stands in the context of all the other knowledge. For example, we may know that when atomic nuclei collide they release a heck of a lot of energy, but this doesn’t automatically tell us that the energy can be used to switch on the lights or to blow a city into the ground. The application, and the subsequent effects of that application, are not knowable until those facts are discovered; it’s not a package deal.
Take, for instance, what we know about people’s behaviour. People are susceptible to superstitious behaviour, we use heuristics, rules of thumb, to get through the day, and we are fallible. Yes, we are capable of rationality; yes, we are capable of discovering facts; but the flip side of having that kind of behavioural repertoire is that we can get it wrong. Now this is used to justify things like Nudges.
Now for a disclaimer: I support Nudges. Support them all the way to the bank. If all governments adopted that kind of attitude, the world would be a happier place. Yet herein lies the problem I’ve been alluding to. It often comes up as a snarky criticism from opponents of Nudges: if the Nudgers believe that people are fallible, then isn’t there a chance that they are wrong about the Nudges? The biggest mistake people make in response to these criticisms is to pass them off as futile attempts from right-wingers to criticise the uncriticisable.
Instead, let’s look at if they have
a point. What is a nudge? The manipulation of environmental variables that
create a condition under which a chosen behaviour is more likely to be emitted –
whilst still preserving the freedom of
choice. That means that if I want to smoke –I should be able to, but not
smoking could potentially be promoted in one way or another.
Now policy analysts and writers
have been practically frothing at the mouth trying to apply this idea to every
single policy applicable; from taxes, to health promotion, to energy usage. It
can literally be applied to all levels of government. Yet, a question still
lingers, a question unanswered; should
it?
Evidence-based policy has been most
wholly accepted by the progressive end of the spectrum. Now not to paint with
too broad a brush here but progressive thinkers in general promote a more
activist form of government.
So a smart way to more efficiently
run a government appeals to them – and so it should! But, as I mentioned at the
start of this article, knowledge is
contextual, and so we need to look at the actions of a government using
Nudges in the broader context of government actions.
Nudging applies equally to our
personal, private lives and private associations as much as it does to
government actions. Nudges can help us; it can help private businesses (I wouldknow). It doesn’t have to come from a government. So we shouldn’t act like it
should.
A full political breakdown is
perhaps beyond the scope of this article. Yet I want to point to a general
theme. Just because we can do something, doesn’t mean we should do something.
The jury is still out on whether an evidence based approach to everything from
a top-down approach is necessarily in our long-term best interests.
It is perhaps wise then, to heed
the criticisms of the nudge approach. Let us be wary of our potential bias in
this matter. We should be ready to say that a nudge could be better served from
a private point of view, in a contractual sense. Certainly there are areas
where a government should act and act decisively. This is not in dispute. But
let’s apply evidence to the evidence based approach and look at whether we are
doing more harm than good. Science requires no less.
As a scientist I am pro-evidence. More broadly, I am pro-objective reality. I believe that, fundamentally, reality rests on three axioms: Existence, Identity and Consciousness.
Existence, because something exists. Identity, because something exists as something. Consciousness, because something exists that perceives those things.
These are the axioms which, explicitly or implicitly, guide all rational people.
I admit this isn’t exactly a comfortable lexicon for someone steeped in behavioural science; unfortunately, behavioural theory hasn’t yet properly extended itself to metaphysics, and so there isn’t a clearer way to describe these concepts.
Nonetheless, I believe them to be true. Existence, Identity and Consciousness: the triumvirate of science. From this point of view we know that reality is knowable. That is to say, it is lawful (now we’re back in behaviourist territory), and so understandable and predictable. Contra Immanuel Kant, knowledge is contextual. Not subjective, but framed by the context of the individual thinking it.
Epistemology aside, I want to focus
on one particular application of this knowledge. If reality is knowable – and above
all predictable – then so are humans, and so is human consciousness. Our
awareness – call it a complex verbal repertoire a la Skinner, or an extended
behavioural pattern through temporal dimensions a la Rachlin – is nonetheless
knowable and predictable.
So where am I going with this extended diatribe? If human behaviour is knowable and predictable – that is to say, a function of reality, not some mystical higher quality alluded to by theologians and amateur Platonists of millennia past – then it stands to reason that we should do something with this knowledge to improve our condition here on Earth.
This, then, is where evidence-based policy comes in. Evidence-based policy ostensibly began with the Blair government and is, ostensibly, being continued by the Coalition. Both governments formally agreed to put ideology aside and instead obey the evidence – meaning obey reality.
Yet this is where it gets sticky. As I’ve already mentioned, knowledge is contextual. We aren’t omniscient. So we have a problem. A fact is a fact, yet it stands in the context of all our other knowledge. For example, we may know that when two atomic nuclei collide they can release a heck of a lot of energy, but this doesn’t automatically tell us that the energy can be used to switch on the lights, or to blow a city into the ground. The application – and the subsequent effects of that application – are not knowable until those facts are discovered; it’s not a package deal.
Take, for instance, what we know about people’s behaviour. People are susceptible to superstitious behaviour, we use heuristics – rules of thumb – to get through the day, and we are fallible. Yes, we are capable of rationality; yes, we are capable of discovering facts; but the flip side of having that kind of behavioural repertoire is that we can get it wrong. Now, this is used to justify things like Nudges.
Now for a disclaimer: I support nudges. Support them all the way to the bank. If all governments adopted that kind of attitude the world would be a happier place. Yet herein lies the problem I’ve been alluding to. It often comes up as a snarky criticism from opponents of Nudges; the claim is that if the Nudgers believe that people are fallible, then isn’t there a chance that they are wrong about the Nudges? The biggest mistake people make in response to this criticism is to pass it off as a futile attempt by right wingers to criticise the uncriticisable.
Instead, let’s look at whether they have a point. What is a nudge? The manipulation of environmental variables to create conditions under which a chosen behaviour is more likely to be emitted – whilst still preserving freedom of choice. That means that if I want to smoke, I should be able to, but not smoking could be promoted in one way or another.
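To make that definition concrete, here is a minimal sketch of the best-known kind of nudge, the default option. Everything here is invented for illustration – the option names, the "stickiness" probability, the sample size – the point is only that every option remains available, so freedom of choice is preserved even as the distribution of choices shifts:

```python
import random

# A toy model of a default-option nudge. All names and numbers are
# assumptions for illustration, not empirical claims.

def choose(options, default=None, stickiness=0.6, rng=random):
    """Return one option. With probability `stickiness` the chooser
    simply accepts the pre-selected default; otherwise they pick
    freely. Every option is always available, so free choice is
    preserved."""
    if default is not None and rng.random() < stickiness:
        return default
    return rng.choice(options)

def opt_in_rate(n=10_000, default=None):
    options = ["opt_in", "opt_out"]
    picks = [choose(options, default) for _ in range(n)]
    return picks.count("opt_in") / n

random.seed(0)
no_nudge = opt_in_rate()                # roughly 50/50 without a default
nudged = opt_in_rate(default="opt_in")  # the default shifts the rate up
```

Nobody in the simulation is forbidden from opting out; the environment has merely been arranged so that opting in is more likely to be emitted.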
Now, policy analysts and writers have been practically frothing at the mouth trying to apply this idea to every applicable policy: from taxes, to health promotion, to energy usage. It can be applied at literally every level of government. Yet a question still lingers, a question unanswered: should it?
Evidence-based policy has been most fully embraced by the progressive end of the spectrum. Not to paint with too broad a brush here, but progressive thinkers in general promote a more activist form of government.
So a smart way to run government more efficiently appeals to them – and so it should! But, as I mentioned at the start of this article, knowledge is contextual, and so we need to look at the actions of a government using Nudges in the broader context of government action.
Nudging applies to our personal, private lives and private associations as much as it does to government actions. Nudges can help us; they can help private businesses (I would know). A nudge doesn’t have to come from a government, so we shouldn’t act as though it must.
A full political breakdown is perhaps beyond the scope of this article, yet I want to point to a general theme: just because we can do something doesn’t mean we should. The jury is still out on whether applying an evidence-based approach to everything, from the top down, is necessarily in our long-term best interests.
It is perhaps wise, then, to heed the criticisms of the nudge approach. Let us be wary of our potential bias in this matter. We should be ready to say that a nudge could be better delivered privately, in a contractual sense. Certainly there are areas where a government should act, and act decisively; this is not in dispute. But let’s apply evidence to the evidence-based approach and look at whether we are doing more harm than good. Science requires no less.
Labels:
altruism,
behavior,
behaviour,
behaviour change,
behaviour choice,
business,
change,
control,
determinism,
future,
induction,
libertarian,
libertarian paternalism,
morality,
nudge,
paternalism,
philosophy,
politics
Sunday, 18 August 2013
Shale Gas, Innovation and Behaviour Change
I'm all about innovation.
Innovation is the fuel that drives the engine of civilization. I've spoken about this before so I won't go into excruciating detail. Needless to say, innovation is what keeps us going. In the abstract, innovation is a highly complex thing. It involves essentially rearranging already existing knowledge - or inducing new knowledge - and then applying it in a way that effects lasting change, either at the individual level or the group level.
Mobile phones (and now smartphones), the internal combustion engine, fuel cells, airplanes, the water-wheel: all of these are innovations. Facts of reality are first discovered, then applied to a problem - if they solve the problem effectively they become innovations.
On the flip side of this we have human behaviour. Innovation drives behaviour change but - and this is important - it also requires it.
Let me explain, using the example of mobile phones. The advent of the mobile phone opened a new range of behaviours to people. Things that were once quite unreinforcing (and quite pointlessly punishing), like talking into a piece of glass and plastic whilst walking down the street, suddenly became quite plausible - and indeed highly reinforcing. Mobile phone behaviour was quite unprovoked - people discovered it by themselves - but interestingly, out of it came a number of norms. People started... agreeing... about what was appropriate behaviour. Voluntary mobile phone bans cropped up in places where it was inappropriate. A quite decentralised etiquette arose. Apart from anything else, it provides a wonderful example of Adam Smith's (much derided) invisible hand. Yet this isn't an exposition on economics. It's a discussion of behaviour change.
So what does it tell us? It tells us that innovation creates new behaviours, which in turn solve problems no one really knew we had. Some argue that we are more alienated because of mobile phones - I argue we are closer together. I can, at any time, ring any number of people, from any location, and get through to them. I can even see them with 3G/4G video. Where once I had to be at a landline to talk to someone, now, five minutes from a meeting, stuck in traffic, I can call them to tell them I'm late, thus saving myself an unhappy bunch of staff and an awkward conversation on my arrival.
This, however, is only half the story. Sadly, it's the only side of the story we discuss. Yet as a behaviourist I have another interest, a side of the story we don't often hear told. What leads up to the innovation?
Now don't get me wrong. The western world is abuzz with how to "encourage" or "nudge" innovation. It's all the rage; the latest fad. Yet we think of it in terms of systems. We think in terms of political machinations and big, shiny ad campaigns that do little to actually drive innovation, but certainly look good on the score card. The token effort. The "seen to be doing something" tick box.
So let's get serious. One of the things you learn as a behaviourist is that there is no "group mind". There is no collective will, no "greater than the sum of its parts" entity. All there is, is people. Individuals. Each with their own learning history. Each with their own set of unique contingencies. No two people act the same, even to identical stimuli. This is both a blessing and a curse. A curse, because it means there is nothing specific we can do. We can't teach "innovation" in schools and expect it to appear. A blessing, because we can teach people to create contingencies that give rise to innovation at the individual level.
Let me enumerate: innovation never looks the same. It's a different process each time. Yet there are similarities. First, a person must have the ability to learn and synthesize a vast amount of knowledge. Second, a person must be able to apply said knowledge to physical problems (and must in turn recognise that problems can indeed be solved with facts, not wishful thinking). Third, they must be free to apply that knowledge.
First: education. Our education is good - but it's not great. Skinner said it himself:
We shouldn't teach great books; we should teach a love of reading. Knowing the contents of a few works of literature is a trivial achievement. Being inclined to go on reading is a great achievement.
We take great pains to teach our children a wealth of knowledge, and yet we do little to instill a love of learning itself. Behaviour Analysis has gone to great lengths to show that a child can be taught, systematically, to love the process of learning. Yet it is not applied, because we believe (and this is just my speculation, but I have reason to think so) that we value the factual knowledge itself over the process involved. Facts are divorced from reality. We learn history - but never teach the reason for learning history. We teach science - but never teach the reason for learning science... we teach a child how to deconstruct the themes of a novel - but never teach them why it matters, or what the themes mean in any kind of context.
This leads on to our second problem: application. If you're like me, you probably heard, time and time again in school, that you should "apply yourself more". Yet, again if you're like me, you probably went away a little nonplussed. What did "apply yourself" mean, anyway? Was it some application of will? Some special process you didn't understand? What?
Herein lies the problem. As I alluded to earlier, there is a bizarre trend in education that essentially teaches the divorce between fact and value. This is an old philosophical problem that draws a line in the sand between a fact (some referent of reality) and a value (some moral proclamation). We teach for the sake of teaching. History, as I mentioned earlier, is taught with no reference to the value behind it. So we learn about WWII and the rise of Hitler - we learn WHAT happened, but we don't learn WHY it happened. The why is seen as either self-evident or, worse, irrelevant. This, in part, goes back to the progressive education movement of the early 20th century, where John Dewey redefined the purpose of education towards a socialising end. The educational philosophy descended from Dewey was expressed by John Dunphy as follows:
"I am convinced that the battle for humankind's future must be waged and won in the public school classroom by teachers that correctly perceive their role as proselytizers of a new faith: a religion of humanity that recognizes and respects the spark of what theologians call divinity in every human being...The classroom must and will become an arena of conflict between the old and new — the rotting corpse of Christianity, together with all its adjacent evils and misery, and the new faith of humanism, resplendent with the promise of a world in which the never-realized Christian ideal of 'love thy neighbor' will finally be achieved." — excerpt from an article by John Dunphy titled "A Religion for a New Age," appearing in the January/February 1983 issue of The Humanist magazine.
Now, few educators in Britain would consider this an appropriate description of what they do. Indeed, philosophical proselytizing is more common across the ocean, but the influence has nonetheless seeped in. For the purposes of innovation we need to stop teaching facts divorced from reality, and instead teach facts as they are in reality. This means in context - something every behaviour analyst understands quite intimately. Context is what gives us meaning. Without context we cannot understand why a behaviour occurred or whether it will occur again.
Now for our third, and final, problem; are the appropriate contingencies available to allow innovation in the first place? Let's assume that we all have a self-interested desire to see innovation happen on a large scale. No one wants some genius locked up and unable to communicate or practice his new ideas.
Yet this can be quite a controversial point. I want to use the recent interest in shale gas (controversial in its own right) to help me make my point. Shale gas is, in and of itself, innovative. It offers a way to expand our energy supply and can potentially act as a bridge to a cleaner, cheaper form of energy, such as advanced nuclear reactors. How does this innovation come to light? Freedom. Freedom to experiment. Freedom to apply. There is a reason North Korea doesn't have a booming energy (or any other) industry.
So there are my three pre-conditions for innovation: knowledge (and the ability to acquire it), the ability to apply that knowledge to real-world problems, and finally the freedom to apply it.
There are no easy answers, and I can only offer my opinion (and I am always open to counter argument). The problems we face in society are not insurmountable, but they will need tackling eventually. If we want to succeed as a culture we need to start thinking more seriously about what we are doing - stop planning society based on ideology and start basing it on reason. I believe we can create a society where people are naturally innovative - as a norm, not an exception. Where the question of whether something should be allowed to be tried is never even asked. Where people take personal responsibility for improving their lives (and the lives of everyone else along with it).
Shale gas may prove to be dangerous, or useless, or simply too expensive. But imagine a world where we could never even try to find out.
Saturday, 10 August 2013
Conceptual Discussion; Determinism
Behaviour Analysis is grounded in the philosophy of radical behaviourism. Although a knowledge of radical behaviourism is unnecessary for practicing behaviour analysts, it can help, when trying to understand the “big picture”, to have a basic grounding in some of the assumptions that underlie the science.
In this blog I’d like to discuss one of the most controversial aspects of radical behaviourism – determinism. The chances are that you’ve heard of determinism and know something about it, but I’d like to proffer a definition anyway for our purposes here:
Determinism (in science) is the belief that any effect must invariably have a cause. Nothing happens through some capricious free agent; everything is instead a product of the natural environment. – Me.
Delprato and Midgley1 describe the role of determinism in radical behaviourism as:
In Skinner's approach, this determinism assumption is fundamental for (a)
making human behavior amenable to scientific understanding and (b) what Skinner
viewed as the primary goals of science: prediction and control. This
assumption, however, does not imply any sort of mechanistic determinism in
which stimuli and responses are contiguous and the former impel the latter.
Note the important qualifier at the end. Determinism does not necessarily imply mechanistic stimulus-response relationships. Indeed, determinism is the belief that for every effect there must be a cause, but a causal event does not lead irrevocably to a set response.
Why is this idea controversial?
Within the natural sciences this position is not hotly contested. Long gone are the days of Aristotelian “jubilant motion” or the theocratic notion of “godly will”. We now understand that a rock falls to earth because of the force of gravity, and that a tree grows through photosynthesis, nutrient absorption and so on. There is nothing mystical about it; there is no “will to fall” or “will to grow” that causes these events.
The controversy comes when we try and apply this idea to human psychology. Here’s the deal: we in the West have a strong tradition of believing in free will. We believe we have complete control over our actions. We believe that in every situation we have a choice – a free choice, no less – and that in the end our behaviour is a product of... well, consciousness. Of course this is slightly circular; observe: I ask you, “Why do you behave?” You say, “Because I will it.” I retort, “How do you know you willed it?” And you answer... “Because I behaved.” It’s a somewhat useless argument that fails to offer anything valid to the debate.
The controversy is borne out of behaviour analysts insisting
that human behaviour is actually a product of our environment – it is determined – and is ultimately lawful
(and thus predictable).
Alas, I may have given you the wrong impression. I’ve implied in the preceding paragraphs that behaviour analysts don’t believe we have choice, or control, or anything; that we are literally puppets on a metaphysical string. Yet everyone knows that, when faced with the agonising decision of Chinese or Indian for tea... we definitely do make our own choices. But do we?
There is nothing controversial in suggesting that we make
choices like what to have to eat. So let’s analyse the decision of what to eat
and see if we can show how a seemingly “free” choice is actually entirely
determined by past events and environmental contingencies.
Let’s say I ask you what you’d like to eat. You reply that
you aren’t sure, either Indian or Chinese will do. What is the process you go
through when making this decision?
There are obviously a number of factors, and this is a very simplified version of the decision-making process, but what is really going on? Well, first: which do you prefer? Have you had a bad experience with one type of food? Did you have a dodgy curry one fateful evening? Or a sweet and sour pork populated with suspiciously un-pork-like meat? In the behaviour-analytic world we call this your reinforcement (or learning) history.
- Which do you prefer? Do you like curry more than chow mein?
- Which is closer? And is time until you eat important?
- Have you heard any particularly good or bad reviews about either place?
Second, how hungry are you? If you are starving, does a 45-minute drive to the nearest curry house strike fear and depression into your heart? Does the quick and convenient walk to the Chinese inspire you with thoughts of tucking into a crispy duck starter in 20 minutes flat? We call your level of hunger a Motivating Operation (MO), and the relative availability of the food a discriminative stimulus (Sd).
Finally, have you heard anything about either place? Did a friend tell you (in too much detail) about their fateful morning-after digestive problems following a House Special? Did you hear about the particularly excellent Kheema Naan? Depending on how you look at it, we could call this rule-governed behaviour (although I am sure that is up for debate!).
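The variables just discussed – reinforcement history, the motivating operation (hunger), the discriminative stimulus (proximity) and rule-governed behaviour (reviews) – can be sketched as a toy scoring model. Every weight and value below is entirely hypothetical, and real behaviour-analytic prediction is nothing like a weighted sum; the sketch only shows how a seemingly “free” choice can fall out of measurable variables:

```python
# A toy sketch of the takeaway decision. Each term stands in for one
# of the variables discussed in the text; all numbers are invented
# purely for illustration.

def predict_choice(options):
    """Return the option with the highest score; a higher score means
    the behaviour is more likely to be emitted."""
    def score(o):
        return (2.0 * o["reinforcement_history"]      # past meals, good or bad
                + 1.5 * o["hunger"] * o["proximity"]  # MO x Sd interaction
                + 1.0 * o["reviews"])                 # what friends told you
    return max(options, key=score)

# Hypothetical values: you mildly prefer curry, but the Chinese is
# much closer and you are very hungry.
indian = {"name": "Indian", "reinforcement_history": 0.8,
          "hunger": 0.9, "proximity": 0.2, "reviews": 0.7}
chinese = {"name": "Chinese", "reinforcement_history": 0.6,
           "hunger": 0.9, "proximity": 0.9, "reviews": 0.4}

winner = predict_choice([indian, chinese])
print(winner["name"])  # Chinese: proximity wins when you are starving
```

Change the hunger value or the proximity values and the prediction flips – which is exactly the point: the choice tracks the controlling variables, not some inner act of will.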
So we see that a supposedly innocuous exercise of “free will” is actually a meticulously controlled and determined choice that can be predicted (with some accuracy) once you know the relevant factors. The interesting thing, however, is that you never feel controlled.
Determinism, then, is not the scary concept it is so often portrayed as. It merely recognises that behaviour does not occur in a vacuum. Now that is an incredibly important point, so let me reiterate it: behaviour does not occur in a vacuum; it is always contextual.
This is a very important aspect of behaviourism and something often overlooked when trying to understand behaviour. Context is the sum total of the relevant environmental factors (including learning history) that affect behaviour. For example, the ringing of a bell means many different things to different people and will elicit a number of different responses, but that doesn’t change the fact that the behaviour which occurs is still determined by the ringing of the bell.
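The bell example can be sketched in a few lines. The individuals and responses below are invented; the point is only that the response is a joint function of the stimulus and the individual’s learning history, not of the stimulus alone:

```python
# Toy illustration: the same stimulus ("bell") occasions a different
# determined response depending on each individual's learning history.
# All histories and responses are invented for illustration.

LEARNING_HISTORY = {
    "school_pupil": "pack up books",  # the bell signalled end of class
    "boxer": "stop fighting",         # the bell signalled end of round
    "pavlovs_dog": "salivate",        # the bell was paired with food
}

def respond(individual, stimulus):
    """The response is determined jointly by the stimulus and the
    individual's history -- never by the stimulus in a vacuum."""
    if stimulus != "bell":
        return None
    return LEARNING_HISTORY.get(individual, "orient to sound")

print(respond("boxer", "bell"))  # stop fighting
```

Same bell, different behaviour – yet each response is entirely lawful once the relevant history is known.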
So let’s wrap this up:
- Determinism is a key foundation of radical behaviourism
- It holds that human behaviour is actually a product of environmental contingencies and not some hypothetical inner cause
- It is a controversial principle to apply to human behaviour
- It provides the foundation on which human behaviour can be predicted and controlled
1 Delprato, D. J., & Midgley, B. D. (1984). Some fundamentals of B. F. Skinner’s behaviorism.
Tuesday, 6 August 2013
Permission to wax philosophic and a change for the better
I don’t often have bouts of philosophical meandering. I’m
not a florid person. Nor am I extravagant in the way I speak. I’m an introvert,
through and through. Get me going on a topic and I’ll talk like the wind but
try and get me to talk about myself and I’ll go quiet, get awkward and simply
give up the ghost and end the conversation – hopefully politely. I think that
now, however, is the time for some philosophical self-indulgence.
Ideas matter. Without them we are like helpless infants,
groping for an understanding that never comes, seeing only a frightening,
disconnected morass of concrete perceptions that never unify into any sort of
understanding. Thankfully no one really operates on this level. But ideas are
only helpful if they are true. I don’t want to get into a difficult debate
about the nature of truth, suffice it to say I believe that there can be
objective truth; that is to say, there are facts that can be discovered, pertaining
to reality, that exist independent of our wishes and beliefs. To put it in
standard philosophical terms: A = A.
I don’t think it’s presumptuous to say that few of us ever
check the premises by which we live. Even those of us that do, do so on such
narrow terms that we see reality clearly in only one field (we call these
people academics, or scientists) and more often than not hold false beliefs in
other areas of our lives (seemingly without contradiction). I by no means exclude
myself from this group. My speciality is human behaviour, and I have a lot to
learn, but I am at least part-way rational about it; so let me ask you, how much
of your understanding of human behaviour is based on superstition? Folklore?
Old wives’ tales?
If you shrug off this accusation, consider this: you see a teenager dressed in a full tracksuit, cap down low, swaggering along the street. You immediately tense; will he attack me? Abuse me? Do something uncomfortable and then accuse me of something? What is this based on? Past experience? Partly, but most of us only rarely experience these things; instead it’s an image, carefully crafted and reinforced by social mores, media, news, discussions with like-minded individuals and so on. What if he helps an old woman carry her bags? Or inquires about your day in a friendly manner? Will you change your opinion of people who broadly fall into this category? Unlikely. Instead you’ll write it off. The carefully constructed image is too psychologically comfortable to just cast away.
Another good example is with politicians. Have you ever
noticed how quickly you are able to explain away a mistake made by someone “on
your side”, whereas you can be quick and harsh when you notice a mistake by
someone on the other side of the fence?
The truth is most people don’t understand human behaviour.
Those of us that do profess to understand it will be the first to acknowledge what
we do know is only preliminary. The science is young. What we do know, however,
is powerful; very powerful.
The world is awash in problems, big ones, small ones, ones
that affect the whole of humanity and some that affect just you. The time isn’t
right yet to make changes on a global scale. I’ll discuss global change another
time. What we can change, however, is the little problems. The ones that affect
you.
And now to the crux of my point. You see now why I
soliloquised on waxing philosophic at the start of this blog post, I guess I am
in a reflective mood, and I’m a stickler for developing (hopefully) coherent
arguments. So here goes; I am overweight.
It’s not exactly a secret, and those who know me, know that
I am quite open and honest. I’m not avoiding the issue, I’m not ashamed really.
It’s just how I am. How I’ve always been. Unfortunately this makes the root of
my behaviour incredibly difficult to pinpoint. Why did I start over-eating when
I was so young? I have my theories but they are private. The problem is that I
over-eat. Or, more precisely, over-eat the bad things.
All the standard explanations fail me. I am not from a lower
socio-economic status, I do not live in a food “desert”, I don’t lack adequate
cooking skills, and whilst I am hardly well-off I can easily afford healthy
food – and when I can’t, I can usually create something passably healthy from
very cheap fare. So what am I to do? Should I just admit I have terrible will
power? Consign myself to the status of terminal (and I mean that in the full
implication of the word) obesity? Pass off the responsibility to my genes?
No. I know too much about human behaviour to give in to
those sorts of explanatory fictions. Essentially they pass the buck.
So I’ve decided to apply the skills I know best to solve
this problem once and for all. You may or may not have heard of the 5:2 diet, I won’t go
into an explanation in this blog but you can get the ebook cheaply, and find
out loads of info on the web. Basically it involves healthy eating interspersed
with light fasting. I’ve heard good things from colleagues and I intend to
trial it for at least two months to see if it has any effect. I’ll be weighing
myself (and providing waist measurements) daily and graphing them to try and
ascertain any effect. Part of the behaviour intervention will be posting it
online. Public commitment is a powerful motivator. Not to mention it’s a
wonderful learning opportunity for me.
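For anyone tempted to run the same kind of self-experiment, the daily measure-and-graph step is easy to automate. Here is a minimal sketch in Python; the weights are made-up illustrative numbers, not my data, and the window size and trend method are just one reasonable choice:

```python
# Self-monitoring sketch: daily weigh-ins, a smoothing average,
# and a simple linear trend to help ascertain any effect.

def moving_average(values, window=7):
    """Smooth day-to-day noise with a trailing moving average."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def trend_slope(values):
    """Least-squares slope of value against day index (units per day)."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical two weeks of daily weigh-ins (kg)
weights = [96.0, 95.8, 96.1, 95.7, 95.5, 95.6, 95.2,
           95.3, 95.0, 94.9, 95.1, 94.7, 94.6, 94.4]

smoothed = moving_average(weights)
slope = trend_slope(weights)
print(f"7-day average today: {smoothed[-1]:.2f} kg")
print(f"Trend: {slope:+.3f} kg/day")
```

A negative slope over a couple of months would suggest the intervention is doing something; the moving average matters because single daily readings bounce around too much to read directly off a graph.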
There is, however, a truth beneath the truth (a world below
the world) that I want to highlight. You can’t save the world with grandiose
actions and sweeping changes. Not the American Revolution, not the Communist
Revolution, not the internet, not globalisation, not the EU, not even the mini-skirts of
Mary Quant (god bless her!) can have the long-term effects they are meant to. They
can lay the foundation, create the architecture, provide the environment, but
ultimately individuals must decide to make the changes. It gives new meaning to
that tired old phrase “Be the change you want to see in the world”. But far
from the boring platitudes of aging hippies and new-age wannabes, this simple
phrase hides a simple truth; behaviour is something the individual does. Society
doesn’t “behave”, governments don’t “behave”, and even businesses don’t “behave”.
We do. And we make a choice in every situation how we are going to behave. Good
or bad, rational or irrational (and is there a difference between good and
rational?), the outcome is for us to decide. So I want to be an example, a
small, simple, example of what we can do to make the world a better place.
I won’t post the graph until more info has been collected,
it looks a little sad now and wholly unhelpful.
Friday, 2 August 2013
The Softer Side of Behaviour Change
Behaviour Change is my job and it’s
very easy to see it as just that, a job; just a process. The human side of it can become lost in the
theories and equations and so on. But at its core behaviour change is about people.
Behaviour change is often criticised for being too
mechanistic and harsh; more interested in the results and not the process. This
is, in part, a holdover from the halcyon days of lab-rat testing and
pigeon-boxes of yore. Yet it is also down to the seemingly heartless way
businesses re-organise to maximise efficiency (oh, and get rid of some ballast
along the way) and the way governments blithely ban or restrict activities
they see as inappropriate (or worse, politically useful). And so behaviour change
has become synonymous for some with things like the horrendous Soviet social
experiments, and the austere white-coated scientists in labs prodding and
poking and shocking and starving animals, all in the name of science.
But this is a myth.
Yes, once upon a time in days gone by behaviour was seen as
something mechanical, to be reduced to equations and symbols (you can thank
Watson and the Logical Positivists for that) but nowadays behaviour change is
anything but mechanistic and reductionist!
A brilliant paper by Delprato and Midgley explores
some of these aspects of Behaviour Analysis and puts those (and other) myths to
rest. But nonetheless the myths persist.
We are seen as manipulative and controlling. For example,
Nudge by Thaler and Sunstein was hailed as revolutionary in its simple
portrayal of complex ideas that appealed to a broad audience; it showed how we
can nudge behaviour to be better and
more effective without restricting the freedom of the individual.
Despite this the
reaction from some was vehement, to say the least. People were shocked that
governments could be inspired by such a supposedly totalitarian idea. This article here explains, in the typically hyperbolic terms of sensationalist media, how inevitably
the nudge will become a shove, citing New York Mayor Bloomberg’s efforts to
control citizens’ behaviour, presumably alluding to the soda ban that hit the
headlines recently, but completely ignoring the fact that outright bans are
simply NOT allowed in Nudge theory.
So what can be done? Well, a better effort on our part to
down-play the talk of controlling and manipulating, and instead highlight the
inherent freedom a behaviour change approach allows. The assumption of a lot of critics
is this: we are behaving rationally, but not in the way we are “supposed to”, as
defined by some shadowy agency, so along come behaviour change techniques to
force us round pegs into some government-approved square holes. This is simply
not true; yes, we can act rationally, but by default we don’t. Rationality is learned. And it’s damn
hard to actually learn.
Second, we have to re-assess our own goals within the
science. I am relatively new to academia, and far be it from me to start
demanding sweeping changes in the way we think and act. Yet it seems to me,
with the fresh eyes I bring, that a bit of positive PR would do us no end of
good. Not to mention a focus on inter-disciplinary efforts to make our work more
aesthetically pleasing to the man on the street and, in turn, more acceptable.
I don’t want to rattle on about this because it can become
dry quickly but it’s worth remembering that what is perceived as behaviour
change by the man on the Clapham omnibus and the scientists working on it can
sometimes be at odds. This is our fault for not portraying it in proper terms.
We aren’t against freedom, we aren’t against rationality, we aren’t against a
good life. We don’t want big brother, we don’t want intrusive regulations where
unnecessary, and we don’t want to control people like puppets.
I don’t have the answers, but maybe if we start asking the
right questions the answers will come.
Tuesday, 30 July 2013
The Misuse of Altruism (and why it matters!)
The term “altruism” is thrown around a lot in terms of
social capital, citizen democracy, behaviour change and so on. We talk about encouraging altruism; but what
do we mean?
When questioned most people give the layman’s answer: altruism = kindness. But does it?
The importance of language
Skinner spoke often and prolifically on the importance of
correct language use. Words have meanings: they refer to perceptual units in reality
(see Introduction to Objectivist Epistemology by Ayn Rand for an excellent, if
somewhat technical, look at epistemology and word usage) that are concrete and
distinct from other percepts. Each word refers to a concept that is
distinct from all other concepts. Sadly there is a trend in modern language to
use words pragmatically, as mere placeholders for whatever the speaker feels. This may be all well and good in
normal day-to-day language, but it can be deadly when used in the sciences. Words
are specific, and we have to bear this in mind.
So what does Altruism mean?
Altruism is a distinct philosophical view of morality that
states, at its core, that man should sacrifice in all things. Immanuel Kant, arguably one of the most important
philosophers in modern history, was an advocate of altruism. His philosophy basically
prescribed self-sacrifice. Now, when questioned, most people do not believe
such things. For example, we can’t sacrifice our food, or our air, or our money; we would die. It’s impossible to
consistently sacrifice. At its logical extreme, self-sacrifice means suicide.
Luckily this view is rarely held by people. Most people accept some degree of
self-interest as necessary for survival. In and of itself this is not a
problem. The problems arise when people use the term altruism to mean kindness.
The importance of kindness
Our society functions in part because of good will. When we
help a friend, when we hold a door open, when we volunteer our time, or sit
down and help our child with their homework, we are being kind. Being good. I
know of no philosophers (save perhaps Nietzsche) who don’t advocate some kind
of kindness (heh): and for good reason. Kindness has survival value. It helps
us build companies, institutions and services. It keeps the world turning in a pleasant,
civilised manner.
The problem
The problems come when a psychologist or policy maker talks
about Altruism as though it means Kindness. This is referred to as an intellectual package deal. Consider the
following example: a policy maker wishes to encourage kindness amongst school
children by instituting lessons on co-operation, sharing and so on. In the
legislation this is referred to as Altruism
training. The teachers are instructed to teach altruism as currently
described, i.e. being kind, sharing and so on. There are no complaints (after all,
altruism means kindness and kindness is good!) so off they go. Soon, however,
the government decides that Altruism can now be expanded to include mandatory “voluntary
service” (something now instituted in schools in America) as a function of
Altruism. Since no one bothered to properly define altruism in the first place
this is accepted (perhaps begrudgingly), because to deny it is to explicitly
deny altruism, something most people are unwilling to do.
Now this may seem like an unlikely example, but note that this has
actually happened, and continues to happen. Let me be clear: I support the fostering of pro-social skills.
I think kindness is a good thing. I disagree, however, with altruism. It has
demonstrably no survival value to an individual, and in the countries where pure
altruism has been mandated by law (Mao’s China, the USSR, Cuba and so on) it has led to
nothing but ruin, not to mention the complete breakdown of law and order. Conversely,
countries with a more self-interested model (the modern West) tend
to have higher rates of charity, kindness and good will from strangers. The
take-home message is: Altruism does NOT equal kindness.
Why does this matter?
This seems like a pernickety point mired in technical
details, but the words we use have a direct impact on the way we act and think.
We all know about the fabled “Catholic guilt”: the notion that Catholics,
taught quite aggressively that self-interest is bad, tend to feel constantly
guilty; a direct contraposition to what they are supposed to feel. It’s not
because self-interest is incompatible with kindness but, surprisingly, because Altruism
is incompatible with kindness. When we work on policy, or behaviour change, or
anything that involves people, we need to be clear that kindness is the
pro-social behaviour we want, not altruism,
because kindness has defined limits and can be successfully quantified, whereas
Altruism becomes a blank cheque for the systemic abuse of people’s good nature for
malicious ends.
Final thoughts
A further reminder: I am NOT against kindness and so on.
Nothing makes me feel better than helping out a friend in need. But that doesn’t
mean I should accept the whole package of self-sacrifice as a way of life, and
nor should anyone. A civil society can and will (I am ever the optimist) be
forged around the ideas of good will and kindness. They are essential to our
continued improvement and survival, but to get there we need to uncouple the
idea that Altruism = kindness.
Wednesday, 24 July 2013
Innovation, the Individual and the State
Governments in the West are finally beginning to recognise
the powerful role of innovation. That is to say, they are recognising that human
beings thrive when they are encouraged to think creatively about the problems
they face. Increasingly, the age-old methods of mechanistic taxation, subsidy,
incentives and punishments are being eschewed for more dynamic models of
behaviour change. For example, large taxes on cigarettes have not been sufficient
to reduce smoking (and in turn have not allayed the massive healthcare costs which
in modern society are in part borne by the state). Recent proposals to
introduce plain packaging for cigarettes are designed to reduce the allure of
brand recognition (and supposedly the discriminative function of such stimuli
[see my Behaviourism article]). On the face of it this seems like a good,
pragmatic suggestion (setting aside ethical arguments), but consider it in some
detail. The assumption underlying it is that smoking behaviour is a function of
the availability and salience of cigarettes; a fair assumption, but perhaps not
the whole story. Instead, consider the effect it will have on public perception
of smoking. By making it something hidden, something secret and frowned upon, you
risk increasing the appeal amongst certain individuals in so far as it becomes
a grown-up thing to do, something secretive… almost like a club! How powerful
does it feel to be part of a special club? We all know the sense of belonging
it creates.
This is not a blog about smoking regulation and I’ll leave
that sticky subject for another blog, but my point is that standard approaches
to managing behaviour have been ineffective.
Considerations about the individual are often left entirely
out of an analysis. Similarly, the negative effects of state intervention are often
ignored or treated as a necessary evil. We often hear talk about “innovation
in society” or how “our community can innovate”. These are fancy buzz-words
picked up by politicians, designed to garner votes, but they don’t hold up to
serious analysis. Anyone who works in a sector charged with innovation on any
level knows how deeply personal innovation is. It often requires
collaboration, but collaboration is not its defining feature.
The individual
Each person is unique. We all have a unique learning history
and, as such, set of behaviours. Our thinking (after all another type of
behaviour) is equally unique, especially as we diversify. A psychologist sees
the world in a very different way to a designer and an anthropologist. Each of
us, however, is capable of innovating in interesting and important ways. For
example, as a behavioural psychologist, I tend to view things in terms of – you
guessed it – behaviour. In designing a unique intervention I may be biased
towards my own literature, my own way of looking at things. I may,
concurrently, completely ignore the role of visual design (since I am not
skilled in it) and as such miss an important innovative opportunity (and vice
versa).
The State
The state has an important role in civilised society. It is
a policeman, a peacemaker, a co-ordinator, and a protector. It has legitimate
functions and illegitimate functions. When the state over-steps its bounds it
can wreak havoc with innovation. Example time again! Consider the NHS. The
pride and joy of every British citizen. A shining light in the modern world.
Unfortunately it has failed to innovate. Mired in bureaucratic roundabouts it
becomes impossible (indeed illegal!) to innovate within the confines of it. Any
innovation that does occur is slow and ineffective because it becomes dependent
on government approval. Want to trial a new way of handling A&E? Forget it.
Want to reorganise a ward for better management? No way. Those on the ground;
nurses, doctors, cleaners, healthcare workers of all stripes, are hog-tied and bound
by rules and regulations. Those charged with managing the NHS are given wide
powers but are not directly connected (nor skilled enough) to make the
appropriate changes, similarly they are hidebound to obey those above them and
so on. When you finally get to the top – parliament – you are faced with
opposing political factions all with their own ideas of what an NHS should look
like and all obstinate in their refusal to agree on anything that could
possibly help patients because it violates their ideological commitment.
The Truth
It’s a hard pill to swallow (pun intended!) but innovation
requires freedom; importantly it requires the freedom to fail. In our highly
controlled public sector innovation is all but stifled. Failure is not
tolerated. In fact there is a direct incentive to stick with the status quo,
because those who do risk it all and fail are summarily fired, publicly
humiliated and witch-hunted. On the one hand we demand better services; on the
other hand we can’t stomach the conditions required.
Something has to give
If we are truly serious as a society about innovation we
have to recognise some vital truths. The age of big government, top down
intervention and micro-management of public services has to end. This is not an
ideological argument but a pragmatic one. We cannot have our cake (perfectly
symmetrical, all-purpose public services) and eat it too (a rapidly innovative
set of services).
So what’s to be done?
It’s impossible to say for sure what needs to be done (we
need some meta-innovation for that one!) but what we do know is that our public
services need more breathing space. We need to introduce incentives for
trialling new ways of doing things. We need to be prepared to fail. We need to
respect the role of the individual in such measures. Instead of rejecting
evidence-based interventions in favour of political point scoring our
governments have to recognise that more autonomy has to be granted to public
services in order that they can make bespoke changes to respond to the evolving
needs of the citizenry they serve. If introducing a flat fee for hospital
admissions reduces the strain on our doctors and nurses, we have to be prepared
to implement it. If a widening of police powers decreases crime, we have to be
prepared to do it. More importantly, though, we should be allowed to see it fail.
The answer is not a simple one. There is a long argument to
be had about how much freedom is necessary both to safeguard public services
for as long as people want them and to make sure those services don’t devolve
into inefficient drains on limited resources; in the end they are funded by our
taxes, and in a very real respect we are all owed well-run services. Central to
it all though is a very simple truth. We either innovate, and on a massive
scale, or we risk losing the services we hold so dear. Can we let go of
preconceptions about what a government and an individual should and should not
do in order to achieve the best possible world for everyone?
Tuesday, 16 July 2013
The Behaviour of a Behaviour Changer
As you may be able to tell from my “about me” page, I was recently hired by the Wales Centre for Behaviour
Change as a Behavioural Psychologist. Since I got the job a number of people have asked me… “What do
you do?” and often I have to stop and backtrack my thoughts because… well…
it’s not exactly easy to describe what a Behavioural Psychologist does;
it’s more about what a Behavioural Psychologist is.
Simply put I provide theoretical expertise to the team I
work with, and I help with intervention design for businesses that work with
us.
But that’s not what I am; it’s just what I do. What I
am is something very different… and it’s hard to come to terms with.
So a little bit of context is necessary here: four years ago I
started my undergraduate degree at Bangor University. I was 18, impressionable
and armed with only a vague sense of what I wanted to do with my life. At first
it was difficult to acclimatise to my new role as “student”; I was equipped
with my own home, disposable income, responsibility. No one was breathing down
my neck, and no one was following up on my every move. I was free, but trapped
by the unfamiliar contingencies of my surroundings.
Over the three years of my degree I got used to it. Then,
just as I was used to being a student I became a post-grad. Damn, now there is
a jump! The most interesting thing for me was seeing with hindsight how
controlled undergraduate study was (even though relatively it was free),
compared to post-grad which was very hands-off. There’s no spoon-feeding. It’s
sink or swim.
So that’s me until now. I was a student, and then I was a
post-graduate student. If you aren’t familiar with academic culture, there is a big
difference between an undergraduate and a postgraduate in terms of respect and
expectation, but there is a sheer gulf between student and behavioural
psychologist.
When I found out I had got this job I was elated. The
reality, of course, hadn’t really set in. At first I didn’t believe it. I told
myself it wasn’t what I thought it was. I told myself it would be revoked. I
told myself everything except; “you’re going to be a Behavioural Psychologist”.
I started the job.
The weirdest thing is that people actually listen to me. Imagine going to work every day and being told what to do by people who know way more than you, and then once you just start to master a task they pull the rug out and expect you to learn something new again – imagine this repeating itself for four years and you’ve got yourself the basic student experience. Now imagine the next day you go into work and all of a sudden people are hanging on your word, nodding as you speak, actually asking your advice. That’s what it’s like to go from student to Behavioural Psychologist.
It’s ridiculously scary.
So one and a half months into my job, what am I? Well I’m a
behavioural psychologist and I’ve shrugged off most of the student mantle. It’s
amazing really how much we are defined by the opportunities presented to us.
Would I be as I am had I simply passed into a PhD program typical of my peers?
Probably not. I’d have assumed a different set of behaviours to cope with the
differing contingencies. There’s a lesson, if you’re willing to look. We are a product of our environment. Our self-image, our sense of responsibility, our way of relating to ourselves and others: all are tied inextricably to the context we find ourselves in.
P.S/ A bit of meta-reflection – as I sit here writing this
blog I’m casting my mind back to the last month or so, trying to distil the
essence of it, and as I re-read the words I’ve written it all seems much of a
muchness. It’s anti-climactic. More of a musing. I hope I haven’t bored you
with it!