What is Imagination?


The dictionary defines imagination as
a) the formation of a mental image or picture
b) the formation of an idea
c) the formation of a concept that is not real or present
d) the faculty permitting visionary or creative thought.

What I am concerned with is this faculty of imagination: for example, writing a musical composition or a novel, forming a scientific theory, or designing a car or a jumbo jet. These are creative acts.

Materialists since the Enlightenment have argued that imagination is the recombination of ideas, and that ideas are abstracted from the senses.

Hume

David Hume (1711-1776) said:
“…this creative power of the mind amounts to no more than the faculty of compounding, transposing, augmenting or diminishing the materials afforded to us by the senses and experience. When we think of a golden mountain we only join two consistent ideas, gold and mountain, with which we were formerly acquainted. A virtuous horse we can conceive; because, from our own feeling, we can conceive virtue; and this we may unite to the figure and shape of a horse, which is an animal familiar to us. In short all the materials of thinking are derived either from our outward or inward sentiment: The mixture and composition of these belongs alone to the mind and will. Or, to express myself in philosophical language, all our ideas or more feeble perceptions are copies of our impressions or more lively ones.” [Hume 1748]

Hume did not say how he selected these two concepts out of the millions we have. Nor did he say whether a combination of two concepts is always meaningful, or how to select a useful combination.

‘Golden mountains’ and ‘virtuous horses’ seem to be only the stuff of novels.

Hume did not address how we can talk about horses and mountains in the first place. These things do not exist in our heads but in the world.

Huxley

In the 19th century Thomas Huxley (1825-1895) claimed that conscious choice played no part in this combining of two or more concepts:

“The consciousness of brutes would appear to be related to the mechanism of their body simply as a collateral product of its working, and to be as completely without any power of modifying that working as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery. Their volition, if they have any, is an emotion indicative of physical changes, not a cause of such changes. … the argumentation which applies to brutes holds equally good of men.” [Huxley 1912]

This view is called ‘Epiphenomenalism’ (Greek epi = upon; phainomai = to appear). It is contrary to the common notion that our thoughts and desires cause our bodies to move in certain ways and towards certain goals.

The epiphenomenalist says physical events in the brain cause these thoughts and desires. Physical events, such as dropping a weight on your foot, cause impulses in the brain which you experience as pain; but you are not able to originate any actions. The physical processes in your nerves and brain cause the screams and hops which follow. ‘You’ are an impotent spectator stuck inside an autonomous robot. There is nobody home.

Behaviourism

The 20th century school of psychology known as behaviourism espoused epiphenomenalism: behaviour is always a reflex response to some stimulus. The founder of this school, John B Watson (1878-1958), held that

“… such things [as concepts and general ideas are] mere nonsense; … all of our responses are to definite and particular things. I never saw anyone reacting to tables in general but always to some particular representative… The question of meaning is an abstraction, a rationalisation and a speculation serving no useful scientific purpose.” [Watson 1920]

We have now got rid of meaning.

In the same vein the Oxford philosopher Gilbert Ryle (1900-1976) contended that we could paraphrase any statement involving ‘mind words’ such as ‘imagine’ without loss of meaning. A statement about behaviour would be as good or better:
“picturing, visualising, or ‘seeing’ is a proper and useful concept but its use does not entail the existence of pictures which we contemplate or a gallery in which such pictures are ephemerally suspended. Roughly imaging occurs, but images are not seen… True, a person picturing his nursery is, in a certain way, like that person seeing his nursery, but the similarity does not consist in his really looking at a real likeness of his nursery, but in his really seeming to see his nursery itself, when he is not really seeing it. He is not being a spectator of a resemblance of his nursery, but he is resembling a spectator of his nursery…. There is no answer to the spurious question, ‘where do the objects reside that we fancy we see?’ since there are no such objects.” [Ryle 1949,2000]

The mind has disappeared!

The psychologist B F Skinner (1904–1990) applied the behaviourist principle to political theory. He advocated the abolition of the idea of ‘Autonomous Man’. “Freedom and dignity illustrate the difficulty. They are the possessions of the autonomous man of traditional theory, and they are essential to practices in which a person is held responsible for his conduct and given credit for his achievements. A scientific analysis shifts both the responsibility and the achievement to the environment.” [Skinner 1971, 1973].

Now, even the brain has disappeared.

The self-negating nature of the above ideas does not seem to have bothered any of their advocates. The arguments, in whatever way they are mistaken, are certainly imaginative.

Artificial Intelligence

One of the reasons that behaviourism has fallen somewhat from favour is the rise of ‘Artificial Intelligence’. Computers have internal states. So the idea that the internal states of human beings are irrelevant is no longer plausible (if it ever was).

No one has yet built a machine that displays ‘imagination’ as well as a human. Nor one that displays the versatility of a human.

Some computer programs can now make a passable attempt at music, novels and the like. IBM’s ‘Deep Blue’ beat the human world champion at chess, and DeepMind’s ‘AlphaGo’ has beaten the world’s best players at the even more complex game of go. This has given rise to doomsday scenarios in which computers think for themselves. They may decide that humans are suitable merely as pets or, worse, that humans are dispensable altogether.

This idea depends for its force on the asserted equivalence of the human brain and a computer. In the early days of AI it was claimed that humans functioned according to certain rules, such as ‘if I am thirsty, have a drink’. It soon became clear that there were so many rules that it was impossible to code them all. The end result of this technology was a few applications in narrow domains of expertise.

It also led to wide-spread disillusionment with AI.

The next technical advance which revived the hope for AI was the use of statistics to digest huge quantities of input data and ‘learn’ a task. Applied to speech, this approach gave rise to automatic translation. You can now speak into a smart phone in one language and get back the ‘same’ meaning in another language.

But this technique has limits. The translation programs need huge quantities of text in the two languages to ‘understand’ how they correspond, and they do not understand what the various sentences refer to. Humans can understand a far bigger vocabulary than the best translation apps; they can understand made-up words; and they understand that words refer to things and events in the real world.

The same approach to ‘machine learning’ led to automatic face recognition. An accessible explanation of how such ‘neural networks’ function is on YouTube [Serrano 2017]. These machines need vast numbers of example faces to become proficient; humans can recognise a person after a single sighting.

The technique which currently has the AI community buzzing is so-called ‘deep learning’. This depends on many-layered artificial ‘neural networks’. Each layer learns aspects of the task to be accomplished by the whole machine, and these are integrated in successive layers into appropriate conclusions or actions. This is the kind of thing used in self-driving cars, where many sensors feed into the lower layers of the network.
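The layered arrangement can be sketched in a few lines of code (a toy illustration only: the weights here are random stand-ins for what training would normally learn, and real networks have millions of them):

```python
import numpy as np

# A toy two-layer 'neural network' forward pass: the lower layer extracts
# features from the raw inputs, the upper layer integrates them into outputs.
rng = np.random.default_rng(0)

W1 = rng.normal(size=(4, 8))  # layer 1: 4 sensor inputs -> 8 learned features
W2 = rng.normal(size=(8, 2))  # layer 2: 8 features -> 2 output 'decisions'

def relu(x):
    return np.maximum(0.0, x)  # a common non-linearity between layers

def forward(sensor_readings):
    hidden = relu(sensor_readings @ W1)  # each layer feeds the one above it
    return hidden @ W2

scores = forward(np.array([0.2, -1.0, 0.5, 0.3]))
print(scores.shape)  # (2,)
```

In a real system the weights are adjusted by training on data rather than drawn at random; stacking more such layers is what makes the learning ‘deep’.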

Such computers may make fewer mistakes than a human on similar tasks, but they are not infallible. They do not tire, but they are not conscious. They can’t extrapolate from experience in the same way that humans can.

Some computer scientists who have invested many years in the pursuit of ‘AI’ agree:

“Today’s AI, which we call weak AI, is an optimizer, based on a lot of data in one domain that they learn to do one thing extremely well. It’s a very vertical, single task robot, if you will, but it does only one thing. You cannot teach it many things. You cannot teach it multi-domain. You cannot teach it to have common sense. You cannot give it emotions. It has no self-awareness, and therefore no desire or even understanding” [Lee 2018]

Of course, the entities which display ‘imagination’ in the development of AI are the AI designers.

Imagination is Creation

My purpose is not to define ‘imagination’ exactly. I assert that human beings have such a faculty.

The starting point for a philosophical or scientific investigation is recognition that human beings have the ability to perceive, understand, imagine, communicate and act.

Human beings are able to think creatively to a greater or lesser degree. Jumbo jets now exist when a century ago they did not. By any criterion that is creation.


References

[Hume 1748] Hume D An Enquiry concerning the Human Understanding Sect II (modern version ed E Steinberg)

[Huxley 1912] Huxley T Method and Results Macmillan : p240, p243 available at
http://www.archive.org/details/methodresultsess00huxluoft

[Lee 2018] Lee K-F We are here to create available at https://www.edge.org/conversation/kai_fu_lee-we-are-here-to-create

[Ryle 1949,2000] Ryle G The Concept of Mind Penguin p234, 237

[Skinner 1971, 1973] Skinner B F Beyond Freedom and Dignity Penguin p30

[Watson 1920] Watson JB Is Thinking Merely The Action Of Language Mechanisms? British Journal of Psychology11, 87-104. available at
http://psychclassics.yorku.ca/Watson/thinking.htm

[Serrano 2017] Serrano L A Friendly Introduction to Neural networks and Image Recognition available at
https://www.youtube.com/watch?v=2-Ol7ZB0MmU

What is Understanding?

What does it mean to say one understands something?

The dictionary lists three main meanings for ‘understanding’:

  • the perception and comprehension of the ideas expressed by others
  • the power of forming sound judgment about a course of action
  • something mutually understood or agreed upon.

So understanding requires

  • perception of what is and is not in one’s environment,
  • a familiarity with the subject under discussion and
  • experience of the consequences of actions and events.

With understanding one can think and act flexibly.

Aristotle

Aristotle (384-322BC) discussed what exists in his book The Categories [Aristotle c330BC].

What exists according to Aristotle (simplifying it rather) are entities or things. For example: a rock, a house, an animal, a person. Entities persist over a period of time and can change their properties and react to events.

Entities have properties such as colour, weight, intelligence etc. Properties can be said to ‘exist’ but they cannot exist without the entity. There is no such thing as the colour ‘red’ without some entity which we describe as red.

This was a big departure from the ideas of Aristotle’s teacher, Plato (427-347BC). Plato held that the properties of things were the truest form of existence. Thus all the things we describe as red are pale imitations of the true ‘RED’ which exists in the real ‘World of Forms’. The world in which we live is a mere shadow of the ‘World of Forms’.

Natural Law

The ‘World of Forms’ may seem strange and counter-intuitive, but modern science and philosophy hold to the idea of ‘Natural Law’. This idea comes from the laws discovered in the realm of physics.

Physical laws are not universal generalisations about particular things (cats, desks, planets). They are rather statements about universal properties (eg mass, charge, momentum). ‘Red’ isn’t a universal property but energy is.

Even so, physicists tend to think of objects like photons and electrons as carriers of energy etc.

Plato’s legacy is that some people hold that the world we apparently live in is not the real world.

Computers

Those objects in the real world that ‘carry’ understandings are human beings. How they actually do this is the subject of much philosophical debate.

One popular idea is that ‘all thought is computation’. This is known as ‘strong artificial intelligence’ or ‘computationalism’. So understanding is some kind of computation. Computation can be defined in mechanical terms. On this view some configuration of levers and cogwheels can understand something. A thermostat, for example, can understand temperature.

In the 19th century Charles Babbage (1791-1871) published designs for a computer which consisted of cogwheels, cams and so on. It is difficult to believe that such a computer would actually understand anything.

Modern computers do not change this conclusion. They score over Babbage’s computer only in miniaturisation, scale (number of ‘cogwheels’) and speed.

There is a subjective ‘feel’ to understanding something. There is an even more pronounced ‘feel’ to not understanding something (doh!). If humans are mechanical computers of some kind there seems to be no reason why we feel anything. For example, why do we feel pain?

Mathematics

The mathematician Roger Penrose admits he doesn’t really know what ‘understand’ means. He thinks this is because he is a mathematician: mathematicians do not need precise definitions of the things they are talking about, only to say something about the connections between them. [Penrose 1993]

Penrose thinks computationalism must be wrong, because some mathematical things are not computational. For instance, there is a conjecture due to Goldbach (1690-1764) that every even number greater than 2 can be expressed as the sum of two primes (eg 40 = 17 + 23, 198 = 97 + 101). No computer program can settle the conjecture by checking cases, since there are infinitely many even numbers to check. Trials show that it is true up to about 10^18.
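The case-checking that such trials involve is easy to program; what no program can do is exhaust the infinitely many cases. A minimal sketch:

```python
# Checking Goldbach's conjecture case by case. A program like this can
# confirm any individual even number, but can never settle the conjecture,
# since there are infinitely many cases left to check.

def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n):
    """Return some pair of primes summing to the even number n (> 2)."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None  # a None here would refute the conjecture

for n in range(4, 1001, 2):
    assert goldbach_pair(n) is not None  # holds for every even n tried

print(goldbach_pair(40))   # (3, 37) -- 40 is also 17 + 23
print(goldbach_pair(198))  # (5, 193) -- 198 is also 97 + 101
```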

Penrose holds that no specific set of rules makes up ‘understanding’.

Memes

Another idea is that the mind consists of ‘units of cultural transmission’ called ‘memes’. The biologist Richard Dawkins coined this term by analogy with ‘gene’. Various principles, catch-phrases, fashions, ways of making pots, even tunes are ‘memes’.

“Memes propagate themselves in the meme pool by leaping from brain to brain via a process which in the broad sense can be called imitation.” [Dawkins 1976]

Philosopher Daniel Dennett takes this up:
“A human mind is itself an artefact created when memes restructure a human brain in order to make it a better habitat for memes” [Dennett 1995.1]

Dennett expands this idea: “A scholar is just a library’s way of making another library” [Dennett 1995.2 ]

Dennett provides no account of how memes replicate. Do they blend like cake ingredients, or do they have dominant and recessive characteristics like genes? [Orr 1996]

So ‘understanding’ on this view is having the appropriate meme in operation at the appropriate time. Do you have a library?

Scientism

There are some people who think that the only understanding there is is scientific understanding. For instance, the neuroscientist Sam Harris claims that
“questions of right and wrong, good and evil are questions about human and animal well-being. The moment we admit this we see that science can, in principle, answer such questions – because the experience of conscious creatures depends on the way the universe is.”

‘Well-being’, according to Harris, includes not only happiness, but also “truth, justice, fairness, intellectual pleasure, courage, creativity and having a clear conscience.” This approach to morality “will completely dislodge religion from the firmament of our concerns. The world religions will land somewhere near astrology, witchcraft and Greek mythology on the scrapheap. In their place we will have a thoroughgoing understanding of human flourishing.” [Gefter 2010]

This view is an example of ‘scientism’: the belief that the methods of the physical and natural sciences are appropriate (or even essential) to all other disciplines, including philosophy, the humanities and the social sciences. [Burkeman 2013]

So what are the methods of the natural sciences? Some eminent scientists, such as Sir Peter Medawar (1915-1987), contend that there is no such method. Consider a ‘scientist’ who set out to find a cure for (say) rheumatoid arthritis and failed. If there really were a ‘scientific method’, he could only have failed because he did not know the method or was too lazy to apply it; in either case he should be fired. Since nobody draws that conclusion, there is no such recipe. [Medawar 1984]

The Sciences

Perhaps the best exemplars of our understanding are the sciences. But this does not mean that the only understanding there is comes through science. People understood a lot about a lot of things long before science was formulated as a discipline (if it ever was).

Philosophers Bennett and Hacker contend: “it is absurd to suppose that science … is the primary measure of what does and does not exist. One needs no science to discover…that there is a tree in the garden or that there are no trees in one’s room.” [Bennett & Hacker 2003]

But we can say what scientific theories and discoveries have done for us. They explain phenomena. They enable us to predict new phenomena. They enable us to some degree to control the area under study.

Generally understanding proceeds from explanation through prediction to control, but not always. Steam power was controllable before there was any proper explanatory theory.

Understanding

From this we can define ‘understanding’ in operational terms:
‘Understanding’ is the appreciation of the properties and behaviour of things in the real world to the point where we can

  • explain phenomena and events,
  • predict new ones and – ultimately
  • control them.

It is not necessary to be able to do all three perfectly: there are degrees of understanding.

A thermostat fails the test.

The ‘ability of humans to perceive, understand, imagine, communicate and act’ is the fundamental starting point for any theory of the mind.

You will find this discussed in the first chapter of my book ‘Rethinking the Mind’.
Here: https://www.amazon.co.uk/Rethinking-Mind-1-Historical-Perspective-ebook/dp/B007JYFHVM

References

[Aristotle c330BC] The Categories (transl WD Ross) available at
www.constitution.org/ari/aristotle-organon+physics.pdf

[Bennett & Hacker 2003] Bennett MR & Hacker PMS The Philosophical Foundations of Neuroscience Blackwell Publishing p374

[Burkeman 2013] Burkeman O “‘Scientism’ wars: there’s an elephant in the room, and its name is Sam Harris” The Guardian 2013/8/27 available at https://www.theguardian.com/news/oliver-burkeman-s-blog/2013/aug/27/scientism-wars-sam-harris-elephant

[Dawkins 1976] Dawkins R (1976,1989) The Selfish Gene Oxford University Press p192

[Dennett 1995.1] Dennett D Darwin’s Dangerous Idea Penguin p365

[Dennett 1995.2 ] Dennett D Darwin’s Dangerous Idea Penguin p346

[Gefter 2010] Gefter A “Crusader for Science (interview with Sam Harris)” New Scientist vol 208 (2782) p46-47

[Medawar 1984] Medawar P The Limits of Science Oxford University Press p51

[Orr 1996] Orr H A “Boston Review: Dennett’s Strange Idea” available at
http://www.bostonreview.net/BR21.3/Orr.html

[Penrose 1993] Penrose R Shadows of the Mind Vintage p68

What is Perception?

[Image: Adelson’s chequerboard illusion]
I regard the ‘ability to perceive, understand, imagine, communicate and act’ as the fundamental starting point for any theory of the mind. You will find this discussed in the first chapter of my book ‘Rethinking the Mind’. This post concentrates on perception (the first of the five abilities) and what it is.

Representationalism

The predominant idea in modern neuroscience concerning perception is that what we perceive are images or representations in the brain. In other words the world around us is an hallucination created by the brain (see for example [Seth 2017]).

This view is known as ‘Representationalism’.

It is wrong.

What we perceive are the various objects, and their attributes and behaviours in the real world.

‘Hallucination’ in common parlance means ‘seeing or hearing things that are not there’. According to psychology professor Benny Shanon of the Hebrew University in Jerusalem, [Shanon 2003] the characteristics of those things we describe as hallucinations are:

  1. “Vividness: subjectively the experience is that of a vivid perception.
  2. Non-correspondence: Factually the experience does not correspond to any real objects or state of affairs in the real world.
  3. Ignorance: the cognitive agent however is not cognizant of 2).
  4. False Judgment: hence the hallucinatory experience involves false judgment on the part of the cognitive agent.
  5. Negative evaluation: Thus overall the hallucinatory experience is evaluated pejoratively, and it is assumed that it is of no positive import. Typically experience is taken to be indicative of some psychological impairment.
  6. Dismissal: Implied in all this is the assessment that any person other than the one having the hallucinatory experience will adhere to the negative evaluation indicated in 5).”

So to classify perception as an hallucination is to deny all objective criteria about what is and what is not. This justifies the current philosophical fad called ‘postmodernism’.

Postmodernism is a term applied to certain approaches in the social sciences and philosophy. It is characterised by ethical relativism and subjectivity. It emphasises the social construction of ‘knowledge’. It is generally sceptical towards science. Thus, “if it’s true for you, it’s true”; “there are facts and alternative facts”; “there is western truth and Russian truth”; etc.

Perhaps rather than neuroscience validating postmodernism, it is the other way around. Perhaps ‘neuroscience’ as it is currently practised is on rather shaky philosophical ground.

Direct Realism

The Scottish Enlightenment philosopher Thomas Reid (1710-1796) long ago recognised that we cannot deny certain principles consistently. Among these principles is: “Those things that we clearly perceive by our senses really exist and really are what we perceive them to be.”[Reid 1785]

For if we deny this principle we cannot meaningfully converse with others. You might think this thing is a lion but I think it’s a typewriter.

Reid’s principle is known as ‘direct realism’.

Reid did not deny that our senses can deceive us; neuroscientists are forever finding new ways in which they can. In the picture above, the two areas labelled ‘A’ and ‘B’ appear to be different shades of grey (Adelson’s chequerboard illusion). They are actually the same shade, which becomes obvious when the rest of the picture is blocked out.

But it does not follow from the fact that our senses can be deceived that they always are, or even that they are deceived most of the time. We can be caught up in a movie and identify with the characters, but we can still recognise the fact that it is a movie.

Once the chequerboard illusion above is pointed out, we recognise the truth. Only postmodernists and neuroscientists conclude from this that nothing is real.

Neuroscientists might retort that, since everything goes on in the brain, neuroscience at least knows that the brain is real; they profess to be good card-carrying materialists. But this contradicts the idea that the world is an hallucination: if everything we perceive is an hallucination, so is everything we perceive of brains.

You perceive by using your senses of sight, hearing, taste and smell. You see with your eyes; you hear with your ears; and so on. You do not see, hear, taste or smell with your brain. The brain is not an organ of perception, even though you cannot see, hear etc without it. You, the person, are what perceives, and this is manifest in the way you act and communicate.

The senses are not infallible. It is possible to find that what you thought you perceived was not actually what was there. In this case you say, “I thought … but it turned out that …”.

JJ Gibson

The psychologist JJ Gibson (1904 – 1979) classified vision in four distinct levels [Gibson 1986]:

  • Snapshot when the eye is stationary and functioning like a camera;
  • Aperture when the eye is able to scan the environment from a fixed position;
  • Ambient when the organism is able to turn its head and look around;
  • Ambulatory when the organism can walk around.

Gibson investigated these types of vision. In one experiment he had subjects look through a camera shutter so that they obtained a ‘snapshot’ wide-angle view of the environment for a fifth of a second. The subject had to find out what objects were on a table in front of him. He could take as many ‘snapshots’ as he liked and he could scan the table by moving his head.

Perception was seriously disturbed and the task was extremely difficult. What took only a few seconds with normal looking required many fixations…there were many errors.

Gibson emphasised the need for a person to move and see things from different perspectives to perceive what is really there. There are several clues in the environment during ambulatory vision such as perspective, parallax, occlusion of one surface by another and so on that locate an organism in its environment. In particular the organism can perceive itself as well as its environment. “Information about the self accompanies information about the environment” because we perceive parts of our own bodies.

Gibson’s experiments reveal that perception is not a passive process whereby photons strike the retina and stimulate various neural processes which are labelled as ‘perception’. It is an active process in which the person deliberately changes his viewpoint (by moving his eyes, head and body) to extract from the environment those things that are invariant. Thus we perceive a table top as a rectangular surface even though all the retinal images relating to the table top are trapezoids of varying shape.
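The table-top point can be made concrete with a toy perspective projection (the camera geometry here is made up for illustration): the same flat rectangle projects to differently shaped trapezoids from different viewpoints, yet the rectangle itself is the invariant behind them all.

```python
import numpy as np

# A flat rectangular table top, 2 m wide and 2 m deep, with its surface
# 1 m below eye level and its near edge 2 m away from the viewer.
table = np.array([[-1, -1, 2], [1, -1, 2], [1, -1, 4], [-1, -1, 4]], float)

def project(points, eye):
    """Simple pinhole projection onto an image plane at unit focal
    distance; the viewer at `eye` looks along the +z axis."""
    p = points - eye
    return np.column_stack([p[:, 0] / p[:, 2], p[:, 1] / p[:, 2]])

# Two viewpoints give two different trapezoids: the near edge of the
# table projects wider than the far edge, and shifting the viewer
# sideways changes the shape again.
print(project(table, np.array([0.0, 0.0, 0.0])))
print(project(table, np.array([1.0, 0.0, 0.0])))
```

Neither projected trapezoid is rectangular; what is common to all of them, under the rules of projection, is the rectangle.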

Perception does not produce ‘mental representations’. Perception enables the organism to function in its environment through active exploration.

So perception is the process whereby we get knowledge of our environment and the objects in it, their attributes and behaviour, and events.

References

[Gibson 1986] Gibson JJ (1986) The Ecological Approach to Visual Perception Lawrence Erlbaum Associates

[Reid 1785] Reid T Essays on the Intellectual Powers of Man Essay 6 Chapter 5 p253-263 available at http://www.earlymoderntexts.com

[Seth 2017] Seth A Your brain hallucinates your conscious reality TED Talk 18 Jul 2017 available at https://www.youtube.com/watch?v=lyu7v7nWzfo

[Shanon 2003] Shanon B “Hallucinations” Journal of Consciousness Studies vol 10 (2) p3

Milgram Revisited: “Only obeying orders”

Adolf Eichmann (below) was tried in Israel in 1961 for crimes against humanity: during WW2 he handled the logistics of transporting millions of Jews to concentration camps built for the purpose of their extermination. His defence was that he was ‘only obeying orders’.

[Image: Adolf Eichmann]

Milgram’s Experiments

Eichmann’s defence inspired Stanley Milgram (1933-1984), a psychologist at Yale University, to perform one of the most infamous experiments in social psychology. He wanted to find out how far a person would go in inflicting pain in obedience to the authority figure of the experimenter.

He chose people varying widely in age, occupation and education as subjects. From the subject’s point of view, he and another person came to the laboratory to take part in a study of memory and learning. They were given a scientific-sounding rationale for the study. One of them became a ‘teacher’, the other a ‘learner’.

The ‘teacher’ was shown an electrified chair and given a sample 45 volt shock. The ‘learner’ was then placed in the electrified chair, wired up with electrodes and told that he would be read lists of word pairs. When he heard the first word of a pair again he was to say the second word; if he made a mistake he would be given an electric shock.

The ‘teacher’ was then taken to a different room (linked by intercom) where he was placed in front of a control panel with thirty switches labelled 15 to 450 volts, with descriptive designations from ‘slight shock’ to ‘danger: severe shock’ and finally ‘xxx’.

The experimenter, in a grey lab coat, starts the ‘teacher’ off with the word pairs. He tells the ‘teacher’ to administer the next level of electric shock when the ‘learner’ gets the word pairing wrong.

In fact, the ‘learner’ is an actor who receives no shocks but acts as though he did. In the face of objections from the ‘teacher’, the experimenter unemotionally encourages him to continue the experiment. When the learner starts to make mistakes the level of electric shock is stepped up. “At 75 volts, he grunts; at 120 volts he complains loudly; at 150 he demands to be released from the experiment… At 285 volts his response can be described only as an agonised scream. Soon thereafter he makes no sound at all.” [Milgram 1973]

Milgram solicited predictions of the result of his experiment from 14 colleagues. They almost uniformly predicted that the ‘teacher’ would refuse to obey the experimenter at 150 volts, where the learner asks to be released from the experiment. In fact about 65% of the ‘teachers’ went to the end of the experiment, administering the full 450 volts.

The subjects (‘teachers’) were usually agitated during the experiment: sweating, trembling, stuttering or fits of laughter. They were much relieved at the end of the experiment to find they had not hurt anyone, though some showed no emotion throughout.

Variations of the experiment were tried to find what parameters influenced the result. When the ‘teacher’ was allowed to choose the shock level rather than being told to raise it to the next level, the average shock chosen was less than 60 volts, lower than the point at which the victim showed the first signs of discomfort. Only 2 out of 40 subjects went as high as 320 volts.

When the experiment was altered so that the experimenter gave his instructions by telephone rather than being in the room with the ‘teacher’, the percentage of ‘teachers’ obedient to the 450 volt level fell to 20%. When the ‘teacher’ was relieved of the responsibility of pulling the lever that administered the shocks, and merely specified the level at which the shock should occur, the percentage of ‘teachers’ going all the way to 450 volts went up to 92%. In that case the subjects claimed that the responsibility rested with the person who actually pulled the lever.

Milgram concluded, “The essence of obedience is that a person comes to view himself as the instrument for carrying out another person’s wishes, and he therefore no longer regards himself as responsible for his actions… The most far-reaching consequence is that the person feels responsible to the authority directing him but feels no responsibility for the actions that the authority prescribes. Morality does not disappear – it acquires a radically different focus: the subordinate person feels shame or pride depending on how adequately he has performed the actions called for by authority … the most fundamental lesson of our study [is that] ordinary people, simply doing their jobs, and without any particular hostility on their part, can become agents in a terrible destructive process. Moreover, even when the destructive effects of their work become patently clear and they are asked to carry out actions incompatible with fundamental standards of morality, relatively few people have the resources needed to resist authority.” [Milgram 1973]

The experiment has been repeated in various parts of the world with even higher percentages of obedience in some cases. Milgram gave the subjects  personality tests in an attempt to find those aspects of personality or character that would predict how far the subjects would go, but he found no correlation with any of the test results.

New Experiment

Now a slightly different version of Milgram’s experiment has been performed by a group of ‘cognitive neuroscientists’ from University College London and the Free University of Brussels led by Patrick Haggard (Caspar 2016). They wanted to find out to what degree the participants felt ‘in charge’ when they knowingly inflicted pain on each other and when they knew the aim of the experiment.

In the new experiments the participants (all female) were tested in pairs.  They took turns being ‘agent’ and ‘victim’ thus ensuring reciprocity.    Each was initially given £20.  The agent sat facing the ‘victim’ and so could monitor directly the effect of her actions.    In a first group of participants, the agent could freely choose on each trial to increase her own remuneration by taking money (£0.05) from the ‘victim’ (financial harm) or not.   Money transfer occurred in 57% of trials.   In a variation of the experiment the financial harm was accompanied by an electric shock to the ‘victim’ at a level that was tolerable but not pleasant (the electric shock was administered in 52% of trials).

In both of these groups the experimenter stood by and in some cases told the agent to take the money (group 1) or shock the victim (group 2).   In the other cases the experimenter told the agent to exercise her free choice.   There were also a number of trials as controls where the experimenter asked the agent to press the space bar whenever she wanted (‘active’) and where the experimenter pressed the agent’s finger on the space bar (‘passive’).

In order to investigate the agent’s ‘sense of agency’ (“the subjective experience of controlling one’s actions, and, through them, external events”) the key presses caused a tone to sound after a few hundred milliseconds (variously 200, 500 and 800 msec) and the participants were asked to judge the length of the interval. The rationale behind this is that action-result intervals are perceived as shorter when the person carries out the action voluntarily (such as raising one’s arm) than when the action is done passively (someone else raises the arm). So if coercion reduces this sense of agency, interval estimates should be longer in the coercive than in the free-choice condition.
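
The logic of this ‘intentional binding’ measure can be sketched with made-up numbers (the values below are purely illustrative and are not data from the study):

```python
from statistics import mean

# Hypothetical interval estimates (ms) for a true 500 ms key-press-to-tone delay.
# Intentional binding: a voluntary action compresses the perceived interval,
# so free-choice estimates should come out shorter than coerced ones.
free_choice = [420, 450, 440, 460, 430, 445]   # agent chose the action herself
coerced     = [490, 510, 500, 495, 505, 515]   # agent was ordered to act

print(f"mean estimate, free choice: {mean(free_choice):.0f} ms")
print(f"mean estimate, coerced:     {mean(coerced):.0f} ms")

# Longer estimates under coercion = weaker binding = reduced sense of agency.
reduced_agency_under_coercion = mean(coerced) > mean(free_choice)
print("reduced sense of agency under coercion:", reduced_agency_under_coercion)
```

In the actual study the comparison was of course made statistically, across many trials and participants, rather than from a handful of judgments like this.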

Thus there were several comparison sets of data: free choice versus coercion, financial harm versus physical harm and harm versus no harm, as well as the control conditions (active versus passive). When they were ordered to press a particular key (producing either harm or no harm), the participants judged their action as more passive than when they had free choice, and they perceived the interval between the key press and the tone as longer (p=0.001). This did not change depending on whether there was a harmful outcome, though it did when the potential harm was greater (ie physical rather than financial).

So the conclusion was that the coercion rather than the severity of the actual outcome was the determining factor in the sense of agency.   The agent experienced less sense of agency when she was coerced than when she freely chose between the same options – regardless of whether harm was actually inflicted.   So the plea “Only obeying orders” might not be just an attempt to avoid blame “but may rather reflect a genuine difference in subjective experience of agency.”

The participants were also given personality tests prior to the experiments to see if there were any predisposing factors. It was found that those scoring higher on empathy showed a greater reduction in the sense of agency when their actions had harmful outcomes.

In a second experiment, the same procedures were used but the agents were also hooked up to an electroencephalogram (EEG) to investigate changes in brain activity associated with the free choice / coercive conditions. When an unpredictable stimulus such as a tone occurs it is followed by a ‘negative response potential’ approximately 0.1 seconds later in the frontal part of the scalp (usually referred to as the N100). The expectation was that the N100 would be larger in amplitude when the agent freely chose her action than when she felt coerced. This was indeed the case (amplitude ratio approx 1.3). So not only the subjective ‘sense of agency’ but also neurophysiological activity is reduced under coercion.

Haggard says people genuinely feel less responsibility for their actions when following commands, regardless of whether they are told to do something evil or benign. So the ‘only obeying orders’ excuse may reflect how a person actually feels when acting under command.

Before Haggard did these experiments he had (along with the majority of neuroscientists and many modern philosophers) already espoused the philosophical viewpoints of physicalism¹, epiphenomenalism² and reductionism³. He claims that mind-body causation is dualist and “incompatible with modern neuroscience” since most neuroscientists believe that “conscious experiences are consequences of brain activity rather than causes.” “Philosophers studying ‘conscious free will’ have discussed whether conscious intentions could cause actions, but modern neuroscience rejects this idea of mind–body causation. Instead, recent findings suggest that the conscious experience of intending to act arises from preparation for action in frontal and parietal brain areas. Intentional actions also involve a strong sense of agency, a sense of controlling events in the external world. Both intention and agency result from the brain processes for predictive motor control….” (Haggard 2005)

And again: “… the cause of our ‘free decisions’ may at least in part, be simply the background stochastic fluctuations of cortical excitability.” (Filevich 2013)

Discussion

These experiments are interesting but care must be taken in their interpretation and in the consequences that may be claimed for jurisprudence. It is not clear whether the neurophysiological activity causes the subjective sense of agency or vice versa. What the experiments do reveal is that coercion causes both reduced sense of agency and reduced neurophysiological activity.

The experiments only concern what Elizabeth Pacherie terms ‘present-directed intentions’ ie those intentions which “trigger the intended action, …sustain it until completion, …guide its unfolding and monitor its effects”.   They do not touch upon ‘future directed intentions’ which are “terminators of practical reasoning about ends, prompters of practical reasoning about means and plans, and intra- and interpersonal coordinators” (Pacherie 2006).

One presumes that Haggard and his colleagues were motivated by future-directed intentions when they decided to do the experiments and write their paper, and were not simply acting as the result of ‘stochastic fluctuations of cortical excitability’. If so, the sweeping general conclusion loses its force.

The 18th century philosopher David Hume (1711-1776) thought that every object of the mind must be either an immediate perception or an ‘idea’ – a faint copy of some earlier perception. (Hume 1748) This was criticised by his contemporary Thomas Reid (1710-1796): “It seemed very natural to think that [Hume’s book] required an author and a very ingenious one at that; but now we learn that it is only a set of ideas that came together and arranged themselves by certain associations and attractions.” (Reid 1764)

According to Haggard and his colleagues not even ideas are now involved – only ‘stochastic fluctuations of cortical excitability’.

The question of who bears personal responsibility is important to the rule of law. Certainly the person who gives the order to harm is culpable for the consequences.   But this does not absolve the person who actually carries out the order. The degree to which people feel responsible on average does not change the moral responsibility of any individual act.   Nor does it justify the inclusion of such ‘mitigating’ circumstances into criminal law.

Hannah Arendt (1906-1975) wrote a book on Eichmann’s trial (Arendt 1963), in which she coined the phrase “the banality of evil”.   It is not clear exactly what she meant by the phrase.   Milgram thought that she meant that Eichmann was not a “sadistic monster” but “an uninspired bureaucrat who simply sat at his desk and did his job“, and that she “came closer to the truth than one dare imagine.” (Milgram 1973)   It may well be true that in some situations evil is not perpetrated by fanatics and psychopaths but by ordinary people who see their actions as normal (banal = commonplace) within the prevailing conditions.   If so all of us are capable of committing horrendous crimes when the circumstances are right.

It is easy to see how ‘situationism’ (the philosophical belief that people act according to the situation in which they find themselves rather than by virtue of any moral or philosophical outlook they might have) is a credible paradigm. But it predicts the actions of only two-thirds of the subjects in the Milgram study. The new study suggests that there are character traits (eg ‘empathy’) that predict some aspect of the results (ie reduced sense of agency where there was a harmful outcome) more accurately. But we do not excuse criminality on the grounds of character traits.

Evil was commonplace in Nazi Europe, but for Arendt that did not render it excusable. Whilst Arendt saw Eichmann as a cog in the machinery of the Final Solution she did not excuse his crimes nor fail to hold him morally responsible for his actions. “If the defendant excuses himself on the ground that he acted not as a man but as a mere functionary whose functions could just as easily have been carried out by anyone else, it is as if a criminal pointed to the statistics on crime – which set forth that so-and-so-many crimes per day are committed in such-and-such a place – and declared that he only did what was statistically expected, that it was a mere accident that he did it and not somebody else, since after all somebody had to do it.” (Arendt 1963)

Despite the pressures some people do have the resources to buck authority, even when the authority has far more clout than the man (or woman) in the grey lab coat. For example, the US GI Ronald Ridenhour forced the US Congress to investigate the My Lai massacre in Vietnam, where US servicemen massacred an entire village of 300 or more civilians in 1968. (Ridenhour 1969) There were many people, such as Raoul Wallenberg (1912-1947) and Oskar Schindler (1908-1974), who protected Jews from the Holocaust despite great personal risk.

If there are attempts to influence the law on the basis that these experiments prove diminished responsibility they should be dismissed.


The above contains passages extracted from the book Rethinking the Mind. Get the first volume here: https://www.amazon.com/Rethinking-Mind-1-Historical-Perspective-ebook/dp/B007JYFHVM

Notes

  1. Physicalism: the doctrine that everything is physical, ie all is matter and energy in its many forms and hence subject to the laws of physics.
  2. Epiphenomenalism: the doctrine that mental events are mere by-products of physical events and that mental events in themselves do not cause anything. In the classic description due to Thomas Huxley (1825-1895) consciousness is simply a collateral product of the working of the body in the same way that a steam whistle accompanies the work of a locomotive engine.
  3. Reductionism: the doctrine that explanations of phenomena are to be found in the smaller entities that comprise them, eg heredity in terms of DNA or, in this case, human activities in terms of neural firings.

References

Arendt H (1963) Eichmann in Jerusalem: A Report on the Banality of Evil Penguin

Caspar EA, Christensen JF, Cleeremans A & Haggard P (2016) Coercion changes the Sense of Agency in the Human Brain Current Biology available at http://dx.doi.org/10.1016/j.cub.2015.12.067

Filevich E, Kühn S, Haggard P (2013) Antecedent Brain Activity Predicts Decisions to Inhibit PLOS ONE (February 13, 2013) available at http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0053053

Haggard P (2005) Conscious Intention and Motor Cognition Trends in Cognitive Sciences vol 9(6) pp 290-295 available at http://www.cell.com/trends/cognitive-sciences/abstract/S1364-6613%2805%2900119-1

Hume D (1748) An Enquiry Concerning Human Understanding Section II ‘Of the Origin of Ideas’ para 12.

Milgram S (1973) The Perils of Obedience Harper’s Magazine pp 62-77 available at http://home.subell.net/revscat/perilsofobedience.html

Pacherie E (2006) Towards a Dynamic Theory of Intentions in Pockett S, Banks WP & Gallagher S (eds) Does Consciousness Cause Behavior? MIT Press pp 145-167 available at http://hal.archives-ouvertes.fr/docs/00/35/39/542/PDF/dynamics-intention-MIT-Pacherie-2006.pdf

Reid T (1764) An Enquiry into the Human Mind chapter 2.6 (ed J Bennett) available at http://www.earlymoderntexts.com/authors/reid

Ridenhour R (1969) Letter to US Congress available at http://www.law.umkc.edu/faculty/projects/ftrials/mylai/ridenhour_ltr.html

Why Nations Fail review

Book Review

Acemoglu D & Robinson JA (2013) Why Nations Fail. The Origins of Power, Prosperity and Poverty Profile Books

A satellite photograph of Korea at night shows North Korea as dark as – well – night, whilst South Korea blazes forth with light pollution. The South is the 29th richest country in the world with a GDP per head of $37,000. The North is one of the poorest (GDP per head $1,800), suffering from periodic famine and desperate poverty. Why is this?

One easy answer is that the North is a dictatorship whereas the South is a democracy. Democracies are good; dictatorships are bad.

It is not so simple.

At the end of WWII Korea was divided between North and South at the 38th parallel. In 1950 the North invaded the South and almost succeeded in overrunning it. At the end of the Korean War (1953) the states were again divided, but both were dictatorships. The South’s GDP increased at 10% per year between 1962 and 1979. It only became a democracy – with separate executive, legislative and judicial bodies – in 1987, after a succession of three dictators (two took power by coups d’état; one was assassinated).
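
To see what sustained 10% annual growth means in practice, a quick back-of-the-envelope compounding check (using only the growth rate and dates quoted above):

```python
# Compound growth: 10% per year sustained from 1962 to 1979.
years = 1979 - 1962            # 17 years of growth
multiplier = 1.10 ** years     # (1 + rate) raised to the number of years

print(f"GDP multiplier after {years} years at 10% p.a.: {multiplier:.2f}")
# The economy roughly quintuples in under two decades.
```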

Figuring out what are the vital factors and what drives the changes occurring in a society is difficult. There are no two identical societies in which one isolated factor can be changed to see what happens. Any theory is liable to have elements of pre-supposing the answer (for example: democracy versus dictatorship). So any theory about how and why some nations become tolerant and prosperous but others become intolerant and poverty stricken is likely to be controversial. Similar problems arise in trying to account for why formerly tolerant and prosperous nations reverse and become repressive and poor.

It is this problem of why some nations succeed and some fail that Daron Acemoglu, a Turkish-American professor of economics at M.I.T., and James Robinson, a British professor of public policy studies at the University of Chicago (hereafter A&R), tackle in their book Why Nations Fail: The Origins of Power, Prosperity and Poverty. The book has been generally well received.

I will outline some of the earlier theories and the criticisms by A&R and then their theory and some of the criticisms that have been levelled against it.

Geography

The map of the world shows affluent societies in the temperate areas and poor societies in the tropical areas within 30° of the equator. This is particularly marked in Africa. The idea then is that the great division between rich and poor countries is caused by geography. The reasons for this are the pervasiveness of tropical diseases such as malaria, the scarcity of animals that could be used as cheap labour, and the poverty of the soil. There are exceptions, for example, the rich countries of Singapore and Malaysia, but both of these have access to the sea. This allows trade because it is much cheaper to transport cargo by sea than by land.

A&R criticise this theory on several grounds despite its initial appeal. The Indus Valley civilisation is the first recorded great civilisation and it is situated in what is modern Pakistan, well within the tropics. Central America before the Spanish invasions was richer than the temperate zones. One of the world’s currently poorest nations, Mali (GDP $18 billion), where half of the population of 14.5 million live on less than $1.25 per day, was once ruled by the richest person who has ever lived. Mansa Musa Keita I (c. 1280 – c. 1337) had a fortune of $400 billion in today’s money. His wealth included vast quantities of gold, slaves, salt and a large navy(!).

“History … leaves little doubt that there is no simple connection between tropical location and economic success.” (A&R p51) There are also vast differences in wealth within the tropics and the temperate regions at the present time. There is a sharp line between poverty and prosperity between North and South Korea, between Mexico and the United States, and between East and West Germany before reunification.

Ignorance

The reason poor nations are poor according to this hypothesis is that their governments are not educated in how a modern economy should be run. Their leaders have mistaken ideas on how to run their countries. Certainly, leaders of central African countries since independence have made bad decisions when viewed from outside. The IMF recommend a list of economic reforms that poor states should undertake including:

  • reduction of the public sector,
  • flexible exchange rates,
  • privatization of state run enterprises,
  • anticorruption measures and
  • central bank independence.

The central bank of Zimbabwe became ‘independent’ in 1995. It was not long before inflation took off reaching 11 million % pa (officially) by 2008 with unemployment around 80%.
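
To put that official figure in perspective, a rough conversion to a daily rate (a sketch using only the 11 million % annual figure quoted above, and assuming steady compounding):

```python
import math

# An annual inflation rate of 11,000,000% means prices multiply by 110,001x in a year.
annual_factor = 1 + 11_000_000 / 100
daily_rate = annual_factor ** (1 / 365) - 1            # equivalent compound daily rate
doubling_days = math.log(2) / math.log(1 + daily_rate)  # rule-of-thumb doubling time

print(f"equivalent daily inflation: {daily_rate:.1%}")      # ~3.2% per day
print(f"prices double roughly every {doubling_days:.0f} days")  # ~22 days
```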

But according to A&R it is not ignorance that is the source of bad decisions: “Poor countries are poor because those who have power make choices that create poverty. They get it wrong not by mistake or ignorance but on purpose. To understand this, you have to go beyond economics and expert advice on the best thing to do and, instead, study how decisions actually get made, who gets to make them, and why those people decide to do what they do.” (p68)

Culture

One idea about the rise of Europe from the 17th century was that it was caused by the ‘protestant work ethic’. Alternatively, the relative prosperity of former British colonies like Australia, and the U.S.A. was caused by the superior British culture. Or perhaps it is just European culture that is better than the others. These smug ideas don’t hold much water when you look at China and Japan, or when you look at the conduct of the European powers in their colonies. Some of those colonies are now prosperous and some are not.

At the start of the Industrial Revolution in the 18th and 19th centuries Britain had relative stability. It had a tolerant, clubby society that encouraged individualism. It protected invention through patents. It had a market for mass-produced goods. According to A&R this was not culturally caused. Rather it was the result of definite structures in society and political arrangements. (p56)

Modernisation

According to this theory (also known as the Lipset or Aristotle theory) when countries become more economically developed they head towards pluralism, civil liberties and democracy. There is some evidence that this holds in Africa since 1950. (Anyanwu & Erhijakpor 2013)

But A&R object that US trade with China has not (yet) brought democracy there. The population of Iraq was reasonably well educated before the US-led invasion, and it was believed to be ripe ground for the development of democracy, but those hopes were dashed. The richness of Japan and Germany did not prevent the rise of militaristic regimes in the 1930s. (p443)

Other Theories

There are other theories about where and when prosperity will arise or disappear.
Several experts discuss the cause of the Industrial Revolution in The Day the World Took Off (Dugan 2000). They surmise:

  • historical accident (p182);
  • capitalism (p135);
  • the availability of raw materials (p66);
  • consumerism (p64);
  • the habit of drinking tea or beer rather than contaminated water (p18);
  • the need to measure time (p100);
  • the rise of the merchant classes (bourgeoisie) (p141).

Also discussed in The Day the World Took Off is the settlement in the Glorious Revolution of 1688 which brought political stability (p82). Finance became available through the establishment of the Bank of England. Incentives for investment, trade and innovation appeared through the enforcement of property rights and patents for intellectual property.

Acemoglu & Robinson

The political and economic factors exemplified by the Glorious Revolution are what A&R develop in their 500+ page book on why nations fail. A&R make several major points in their analysis.

1. Centralization

The first requirement for economic growth is a centralized political set up. Where a nation is split into factions, as is the case today in Somalia and Afghanistan, it is difficult to centralize power. This is because “any clan, group or politician attempting to centralize power in the state will also be centralizing power in their own hands, and this is likely to meet the ire of other clans, groups and individuals who would be the political losers of this process.”(p87) Only when one group of people is more powerful than the rest can centralization occur.

2. Extractive Economic Institutions

Economic institutions are critical for determining whether a country is poor or prosperous. A&R define extractive economic institutions as those “designed to extract incomes and wealth from one subset of society to benefit a different subset.” (p76) The feudal system that existed in Europe around 1400, and persisted in places into the 20th century, was extractive: wealth flowed upwards from the many serfs to the few lords. In later times colonialism channelled wealth away from the locals to the colonists. A particular example was King Leopold II (1835-1909) of Belgium, who ruled over the Congo Free State from 1885 to 1908. He built his personal wealth through copper, ivory and rubber exports, supervised by a repressive police force that enforced local slave labour. A considerable but unknown proportion of the population was murdered or mutilated in the pursuit of Leopold’s wealth. (Bueno de Mesquita 2009)

Economic growth can occur where there are extractive economic institutions, provided there is centralization of power. It is in the interest of the exploiters to increase production for their own gain. A&R claim that this growth cannot continue for ever. It comes to an end “because of the absence of new innovations, because of political infighting generated by the desire to benefit from extraction, or because the nascent inclusive elements were conclusively reversed…” (p184) They thus predict that China’s growth will stall unless it manages somehow to transition to inclusive institutions (p442).

3. Extractive Political Institutions

Extractive economic institutions are set up by whoever it is that has political power. They will be better off if they can extract wealth from the rest of society and use that wealth to increase their power. “[They] have the resources to build their (private) armies and mercenaries, to buy their judges, and to rig their elections in order to remain in power. They also have every interest in defending the system. Therefore, extractive economic institutions create the platform for extractive political institutions to persist. Power is valuable in regimes with extractive political institutions, because power is unchecked and brings economic riches.” (p343)

4. Inclusive Political Institutions

Political institutions that distribute power broadly in society and subject it to constraints are pluralistic. Inclusive political institutions are those that “are sufficiently centralized and pluralistic.” (p81)

This agrees with the view of the American political scientist Bruce Bueno de Mesquita that one of the main factors in having benevolent government is the presence of a large group – the ‘selectorate’ – of those who have a say in who rules. (Bueno de Mesquita 2009)

The Glorious Revolution in Britain of 1688 limited the power of the king and gave parliament the power to determine economic institutions. It opened up the political system to a broad cross section of society, which was able to exert considerable influence over the way the state functioned.

Before 1688 the king had by law a ‘divine right’ to rule the state. Afterwards even the king was subject to the Rule of Law. “[The Rule of Law] is a creation of pluralist political institutions and of the broad coalitions that support such pluralism. It is only when many individuals and groups have a say in decisions, and the political power to have a seat at the table, that the idea that they should all be treated fairly starts making sense.” (p306)

Britain stopped censoring the media after 1688. Property rights were protected. Even ‘intellectual property’ was protected through patents, which enabled innovators and entrepreneurs to gain financially from their ideas. According to A&R it is no accident that the Industrial Revolution followed a few decades after the Glorious Revolution. (p102)

‘Inclusive political institutions’ are not the same as democracy. Great Britain after the Glorious Revolution was not a democracy in the modern sense. The franchise was limited and representation disproportionate. For instance, the constituency of Old Sarum in Wiltshire had 3 houses, 7 voters and 2 MPs. Not until 1832 did the franchise extend to 1 in 5 of the male population, and only in 1928 did all women get the vote. Similarly the United States, a prosperous nation, did not grant the franchise to ‘all’ males until 1868, to ‘all’ females until 1920, and to all African Americans until 1965.

There are many examples of countries where democratic voting occurs but few, if any, inclusive political institutions exist. In such countries ‘democracy’ tends to be a conflict between rival extractive institutions.

According to A&R the reason the Middle East is largely poor is not geography. It is the expansion and consolidation of the Ottoman Empire and its institutional legacy that keeps the Middle East poor. The extractive institutions established under that regime persist to the present day. It is just different people running them.

5. Inclusive Economic institutions

“Inclusive economic institutions … are those that allow and encourage participation by the great mass of people in economic activities that make best use of their talents and skills and that enable individuals to make the choices they wish. To be inclusive, economic institutions must feature secure private property, an unbiased system of law, and a provision of public services that provides a level playing field in which people can exchange and contract; it must also permit the entry of new businesses and allow people to choose their careers.” (p74)

These features of society all rely on the state.   It alone can impose the law, enforce contracts and provide the infrastructure whereby economic activity can flourish. The state must provide incentives for parents to educate their children, and find the money to build, finance and support schools.

Economic growth and technological change are what make human societies prosperous. But they entail what the Austrian-American economist Joseph Schumpeter called ‘creative destruction’: the process whereby innovative entrepreneurs create economic growth even whilst they endanger or destroy established companies. “[The] process of Creative Destruction is the essential fact about capitalism. It is what capitalism consists in and what every capitalist concern has got to live in.” (Schumpeter 1942)

A&R opine that the fear of creative destruction is often the reason for opposition to inclusive institutions. “Growth… moves forward only if it is not blocked by the economic losers who anticipate that their economic privileges will be lost and by the political losers who fear that their political power will be eroded.” (p86) Opposition to ‘progress’ comes from protecting jobs or income, or protecting the status quo.

“The central thesis of this book is that economic growth and prosperity are associated with inclusive economic and political institutions, while extractive institutions typically lead to stagnation and poverty. But this implies neither that extractive institutions can never generate growth nor that all extractive institutions are created equal.” (p91)

6. Critical Junctures

A critical juncture is when some “major event or combination of factors disrupts the existing balance of political or economic power in a nation.” (p106) Similar events such as colonization or decolonization have affected many different nations, but what happens to the society at such critical junctures depends on small institutional differences.

A hundred years before the Glorious Revolution Britain was ruled by an absolute monarch (Elizabeth I), Spain by Philip II and France by Henry III. There was not much difference in their powers, except that Elizabeth had to raise money through parliament. Henry and Philip were able to monopolize transatlantic ‘trade’ for their own benefit; Elizabeth could not, because much of the English trade was carried on by privateers, who resented authority. It was these wealthy merchant classes who played a major role in the English Civil War and the Glorious Revolution.

“Once a critical juncture happens, the small differences that matter are the initial institutional differences that put in motion very different responses. This is the reason why the relatively small institutional difference led to fundamentally different development paths. The paths resulted from the critical juncture created by the economic opportunities presented to Europeans by Atlantic trade.” (p107)

________________

Criticisms

One of the difficulties with political and social theory is that once a formula has been hit upon, everything then becomes interpreted in the light of that formula. Once Marx had explained economics in terms of labour and its exploitation, there was no room for those who espoused that idea to see anything different. So extractive versus inclusive institutions could be just another seductive idea.

1. Economists Michele Boldrin, David Levine and Salvatore Modica make a similar point in their review (Boldrin, Levine & Modica 2012). They say that if we lack an axiomatic definition of what is ‘inclusive’ and what is ‘extractive’, independent of actual outcomes, then the argument becomes circular and subject to a selection bias. Some of A&R’s examples are “a bit strained”.

For example, after Julius Caesar established the ‘extractive empire’ the ‘fall of Rome’ did not occur for four centuries. The success of South Korea, Taiwan and Chile (which had non-inclusive political institutions but evolved into inclusive ones) might lead one to suppose that “pluralism is the consequence rather than the cause of economic success.” (The Anyanwu study mentioned above in connection with the modernisation theory did find a correlation between economic success and democracy in Africa. But it also found that the extent of a country’s oil reserves tended to stop the development of democracy, which is what you would expect from A&R. I think there are cross-causative factors: the rise of the merchant classes in England was a major factor in the development of English politics, as A&R show.)

In the case of Italy the political institutions are the same in the North and the South. But the North is prosperous whereas the South is dependent on handouts from the North. BL&M acknowledge that the South suffers from economic exploitation (Mafia), but this suggests that political institutions are only part of the story, since there is no national border. They also say there is a danger in using satellite photographs as economic evidence, as in this particular case “the poorest part of Italy is the most brightly lit.” The apparent brightness of parts of photographs depends on several factors, including the curvature of the Earth and where the satellite is with respect to the subject. The picture of Italy here shows the north (the Po Valley) as the most brightly lit.

Germany from the mid 19th century until the end of WW2 prospered under extractive institutions, and led the world in its chemical industry. It did have compulsory education, social insurance and an efficient bureaucracy, but it could hardly be thought of as inclusive. Nazi Germany invented and produced the first jet planes and rockets. The “brief period of inclusiveness”, the Weimar Republic, was an economic catastrophe.

Again, the Soviet Union “did well under extractive communist institutions,” but foundered after a coup d’état established inclusive political institutions.

According to BL&M, Zimbabwe is a disastrous case of moving towards more inclusive institutions by extending the franchise to a wider population and lifting trade restrictions. (I find it difficult to believe that Zimbabwe can be regarded as consisting of inclusive political and economic institutions).

BL&M suggest that the focus of A&R is on what happens within nations, when a great many developments within nations depend on what happens between nations, not least invasions and war. BL&M perceive that many historical crises, including the current crisis in Greece, stem from debt, yet A&R do not mention this. The French Revolution and the rise of Nazism came from debt crises, as did the English Civil War.

BL&M argue against A&R’s stance that the intellectual property rights brought in after the Glorious Revolution were one of the spurs for the Industrial Revolution. They show that patents were barriers to progress. They are passionate advocates of liberalizing copyright, trademark and patent laws, which they see as the enemy of competition and ‘creative destruction’ (Boldrin & Levine 2008). I have sympathy with this view, but that’s a different story for another day (see also Hargreaves 2011).

What BL&M’s cases seem to suggest is that we need stricter criteria for ‘inclusive’ and ‘extractive’. These nations were inclusive in some respects and extractive in others. It is difficult to decide which were or are the most pertinent factors.

A further complication is the passage of time. How long before an ‘inclusive’ or ‘extractive’ feature starts to make a change to the society? A&R do not suggest that prosperity manifests immediately or immediately disappears when a society transitions from one to the other.

2. One of the principal proponents of the geography theory is Jared Diamond, a professor of Geography at the University of California, Los Angeles. He acknowledges that inclusive institutions are an important factor (perhaps 50%) in determining prosperity but not the overwhelming factor (Diamond 2012). He favours historically long periods of central government and geography as major factors. He also makes the point that whether each of us as individuals becomes richer or poorer depends on several factors. These include “inheritance, education, ambition, talent, health, personal connections, opportunities and luck…” So there is no simple answer to why nations become richer or poorer.

3. William Easterly, a professor of economics at New York University, complains that A&R have “dumbed down the material too much” by writing for a general audience. They rely on anecdotes rather than rigorous statistical evidence (when “the authors’ academic work is based on just such evidence”). So the book “only illustrates the authors’ theories rather than proving them.”

Conclusions

All three of these critical reviews acknowledge that Why Nations Fail is a great book. It should be read by anyone with an interest in politics.

Apart from the central thesis outlined above A&R provide many examples and great historical detail. This alone makes it a good read, even if you have philosophical aversions to the conclusions.

There is no simple solution to the problem of failed states but at least a correct diagnosis might lead to a greater percentage of success. Such explanations as ‘geography’, ‘culture’ and ‘historical accident’ do not offer much hope. Imposing ‘democracy’ on states that are anarchic or repressive does not seem to have worked so far, though it might form part of a solution once the system that has kept the nation repressed has been remedied.

You might think that the people who are in charge of states that extract wealth from their populations and gather power to themselves are psychopaths. They probably are. But it is usually a system that has existed for a considerable time, or is easily adapted to this end, that is in place before the person takes power. The system tends to persist longer than any individual. There are more than enough psychopaths around to engineer a revolution or coup that puts them in charge when they see the advantages that may accrue. So getting rid of a dictator is only likely to replace him with another one. Where it does not, the likely consequence is the de-centralisation of the state into warring factions.

A&R make the point that “avoiding the worst mistakes is as important as – and more realistic than – attempting to develop simple solutions.” (p437)

References

Acemoglu D & Robinson JA (2013) Why Nations Fail. The Origins of Power, Prosperity and Poverty Profile Books

Anyanwu JC & Erhijakpor AEO (2013) Does Oil Wealth Affect Democracy in Africa? African Development Bank available at http://www.afdb.org/fileadmin/uploads/afdb/Documents/Publications/Working_Paper_184_-_Does_Oil_Wealth_Affect_Democracy_in_Africa.pdf

Boldrin M and Levine DK (2008) Against Intellectual Monopoly, Cambridge University Press. available at http://www.micheleboldrin.com/research/aim/anew01.pdf

Boldrin M, Levine D & Modica S (2012) A Review of Acemoglu and Robinson’s Why Nations Fail available at http://www.dklevine.com/general/aandrreview.pdf

Bueno de Mesquita B (2009) Predictioneer The Bodley Head (published in the USA as ‘The Predictioneer’s Game’)

Hargreaves I (2011) Digital Opportunity. A Review of Intellectual Property and Growth (report commissioned by UK government) available at https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/32563/ipreview-finalreport.pdf

Schumpeter J A (1942) Capitalism, Socialism and Democracy Harper & Brothers

Taylor F (2013) The Downfall of Money: Germany’s Hyperinflation and the Destruction of the Middle Class Bloomsbury

Don’t lose your mind for Utopia

by Michael Davidson

Thomas More

Thomas More (1478-1535), Lord Chancellor to Henry VIII (1491–1547) of England, wrote the book ‘Utopia’[1] first published in 1516. The book describes a fictional island and its politics and customs. The word is derived from the Greek ou = not and topos = place, hence utopia = no place. There is also the Greek eu = good which sounds similar, so utopia = good place (the current meaning). It is not clear whether More was presenting this mythical island as the perfect state or whether he was saying no such place could stably exist. Given the political climate of the time he was probably wise to be equivocal on the matter. He eventually lost his head anyway.

There is no private property or money on Utopia. All produced goods are stored in warehouses where people get what they need. All property is communal so houses are periodically rotated between citizens. All meals are communal. There are no private gatherings. All wear similar woollen garments. Premarital sex is punished by enforced lifetime celibacy. Adultery and travel within the island without a passport are both liable to be punished by enslavement.

You might not think that this would be a pleasant place to live, but there has been at least one attempt to implement such a society (Michoacán, Mexico circa 1535) and More was revered by Lenin for promoting the “liberation of humankind from oppression, arbitrariness, and exploitation.” [2]

Plato

Thomas More mentions Plato (427-347BC) favourably, and was obviously well acquainted with Plato’s Republic [3], which is arguably the first attempt to design a ‘perfect state’. In Plato’s republic there are three classes of citizen: the rulers, the military and the workers (merchants, carpenters, cobblers, farmers and labourers). The rulers are the philosophers (those devoted to reason); the military (called Guardians) are the spirited or ambitious; and the rest are those who know only their desires. The rulers rule with absolute power, exercise strict censorship so that only good and true ideas prevail, and ensure by appropriate education that they are succeeded by like-minded philosophers. All citizens know their place in society and may not change it, for to do so would be to rebel against the institutions.

For the rulers and the potential rulers, family life would be abolished in favour of communal living. All promising children who showed spirit or reason (from whatever class – though Plato advocated a eugenic program of mating the ‘best men’ with the ‘best women’) were to be removed from their families to be educated as potential rulers. Their training would be in gymnastics and military music until the age of twenty, then mathematics and astronomy for ten years, followed by a thorough study of Plato’s philosophy. Those that didn’t quite make it through the course at any stage were to be assigned to the military. Those who successfully finished this study would be over 50, and would have developed such a devotion to Plato’s philosophy that they would rule only through their sense of justice, ruling wisely in recompense for the superb education the state had provided for them. Since the rulers are just and good, and have absolute power, there is no need for laws or votes.

Thomas Hobbes

More and Plato were idealists who believed in worlds beyond this one. But totalitarian states can also be based on a materialist view of Man. Thomas Hobbes (1588-1679) in his book Leviathan (1651) regards the State as something like an artificial man: “the sovereign is the soul, the magistrates are artificial joints, reward and punishment are the nerves, wealth and riches are the strength” and so on.

Hobbes thinks that ‘in the state of nature’ Man is or would be in a perpetual state of turmoil. Without “a common power to keep them all in awe”, there would be a war of “every man against every man.”[4] The solution is for men to surrender their liberty to a sovereign power. It does not much matter whether the sovereign power is a monarchy, an aristocracy or a democracy. The essential point according to Hobbes is that the sovereign must have absolute power. Only in this way can the populace have a secure and orderly existence.

Such perfect societies would not be so bad if they were confined to books, but every so often societies built on similar lines spring into being, often as the result of a revolution or coup in the name of some dream. While these societies tend not to last long, reversing the process towards a more libertarian one is often painful. It is not usually possible to impose democracy on what was previously a dictatorship. Although democracies may be born in a coup they also evolve, as is evident in the many different versions of democracy that exist in the world today.

John Locke

Freedom of speech, freedom of enterprise, rule of law, property rights and the ability to remove unpopular governments from power without disrupting society are characteristics of democracies. In popular parlance only voting is seen as distinguishing democracy from other systems. But there is a lot more to it than that.

The above characteristics are attributed in no small part to John Locke (1632–1704) who published Two Treatises of Government in 1689 [5] which seeks to throw light on the basis of political authority. Locke does not reckon much to Hobbes’ absolute sovereign power. He sees the original ‘state of nature’ as happy and tolerant. The State is formed by a social contract which entails a respect for natural rights, liberty of the individual, constitutional law, religious tolerance and general democratic principles. The various institutions form a system of checks and balances. A government must be deposed if it violates natural rights or constitutional law. The state is concerned with procuring, preserving, and advancing the civil interests of the people: life, liberty, health and property through the impartial execution of equal laws. These principles were eventually enshrined in the constitutions of many modern democracies as ‘self evident’. The history of the world shows there was nothing much about them that was evident before Locke. The whole idea of ‘human rights’ (ie those rights arising from being a human being as opposed to sovereign rights, marital rights etc) stems from this era and can be attributed to Locke in no small part.

Montesquieu

Democracies depend for their equilibrium on many interdependent and independent institutions. The doctrine of the ‘Separation of Powers’ due to Montesquieu (1689–1755) was based at least partly on his observation of Locke’s England. This had recently become a constitutional monarchy through the “Glorious Revolution”(1688) which installed William of Orange and his wife Mary on the throne with increased parliamentary authority. According to Montesquieu political liberty is a “tranquillity of mind arising from the opinion each person has of his safety.” In England this was obtained through the separation of the Legislative, Executive and Judicial branches of the administration.[6] The merging of these powers into one body would be a recipe for tyranny, he said. The separation of powers was a major consideration in the drafting of the US Constitution (1788).

According to Montesquieu democracy can be corrupted not only where the principle of equality of all citizens does not exist but also where the citizens fall into a spirit of ‘extreme equality’ where each considers himself on the same level as those who are in charge. People then want to “debate for the senate, to execute for the magistrate, and to decide for the judges. When this is the case virtue can no longer subsist in the republic.”[6] The ideal of a ‘Free Press’ exists so that corruption in high places can be exposed, but it can deteriorate into this ‘extreme equality’. This is shown by the recent scandals in England where certain sections of the press have represented themselves as the conscience of the nation in all matters political and judicial whilst at the same time having so little respect for the truth in high profile cases like Christopher Jefferies¹ and engaging in criminal activity like phone hacking².

Do we have Free Will? And why it matters to you.

You might think that after several thousand years of debate we have exhausted all the arguments as to whether we have Free Will, or whether our actions are caused by prior events. So say the protagonists on both sides of the argument: but they still argue! Now there are some new approaches that could throw light on the problem.

Free Will is the idea that we are able to choose between alternative courses of action and actually cause something to happen. For example, I can decide to lift my arm and I consider this to be something I could have decided to do or not.

Determinism is the idea that all events are necessary effects of earlier events: future events are as fixed and unalterable as past events.

Determinism is not quite the same as ‘fatalism’. Fatalism is the doctrine that what is going to happen is going to happen regardless of what you do. For example, you will die of a heart attack on such and such a date, regardless of changing your diet, exercising, medical intervention and so on. Determinism does not predict necessary future outcomes; it merely states that whatever the outcome turns out to be it was the result of prior natural causes.

This idea of determinism is sometimes held to come from Isaac Newton’s discoveries of the laws of motion and gravity. These led us to the idea of a ‘clockwork universe’. The French mathematician Pierre-Simon Laplace (1749-1827) claimed that if we knew all the laws of nature and the position of all the particles in the universe at a particular instant we could know the future (and the past) precisely. He did not, though, say how we could start to verify this.

In fact the idea of determinism is not recent: it has roots in ancient Greek philosophy and came down through various brands of Christianity before Newton.

It is only in the last few years that we have realised that Newton’s laws do not imply a clockwork universe. In certain circumstances these laws cause chaotic behaviour. The mathematician James Lighthill (1924 – 1998) even apologised to the lay community for mathematicians giving a false impression for 250 years. (Lighthill 1986)
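The point about deterministic laws producing chaotic behaviour can be made concrete with a minimal sketch. The example below uses the logistic map, a standard toy model of deterministic chaos rather than Newton's equations themselves: two starting points that differ by one part in a billion soon follow completely unrelated paths, even though every step is strictly determined by the last.

```python
# Deterministic chaos: each step is fully determined by the previous
# one, yet tiny differences in the starting point grow exponentially.
# The logistic map x -> r*x*(1-x) with r = 4 is a standard toy model.

def orbit(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = orbit(0.2)
b = orbit(0.2 + 1e-9)  # initial difference: one part in a billion

print(abs(a[1] - b[1]))                        # still tiny after one step
print(max(abs(x - y) for x, y in zip(a, b)))   # eventually of order 1
```

Determinism in the laws therefore does not deliver Laplace's predictability in practice: any error in measuring the initial state, however small, eventually swamps the prediction.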

In the 20th century Newtonian physics was displaced by Quantum Mechanics which showed that determinism of the kind envisaged by Laplace is false. For instance it is not possible to predict when an individual atom of radium will emit an alpha particle and become an atom of radon. All that can be predicted is what proportion of a certain mass of radium will have turned into radon in a certain time.
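A small simulation makes the radium example concrete (the decay probability below is an illustrative made-up number, not radium's actual decay constant): no one can say in advance when any individual simulated atom decays, yet the surviving fraction of a large sample lands very close to the statistically predicted value.

```python
import random

random.seed(1)

p = 0.05        # decay probability per time step (illustrative value)
atoms = 100_000
steps = 14      # with p = 0.05, roughly one 'half-life'

# For each atom, there is no saying at which step (if any) it decays;
# only the aggregate surviving fraction is predictable.
survivors = 0
for _ in range(atoms):
    decayed = False
    for _ in range(steps):
        if random.random() < p:
            decayed = True
            break
    if not decayed:
        survivors += 1

expected = (1 - p) ** steps   # statistically predicted fraction, ~0.49
print(survivors / atoms, expected)
```

The individual decay events are irreducibly random, but with 100,000 atoms the observed fraction sits within a fraction of a percent of the prediction, which is all that quantum mechanics claims to deliver.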

One philosophical response to quantum mechanics is to insist that indeterminism is true: so our actions must be random, and we don’t cause them anyway. The effort here seems to be to deny free will regardless.

You might think that a determinist would necessarily shun the idea of free will and personal responsibility, since our actions are all the product of physical brain activity over which the self (if it exists) has no control. Those who believe that determinism and free will are mutually exclusive are known as incompatibilists.

Those who believe that free will can be reconciled with determinism are called ‘compatibilists’ and according to the contemporary philosopher John Searle (b1932) this is the majority view among philosophers.

Incompatibilists who believe in determinism are known as ‘hard determinists’.

There is no word (as yet) for people who believe both that determinism is false and that free will does not exist (randomists, perhaps?).

One hard determinist is the neuroscientist Colin Blakemore (b1945): “… all those things that you do when you feel that you are using your mind (perceiving, thinking, feeling, choosing, and so on) are entirely the result of the physical actions of the myriad cells that make up your brain.” Consequently, “It makes no sense (in scientific terms) to try to distinguish sharply between acts that result from conscious intention and those that are pure reflexes or that are caused by disease or damage to the brain.” It seems to follow that “the addict is not ill and is surely not committing a crime simply by seeking pleasure.” (Blakemore 1988)

Another hard determinist, or perhaps a randomist, since he allows the influence of random events in biological development and behaviour, is the biologist Anthony Cashmore. Even if quantum theory eventually shows that determinism is false “it would do little to support the notion of free will: I cannot be held responsible for my genes and my environment; similarly I can hardly be held responsible for any [random] process that may influence my behaviour.” (Cashmore 2010)

Whether or not determinism is true, there are philosophers who believe that free will is impossible on purely logical grounds. The philosopher Arthur Schopenhauer (1788-1860) said “A man can surely do what he wants to do. But he cannot determine what he wants.”

Carrying on this theme the philosopher Galen Strawson (b1952) believes that what one wants is “just there, just a given, not something you chose or engineered – it was just there like most of your preferences in food, music, footwear, sex, interior lighting and so on… [Wants] will be just products of your genetic inheritance and upbringing that you had no say in… you did not and cannot make yourself the way you are.” (Strawson G 2003) If you can make yourself the way you are then you must have some nature that enables you to do that; if you can make that nature then you must have that ability built in and so on for an infinite regress. Since there is an infinite regress the idea of free will must be false.

Determinism is of course also tied to an infinite regress which is only terminated by the idea of the ‘big bang’ (but what caused the big bang?).

There are philosophers such as Ayn Rand (1905 – 1982) who believe, contrary to Strawson, that Man is a being of self-made soul. (Rand 1966)

Jean-Paul Sartre (1905 – 1980) claimed we have free-will whether we like it or not: “We are always ready to take refuge in a belief in determinism if this freedom weighs upon us or if we need an excuse.” (Sartre 1956)

The free-will determinism debate is anchored in fixed metaphysical positions which are then dressed up in complex and seemingly incontrovertible arguments.

Compatibilism regards ‘free will’ not as independent agency but, rather, the feeling of independent agency. Thus a person acts freely when they do what they wished to do and feel they could have done otherwise. One of the earliest compatibilists was Thomas Hobbes (1588 – 1679): “…from the use of the words free will, no liberty can be inferred of the will, desire, or inclination, but the liberty of the man; which consisteth in this, that he finds no stop in doing what he has the will, desire, or inclination to do.” (Hobbes 1690) A person would not fail to do what they wished, or do what they did not wish, unless coerced by acute discomfort, threat or torture.

For the compatibilist the wish is determined by the genetic makeup and life history of the person, nature plus nurture, so free will is just being able to act as one wishes without coercion. However, the person, according to determinism, has no power to change his or her future whether he or she is coerced or not. So the feeling of having been able to have done otherwise than what he or she did must be a delusion. Thinking freely must also be an impossibility. In particular, the espousal of the doctrine of determinism must have been determined, and those who defend the opposite, non-determinism, must have been similarly determined.

This leads to an endless debate between non-determinists who believe they can induce the determinist to make a non-determined decision and determinists who believe they can determinedly box in the non-determinist to see his impotence. John Searle was evidently once asked, “If someone could unequivocally prove determinism, would you accept it?” to which Searle replied, “Are you asking me to freely accept or reject such a proposition?” He points out that if “you go into a restaurant and they give you a menu and you have to decide between the veal and the steak. You cannot say to the waiter, ‘Look, I’m a determinist. Que sera sera’ because even doing that is an exercise of freedom. There is no escaping the necessity of exercising your own free choice.” (Searle 2000)

In other words whether we have free will or not, it is a difference that does not make a difference.

Incompatibilists who reject determinism but accept free will are called Libertarians. Libertarianism is the theory that, despite what has happened in the past, and given the present state of affairs and ourselves just as they are, we can choose or decide differently than we do, and so act to make the future different.

The idea is that the future normally consists of several alternatives and one has the power to choose freely which alternative to pursue.

A modern libertarian is the former New South Wales Supreme Court Judge, David Hodgson (1939-2012). He accepts that some combination of deterministic laws and quantum randomness is one form of causation. But he insists there is another kind of causation operating in the conscious decisions and actions of human beings, and perhaps also of non-human animals, ie ‘volitional causation’ or ‘choice’. He suggests that physical law does not necessarily imply determinism, ie a number of possible futures may all be consistent with physical law. He grants that the choices a person might come to may partly be the result of unconscious reasons and motives codified in the neural mechanisms. But the function of consciousness is to “allow choice from available alternatives on the basis of consciously felt reasons …the rationality and insight of normal adult human beings, even though far from complete or perfect, is generally sufficient for them to be considered as having free will and responsibility.” (Hodgson 1999)

The motive of both libertarians and compatibilists seems to be to justify holding people morally responsible for their actions. The libertarian might also claim that if we are not free agents then there is no basis for morality at all. The fear is that if moral responsibility is a prerequisite for guilt, blame, reward and punishment, and no one can do anything other than what they do, then no one should be rewarded or punished just as hard determinism seems to imply. Some hard determinists claim that reward and punishment is justified on the grounds that people do respond to reward and punishment in a determined way. But this leads to the view that the rewarders and punishers do what they do without grounds or justice, whereas the rewarded or punished are suckers, taken in by the authority of the judgers, continuing to believe in their guilt or worth. (Warnock 1998)

Compatibilists hold that even though people cannot do anything other than what they do, they are nevertheless morally responsible. There is an argument from Donald MacKay (1922-1987) which shows that even if there is a Laplacian demon or God who knows all about the state of my brain, and even if He claims to be able to predict my every action, I can have no reason to believe any of His predictions (which must necessarily include His knowledge of whether I believe the prediction or not). As I do not know whether He has predicted that I will believe or not, He has given me no grounds for believing the prediction or not. (McIntyre 1981)

So according to MacKay, even if the universe is determined the self must regard itself as an agent capable of moral choices and act accordingly. Determinism makes no difference to how we conduct our lives.

Philosophers of a determinist persuasion have stuffed the self into a variety of strait-jackets in an attempt to avoid the dreaded idea of the soul. Personal experience must be denied or at least proscribed at the risk of introducing personal agency. The idea of a responsible self is opposed by the idea of scientific explanation and prediction. On the other hand philosophers of a libertarian conviction try to find in science evidence that the world is not ‘causally closed’. This could allow free will and justify the retention of our jurisprudence, against the revisionist urgings of those determinists who feel all punishment is unjust.

Peter Strawson (1919-2006) thinks that the metaphysical dispute between the compatibilists and the incompatibilists is ill-framed. It can be resolved if each side would relax a little. The compatibilist normally portrays jurisprudence as an objective instrument of social control, excluding the essential element of moral responsibility. The incompatibilist is appalled that if determinism is true then the concepts of moral obligation and responsibility really have no application, and the practices of punishing and moral condemnation etc are really unjustified. (Strawson P 1962)

But both sides, says Strawson, neglect the fact that “it matters to us [a great deal] whether the actions of other people – and particularly of some other people – reflect attitudes towards us of goodwill, affection, or esteem on the one hand or contempt, indifference, or malevolence on the other …The human commitment to participation in ordinary inter-personal relationships …is too thoroughgoing and deeply rooted for us to take seriously the thought that a general theoretical conviction might so change our world that, in it, there were no longer any such things as inter-personal relationships as we normally understand them… The existence of the general framework of attitudes itself is something we are given with the fact of human society. As a whole, it neither calls for, nor permits, an external ‘rational’ justification.”

According to Strawson, determinism does not entail that anyone who caused an injury was ignorant of causing it or had acceptable reasons for reluctantly going along with causing it. Nor does it entail that nobody knows what he’s doing or that everybody’s behaviour is unintelligible in terms of conscious purposes or that everybody lives in a world of delusion or that nobody has a moral sense which is what would be required if determinism was at all relevant. Compressing Strawson’s argument down from his 11,000 words: If determinism is true this would imply that our nature includes the concept of moral responsibility that we apply in our jurisprudence. It would not be rational, even if determinism is true, to change our world to dispense with our moral attitudes.

Nicholas Maxwell recasts the problem from ‘free-will versus determinism’ to ‘wisdom versus physicalism’. (Maxwell 2005) For of all the various constructions that could be placed on the term free-will he considers that the one most worth having is not the ‘capacity to choose’ but rather, ‘the capacity to realise what is of value in a range of circumstances’ (in both senses of the word ‘realise’ ie: apprehend and make real). Secondly he characterises physicalism as “the doctrine that the universe is physically comprehensible.” It is not determinism but the idea that the universe is understandable that characterises physicalism. The problem of free will then comes down to how can that which is of value associated with human life (or sentient life more generally) exist embedded in the physical universe? In particular how can understanding and wisdom exist in the physical universe?

Both Peter Strawson’s and Nicholas Maxwell’s reformulation of the free-will debate appear to be compatibilist with respect to moderated concepts of free-will and determinism. Anything that weakens fundamentalist views ought to be welcomed, though how these views can be taken forward into empirical investigation is not apparent.

I think that one of the difficulties with the debate on free will is what it means to talk about ‘moral responsibility’. The usual interpretation of this concept is that when someone has done something reprehensible we hold them to account: we blame them for some situation and punish them. Blame is the attempt to impose shame on the part of the offender so as to inhibit activity. The dictionary definition of ‘responsibility’ is only vaguely related to this scenario. Responsibility is “(Latin respondeo = to respond) the quality or state of being able to respond to any claim or duty.” Thus a responsible person can set in place those procedures necessary to prevent harm; if he has done wrong he can act to put the situation right; if some situation arises that is perceived as morally wrong he can take the requisite actions. Irresponsibility is where one seeks to evade one’s duty by excuses and inaction. Those who claim that no one is responsible for anything should be asked what they are ashamed of.

It seems to me that what is worth having, for oneself and for society at large, is this ability to respond to situations (to take ‘responsibility’) and do whatever is necessary in the circumstances in which we find ourselves. This means that responsibility is closely tied to wisdom: it is responsible to acquire wisdom, and it is wise to act responsibly.

Whether we have ‘free will’ in some ultimate sense, or whether our actions are ultimately ‘determined’, is a metaphysical matter. Such concerns are secondary to the fact of ‘moral responsibility’, which we can (hopefully) exercise regardless of our metaphysical leanings.

Libertarianism does not entail the idea that decisions are divorced from circumstances. It does presuppose, I believe, the ability to predict the future with some degree of confidence. “Able to choose otherwise in the same circumstances” restricts the possibilities for ‘free will’ by demanding that free will mean nothing more than caprice. Responsible action requires gathering the information relevant to the decision, at which point the decision may become ‘necessitated’ by what one now knows. This does not mean that the information caused the decision, or that one is relieved of responsibility for it.

Scientific investigation of the questions of free will and compatibilism is difficult in principle because they are metaphysical issues that science cannot address directly. The side of the debate that people take would seem to depend on introspection of their own decision-making processes, and for much of the 20th century psychological investigation of introspective accounts was considered worthless. So there is very little research on the subject.

There are, however, questions related to the metaphysical problem of free will that can be investigated empirically. For instance, the question of whether one’s attitude to free will affects one’s moral sense has been investigated.

In one experiment, 119 undergraduates were randomly assigned to one of five groups to answer the same set of 15 standard reading-comprehension, mathematical and reasoning problems (Vohs & Schooler 2008). Participants were told they would receive $1 for each problem they solved correctly. In three of the groups participants marked their own answers, paid themselves, and then shredded their answer sheets, which gave ample opportunity to cheat. The other two groups had no opportunity to cheat. The five groups were treated slightly differently.

The three cheating-possible groups were each given a series of 15 statements, which they were asked to think about for one minute each.

One group were given statements that were pro-determinism such as “a belief in free will contradicts the known fact that the universe is governed by lawful principles of science” and “Ultimately we are biological computers – designed by evolution, built through genetics, and programmed by the environment”.

Another group were given statements that were pro-freewill such as “I am able to override the genetic and environmental factors that sometimes influence my behaviour” and “Avoiding temptation requires that I exert my free will.”

The third group were given neutral statements such as “Sugar cane and sugar beets are grown in 12 countries.”

One of the two no-cheating groups was also given the pro-determinism statements to study before doing the test; the other was given the free-will statements. This gave two groups of interest that could cheat – one primed with determinism, one primed with free will – and three control groups to act as a baseline. The average reward for the determinism-primed group that was able to cheat was $11 ± 1, whereas each of the other four groups obtained approximately $7 ± 1 (with non-significant variation between them).
[Figure: free will and cheaters]
It thus appears that the spreading of deterministic views is liable to increase modest forms of unethical behaviour, a result significant at the 1% level. Whether this generalises to more serious offences, and whether a belief in determinism may compensate for these minor offences with increased compassion for the less well off and a decreased desire for revenge, is not known.
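The reported figures are at least consistent with that significance level. As a rough check, a two-sample test can be run on the summary statistics alone. This sketch assumes the quoted ‘± 1’ values are standard errors and uses a normal approximation to the t distribution; the paper’s own analysis may differ:

```python
import math

# Hypothetical summary statistics read off the reported result
# (assuming the "± 1" figures are standard errors of the group means)
mean_det_cheat = 11.0   # determinism-primed, able-to-cheat group ($)
mean_others = 7.0       # approximate mean of the other four groups ($)
se_det, se_others = 1.0, 1.0

# t statistic computed from the two means and their standard errors
t = (mean_det_cheat - mean_others) / math.hypot(se_det, se_others)

# Two-tailed p-value via a normal approximation to the t distribution
# (reasonable here given the group sizes)
p = 2 * (1 - 0.5 * (1 + math.erf(t / math.sqrt(2))))

print(f"t ≈ {t:.2f}, two-tailed p ≈ {p:.4f}")
```

On these assumed numbers the difference comes out well below the 1% threshold, matching the significance the text reports.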

Nevertheless, it seems that the question of free will is not just philosophical but is of great interest in jurisprudence, as libertarians such as David Hodgson have claimed.

So do we have free will? Well, if this experiment generalises, we’d better believe it.

References

Blakemore C (1988) The Mind Machine BBC Books pp 7, 270, 170

Cashmore AR (2010) The Lucretian Swerve: The biological basis of human behaviour and the criminal justice system Proc Nat Acad Sci USA vol 107(10) p4499-4504

Hobbes T (1690) Leviathan chapter 21

Hodgson D (1999) Hume’s Mistake Journal of Consciousness Studies vol 6 no 8-9 p210

Lighthill J (1986) The Recently Recognised Failure of Predictability in Newtonian Dynamics Proceedings of the Royal Society of London A 407: 35-50.

Maxwell N (2005) Science versus Realization of Value, not Determinism versus Choice Journal of Consciousness Studies vol 12 no 1 p53

McIntyre JA (1981) MacKay’s Argument for Freedom Journal of American Scientific Affiliation 33 (Sept) p169-171

Rand A (1966) Philosophy and a Sense of Life The Romantic Manifesto Signet p 28

Sartre J-P (1956) Being and Nothingness: An essay in phenomenological ontology (transl HE Barnes) New York: Philosophical library p78

Searle JR (2000) Consciousness Free Action and the Brain Journal of Consciousness Studies vol 7 no 10 p11

Strawson G (2003) The Buck Stops – Where? (Interview with T Sommers) The Believer (March 2003)

Strawson P (1962) Freedom & Resentment Proceedings of the British Academy vol 48 p1-25

Vohs KD & Schooler JW (2008) The Value of Believing in Free Will: Encouraging a belief in determinism increases cheating Psychological Science vol 19(1) p49

Warnock M (1998) An Intelligent Person’s Guide to Ethics Duckworth p 92