About Michael Davidson

Author of "Rethinking the Mind". Computer systems designer. BSc Physics, PhD Astronomy.

What is Imagination?

The dictionary defines imagination as
a) the formation of a mental image or picture
b) the formation of an idea
c) the formation of a concept that is not real or present
d) the faculty permitting visionary or creative thought.

What I am concerned with is this faculty of imagination. For example, writing a musical composition or a novel, forming a scientific theory, or designing a car or a jumbo jet. These are creative acts.

Materialists since the Enlightenment have argued that imagination is merely the recombination of ideas, and that ideas are abstracted from the senses.

Hume

David Hume (1711-1776) said:
“…this creative power of the mind amounts to no more than the faculty of compounding, transposing, augmenting or diminishing the materials afforded to us by the senses and experience. When we think of a golden mountain we only join two consistent ideas, gold and mountain, with which we were formerly acquainted. A virtuous horse we can conceive; because, from our own feeling, we can conceive virtue; and this we may unite to the figure and shape of a horse, which is an animal familiar to us. In short all the materials of thinking are derived either from our outward or inward sentiment: The mixture and composition of these belongs alone to the mind and will. Or, to express myself in philosophical language, all our ideas or more feeble perceptions are copies of our impressions or more lively ones.” [Hume 1748]

Hume did not say how he selected these two concepts out of the millions we have, whether the combination of two concepts is always meaningful, or how to select a useful combination.

‘Golden mountains’ and ‘virtuous horses’ seem to be only the stuff of novels.

Hume did not address how we can talk about horses and mountains in the first place. These things do not exist in our heads but in the world.

Huxley

In the 19th Century Thomas Huxley (1825-1895) claimed that conscious choice played no part in this combining of two or more concepts.

“The consciousness of brutes would appear to be related to the mechanism of their body simply as a collateral product of its working, and to be as completely without any power of modifying that working as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery. Their volition, if they have any, is an emotion indicative of physical changes, not a cause of such changes. … the argumentation which applies to brutes holds equally good of men.” [Huxley 1912]

This view is called ‘Epiphenomenalism’ (Greek epi = upon; phainomai = to appear). It is contrary to the common notion that our thoughts and desires cause our bodies to move in certain ways and towards certain goals.

The epiphenomenalist says that physical events in the brain cause these thoughts and desires. The idea is that physical events such as dropping a weight on your foot cause impulses in the brain which you experience as pain. But you are not able to originate any actions. The physical processes in your nerves and brain cause the screams and hops which follow. ‘You’ are an impotent spectator stuck inside an autonomous robot. There is nobody home.

Behaviourism

The 20th-century school of psychology known as behaviourism espoused epiphenomenalism: behaviour is always the result of some reflex in response to some stimulus. The founder of this school, John B Watson (1878-1958), held that

“… such things [as concepts and general ideas are] mere nonsense; … all of our responses are to definite and particular things. I never saw anyone reacting to tables in general but always to some particular representative… The question of meaning is an abstraction, a rationalisation and a speculation serving no useful scientific purpose.” [Watson 1920]

We have now got rid of meaning.

In the same vein the Oxford philosopher Gilbert Ryle (1900-1976) contended that we could paraphrase any statement involving ‘mind words’ such as ‘imagine’ without loss of meaning. A statement about behaviour would be as good or better:
“picturing, visualising, or ‘seeing’ is a proper and useful concept but its use does not entail the existence of pictures which we contemplate or a gallery in which such pictures are ephemerally suspended. Roughly imaging occurs, but images are not seen… True, a person picturing his nursery is, in a certain way, like that person seeing his nursery, but the similarity does not consist in his really looking at a real likeness of his nursery, but in his really seeming to see his nursery itself, when he is not really seeing it. He is not being a spectator of a resemblance of his nursery, but he is resembling a spectator of his nursery…. There is no answer to the spurious question, ‘where do the objects reside that we fancy we see?’ since there are no such objects.” [Ryle 1949,2000]

The mind has disappeared!

The psychologist B F Skinner (1904–1990) applied the behaviourist principle to political theory. He advocated the abolition of the idea of ‘Autonomous Man’. “Freedom and dignity illustrate the difficulty. They are the possessions of the autonomous man of traditional theory, and they are essential to practices in which a person is held responsible for his conduct and given credit for his achievements. A scientific analysis shifts both the responsibility and the achievement to the environment.” [Skinner 1971, 1973].

Now, even the brain has disappeared.

The self-negating nature of the above ideas does not seem to have bothered any of their advocates. The arguments, in whatever way they are mistaken, are certainly imaginative.

Artificial Intelligence

One of the reasons that behaviourism has fallen somewhat from favour is the rise of ‘Artificial Intelligence’. Computers have internal states. So the idea that the internal states of human beings are irrelevant is no longer plausible (if it ever was).

No one has yet built a machine that displays ‘imagination’ as well as a human. Nor one that displays the versatility of a human.

Some computer programs can now make a passable attempt at music, novels and the like. IBM’s ‘Deep Blue’ beat the world chess champion, and DeepMind’s ‘AlphaGo’ has beaten the top human players at the even more complex game of go. This has given rise to doomsday scenarios in which computers will think for themselves. They may decide that humans are suitable merely as pets or, worse, that humans are dispensable altogether.

This idea depends for its force on the asserted equivalence of the human brain and a computer. In the early days of AI it was claimed that humans functioned according to rules such as ‘if I am thirsty, have a drink’. It soon became clear that there were so many rules that it was impossible to code them all. The end result of this technology was a few applications in narrow domains of expertise.

It also led to wide-spread disillusionment with AI.

The next technical advance which revived the hope for AI was the use of statistics to digest huge quantities of input data and ‘learn’ a task. Applied to speech and text, this approach gave rise to automatic translation: you can now speak into a smartphone in one language and get back the ‘same’ meaning in another.

But this technique has limits. The translation programs need huge quantities of text in the two languages to ‘understand’ how the two languages correspond. The programs don’t understand what the various sentences refer to. Humans can understand a far bigger vocabulary than the best translation apps. They can understand made up words. They also understand that words refer to things and events in the real world.

The same approach to ‘machine learning’ led to automatic face recognition. An accessible explanation of how such ‘neural networks’ function is on YouTube [Serrano 2017]. These machines need enormous numbers of example faces to become proficient; humans can recognise a person after one sighting.

The technique which currently has the AI community buzzing is so-called ‘deep learning’. This depends on multiply-connected artificial ‘neural networks’. Each layer of the network learns aspects of the task to be accomplished by the whole machine, and successive layers integrate these into appropriate conclusions or actions. This is the kind of thing used in self-driving cars, where many sensors feed into the input layers of the network.
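
To make the layered picture concrete, here is a minimal sketch, in Python, of the kind of multi-layer network described above. It is purely illustrative: the two-layer architecture, the toy XOR data and the plain gradient-descent training loop are my own assumptions, not a description of any particular system mentioned in this post.

```python
# A minimal multi-layer network: each layer transforms the output of the one
# below it, and training adjusts all the weights to reduce the error.
# Toy example only; real 'deep learning' systems have many more layers and weights.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR: a task no single-layer network can learn, but a two-layer one can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)   # input -> hidden layer
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)   # hidden -> output layer

for step in range(20000):
    hidden = sigmoid(X @ W1 + b1)        # forward pass through the layers
    output = sigmoid(hidden @ W2 + b2)
    d_out = (output - y) * output * (1 - output)      # backward pass:
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)    # propagate the error down
    W2 -= 0.5 * hidden.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hid;      b1 -= 0.5 * d_hid.sum(axis=0)

print(np.round(output, 2))   # typically close to [[0], [1], [1], [0]] after training
```

A network for a self-driving car follows the same layered principle, but with millions of weights and with sensor readings, rather than a four-row toy table, as its input.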

Such computers may make fewer mistakes than a human on similar tasks, but they are not infallible. They do not tire but they are not conscious.   They can’t extrapolate from experience in the same way that humans can.

Some computer scientists who have invested many years in the pursuit of ‘AI’ agree:

“Today’s AI, which we call weak AI, is an optimizer, based on a lot of data in one domain that they learn to do one thing extremely well. It’s a very vertical, single task robot, if you will, but it does only one thing. You cannot teach it many things. You cannot teach it multi-domain. You cannot teach it to have common sense. You cannot give it emotions. It has no self-awareness, and therefore no desire or even understanding” [Lee 2018]

Of course, the entities which display ‘imagination’ in the development of AI are the AI designers.

Imagination is Creation

My purpose is not to define ‘imagination’ exactly. I assert that human beings have such a faculty.

The starting point for a philosophical or scientific investigation is recognition that human beings have the ability to perceive, understand, imagine, communicate and act.

Human beings are able to think creatively to a greater or lesser degree. Jumbo jets exist now; a century ago they did not. By any criterion that is creation.

References

[Hume 1748] Hume D An Enquiry concerning the Human Understanding Sect II (modern version ed E Steinberg)

[Huxley 1912] Huxley T Method and Results Macmillan : p240, p243 available at
http://www.archive.org/details/methodresultsess00huxluoft

[Lee 2018] Lee K-F We are here to create available at   https://www.edge.org/conversation/kai_fu_lee-we-are-here-to-create

[Ryle 1949, 2000] Ryle G The Concept of Mind Penguin p234, 237

[Skinner 1971, 1973] Skinner BF Beyond Freedom and Dignity Penguin p30

[Watson 1920] Watson JB Is Thinking Merely The Action Of Language Mechanisms? British Journal of Psychology 11, 87-104. available at
http://psychclassics.yorku.ca/Watson/thinking.htm

[Serrano 2017] Serrano L A Friendly Introduction to Neural networks and Image Recognition available at
https://www.youtube.com/watch?v=2-Ol7ZB0MmU

What is Understanding?

What does it mean to say one understands something?

The dictionary lists three main meanings for ‘understanding’:

  • the perception and comprehension of the ideas expressed by others
  • the power of forming sound judgment about a course of action
  • something mutually understood or agreed upon.

So understanding requires

  • perception of what is and is not in one’s environment,
  • a familiarity with the subject under discussion and
  • experience of the consequences of actions and events.

With understanding one can think and act flexibly.

Aristotle

Aristotle (384-322BC) discussed ‘What is’ in his book The Categories.

What exists, according to Aristotle (simplifying somewhat), are entities or things. For example: a rock, a house, an animal, a person. Entities persist over a period of time and can change their properties and react to events.

Entities have properties such as colour, weight, intelligence etc. Properties can be said to ‘exist’ but they cannot exist without the entity. There is no such thing as the colour ‘red’ without some entity which we describe as red.

This was a big departure from the idea of Aristotle’s teacher, Plato (427-347BC). Plato held that the properties of things were the truest form of existence. Thus all the things we describe as red are pale imitations of the true ‘RED’ which exists in the real ‘World of Forms’. The world in which we live is a mere shadow of the ‘World of Forms’.

Natural Law

The ‘World of Forms’ may seem strange and counter-intuitive. But modern science and philosophy hold to the idea of ‘Natural Law’. This idea comes from the laws discovered in the realm of physics.

Physical laws are not universal generalisations about particular things (cats, desks, planets). They are rather statements about universal properties (eg mass, charge, momentum). ‘Red’ isn’t a universal property but energy is.

Even so, physicists tend to think of objects like photons and electrons as carriers of energy etc.

Plato’s legacy is that some people hold that the world we apparently live in is not the real world.

Computers

Those objects in the real world that ‘carry’ understandings are human beings. How they actually do this is the subject of much philosophical debate.

One popular idea is that ‘all thought is computation’. This is known as ‘strong artificial intelligence’ or ‘computationalism’. On this view understanding is some kind of computation, and since computation can be defined in mechanical terms, some configuration of levers and cogwheels could understand something; a thermostat, for example, would understand temperature.
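
To see how little ‘computation’ the thermostat example actually involves, here is a minimal sketch (my own illustration, with arbitrary setpoint and hysteresis values) of a thermostat’s entire control rule written out as code.

```python
# The whole of a thermostat's 'computation': a couple of comparisons.
# The 20 °C setpoint and 0.5 °C hysteresis are arbitrary illustrative values.
def thermostat(temperature_c, heater_on, setpoint=20.0, hysteresis=0.5):
    """Return whether the heater should be on, given the current temperature."""
    if temperature_c < setpoint - hysteresis:
        return True           # too cold: switch the heater on
    if temperature_c > setpoint + hysteresis:
        return False          # too warm: switch it off
    return heater_on          # in the dead band: leave it as it is

# Step through a falling and then rising room temperature.
state = False
for t in [21.0, 20.0, 19.0, 19.8, 20.6]:
    state = thermostat(t, state)
    print(f"{t:4.1f} C -> heater {'on' if state else 'off'}")
```

Whether this handful of comparisons deserves to be called ‘understanding temperature’ is exactly what the rest of this section puts in question.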

In the 19th century Charles Babbage (1791-1871) published designs for a computer which consisted of cogwheels, cams and so on. It is difficult to believe that such a computer would actually understand anything.

Modern computers do not change this conclusion. They score over Babbage’s computer only in miniaturisation, scale (number of ‘cogwheels’) and speed.

There is a subjective ‘feel’ to understanding something. There is an even more pronounced ‘feel’ to not understanding something (doh!). If humans are mechanical computers of some kind there seems to be no reason why we feel anything. For example, why do we feel pain?

Mathematics

The mathematician Roger Penrose admits he doesn’t really know what ‘understand’ means. He thinks this is because he is a mathematician: mathematicians do not need precise definitions of the things they are talking about; they only need to say something about the connections between them. [Penrose 1993]

Penrose thinks computationalism must be wrong: some mathematical things are not computational. For instance, there is a conjecture due to Goldbach (1690-1764) that every even number greater than 2 can be expressed as the sum of two primes (eg 40 = 17 + 23, 198 = 97 + 101). No computer program can settle this conjecture by brute force: a search could refute it by finding a counterexample, but no finite amount of checking can verify it for all even numbers. Computer trials show that it holds up to about 10^18.
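
A brute-force check makes the point concrete: a program can test the conjecture for as many even numbers as we have patience for, and could in principle refute it by finding a counterexample, but no finite run can verify it for all even numbers. A minimal sketch (my own illustration, not Penrose’s):

```python
# Check Goldbach's conjecture for even numbers up to a small limit.
# Finding a counterexample would refute the conjecture; finding none only
# verifies the finitely many cases actually checked.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_pair(n):
    """Return primes (p, q) with p + q == n, or None if there is no such pair."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

for n in range(4, 201, 2):                      # every even number 4..200
    assert goldbach_pair(n) is not None, f"counterexample: {n}"

print(goldbach_pair(40), goldbach_pair(198))    # e.g. (3, 37) and (5, 193)
```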

Penrose holds that no specific set of rules can make up ‘understanding’.

Memes

Another idea is that the mind consists of ‘units of cultural transmission’ called ‘memes’. The biologist Richard Dawkins coined this term by analogy with ‘gene’. Various principles, catch-phrases, fashions, ways of making pots, even tunes are ‘memes’.

“Memes propagate themselves in the meme pool by leaping from brain to brain via a process which in the broad sense can be called imitation.” [Dawkins 1976]

Philosopher Daniel Dennett takes this up:
“A human mind is itself an artefact created when memes restructure a human brain in order to make it a better habitat for memes” [Dennett 1995.1]

Dennett expands this idea: “A scholar is just a library’s way of making another library” [Dennett 1995.2 ]

Dennett provides no commentary on how memes replicate. Do they blend like cake ingredients or do they have dominant/recessive characteristics like genes? [Orr 1996]

So ‘understanding’ on this view is having the appropriate meme in operation at the appropriate time. Do you have a library?

Scientism

There are some people who think that the only understanding there is is scientific understanding. For instance, the neuroscientist Sam Harris claims that
“questions of right and wrong, good and evil are questions about human and animal well-being. The moment we admit this we see that science can, in principle, answer such questions – because the experience of conscious creatures depends on the way the universe is.”

‘Well-being’, according to Harris, includes not only happiness, but also “truth, justice, fairness, intellectual pleasure, courage, creativity and having a clear conscience.” This approach to morality “will completely dislodge religion from the firmament of our concerns. The world religions will land somewhere near astrology, witchcraft and Greek mythology on the scrapheap. In their place we will have a thoroughgoing understanding of human flourishing.” [Gefter 2010]

This view is an example of ‘scientism’: the belief that the methods of the physical and natural sciences are appropriate (or even essential) to all other disciplines, including philosophy, the humanities and the social sciences. [Burkeman 2013]

So what are the methods of the natural sciences? Some eminent scientists, such as Sir Peter Medawar (1915-1987), contend that there is no such method. If there were one, a ‘scientist’ who set out to find a cure for (say) rheumatoid arthritis and failed could only have failed because he did not know the ‘scientific method’ or was too lazy to apply it; in either case he should be fired. Since that conclusion is absurd, there can be no such recipe. [Medawar 1984]

The Sciences

Perhaps the best exemplars of our understanding are the sciences. But this does not mean that the only understanding there is is through science. People understood a lot about a lot of things long before science was formulated as a discipline (if it ever was).

Philosophers Bennett and Hacker contend: “it is absurd to suppose that science … is the primary measure of what does and does not exist. One needs no science to discover…that there is a tree in the garden or that there are no trees in one’s room.” [Bennett & Hacker 2003]

But we can say what scientific theories and discoveries have done for us. They explain phenomena. They enable us to predict new phenomena. They enable us to some degree to control the area under study.

Generally understanding proceeds from explanation through prediction to control, but not always. Steam power was controllable before there was any proper explanatory theory.

Understanding

From this we can define ‘understanding’ in operational terms:
‘Understanding’ is the appreciation of the properties and behaviour of things in the real world to the point where we can

  • explain phenomena and events,
  • predict new ones and – ultimately
  • control them.

It is not necessary to be able to do all three perfectly: there are degrees of understanding.

A thermostat fails the test.

The ‘ability of humans to perceive, understand, imagine, communicate and act’ is the fundamental starting point for any theory of the mind.

You will find this discussed in the first chapter of my book ‘Rethinking the Mind’, here:
https://www.amazon.co.uk/Rethinking-Mind-1-Historical-Perspective-ebook/dp/B007JYFHVM

References

[Aristotle c330BC] The Categories (transl WD Ross) available at
www.constitution.org/ari/aristotle-organon+physics.pdf

[Bennett & Hacker 2003] Bennett MR & Hacker PMS The Philosophical Foundations of Neuroscience Blackwell Publishing p374

[Burkeman 2013] Burkeman O “‘Scientism’ wars: there’s an elephant in the room, and its name is Sam Harris” The Guardian 27 Aug 2013 available at https://www.theguardian.com/news/oliver-burkeman-s-blog/2013/aug/27/scientism-wars-sam-harris-elephant

[Dawkins 1976] Dawkins R (1976,1989) The Selfish Gene Oxford University Press p192

[Dennett 1995.1] Dennett D Darwin’s Dangerous Idea Penguin p365

[Dennett 1995.2 ] Dennett D Darwin’s Dangerous Idea Penguin p346

[Gefter 2010] Gefter A “Crusader for Science (interview with Sam Harris)” New Scientist vol 208 (2782) p46-47

[Medawar 1984] Medawar P The Limits of Science Oxford University Press p51

[Orr 1996] Orr H A “Boston Review: Dennett’s Strange Idea” available at
http://www.bostonreview.net/BR21.3/Orr.html

[Penrose 1993] Penrose R Shadows of the Mind Vintage p68

What is Perception?

[Image: Adelson’s chequerboard illusion]
I regard the ‘ability to perceive, understand, imagine, communicate and act’ as the fundamental starting point for any theory of the mind. You will find this discussed in the first chapter of my book ‘Rethinking the Mind’. This post concentrates on perception (the first of the five abilities) and what it is.

Representationalism

The predominant idea in modern neuroscience concerning perception is that what we perceive are images or representations in the brain. In other words, the world around us is an hallucination created by the brain (see for example [Seth 2017]).

This view is known as ‘Representationalism’.

It is wrong.

What we perceive are the various objects, and their attributes and behaviours in the real world.

‘Hallucination’ in common parlance means ‘seeing or hearing things that are not there’. According to psychology professor Benny Shanon of the Hebrew University in Jerusalem, [Shanon 2003] the characteristics of those things we describe as hallucinations are:

  1. “Vividness: subjectively the experience is that of a vivid perception.
  2. Non-correspondence: Factually the experience does not correspond to any real objects or state of affairs in the real world.
  3. Ignorance: the cognitive agent however is not cognizant of 2).
  4. False Judgment: hence the hallucinatory experience involves false judgment on the part of the cognitive agent.
  5. Negative evaluation: Thus overall the hallucinatory experience is evaluated pejoratively, and it is assumed that it is of no positive import. Typically experience is taken to be indicative of some psychological impairment.
  6. Dismissal: Implied in all this is the assessment that any person other than the one having the hallucinatory experience will adhere to the negative evaluation indicated in 5).”

So to classify perception as an hallucination is to deny all objective criteria about what is and what is not. This justifies the current philosophical fad called ‘postmodernism’.

Postmodernism is a term applied to certain approaches in the social sciences and philosophy. It is characterised by ethical relativism and subjectivity. It emphasises the social construction of ‘knowledge’. It is generally sceptical towards science. Thus, “if it’s true for you, it’s true”; “there are facts and alternative facts”; “there is western truth and Russian truth”, etc.

Perhaps rather than neuroscience validating postmodernism, it is the other way around. Perhaps ‘neuroscience’ as it is currently practised is on rather shaky philosophical ground.

Direct Realism

The Scottish Enlightenment philosopher Thomas Reid (1710-1796) long ago recognised that we cannot deny certain principles consistently. Among these principles is: “Those things that we clearly perceive by our senses really exist and really are what we perceive them to be.”[Reid 1785]

For if we deny this principle we cannot meaningfully converse with others. You might think this thing is a lion but I think it’s a typewriter.

Reid’s principle is known as ‘direct realism’.

Reid did not deny that our senses can deceive us. Neuroscientists are forever finding new ways in which they can. In the picture above, the two areas labelled ‘A’ and ‘B’ appear to be different shades of grey (Adelson’s chequerboard illusion). They are actually the same shade, which becomes obvious when the rest of the picture is blocked out so that only the two squares remain.
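
One way to convince yourself that the two squares really are the same shade is to read the pixel values straight out of a digital copy of the illusion. The sketch below is hypothetical: the filename ‘checkerboard.png’ and the two sets of coordinates are placeholders that would need adjusting to wherever squares A and B sit in your copy of the image.

```python
# Sample the grey level inside squares 'A' and 'B' of the chequerboard illusion.
# 'checkerboard.png' and the (x, y) coordinates below are placeholders.
from PIL import Image

img = Image.open("checkerboard.png").convert("L")   # "L" = 8-bit greyscale

point_a = (120, 200)   # a point inside square A (hypothetical coordinates)
point_b = (190, 280)   # a point inside square B (hypothetical coordinates)

print("A:", img.getpixel(point_a), " B:", img.getpixel(point_b))
# Both values come out identical, even though B looks much lighter than A.
```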

But it does not follow that, because our senses can be deceived, they always are, or even that they are deceived most of the time. We can be caught up in a movie and identify with the characters, but we can still recognise the fact that it is a movie.

Once the chequerboard illusion above is pointed out, we recognise the truth. Only postmodernists and neuroscientists conclude from this that nothing is real.

Neuroscientists could retort that, since everything goes on in the brain, at least the brain is real; they profess to be good card-carrying materialists. But that retort contradicts the idea that the world is an hallucination.

You perceive by using your senses of sight, hearing, taste and smell. You see with your eyes. You hear with your ears, and so on. You do not see, hear, taste or smell with your brain. The brain is not an organ of perception, even though you cannot see, hear, etc without it. You, the person, are what perceives, and this is manifest in the way you act and communicate.

The senses are not infallible. It is possible to find that what you thought you perceived was not actually what was there. In this case you say, “I thought … but it turned out that …”.

JJ Gibson

The psychologist JJ Gibson (1904-1979) classified vision into four distinct levels [Gibson 1986]:

  • Snapshot when the eye is stationary and functioning like a camera;
  • Aperture when the eye is able to scan the environment from a fixed position;
  • Ambient when the organism is able to turn its head and look around;
  • Ambulatory when the organism can walk around.

Gibson investigated these types of vision. In one experiment he had subjects look through a camera shutter so that they obtained a ‘snapshot’ wide-angle view of the environment for a fifth of a second. The subject had to find out what objects were on a table in front of him. He could take as many ‘snapshots’ as he liked and he could scan the table by moving his head.

Perception was seriously disturbed and the task was extremely difficult. What took only a few seconds with normal looking required many fixations…there were many errors.

Gibson emphasised the need for a person to move and see things from different perspectives to perceive what is really there. There are several clues in the environment during ambulatory vision such as perspective, parallax, occlusion of one surface by another and so on that locate an organism in its environment. In particular the organism can perceive itself as well as its environment. “Information about the self accompanies information about the environment” because we perceive parts of our own bodies.

Gibson’s experiments reveal that perception is not a passive process whereby photons strike the retina and stimulate various neural processes which are labelled as ‘perception’. It is an active process where the person deliberately changes his viewpoint (by moving his eyes, head and body) to extract from the environment those things that are invariant. Thus we perceive a table top as a rectangular surface even though all the retinal images relating to the table top are trapezoids of varying shape.

Perception does not produce ‘mental representations’. Perception enables the organism to function in its environment through active exploration.

So perception is the process whereby we get knowledge of our environment and the objects in it, their attributes and behaviour, and events.

References

[Gibson 1986] Gibson JJ  (1986) The Ecological Approach to Visual Perception Lawrence Erlbaum Associates

[Reid 1785] Reid T Essays on the Intellectual Powers of Man Essay 6 Chapter 5 p253-263 available at http://www.earlymoderntexts.com

[Seth 2017]  Seth A Your brain hallucinates your conscious reality  TED Talk 18 Jul 2017  available at https://www.youtube.com/watch?v=lyu7v7nWzfo

[Shanon 2003] Shanon B “Hallucinations” Journal of Consciousness Studies vol 10 (2) p3

Milgram Revisited: “Only obeying orders”

Adolf Eichmann was tried in Israel in 1961 for crimes against humanity. His crimes lay in his handling of the logistics of transporting millions of Jews to the concentration camps built for their extermination during WW2. His defence was ‘only obeying orders’.

Milgram’s Experiments

Eichmann’s defence inspired Stanley Milgram (1933-1984), a psychologist at Yale University, to perform one of the most infamous experiments in social psychology. He wanted to find out how far a person would go in inflicting pain in obedience to the authority figure of the experimenter.

He chose people varying widely in age, occupation and education as subjects. From the subject’s point of view, he and another person came to the laboratory to take part in a study of memory and learning. They were given a scientific-sounding rationale for the study. One of them became a ‘teacher’, the other a ‘learner’.

The ‘teacher’ was shown an electrified chair and given a sample 45-volt shock. The ‘learner’ was then placed in the electrified chair, wired up with electrodes and told that he would be read lists of word pairs. When he heard the first word of a pair again he was to say the second word. If he made a mistake he would be given an electric shock.

The ‘teacher’ was then taken to a different room (linked by intercom) where he was placed in front of a control panel with thirty switches labelled 15 to 450 volts, with descriptive designations from ‘slight shock’ to ‘danger: severe shock’ and finally ‘xxx’.

The experimenter, in a grey lab coat, started the ‘teacher’ off with the word pairs. He told the ‘teacher’ to administer the next level of electric shock whenever the ‘learner’ got a word pairing wrong.

In fact, the ‘learner’ was an actor who received no shocks but acted as though he did. The experimenter, unemotional in the face of objections from the ‘teacher’, simply encouraged him to continue the experiment. As the learner made mistakes the level of electric shock was stepped up. “At 75 volts, he grunts; at 120 volts he complains loudly; at 150 he demands to be released from the experiment… At 285 volts his response can be described only as an agonised scream. Soon thereafter he makes no sound at all.” (Milgram 1973)

Milgram solicited predictions of the result of his experiment from 14 colleagues. They almost uniformly predicted that the ‘teacher’ would refuse to obey the experimenter at 150 volts, where the learner asks to be released from the experiment. In fact about 60% of the ‘teachers’ went to the end of the experiment, administering the full 450 volts.

The subjects (‘teachers’) were usually agitated during the experiment – sweating, trembling, stuttering or laughter fits.   They were much relieved at the end of the experiment to find they had not hurt anyone – though some showed no emotion throughout. Variations of the experiment were tried to find what parameters influenced the result. When the ‘teacher’ was allowed to choose the shock level rather than being told to raise it to the next level, the average shock chosen was less than 60 volts – lower than the point at which the victim showed the first signs of discomfort.   Only 2 out of 40 subjects went as high as 320 volts.

When the experiment was altered so that the experimenter gave his instructions by telephone rather than being in the room with the ‘teacher’, the percentage of ‘teachers’ obedient to the 450 volt level fell to 20%. When the ‘teacher’ was relieved of the responsibility of pulling the lever that administered the shocks, and merely specified the level at which the shock should occur the percentage of ‘teachers’ going all the way to 450 volts went up to 92%.   In that case the subjects claimed that the responsibility rested with the person who actually pulled the lever.

Milgram concluded, “The essence of obedience is that a person comes to view himself as the instrument for carrying out another person’s wishes, and he therefore no longer regards himself as responsible for his actions… The most far-reaching consequence is that the person feels responsible to the authority directing him but feels no responsibility for the actions that the authority prescribes.  Morality does not disappear – it acquires a radically different focus: the subordinate person feels shame or pride depending on how adequately he has performed the actions called for by authority … the most fundamental lesson of our study [is that] ordinary people, simply doing their jobs, and without any particular hostility on their part, can become agents in a terrible destructive process. Moreover, even when the destructive effects of their work become patently clear and they are asked to carry out actions incompatible with fundamental standards of morality, relatively few people have the resources needed to resist authority.” (Milgram 1973)

The experiment has been repeated in various parts of the world with even higher percentages of obedience in some cases. Milgram gave the subjects  personality tests in an attempt to find those aspects of personality or character that would predict how far the subjects would go, but he found no correlation with any of the test results.

New Experiment

Now a slightly different version of Milgram’s experiment has been performed by a group of ‘cognitive neuroscientists’ from University College London and the Free University of Brussels led by Patrick Haggard (Caspar 2016). They wanted to find out to what degree the participants felt ‘in charge’ when they knowingly inflicted pain on each other and when they knew the aim of the experiment.

In the new experiments the participants (all female) were tested in pairs.  They took turns being ‘agent’ and ‘victim’ thus ensuring reciprocity.    Each was initially given £20.  The agent sat facing the ‘victim’ and so could monitor directly the effect of her actions.    In a first group of participants, the agent could freely choose on each trial to increase her own remuneration by taking money (£0.05) from the ‘victim’ (financial harm) or not.   Money transfer occurred in 57% of trials.   In a variation of the experiment the financial harm was accompanied by an electric shock to the ‘victim’ at a level that was tolerable but not pleasant (the electric shock was administered in 52% of trials).

In both of these groups the experimenter stood by and in some cases told the agent to take the money (group 1) or shock the victim (group 2).   In the other cases the experimenter told the agent to exercise her free choice.   There were also a number of trials as controls where the experimenter asked the agent to press the space bar whenever she wanted (‘active’) and where the experimenter pressed the agent’s finger on the space bar (‘passive’).

In order to investigate the agent’s ‘sense of agency’ (“the subjective experience of controlling one’s actions, and, through them, external events”) the key presses caused a tone to sound after a short delay (variously 200, 500 and 800 msec) and the participants were asked to judge the length of the interval. The rationale behind this is that action-result intervals are perceived as shorter when the person carries out the action voluntarily (such as raising one’s arm) than when the action is done passively (someone else raises the arm). So if coercion reduces this sense of agency, interval estimates should be longer in the coercive than in the free-choice condition.

Thus there were several comparison sets of data: free choice versus coercion, financial harm versus physical harm and harm versus no harm, as well as the control conditions (active versus passive). When they were ordered to press a particular key (producing either harm or no harm), the participants judged their action as more passive than when they had free choice, and they perceived the interval between the key press and the tone as longer (p = 0.001). This did not change depending on whether there was a harmful outcome, though it did when the potential harm was greater (ie physical rather than financial).

So the conclusion was that the coercion rather than the severity of the actual outcome was the determining factor in the sense of agency.   The agent experienced less sense of agency when she was coerced than when she freely chose between the same options – regardless of whether harm was actually inflicted.   So the plea “Only obeying orders” might not be just an attempt to avoid blame “but may rather reflect a genuine difference in subjective experience of agency.”

The participants were also given personality tests prior to the experiments to see if there were any predisposing factors. It was found that those scoring higher on empathy showed a greater reduction in the sense of agency when their actions had harmful outcomes.

In a second experiment, the same procedures were used but the agents were also hooked up to an electroencephalogram (EEG) to investigate changes in brain activity associated with the free-choice and coercive conditions. When an unpredictable stimulus such as a tone occurs it is followed by a ‘negative response potential’ approximately 0.1 seconds later in the frontal part of the scalp (usually referred to as the N100). The expectation was that the N100 would be larger in amplitude when the agent freely chose her action than when she felt coerced. This was indeed the case (amplitude ratio approx 1.3). So not only the subjective ‘sense of agency’ but also neurophysiological activity is reduced under coercion.

Haggard says people genuinely feel less responsibility for their actions when following commands regardless of whether they are told to do something evil or benign. So the ‘only obeying orders’ excuse shows how a person feels when acting under command.

Before Haggard did these experiments he had (along with the majority of neuroscientists and many modern philosophers) already espoused the philosophical viewpoints of physicalism¹, epiphenomenalism² and reductionism³. He claims that mind-body causation is dualist and “incompatible with modern neuroscience” since most neuroscientists believe that “conscious experiences are consequences of brain activity rather than causes.” “Philosophers studying ‘conscious free will’ have discussed whether conscious intentions could cause actions, but modern neuroscience rejects this idea of mind–body causation. Instead, recent findings suggest that the conscious experience of intending to act arises from preparation for action in frontal and parietal brain areas. Intentional actions also involve a strong sense of agency, a sense of controlling events in the external world. Both intention and agency result from the brain processes for predictive motor control….” (Haggard 2005)

And again: “… the cause of our ‘free decisions’ may at least in part, be simply the background stochastic fluctuations of cortical excitability.” (Filevich 2013)

Discussion

These experiments are interesting but care must be taken in their interpretation and in the consequences that may be claimed for jurisprudence. It is not clear whether the neurophysiological activity causes the subjective sense of agency or vice versa. What the experiments do reveal is that coercion causes both reduced sense of agency and reduced neurophysiological activity.

The experiments only concern what Elizabeth Pacherie terms ‘present-directed intentions’ ie those intentions which “trigger the intended action, …sustain it until completion, …guide its unfolding and monitor its effects”.   They do not touch upon ‘future directed intentions’ which are “terminators of practical reasoning about ends, prompters of practical reasoning about means and plans, and intra- and interpersonal coordinators” (Pacherie 2006).

One presumes that Haggard and his colleagues were motivated by future-directed intentions when they decided to do the experiments and write their paper. They were not simply acting as the result of ‘stochastic fluctuations of cortical excitability’. If so, then the sweeping general conclusion loses its force.

The 18th century philosopher David Hume (1711-1776) thought that every object of the mind must be either an immediate perception or an ‘idea’ – a faint copy of some earlier perception. (Hume 1748) This was criticised by his contemporary Thomas Reid (1710-1796): “It seemed very natural to think that [Hume’s book] required an author and a very ingenious one at that; but now we learn that it is only a set of ideas that came together and arranged themselves by certain associations and attractions.” (Reid 1764)

According to Haggard and his colleagues not even ideas are now involved – only ‘stochastic fluctuations of cortical excitability’.

The question of who bears personal responsibility is important to the rule of law. Certainly the person who gives the order to harm is culpable for the consequences.   But this does not absolve the person who actually carries out the order. The degree to which people feel responsible on average does not change the moral responsibility of any individual act.   Nor does it justify the inclusion of such ‘mitigating’ circumstances into criminal law.

Hannah Arendt (1906-1975) wrote a book on Eichmann’s trial (Arendt 1963), in which she coined the phrase “the banality of evil”.   It is not clear exactly what she meant by the phrase.   Milgram thought that she meant that Eichmann was not a “sadistic monster” but “an uninspired bureaucrat who simply sat at his desk and did his job“, and that she “came closer to the truth than one dare imagine.” (Milgram 1973)   It may well be true that in some situations evil is not perpetrated by fanatics and psychopaths but by ordinary people who see their actions as normal (banal = commonplace) within the prevailing conditions.   If so all of us are capable of committing horrendous crimes when the circumstances are right.

It is easy to see how ‘situationism’ (the philosophical belief that people act according to the situation in which they find themselves rather than by virtue of any moral or philosophical outlook they might have) is a credible paradigm.   But it predicts the actions of only 2/3rds of the subjects in the Milgram study. The new study suggests that there are character traits (eg ‘empathy’) that predict some aspect of the results (ie reduced sense of agency where there was a harmful outcome) more accurately.   But we do not excuse criminality on the grounds of character traits.

Evil was a common place in Nazi Europe, but for Arendt that did not render it excusable.   Whilst Arendt saw Eichmann as a cog in the machinery of the Final Solution she did not excuse his crimes nor fail to hold him morally responsible for his actions.   “If the defendant excuses himself on the ground that he acted not as a man but as a mere functionary whose functions could just as easily have been carried out by anyone else, it is as if a criminal pointed to the statistics on crime – which set forth that so-and-so-many crimes per day are committed in such-and-such a place – and declared that he only did what was statistically expected, that it was a mere accident that he did it and not somebody else, since after all somebody had to do it.” (Arendt 1963)

Despite the pressures, some people do have the resources to buck authority even when the authority has far more clout than the man (or woman) in the grey lab coat. For example, the US GI Ronald Ridenhour forced the US Congress to investigate the My Lai massacre in Vietnam, where US servicemen massacred an entire village of 300 or more civilians in 1968. (Ridenhour 1969) There were many people, such as Raoul Wallenberg (1912-1947) and Oskar Schindler (1908-1974), who protected Jews from the Holocaust despite great personal risk.

If there are attempts to influence the law on the basis that these experiments prove diminished responsibility they should be dismissed.


The above contains passages extracted from the book Rethinking the Mind. Get the first volume here: https://www.amazon.com/Rethinking-Mind-1-Historical-Perspective-ebook/dp/B007JYFHVM

Notes

  1. Physicalism: the doctrine that everything is physical, ie all is matter and energy in its many forms and hence subject to the laws of physics.
  2. Epiphenomenalism: the doctrine that mental events are mere by-products of physical events and that mental events in themselves do not cause anything. In the classic description due to Thomas Huxley (1825-1895) consciousness is simply a collateral product of the working of the body in the same way that a steam whistle accompanies the work of a locomotive engine.
  3. Reductionism: the doctrine that explanations of phenomena are to be found in the smaller entities that comprise them, eg heredity in terms of DNA or, in this case, human activities in terms of neural firings.

References

Arendt H (1963) Eichmann in Jerusalem: A Report on the Banality of Evil Penguin

Caspar EA, Christensen JF, Cleeremans A & Haggard P (2016) Coercion changes the Sense of Agency in the Human Brain Current Biology available at http://dx.doi.org/10.1016/j.cub.2015.12.067

Filevich E, Kühn S, Haggard P (2013) Antecedent Brain Activity Predicts Decisions to Inhibit PLOS ONE (February 13, 2013) available at http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0053053

Haggard P (2005) Conscious Intention and Motor cognition Trends in Cognitive sciences vol 9(6) p 290-295 available at http://www.cell.com/trends/cognitive-sciences/abstract/S1364-6613%2805%2900119-1

Hume D (1748) An Enquiry Concerning Human Understanding Section II ‘Of the Origin of Ideas’ para 12.

Milgram S (1973) The Perils of Obedience Harper’s Magazine p62-77 available at http://home.subell.net/revscat/perilsofobedience.html

Pacherie E (2006) Towards a dynamic theory of Intentions in Pockett S, Banks WP & Gallagher S (eds) Does Consciousness cause behavior MIT Press p 145-167 available at http://hal.archives-ouvertes.fr/docs/00/35/39/542/PDF/dynamics-intention-MIT-Pacherie-2006.pdf

Reid T (1764) An Enquiry into the Human Mind chapter 2.6 (ed J Bennett) available at http://www.earlymoderntexts.com/authors/reid

Ridenhour R  (1969) Letter to US Congress available at
http://www.law.umkc.edu/faculty/projects/ftrials/mylai/ridenhour_ltr.html


Why Nations Fail review

Book Review

Acemoglu D & Robinson JA (2013) Why Nations Fail: The Origins of Power, Prosperity and Poverty Profile Books

A satellite photograph of Korea at night shows North Korea as dark as – well – night, whilst South Korea blazes forth with light pollution. The South is the 29th richest country in the world, with a GDP per head of about $37,000. The North is one of the poorest (GDP per head about $1,800), suffering from periodic famine and desperate poverty. Why is this?

One easy answer is that the North is a dictatorship whereas the South is a democracy. Democracies are good; dictatorships are bad.

It is not so simple.

At the end of WWII Korea was divided between North and South at the 38th parallel. In 1950 the North invaded the South and almost succeeded in overrunning it. At the end of the Korean War (1953) the states were again divided, but both were dictatorships. The South’s GDP increased at 10% per year between 1962 and 1979. Yet it only became a democracy, with separate executive, legislative and judicial bodies, in 1987, after a succession of three dictators (two of whom seized power in coups d’état; one was assassinated).

Figuring out what are the vital factors and what drives the changes occurring in a society is difficult. There are no two identical societies in which one isolated factor can be changed to see what happens. Any theory is liable to have elements of pre-supposing the answer (for example: democracy versus dictatorship). So any theory about how and why some nations become tolerant and prosperous but others become intolerant and poverty stricken is likely to be controversial. Similar problems arise in trying to account for why formerly tolerant and prosperous nations reverse and become repressive and poor.

It is this problem of why some nations succeed and some fail that Daron Acemoglu, a Turkish-American professor of economics at M.I.T., and James Robinson, a British professor of public policy studies at the University of Chicago (hereafter A&R), have tackled in their book Why Nations Fail: The Origins of Power, Prosperity and Poverty. This book has been generally well received.

I will outline some of the earlier theories and the criticisms by A&R and then their theory and some of the criticisms that have been levelled against it.

Geography

The map of the world shows affluent societies in the temperate areas and poor societies in the tropical areas within 30° of the equator. This is particularly marked in Africa. The idea then is that the great division between rich and poor countries is caused by geography. The reasons for this are the pervasiveness of tropical diseases such as malaria, the scarcity of animals that could be used as cheap labour, and the poverty of the soil. There are exceptions, for example, the rich countries of Singapore and Malaysia, but both of these have access to the sea. This allows trade because it is much cheaper to transport cargo by sea than by land.

A&R criticise this theory on several grounds despite its initial appeal. The Indus Valley civilisation, one of the earliest recorded great civilisations, was situated in what is modern Pakistan, well within the tropics. Central America before the Spanish invasions was richer than the temperate zones. One of the world’s currently poorest nations, Mali (GDP $18 billion), where half of the population of 14.5 million live on less than $1.25 per day, was once ruled by the richest person who has ever lived: Mansa Musa Keita I (c. 1280 – c. 1337) had a fortune of $400 billion in today’s money. His wealth included vast quantities of gold, slaves, salt and a large navy(!).

“History … leaves little doubt that there is no simple connection between tropical location and economic success.” (A&R p51) There are also vast differences in wealth within the tropics and temperate regions at the present time. There is a sharp line between poverty and prosperity between North & South Korea, between Mexico and the United States, and between East and West Germany before reunification.

Ignorance

The reason poor nations are poor according to this hypothesis is that their governments are not educated in how a modern economy should be run. Their leaders have mistaken ideas on how to run their countries. Certainly, leaders of central African countries since independence have made bad decisions when viewed from outside. The IMF recommend a list of economic reforms that poor states should undertake including:

  • reduction of the public sector,
  • flexible exchange rates,
  • privatization of state run enterprises,
  • anticorruption measures and
  • central bank independence.

The central bank of Zimbabwe became ‘independent’ in 1995. It was not long before inflation took off reaching 11 million % pa (officially) by 2008 with unemployment around 80%.

But according to A&R it is not ignorance that is the source of bad decisions: “Poor countries are poor because those who have power make choices that create poverty. They get it wrong not by mistake or ignorance but on purpose. To understand this, you have to go beyond economics and expert advice on the best thing to do and, instead, study how decisions actually get made, who gets to make them, and why those people decide to do what they do.” (p68)

Culture

One idea about the rise of Europe from the 17th century was that it was caused by the ‘protestant work ethic’. Alternatively, the relative prosperity of former British colonies like Australia, and the U.S.A. was caused by the superior British culture. Or perhaps it is just European culture that is better than the others. These smug ideas don’t hold much water when you look at China and Japan, or when you look at the conduct of the European powers in their colonies. Some of those colonies are now prosperous and some are not.

At the start of the Industrial Revolution in the 18th and 19th centuries Britain had relative stability.   It had a tolerant clubby society that encouraged individualism.   It protected invention through patents.   It had a market for mass-produced goods. According to A&R this was not culturally caused.   Rather it was the result of definite structures in society and political arrangements. (p56)

Modernisation

According to this theory (also known as the Lipset or Aristotle theory) when countries become more economically developed they head towards pluralism, civil liberties and democracy. There is some evidence that this holds in Africa since 1950. (Anyanwu & Erhijakpor 2013)

But A&R object that US trade with China has not (yet) brought democracy there. The population of Iraq was reasonably well educated before the US-led invasion, and it was believed to be a ripe ground for the development of democracy, but those hopes were dashed. The richness of Japan and Germany did not prevent the rise of militaristic regimes in the 1930’s. (p443)

Other Theories

There are other theories about where and when prosperity will arise or disappear.
Several experts discuss the cause of the Industrial Revolution in The Day the World Took Off (Dugan 2000). They surmise:

  • historical accident (p182);
  • capitalism (p135);
  • the availability of raw materials (p66);
  • consumerism (p64);
  • the habit of drinking tea or beer rather than contaminated water (p18);
  • the need to measure time (p100);
  • the rise of the merchant classes (bourgeoisie) (p141).

Also discussed in The Day the World Took Off is the settlement in the Glorious Revolution of 1688 which brought political stability (p82). Finance became available through the establishment of the Bank of England. Incentives for investment, trade and innovation appeared through the enforcement of property rights and patents for intellectual property.

Acemoglu & Robinson

The political and economic factors exemplified by the Glorious Revolution are what A&R develop in their 500+ page book on why nations fail. A&R make several major points in their analysis.

1. Centralization

The first requirement for economic growth is a centralized political set up. Where a nation is split into factions, as is the case today in Somalia and Afghanistan, it is difficult to centralize power. This is because “any clan, group or politician attempting to centralize power in the state will also be centralizing power in their own hands, and this is likely to meet the ire of other clans, groups and individuals who would be the political losers of this process.”(p87) Only when one group of people is more powerful than the rest can centralization occur.

2. Extractive Economic Institutions

Economic institutions are critical for determining whether a country is poor or prosperous. A&R define extractive economic institutions as those “designed to extract incomes and wealth from one subset of society to benefit a different subset.” (p76) The feudal system that existed in Europe around 1400 and persisted in places into the 20th century was extractive: wealth flowed upwards from the many serfs to the few lords. In later times, colonialism channelled wealth away from the locals to the colonists. A particular example was King Leopold II (1835-1909) of Belgium, who ruled over the Congo Free State from 1885 to 1908. He built his personal wealth through copper, ivory and rubber exports, supervised by a repressive police force that coerced the local population into slave labour. A considerable but unknown proportion of the population were murdered or mutilated in the pursuit of Leopold’s wealth. (Bueno de Mesquita 2009)

Economic growth can occur where there are extractive economic institutions, provided there is centralization of power: it is in the interest of the exploiters to increase production for their own gain. A&R claim that this growth cannot continue for ever. It comes to an end “because of the absence of new innovations, because of political infighting generated by the desire to benefit from extraction, or because the nascent inclusive elements were conclusively reversed…” (p184) They thus predict that China’s growth will stall unless it manages somehow to transition to inclusive institutions (p442).

3. Extractive Political Institutions

Extractive economic institutions are set up by whoever it is that has political power. They will be better off if they can extract wealth from the rest of society and use that wealth to increase their power. “[They] have the resources to build their (private) armies and mercenaries, to buy their judges, and to rig their elections in order to remain in power. They also have every interest in defending the system. Therefore, extractive economic institutions create the platform for extractive political institutions to persist. Power is valuable in regimes with extractive political institutions, because power is unchecked and brings economic riches.” (p343)

4. Inclusive Political Institutions

Political institutions that distribute power broadly in society and subject it to constraints are pluralistic. Inclusive political institutions are those that “are sufficiently centralized and pluralistic.” (p81)

This agrees with the American political scientist Bruce Bueno de Mesquita, who argues that one of the main factors in having benevolent government is the presence of a large coalition (which he calls the ‘selectorate’) of those who have a say in who rules. (Bueno de Mesquita 2009)
The Glorious Revolution in Britain of 1688 limited the power of the king and gave parliament the power to determine economic institutions. It opened up the political system to a broad cross-section of society, which was able to exert considerable influence over the way the state functioned.

Before 1688 the king claimed a ‘divine right’ to rule; afterwards even the king was subject to the Rule of Law. “[The Rule of Law] is a creation of pluralist political institutions and of the broad coalitions that support such pluralism. It is only when many individuals and groups have a say in decisions, and the political power to have a seat at the table, that the idea that they should all be treated fairly starts making sense.” (p306)
Britain stopped censoring the media after 1688. Property rights were protected. Even ‘intellectual property’ was protected through patents, which enabled innovators and entrepreneurs to gain financially from their ideas. According to A&R it is no accident that the Industrial Revolution followed a few decades after the Glorious Revolution. (p102)

‘Inclusive political institutions’ are not the same thing as democracy. Great Britain after the Glorious Revolution was not a democracy in the modern sense. The franchise was limited and representation was disproportionate. For instance, the constituency of Old Sarum in Wiltshire had 3 houses, 7 voters and 2 MPs. Not until 1832 did the franchise extend to 1 in 5 of the male population. Only in 1928 did all women get the vote. Similarly the United States, a prosperous nation, did not extend the franchise to ‘all’ males until 1868, to ‘all’ females until 1920, and effectively to African Americans until 1965.

There are many examples of countries where democratic voting occurs but few, if any, political institutions of an inclusive nature exist. In such countries ‘democracy’ tends to be a conflict between rival extractive institutions.

According to A&R the reason the Middle East is largely poor is not geography. It is the expansion and consolidation of the Ottoman Empire and its institutional legacy that keeps the Middle East poor. The extractive institutions established under that regime persist to the present day. It is just different people running them.

5. Inclusive Economic Institutions

“Inclusive economic institutions … are those that allow and encourage participation by the great mass of people in economic activities that make best use of their talents and skills and that enable individuals to make the choices they wish. To be inclusive, economic institutions must feature secure private property, an unbiased system of law, and a provision of public services that provides a level playing field in which people can exchange and contract; it must also permit the entry of new businesses and allow people to choose their careers.” (p74)

These features of society all rely on the state.   It alone can impose the law, enforce contracts and provide the infrastructure whereby economic activity can flourish. The state must provide incentives for parents to educate their children, and find the money to build, finance and support schools.

Economic growth and technological change are what make human societies prosperous. But this entails what the Austrian-American economist Joseph Schumpeter called ‘creative destruction’. This term describes the process whereby innovative entrepreneurs create economic growth even whilst it endangers or destroys established companies. “[The] process of Creative Destruction is the essential fact about capitalism. It is what capitalism consists in and what every capitalist concern has got to live in.” (Schumpeter 1942)

A&R opine that the fear of creative destruction is often the reason for opposition to inclusive institutions. “Growth… moves forward only if it is not blocked by the economic losers who anticipate that their economic privileges will be lost and by the political losers who fear that their political power will be eroded.” (p86) Opposition to ‘progress’ comes from protecting jobs or income, or protecting the status quo.

“The central thesis of this book is that economic growth and prosperity are associated with inclusive economic and political institutions, while extractive institutions typically lead to stagnation and poverty. But this implies neither that extractive institutions can never generate growth nor that all extractive institutions are created equal.” (p91)

7. Critical Junctures

A critical juncture is when some “major event or combination of factors disrupts the existing balance of political or economic power in a nation.” (p106) Similar events such as colonization or decolonization have affected many different nations, but what happens to the society at such critical junctures depends on small institutional differences.

100 years before the Glorious Revolution, England was ruled by an absolute monarch (Elizabeth I); Spain was ruled by Philip II and France by Henry III. There was not much difference in their powers, except that Elizabeth had to raise money through parliament. Henry and Philip were able to monopolize transatlantic ‘trade’ for their own benefit. Elizabeth could not, because much of the English trade was carried by privateers, who resented authority. It was these wealthy merchant classes who played a major role in the English Civil War and the Glorious Revolution.

“Once a critical juncture happens, the small differences that matter are the initial institutional differences that put in motion very different responses. This is the reason why the relatively small institutional difference led to fundamentally different development paths. The paths resulted from the critical juncture created by the economic opportunities presented to Europeans by Atlantic trade.” (p107)


Criticisms

One of the difficulties with political and social theory is that once a formula has been hit upon, everything then becomes interpreted in the light of that formula. Once Marx had explained economics in terms of labour and its exploitation, there was no room for those who espoused that idea to see anything different. So extractive versus inclusive institutions could be just another seductive idea.

1. Economists Michele Boldrin, David Levine and Salvatore Modica make a similar point in their review (Boldrin, Levine & Modica 2012). They say that if we lack an axiomatic definition of what is ‘inclusive’ and what is ‘extractive’, independent of actual outcomes, then the argument becomes circular and subject to a selection bias. Some of A&R’s examples are “a bit strained”.

For example, after Julius Caesar established the ‘extractive empire’ the ‘fall of Rome’ did not occur for four centuries. The success of South Korea, Taiwan and Chile (which had non-inclusive political institutions but evolved into inclusive ones) might lead one to suppose that “pluralism is the consequence rather than the cause of economic success.” (The Anyanwu study mentioned above in connection with modernisation theory did find a correlation between economic success and democracy in Africa. But it also found that the extent of a country’s oil reserves tended to stop the development of democracy, which is what you would expect from A&R. I think there are cross-causative factors: the rise of the merchant classes in England was a major factor in the development of English politics, as A&R show.)

In the case of Italy the political institutions are the same in the north and the south, but the north is prosperous whereas the south is dependent on handouts from the north. BL&M acknowledge that the south suffers from economic exploitation (the Mafia), but this suggests that political institutions are only part of the story, since there is no national border. They also say there is a danger in using satellite photographs as economic evidence, as in this particular case “the poorest part of Italy is the most brightly lit.” The apparent brightness of parts of such photographs depends on several factors, including the curvature of the Earth and where the satellite is with respect to the subject. The picture of Italy here shows the north, the Po Valley, as the most brightly lit.

Germany from the mid-19th century until the end of WW2 prospered under extractive institutions, and led the world in its chemical industry. It did have compulsory education, social insurance and an efficient bureaucracy, but it could hardly be thought of as inclusive. Nazi Germany invented and produced the first jet planes and rockets. The “brief period of inclusiveness”, the Weimar Republic, was an economic catastrophe.

Again, the Soviet Union “did well under extractive communist institutions,” but floundered after a coup d’état established inclusive political institutions.

According to BL&M, Zimbabwe is a disastrous case of moving towards more inclusive institutions by extending the franchise to a wider population and lifting trade restrictions. (I find it difficult to believe that Zimbabwe can be regarded as consisting of inclusive political and economic institutions).

BL&M suggest that the focus of A&R is on what happens within nations, when a great many developments within nations depend on what happens between nations, not least invasions and war. BL&M perceive that many historical crises, including the current crisis in Greece, stem from debt, yet A&R do not mention this. The French Revolution and the rise of Nazism came from debt crises, as did the English Civil War.

BL&M argue against A&R’s stance that the intellectual property rights brought in after the Glorious Revolution were one of the spurs for the Industrial Revolution. They show that patents were barriers to progress. They are passionate advocates of liberalizing copyright, trademark and patent laws, which they see as enemies of competition and ‘creative destruction’. (Boldrin & Levine 2008) I have sympathy with this view, but that’s a different story for another day. (See also Hargreaves 2011.)

What BL&M’s cases seem to suggest is that we need stricter criteria for ‘inclusive’ and ‘extractive’. These nations were inclusive in some respects and extractive in others.   It is difficult to decide which were or are the most pertinent factors.

A further complication is the passage of time. How long before an ‘inclusive’ or ‘extractive’ feature starts to make a change to the society? A&R do not suggest that prosperity manifests immediately or immediately disappears when a society transitions from one to the other.

2. One of the principal proponents of the geography theory is Jared Diamond, a professor of Geography at the University of California, Los Angeles. He acknowledges that inclusive institutions are an important factor (perhaps 50%) in determining prosperity but not the overwhelming factor (Diamond 2012). He favours historically long periods of central government and geography as major factors. He also makes the point that why each of us as individuals becomes richer or poorer depends on several factors. These include “inheritance, education, ambition, talent, health, personal connections, opportunities and luck…” So there is no simple answer to why nations become richer or poorer.

3. William Easterly, a professor of economics at New York University, complains that A&R have “dumbed down the material too much” by writing for a general audience. They rely on anecdotes rather than rigorous statistical evidence (when “the authors’ academic work is based on just such evidence” ). So the book “only illustrates the authors’ theories rather than proving them.”

Conclusions

All three of these critical reviews acknowledge that Why Nations Fail is a great book. It should be read by anyone with an interest in politics.

Apart from the central thesis outlined above A&R provide many examples and great historical detail. This alone makes it a good read, even if you have philosophical aversions to the conclusions.

There is no simple solution to the problem of failed states but at least a correct diagnosis might lead to a greater percentage of success. Such explanations as ‘geography’, ‘culture’ and ‘historical accident’ do not offer much hope. Imposing ‘democracy’ on states that are anarchic or repressive does not seem to have worked so far, though it might form part of a solution once the system that has kept the nation repressed has been remedied.

You might think that the people who are in charge of states that extract wealth from their populations and gather power to themselves are psychopaths. They probably are. But the system usually exists before the person takes power, having either been in place for a considerable time or being easily adapted to this end. The system tends to persist longer than any individual. There are more than enough psychopaths around to engineer a revolution or coup that puts them in charge when they see the advantages that may accrue. So getting rid of a dictator is only likely to replace him with another one. Where it does not, the likely consequence is the decentralisation of the state into warring factions.

A&R make the point that “avoiding the worst mistakes is as important as – and more realistic than – attempting to develop simple solutions.” (p437)

References

Acemoglu D & Robinson JA (2013) Why Nations Fail. The Origins of Power, Prosperity and Poverty Profile Books

Anyanwu JC & Erhijakpor AEO (2013) Does Oil Wealth Affect Democracy in Africa? African Development Bank available at http://www.afdb.org/fileadmin/uploads/afdb/Documents/Publications/Working_Paper_184_-_Does_Oil_Wealth_Affect_Democracy_in_Africa.pdf

Boldrin M and Levine DK (2008) Against Intellectual Monopoly, Cambridge University Press. available at http://www.micheleboldrin.com/research/aim/anew01.pdf

Boldrin M, Levine D & Modica S (2012) A Review of Acemoglu and Robinson’s Why Nations Fail available at http://www.dklevine.com/general/aandrreview.pdf

Bueno de Mesquita B (2009) Predictioneer The Bodley Head (published in the USA as ‘The Predictioneer’s Game’ )

Hargreaves I (2011) Digital Opportunity. A Review of Intellectual Property and Growth (report commissioned by UK government) available at https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/32563/ipreview-finalreport.pdf

Schumpeter J A (1942) Capitalism, Socialism and Democracy Harper & Brothers

Taylor F (2013) The Downfall of Money: Germany’s Hyperinflation and the Destruction of the Middle Class Bloomsbury


Don’t lose your mind for Utopia

by Michael Davidson

Thomas More

Thomas More (1478-1535), Lord Chancellor to Henry VIII (1491–1547) of England, wrote the book ‘Utopia’[1] first published in 1516. The book describes a fictional island and its politics and customs. The word is derived from the Greek ou = not and topos = place, hence utopia = no place. There is also the Greek eu = good which sounds similar, so utopia = good place (the current meaning). It is not clear whether More was presenting this mythical island as the perfect state or whether he was saying no such place could stably exist. Given the political climate of the time he was probably wise to be equivocal on the matter. He eventually lost his head anyway.

There is no private property or money on Utopia. All produced goods are stored in warehouses where people get what they need. All property is communal so houses are periodically rotated between citizens. All meals are communal. There are no private gatherings. All wear similar woollen garments. Premarital sex is punished by enforced lifetime celibacy. Adultery and travel within the island without a passport are both liable to be punished by enslavement.

You might not think that this would be a pleasant place to live, but there has been at least one attempt to implement such a society (Michoacán, Mexico circa 1535) and More was revered by Lenin for promoting the “liberation of humankind from oppression, arbitrariness, and exploitation.” [2]

Plato

Thomas More mentions Plato (427-347BC) favourably, and was obviously well acquainted with Plato’s Republic [3] which is arguably the first attempt to design a ‘perfect state’. In Plato’s republic there are three classes of citizen: the rulers, the military and the workers (merchants, carpenters, cobblers, farmers and labourers). The rulers are the philosophers (those devoted to reason); the military (called Guardians) are the spirited or ambitious; and the rest are those who know only their desires. The rulers rule with absolute power, exercise strict censorship so that only good and true ideas prevail, and ensure by appropriate education that they are succeeded by like-minded philosophers. All citizens know their place in society and may not change it, for to do so would be to rebel against the institutions.

For the rulers and the potential rulers, family life would be abolished in favour of communal living. All promising children who showed spirit or reason (from whatever class – though Plato advocated a eugenic program of mating the ‘best men’ with the ‘best women’) were to be removed from their families to be educated as potential rulers. Their training would be in gymnastics and military music until the age of twenty, then mathematics and astronomy for ten years, followed by a thorough study of Plato’s philosophy. Those who did not quite make it through the course at any stage were to be assigned to the military. By the time they have successfully finished this study they will be over 50 and will have developed such a devotion to Plato’s philosophy that they will rule only through their sense of justice, which requires them to rule wisely in recompense for the superb education the state has provided for them. Since the rulers are just and good, and have absolute power, there is no need for laws or votes.

Thomas Hobbes

More and Plato were idealists who believed in worlds beyond this one. But totalitarian states can also be based on a materialist view of Man. Thomas Hobbes (1588-1679) in his book Leviathan (1651) regards the State as something like an artificial man: “the sovereign is the soul, the magistrates are artificial joints, reward and punishment are the nerves, wealth and riches are the strength” and so on.

Hobbes thinks that ‘in the state of nature’ Man is or would be in a perpetual state of turmoil. Without “a common power to keep them all in awe”, there would be a war of “every man against every man.”[4] The solution is for men to surrender their liberty to a sovereign power. It does not much matter whether the sovereign power is a monarchy, an aristocracy or a democracy. The essential point according to Hobbes is that the sovereign must have absolute power. Only in this way can the populace have a secure and orderly existence.

Such perfect societies would not be so bad if they were confined to books, but every so often societies built on similar lines spring into being, often as the result of a revolution or coup in the name of some dream. While these societies tend not to last long, reversing the process to a more libertarian one is often painful. It is not usually possible to impose democracy on what was previously a dictatorship. Although democracies may be born in a coup, they also evolve, as is evident in the many different versions of democracy that exist in the world today.

John Locke

Freedom of speech, freedom of enterprise, rule of law, property rights and the ability to remove unpopular governments from power without disrupting society are characteristics of democracies. In popular parlance only voting is seen as distinguishing democracy from other systems. But there is a lot more to it than that.

The above characteristics are attributed in no small part to John Locke (1632–1704) who published Two Treatises of Government in 1689 [5] which seeks to throw light on the basis of political authority. Locke does not reckon much to Hobbes’ absolute sovereign power. He sees the original ‘state of nature’ as happy and tolerant. The State is formed by a social contract which entails a respect for natural rights, liberty of the individual, constitutional law, religious tolerance and general democratic principles. The various institutions form a system of checks and balances. A government must be deposed if it violates natural rights or constitutional law. The state is concerned with procuring, preserving, and advancing the civil interests of the people: life, liberty, health and property through the impartial execution of equal laws. These principles were eventually enshrined in the constitutions of many modern democracies as ‘self evident’. The history of the world shows there was nothing much about them that was evident before Locke. The whole idea of ‘human rights’ (ie those rights arising from being a human being as opposed to sovereign rights, marital rights etc) stems from this era and can be attributed to Locke in no small part.

Montesquieu

Democracies depend for their equilibrium on many interdependent and independent institutions. The doctrine of the ‘Separation of Powers’ due to Montesquieu (1689–1755) was based at least partly on his observation of Locke’s England. This had recently become a constitutional monarchy through the “Glorious Revolution”(1688) which installed William of Orange and his wife Mary on the throne with increased parliamentary authority. According to Montesquieu political liberty is a “tranquillity of mind arising from the opinion each person has of his safety.” In England this was obtained through the separation of the Legislative, Executive and Judicial branches of the administration.[6] The merging of these powers into one body would be a recipe for tyranny, he said. The separation of powers was a major consideration in the drafting of the US Constitution (1788).

According to Montesquieu democracy can be corrupted not only where the principle of equality of all citizens does not exist but also where the citizens fall into a spirit of ‘extreme equality’ in which each considers himself on the same level as those who are in charge. People then want to “debate for the senate, to execute for the magistrate, and to decide for the judges. When this is the case virtue can no longer subsist in the republic.”[6] The ideal of a ‘Free Press’ exists so that corruption in high places can be exposed, but it can deteriorate into this ‘extreme equality’. This is shown by the recent scandals in England, where certain sections of the press have represented themselves as the conscience of the nation in all matters political and judicial whilst at the same time having so little respect for the truth in high-profile cases like that of Christopher Jefferies¹ and engaging in criminal activity like phone hacking².

How to Recognise Philosophical Nonsense


Philosophy is our ideas about morality, politics, how we find out about the world in which we live, and consequently how we think the world works.

Unfortunately philosophers do not agree on what the basic ideas in these fields are.   They have not agreed on any principles whereby they or we can distinguish good and bad philosophy.

Is there a message in philosophical writing that will enlighten or educate us? The philosopher is often pushing a world view and gathering what he or she considers evidence to bolster it.

Philosophers, like the rest of us, are susceptible to reaching a consensus that ‘everybody knows’ which later turns out to be false. Great fashions in thought have lasted for centuries, only to be abandoned later. In recent times there have been claims that philosophy is dead and that science can take its place. Such claims are inherently philosophical in nature.

Books written by professional philosophers are usually difficult to understand. Why are they so hard?

My book considers the various schools of philosophical and scientific thought and demystifies the arguments in a way that most people can understand. My own view is an empirical one, drawing on physics, biology, neuroscience, psychology, psychotherapy and so on. But I recognise that philosophical viewpoints underpin these subjects.

I am not dogmatic that mine is the correct view: other books with titles like “How the Mind works”, “How the Brain makes up its Mind”, “Consciousness Explained” push particular views.   There is not enough evidence yet to make such statements either as conclusions or book titles.   You can make up your own mind.

It is difficult to judge whether a particular philosophical position will prove to be true eventually.   But it is possible with some uncertainty to measure how readable and understandable a text is.

Psychologists have researched what makes texts simple or difficult to read. The obvious factors are sentence construction and vocabulary. Simple measures of these are sentence length (in words) and word length (in syllables). A measure which uses these easy-to-count features of a text is the Flesch-Kincaid Grade Level, which is worked out as follows:
grade = 0.39 x average sentence length + 11.8 x average word length – 15.59.

Microsoft Word is happy to calculate this for us (menu: tools/spelling and grammar).
The calculated grade is the US school grade, ie the number of years of schooling required to be able to understand the text; the corresponding reading age is roughly six years more, since US children start school at about age six.

There are a number of similar measures of school grade using slightly different formulae. The web site http://readability-score.com will calculate these for any supplied text.   The agreement between the various measures is only about ±3, so they can only be taken as a guide.   It is possible to write philosophical nonsense that gives a good readability score!
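
For readers who want to experiment, here is a minimal sketch in Python of the Flesch-Kincaid calculation using the formula above. The function name and the syllable estimate (counting runs of vowels in each word) are my own illustrative choices, so treat the output as approximate, just like the online calculators.

import re

def flesch_kincaid_grade(text):
    # Split the text into sentences and words
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    # Crude syllable estimate: count runs of consecutive vowels in each word
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    avg_sentence_length = len(words) / len(sentences)   # words per sentence
    avg_word_length = syllables / len(words)            # syllables per word
    return 0.39 * avg_sentence_length + 11.8 * avg_word_length - 15.59

print(round(flesch_kincaid_grade("The cat sat on the mat. It was quite happy there."), 1))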

Here are the measures of readability (average from 5 different measures) for sample texts of approx 1500 words by a few selected philosophers:

grade
This text 11
Plato (428BC-348BC) 11
Richard Dawkins (b1941) 11
My book “Rethinking the Mind” 12
John Searle (b1932) 13
John Locke (1632-1704) 13
Aristotle (384BC-322BC) 14
Benedict Spinoza (1632-1677) 14
Thomas Hobbes (1588-1679) 15
Dan Dennett (b1942) 15
Thomas Reid (1710-1796) 15
Friedrich Hayek (1899-1992) 16
Georg W F Hegel (1770-1831) 16
Immanuel Kant (1724-1804) 17
John Stuart Mill (1806-1873) 17
David Hume (1711-1776) 17
Robert Almeder (b1939) 18
Rene Descartes (1596-1650) 21

The aim of my book has been to make philosophy understandable to people educated to 12th grade.

The first step in reaching a balanced and consistent personal philosophy that one can live by is to approach philosophical texts with sure principles for evaluating them.   I put forward a set of such principles in chapter 1.   You can download it for free by leaving your email address in the green box at the side of this page.

Armed with these principles you can not only understand the arguments that have occurred but also evaluate their merits.

You will then be able to see through many of the facile psychological, philosophical and political arguments that are put forward in the media. The counter-arguments have been locked away in obscure and impenetrable texts, as the table above hints.

Begin your journey to Understanding. Buy the first of the three volumes here:

https://www.amazon.com/Rethinking-Mind-1-Historical-Perspective-ebook/dp/B007JYFHVM

Do we have Free Will? And why it matters to you.

You might think that after several thousand years of debate we have exhausted all the arguments as to whether we have Free Will, or whether our actions are caused by prior events. So say the protagonists on both sides of the argument: but they still argue! Now there are some new approaches that could throw light on the problem.

Free Will is the idea that we are able to choose between alternative courses of action and actually cause something to happen. For example, I can decide to lift my arm and I consider this to be something I could have decided to do or not.

Determinism is the idea that all events are necessary effects of earlier events: future events are as fixed and unalterable as past events.

Determinism is not quite the same as ‘fatalism’. Fatalism is the doctrine that what is going to happen is going to happen regardless of what you do. For example, you will die of a heart attack on such and such a date, regardless of changing your diet, exercising, medical intervention and so on. Determinism does not predict necessary future outcomes; it merely states that whatever the outcome turns out to be it was the result of prior natural causes.

This idea of determinism is sometimes held to come from Isaac Newton’s discoveries of the laws of motion and gravity. These led us to the idea of a ‘clockwork universe’. The French mathematician Pierre-Simon Laplace (1749-1827) claimed that if we knew all the laws of nature and the position of all the particles in the universe at a particular instant we could know the future (and the past) precisely. He did not, though, say how we could start to verify this.

In fact the idea of determinism is not recent: it has roots in ancient Greek philosophy and came down through various brands of Christianity before Newton.

It is only in the last few years that we have realised that Newton’s laws do not imply a clockwork universe. In certain circumstances these laws cause chaotic behaviour. The mathematician James Lighthill (1924 – 1998) even apologised to the lay community for mathematicians giving a false impression for 250 years. (Lighthill 1986)

In the 20th century Newtonian physics was displaced by Quantum Mechanics which showed that determinism of the kind envisaged by Laplace is false. For instance it is not possible to predict when an individual atom of radium will emit an alpha particle and become an atom of radon. All that can be predicted is what proportion of a certain mass of radium will have turned into radon in a certain time.

One philosophical response to quantum mechanics is to insist that indeterminism is true: so our actions must be random, and we don’t cause them anyway. The effort here seems to be to deny free will regardless.

You might think that a determinist would necessarily shun the idea of free will and personal responsibility, since our actions are all the product of physical brain activity over which the self (if it exists) has no control. Those who believe that determinism and free will are mutually exclusive are known as incompatibilists.

Those who believe that free will can be reconciled with determinism are called ‘compatibilists’ and according to the contemporary philosopher John Searle (b1932) this is the majority view among philosophers.

Incompatibilists who believe in determinism are known as ‘hard determinists’.

There is no word (as yet) for people who believe both that determinism is false and that free will does not exist (randomists, perhaps?).

One hard determinist is the neuroscientist Colin Blakemore (b1945): “… all those things that you do when you feel that you are using your mind (perceiving, thinking, feeling, choosing, and so on) are entirely the result of the physical actions of the myriad cells that make up your brain.” Consequently, “It makes no sense (in scientific terms) to try to distinguish sharply between acts that result from conscious intention and those that are pure reflexes or that are caused by disease or damage to the brain.” It seems to follow that “the addict is not ill and is surely not committing a crime simply by seeking pleasure.” (Blakemore 1988)

Another hard determinist, or perhaps a randomist, since he allows the influence of random events in biological development and behaviour, is the biologist Anthony Cashmore. Even if quantum theory eventually shows that determinism is false “it would do little to support the notion of free will: I cannot be held responsible for my genes and my environment; similarly I can hardly be held responsible for any [random] process that may influence my behaviour.” (Cashmore 2010)

Whether or not determinism is true, there are philosophers who believe that free will is impossible on purely logical grounds. The philosopher Arthur Schopenhauer (1788-1860) said “A man can surely do what he wants to do. But he cannot determine what he wants.”

Carrying on this theme the philosopher Galen Strawson (b1952) believes that what one wants is “just there, just a given, not something you chose or engineered – it was just there like most of your preferences in food, music, footwear, sex, interior lighting and so on… [Wants] will be just products of your genetic inheritance and upbringing that you had no say in… you did not and cannot make yourself the way you are.” (Strawson G 2003) If you can make yourself the way you are then you must have some nature that enables you to do that; if you can make that nature then you must have that ability built in and so on for an infinite regress. Since there is an infinite regress the idea of free will must be false.

Determinism is of course also tied to an infinite regress which is only terminated by the idea of the ‘big bang’ (but what caused the big bang?).

There are philosophers such as Ayn Rand (1905 – 1982) who believe, contrary to Strawson, that Man is a being of self-made soul. (Rand 1966)

Jean-Paul Sartre (1905 – 1980) claimed we have free-will whether we like it or not: “We are always ready to take refuge in a belief in determinism if this freedom weighs upon us or if we need an excuse.” (Sartre 1956)

The free-will determinism debate is anchored in fixed metaphysical positions which are then dressed up in complex and seemingly incontrovertible arguments.

Compatibilism regards ‘free will’ not as independent agency but, rather, the feeling of independent agency. Thus a person acts freely when they do what they wished to do and they feel they could have done otherwise. One of the earliest compatibilists was Thomas Hobbes (1588 – 1679): “…from the use of the words free will, no liberty can be inferred of the will, desire, or inclination, but the liberty of the man; which consisteth in this, that he finds no stop in doing what he has the will, desire, or inclination to do.” (Hobbes 1690) A person would not do what they wished to do, or do what they did not wish to, except if they were coerced by acute discomfort, threat or torture.

For the compatibilist the wish is determined by the genetic makeup and life history of the person, nature plus nurture, so free will is just being able to act as one wishes without coercion. However, according to determinism the person has no power to change his or her future whether he or she is coerced or not. So the feeling of having been able to do otherwise must be a delusion. Thinking freely must also be an impossibility. In particular, the espousal of the doctrine of determinism must itself have been determined, and those who defend the opposite, non-determinism, must have been similarly determined.

This leads to an endless debate between non-determinists who believe they can induce the determinist to make a non-determined decision and determinists who believe they can determinedly box in the non-determinist to see his impotence. John Searle was evidently once asked, “If someone could unequivocally prove determinism, would you accept it?” to which Searle replied, “Are you asking me to freely accept or reject such a proposition?” He points out that if “you go into a restaurant and they give you a menu and you have to decide between the veal and the steak. You cannot say to the waiter, ‘Look, I’m a determinist. Que sera sera’ because even doing that is an exercise of freedom. There is no escaping the necessity of exercising your own free choice.” (Searle 2000)

In other words whether we have free will or not, it is a difference that does not make a difference.

Incompatibilists who reject determinism but accept free will are called Libertarians. Libertarianism is the theory that, despite what has happened in the past, and given the present state of affairs and ourselves just as they are, we can choose or decide differently than we do, and so act to make the future different.

The idea is that the future normally consists of several alternatives and one has the power to choose freely which alternative to pursue.

A modern libertarian is the former New South Wales Supreme Court Judge, David Hodgson (1939-2012). He accepts that some combination of deterministic laws and quantum randomness is one form of causation. But he insists there is another kind of causation operating in the conscious decisions and actions of human beings, and perhaps also of non-human animals, ie ‘volitional causation’ or ‘choice’. He suggests that physical law does not necessarily imply determinism, ie a number of possible futures may all be consistent with physical law. He grants that the choices a person might come to may partly be the result of unconscious reasons and motives codified in the neural mechanisms. But the function of consciousness is to “allow choice from available alternatives on the basis of consciously felt reasons …the rationality and insight of normal adult human beings, even though far from complete or perfect, is generally sufficient for them to be considered as having free will and responsibility.” (Hodgson 1999)

The motive of both libertarians and compatibilists seems to be to justify holding people morally responsible for their actions. The libertarian might also claim that if we are not free agents then there is no basis for morality at all. The fear is that if moral responsibility is a prerequisite for guilt, blame, reward and punishment, and no one can do anything other than what they do, then no one should be rewarded or punished just as hard determinism seems to imply. Some hard determinists claim that reward and punishment is justified on the grounds that people do respond to reward and punishment in a determined way. But this leads to the view that the rewarders and punishers do what they do without grounds or justice, whereas the rewarded or punished are suckers, taken in by the authority of the judgers, continuing to believe in their guilt or worth. (Warnock 1998)

Compatibilists hold that even though people cannot do anything other than what they do, they are nevertheless morally responsible. There is an argument from Donald MacKay (1922-1987) which shows that even if there is a Laplacian demon or God who knows all about the state of my brain and even if He claims to be able to predict my every action I can have no reason to believe any of His predictions (which necessarily must include His knowledge of whether I believe the prediction or not). As I do not know whether He has predicted I will believe or not, He has given me no grounds for believing the prediction or not. (McIntyre 1981)

So according to MacKay, even if the universe is determined the self must regard itself as an agent capable of moral choices and act accordingly. Determinism makes no difference to how we conduct our lives.

Philosophers of a determinist persuasion have stuffed the self into a variety of strait-jackets in an attempt to avoid the dreaded idea of the soul. Personal experience must be denied or at least proscribed at the risk of introducing personal agency. The idea of a responsible self is opposed by the idea of scientific explanation and prediction. On the other hand philosophers of a libertarian conviction try to find in science evidence that the world is not ‘causally closed’. This could allow free will and justify the retention of our jurisprudence, against the revisionist urgings of those determinists who feel all punishment is unjust.

Peter Strawson (1919-2006) thinks that the metaphysical dispute between the compatibilists and the incompatibilists is ill-framed. It can be resolved if each side would relax a little. The compatibilist normally portrays jurisprudence as an objective instrument of social control, excluding the essential element of moral responsibility. The incompatibilist is appalled that if determinism is true then the concepts of moral obligation and responsibility really have no application, and the practices of punishing and moral condemnation etc are really unjustified. (Strawson P 1962)

But both sides, says Strawson, neglect the fact that “it matters to us [a great deal] whether the actions of other people – and particularly of some other people – reflect attitudes towards us of goodwill, affection, or esteem on the one hand or contempt, indifference, or malevolence on the other …The human commitment to participation in ordinary inter-personal relationships …is too thoroughgoing and deeply rooted for us to take seriously the thought that a general theoretical conviction might so change our world that, in it, there were no longer any such things as inter-personal relationships as we normally understand them… The existence of the general framework of attitudes itself is something we are given with the fact of human society. As a whole, it neither calls for, nor permits, an external ‘rational’ justification.”

According to Strawson, determinism does not entail that anyone who caused an injury was ignorant of causing it or had acceptable reasons for reluctantly going along with causing it. Nor does it entail that nobody knows what he’s doing, or that everybody’s behaviour is unintelligible in terms of conscious purposes, or that everybody lives in a world of delusion, or that nobody has a moral sense, which is what would be required if determinism were at all relevant. Compressing Strawson’s argument down from his 11,000 words: even if determinism is true, the concept of moral responsibility that we apply in our jurisprudence is part of our nature, and it would not be rational to change our world so as to dispense with our moral attitudes.

Nicholas Maxwell recasts the problem from ‘free-will versus determinism’ to ‘wisdom versus physicalism’. (Maxwell 2005) Of all the various constructions that could be placed on the term free-will, he considers that the one most worth having is not the ‘capacity to choose’ but rather ‘the capacity to realise what is of value in a range of circumstances’ (in both senses of the word ‘realise’, ie apprehend and make real). Secondly he characterises physicalism as “the doctrine that the universe is physically comprehensible.” It is not determinism but the idea that the universe is understandable that characterises physicalism. The problem of free will then comes down to this: how can that which is of value in human life (or sentient life more generally) exist embedded in the physical universe? In particular, how can understanding and wisdom exist in the physical universe?

Both Peter Strawson’s and Nicholas Maxwell’s reformulation of the free-will debate appear to be compatibilist with respect to moderated concepts of free-will and determinism. Anything that weakens fundamentalist views ought to be welcomed, though how these views can be taken forward into empirical investigation is not apparent.

I think that one of the difficulties with the debate on free will is what it means to talk about ‘moral responsibility’. The usual interpretation of this concept is that when someone has done something reprehensible we hold them to account: we blame them for some situation and punish them. Blame is the attempt to impose shame on the offender so as to inhibit the activity. The dictionary definition of ‘responsibility’ is only vaguely related to this scenario. Responsibility (Latin respondeo = to respond) is “the quality or state of being able to respond to any claim or duty.” Thus a responsible person can set in place the procedures necessary to prevent harm; if he has done wrong he can act to put the situation right; if some situation arises that is perceived as morally wrong he can take the requisite actions. Irresponsibility is where one seeks to evade one’s duty by excuses and inaction. Those who claim that no one is responsible for anything should be asked what they are ashamed of.

It seems to me that what is worth having for one’s self and for people in the society at large is this ability to respond to situations (to take ‘responsibility’) and do whatever is necessary in the circumstances we find ourselves. This means that responsibility is tied in closely with wisdom: it is responsible to acquire wisdom, it is wise to act responsibly.

Whether we have ‘free will’ in some ultimate sense or whether our actions are ultimately ‘determined’ is a metaphysical matter. Such concerns are junior to the fact of ‘moral responsibility’ which we can (hopefully) exercise regardless of our metaphysical leanings.

Libertarianism does not entail the idea that decisions are divorced from circumstances. It does presuppose, I believe, the ability to predict the future with some degree of confidence. “Able to choose otherwise in the same circumstances” restricts the possibilities for ‘free will’ by demanding that free will means nothing more than caprice. Responsible action requires gathering the information relevant to the decision at which time the decision may become ‘necessitated’ by what one now knows. This does not mean that that information caused the decision or that one is relieved of the responsibility for that decision.

Scientific investigation of the questions of free will and compatibilism is difficult in principle because they are metaphysical issues that science cannot address directly. The side of the debate that people take would seem to depend on introspection of their decision-making processes. For much of the 20th century psychological investigation of introspective accounts was considered worthless, so there is very little research on the subject.

There are however questions related to the metaphysical problem of free will that can be investigated empirically. For instance, the question of whether one’s attitude to the question of free-will affects one’s moral sense has been investigated.

In one experiment 119 undergraduates were randomly assigned to one of five groups to answer the same set of 15 standard reading-comprehension, mathematical and reasoning problems. (Vohs & Schooler 2008) Participants were told they would receive $1 for each problem they correctly solved. In three of the groups participants marked their own answers and paid themselves after which they shredded their answers. This gave ample opportunity to cheat. The other two groups had no opportunity to cheat. The five groups were treated slightly differently.

The three cheating-possible groups were given a series of 15 statements which they were supposed to think about for one minute each.

One group were given statements that were pro-determinism such as “a belief in free will contradicts the known fact that the universe is governed by lawful principles of science” and “Ultimately we are biological computers – designed by evolution, built through genetics, and programmed by the environment“.

Another group were given statements that were pro-freewill such as “I am able to override the genetic and environmental factors that sometimes influence my behaviour” and “Avoiding temptation requires that I exert my free will.”

The third group were given neutral statements such as “Sugar cane and sugar beets are grown in 12 countries.”

One of the two no-cheating groups was also given the pro-determinism statements to study before doing the test. The other was given the free-will statements. So this gave two groups of interest that could cheat – one primed with determinism, one primed with free will – and three control groups to act as a ‘baseline’. The average reward for the group primed with determinism that was able to cheat was $11 ± 1, whereas the other four groups each obtained approximately $7 ± 1 (with non-significant variation between them).
[Figure: free will and cheaters, showing the average reward claimed by each group]
It thus appears that the spreading of deterministic views is liable to increase modest forms of unethical behaviour, a result significant at the 1% level. Whether this generalises to more serious offences, and whether belief in determinism compensates for these minor offences with an increased compassion for the less well off and a decreased desire for revenge, is not known.
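
As a rough check on that significance figure, here is a minimal back-of-envelope sketch in Python. It assumes that the ±1 values are standard errors of the group means and treats the other four groups as a single pooled figure; neither assumption is stated above, so this is illustrative only.

import math

# Assumption (not stated in the text): the "±1" values are standard errors of the means.
mean_det, se_det = 11.0, 1.0     # determinism-primed group that could cheat
mean_rest, se_rest = 7.0, 1.0    # approximate figure for the other four groups

z = (mean_det - mean_rest) / math.sqrt(se_det**2 + se_rest**2)
p_two_sided = math.erfc(abs(z) / math.sqrt(2))   # normal approximation

print(f"z = {z:.2f}, two-sided p = {p_two_sided:.3f}")
# gives z = 2.83, p = 0.005, consistent with significance at the 1% level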

Nevertheless it seems that the question of free-will is not just philosophical but is of great interest in jurisprudence as libertarians such as David Hodgson claimed.

So do we have free will? Well, if this experiment generalises we’d better believe it.

References

Blakemore C (1988) The Mind Machine BBC Books pp 7, 270, 170

Cashmore AR (2010) The Lucretian Swerve: The biological basis of human behaviour and the criminal justice system Proc Nat Acad Sci USA vol 107(10) p4499-4504

Hobbes T (1690) Leviathan chapter 21

Hodgson D (1999) Hume’s Mistake Journal of Consciousness Studies vol 6 no 8-9 p210

Lighthill J (1986) The Recently Recognised Failure of Predictability in Newtonian Dynamics Proceedings of the Royal Society of London A 407: 35-50.

Maxwell N (2005) Science versus Realization of Value, not Determinism versus Choice Journal of Consciousness Studies vol 12 no 1 p53

McIntyre JA (1981) MacKay’s Argument for Freedom Journal of American Scientific Affiliation 33 (Sept) p169-171

Rand A (1966) Philosophy and a Sense of Life The Romantic Manifesto Signet p 28

Sartre J-P (1956) Being and Nothingness: An essay in phenomenological ontology (transl HE Barnes) New York: Philosophical library p78

Searle JR (2000) Consciousness Free Action and the Brain Journal of Consciousness Studies vol 7 no 10 p11

Strawson G (2003) The Buck Stops – Where? (Interview with T Sommers) The Believer (march 2003)

Strawson P (1962) Freedom & Resentment Proceedings of the British Academy vol 48 p1-25

Vohs KD & Schooler JW (2008) The Value of Believing in Free Will: Encouraging a belief in determinism increases cheating Psychological Science vol 19(1) p49

Warnock M (1998) An Intelligent Person’s Guide to Ethics Duckworth p 92


Machine Consciousness: What is it like to be a computer?

by Michael Davidson

(response to the article Machine Consciousness: Fact or Fiction?  by Igor Aleksander available at http://footnote1.com/machine-consciousness-fact-or-fiction )

The eminent AI researcher Igor Aleksander tackles the question as to whether a machine could be conscious in the above article posted in February 2014.

Aleksander assumes that being conscious is an advantage to an organism such as a human being.   Consciousness therefore has a function.

This is a step forward from some philosophical positions:

a)   consciousness has no function (epiphenomenalism – originating with Thomas Huxley in the 19th century [Huxley 1912] and espoused typically by some neuroscientists and psychologists today [Soon et al 2008; Wegner & Wheatley 1999]).

b)   “psychology must discard all reference to consciousness… [must] never use the terms consciousness, mental states, mind, content, introspectively verifiable, imagery and the like.” [psychological behaviourism – Watson 1913].

c)   any statement involving mind words (such as consciousness, intentions) can be paraphrased without any loss of meaning into a statement about what behaviour would result if the person considered happened to be in a certain situation. [philosophical behaviourism – Ryle 1949].

d)   consciousness does not exist [Eliminative Materialism eg Churchland 1981].

e)   the self does not exist [New Scientist 23 Feb 2013] see my response here.

Aleksander believes that consciousness can be understood in ‘scientific’ terms, though what he means by ‘scientific’ is not explained (that would require several thousand words on its own and is not a subject devoid of controversy itself).   I suspect when he says ‘scientific’ he means physicalist since the core – metaphysical – assumption of most AI is that all things can ultimately be explained through the theories of physics and computation.

It is Aleksander’s contention that it is not possible to know that people other than ourselves are conscious, because we have no tests to tell us; we simply believe they are because they are human. (Or is it that we only believe they are ‘human’?) ‘Non-scientific’ ordinary humans simply believe things based upon assumptions. He invites us to believe that ‘tests’ – by which I presume he means ‘scientific’ tests – could confer upon us some kind of certainty and proof.

This seems to miss the important point that scientific theories are built up from observations by humans.   A theory is not necessarily true; it is just the best idea we have which

1) explains the observations leading to some understanding
2) predicts new observations that can be tested and may be found to be true (it’s the observations that are true not necessarily the theory) and
3) enables us to (ultimately) control the phenomena in question.

The observations are repeatable by anyone who treads the same path of learning.  Thus we can build jumbo jets and hadron colliders.

Aleksander quotes from the report of a conference of neurologists, computer scientists and philosophers on machine consciousness (Swartz Foundation 2001): “We know of no fundamental law or principle operating in this universe that forbids the existence of subjective feelings in artefacts designed or evolved by humans.” Maybe so, but we don’t know of any fundamental law that allows it either. Nor do we know, if it be allowed, how consciousness manifests from the material of brains. Metaphysics creeps easily into science at the fringes.

We have had several revolutions in science when theories that seemed to explain everything in their sphere of enquiry were found to be flawed.   The classic example is Newton’s ‘Laws’ which were held to be metaphysically ‘true’ for 250 years.   Newton’s Laws could not be ‘proved’ but we certainly believed them then – and we believe them now in the context for which they were formulated.   Newton’s metaphysical assumptions (absolute space and time ‘flowing’ evenly) and the metaphysical conclusions derived from his works (the ‘clockwork universe’) have been overturned.

The idea that humans are conscious can certainly be subject to qualitative tests. Apart from the obvious observations of whether a person is asleep or awake, we can see to some degree whether a person is alert and aware of his or her surroundings.   The theory that that person is conscious explains his or her bearing and demeanour, enables us to enter meaningful conversation, enables us to discuss what we simultaneously perceive or simultaneously imagine, cooperate in joint ventures and so on.   It is difficult to imagine how science could be conducted if ‘scientists’ were not conscious.

Evidently most people do not doubt their own consciousness and if there are indeed no means to determine whether other people are conscious it is only a small step from there to solipsism – the view that the world and other minds exist only in one’s own mind.  This is contrary to the physicalist position that only physical things exist.

Aleksander’s first step is to define consciousness (a notoriously difficult task).   You can exclude too much or include too much.  You can limit it to simply the difference between a person who is conscious and one who is unconscious, concentrating on anaesthetics and their antidotes.   Or you can broaden it to personal identity – the self – moral intuitions and even ‘cosmic consciousness’.   You can also divide the concept in various ways –

‘phenomenal consciousness’ (awareness of here and now) and ‘social consciousness’ (which presupposes concepts, abstraction and language) [Guzeldere 1995] or

‘core consciousness’ (awareness of here and now) and ‘extended consciousness’ (an elaborate sense of self) [Damasio 1999] or

‘access consciousness’ (the contents of the mind accessible to thought, speech and action) and ‘phenomenal consciousness’ (what it is like to be me) [Block 1994]
(‘What it is like’, which I also allude to in my title, is a reference to the seminal article by Thomas Nagel “What is it like to be a bat?” [Nagel 1974])

Here is Aleksander’s definition:
consciousness is a “collection of mental states and capabilities that include:

1. a feeling of presence within an external world,
2. the ability to remember previous experiences accurately, or even imagine events that have not happened,
3. the ability to decide where to direct my focus,
4. knowledge of the options open to me in the future, and
5. the capacity to decide what actions to take.”

If a machine endowed with language could report similar sensations, Aleksander reckons, then we would have as much reason to assume that it is conscious as we have with another human being.   I suspect the five points are framed in this way so as to allow the construction of an artefact that could be said to be ‘conscious’, thus satisfying the metaphysical first principle that humans are machines.

Aleksander has built a machine, ‘VisAw’, which he claims satisfies points 1, 2 and 3.

The machine is a set of artificial neural networks with the inputs and outputs displayed on a computer screen.   There is a picture of the screen in the referenced article.

VisAw is based on the architecture of the part of the human brain called the extrastriate cortex – ie Brodmann areas 18, 19 and 37.   K Brodmann (1868-1916) studied the cortex microscopically in 1909 and found that various areas have different structures; these areas are now given numbers.   The various functions of the cortex are only loosely related to the Brodmann areas.   The parts of the cortex involved in vision are labelled V1, V2, V3, V4 and V5.   The striate cortex (V1, located in B17) is so named because it is striped or striated.

(Figure: a map of the brain areas discussed here; the two hemispheres of the brain are similar.)

Simplistically and briefly, V1 maps to the retina and detects edges; V2 ‘fills in’ illusory edges; V3 is concerned with motion; V4 is concerned with spatial frequency, shape and colour and is affected by attention; V5 is concerned with the perception of motion and the guiding of some eye movements (the ‘picture’ of our surroundings is built up from many snapshots from the central sensitive area of the retina).   (Damage to one or other of these areas of the brain results in a characteristic neurological disorder such as not being able to perceive motion).

VisAw models V3, V4 and V5 at least to some extent.
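
To make ‘modelling V3, V4 and V5 to some extent’ concrete, here is a deliberately crude sketch in Python of a staged visual pipeline. It is my own illustration of the general idea, not Aleksander’s VisAw architecture, and every function, array and parameter in it is invented for the purpose.

# Purely illustrative: a pipeline of 'areas' loosely analogous to V1-V5,
# each transforming the output of the previous stage. Not VisAw itself.
import numpy as np

def v1_edges(image):
    # Crude edge detection: horizontal and vertical intensity differences.
    gx = np.abs(np.diff(image, axis=1, prepend=image[:, :1]))
    gy = np.abs(np.diff(image, axis=0, prepend=image[:1, :]))
    return gx + gy

def v5_motion(frame_now, frame_before):
    # Crude motion signal: per-pixel change between successive frames.
    return np.abs(frame_now - frame_before)

def v4_attention(feature_map, attention_mask):
    # Weight features by an attention mask with values in [0, 1].
    return feature_map * attention_mask

# Usage: two 8x8 'frames' and a central window of attention.
prev, curr = np.zeros((8, 8)), np.zeros((8, 8))
curr[3:5, 3:5] = 1.0                      # a small bright patch appears
attention = np.zeros((8, 8))
attention[2:6, 2:6] = 1.0
salience = v4_attention(v1_edges(curr) + v5_motion(curr, prev), attention)
print(salience.sum())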

Let’s look at Aleksander’s points more closely:

1) a feeling of presence within an external world.

Applying this to humans, most people will certainly regard it as true for themselves, but Aleksander asks how we know it for others.   One test is to ask them whether they feel they are in an external world.   A reasonable test, you might think, but not one that is 100% reliable.   The following simple program can give the ‘right’ answer, but few would consider that answer to reflect any internal feeling.

# Replies only when the question is phrased exactly as expected.
x = input()
if x == "Do you feel you are present in an external world?":
    print("of course")

You might think this example is too ridiculous for anyone to be taken in by it, but many simple conversational programs have taken people in (eg ELIZA [Weizenbaum 1976] fooled some people into revealing intimate secrets, and some practising psychiatrists believed that ELIZA could develop into an automatic form of psychotherapy).   It is only when the mechanism of how the program functions is revealed that the magic disappears.   Here’s an example:

Person: “My dog’s mother died recently.”
ELIZA: Tell me more about your mother
(it doesn’t recognise anything in the person’s statement apart from the word ‘mother’)
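
To show how little machinery is needed, here is a minimal sketch of an ELIZA-style keyword matcher. The real ELIZA used more elaborate decomposition and reassembly rules, so take this only as an illustration of the principle: once the table of keywords is visible, the ‘understanding’ evaporates.

# Minimal ELIZA-style keyword matching (illustrative only).
RULES = [
    ("mother", "Tell me more about your mother."),
    ("dream",  "What does that dream suggest to you?"),
    ("sad",    "I am sorry to hear you are sad."),
]

def respond(statement):
    lowered = statement.lower()
    for keyword, reply in RULES:
        if keyword in lowered:        # fires on the first keyword found,
            return reply              # ignoring the rest of the sentence
    return "Please go on."            # stock reply when nothing matches

print(respond("My dog's mother died recently."))
# -> Tell me more about your mother.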

Aleksander contends that by suitable manipulation “the content of a machine’s ‘mind’ can be made transparent and observable, [so] attributing consciousness to it may be no more outlandish than attributing consciousness to a living creature.”   Obviously my little program would fail this test.   In the case of a neural network the workings are relatively obscure, since they do not depend on rules written by a programmer, but they are not necessarily any more magical.   The display of a message “I am imagining” does not presuppose any ‘I’ attached.   If we expose the workings of VisAw it is not clear that there are any feelings connected to it.   So it is difficult to see how we would know that the machine has any feeling at all.   Leibniz made the same point: “Perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions.   And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill.   That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception.” (Leibniz 1714)

From what I can tell of VisAw it seems to ‘remember’ and ‘recognise’ two dimensional pictures of human faces.   Humans and other living creatures exist in three dimensions.   According to the psychologist JJ Gibson perception depends on the creature being able to move in the real world and so build up a three dimensional model of the world in which it is located and moves, rather than a world which rotates around the creature, as if on a TV screen.[Gibson 1986]

2. the ability to remember previous experiences accurately, or even imagine events that have not happened.

It could be argued that a DVD recorder has the ability to remember audio visual experiences but it seems unlikely that such a machine would be conscious.   Thus displaying the contents of VisAw’s memory does not illustrate consciousness.   Nor does the recombination of several parts of previous experiences demonstrate imagination, I would contend.   Imagination is creative not merely reconstructive.   A display on the computer screen saying “I am imagining” does not constitute imagining.

On the other hand humans don’t remember accurately anyway.   A robot that was similar cognitively to a human but with an ‘accurate memory’ would probably illustrate the phenomenon of the ‘uncanny valley’.   This refers to the revulsion humans typically experience when a robot is visually almost indistinguishable from a human being.   Such robots are experienced as distinctly ‘creepy’.

The criterion seems somewhat divorced from consciousness as such.   People with serious damage to the hippocampus (the part of the brain associated with certain aspects of location and memory) typically cannot remember anything that happened more than a few minutes ago (though they can remember things from before the injury occurred).   Such people can nevertheless learn skills such as playing the piano but cannot remember actually learning to do so, though they are definitely conscious according to the people who meet them.

3. the ability to decide where to direct my focus.

This seems to postulate ‘free will’ although most modern philosophers consider this to be an illusion and argue that all our actions are determined by prior causes.  This is the consequence of the predominance of the metaphysics of physicalism.   So I suppose that the machine must simply say that it has this ability for it to be considered that it has it (just like humans are supposed to).   The question of ‘free will’ is another thorny issue that requires several thousand words to summarise.

In so far as VisAw actually models the extrastriate cortex and produces something which seems to work in the way the cortex does, this is a big achievement.  But the extrastriate cortex is concerned with vision, and although vision is arguably a central part of consciousness (tell that to a blind man) it is not the whole of it.

Neuroscientists have not come out and claimed that the centre of consciousness is in the extrastriate cortex.   Some have identified the thalamus as the centre of consciousness, since most of the sensory nerves converge on it.   Others have suggested B40 is the seat of the sense of self; B24 the seat of free will; B9, B10, B11 and B12 the centre of the executive functions.   Others have said that consciousness is a function of the entire brain or even the whole body.

It is too much to require that tentative steps to what might become ‘machine consciousness’ should demonstrate the whole phenomenon immediately.   But I suspect that even a partial demonstration is a way off.

The Turing Test suggested by Alan Turing in 1950 was to test whether a computer and a human being could be distinguished if the medium was only a text message.   Turing estimated that a suitable program could be written with about 3,000 man-years of effort by about the year 2000, and that an average interrogator would then have no more than a 70% chance of making the right identification after five minutes of questioning. [Turing 1950]   By contrast the program for the NASA space shuttle took about 22,000 man-years to develop and I have not yet heard a claim that the shuttle is conscious.   Optimism is endemic in artificial intelligence research.

Arguably the program IDA [Franklin 2003] mentioned by Aleksander in his article has passed the Turing test in the very narrow domain of allocating sailors to new assignments by communicating in natural language by e-mail.   IDA does not understand the communications of the sailors in the same way a human would: it matches the content of the incoming e-mail to one of a few dozen templates (eg “please find job”).   IDA would fail on general knowledge or anything outside its templates.   Yet it is claimed to be conscious by ‘machine consciousness’ advocates.
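
The template idea can be illustrated in a few lines. This is not IDA’s actual mechanism (Franklin’s program is far more elaborate), and the templates and replies below are invented, but it shows why matching e-mails against canned patterns requires no understanding.

# Illustrative only: match a message to the template sharing the most words.
TEMPLATES = {
    "please find job":        "Here are the billets matching your rating.",
    "when do i transfer":     "Your projected rotation date is on file.",
    "extend my current tour": "I will request an extension on your behalf.",
}

def closest_template(message):
    words = set(message.lower().split())
    def overlap(template):
        return len(words & set(template.split()))
    best = max(TEMPLATES, key=overlap)            # most shared words wins
    return TEMPLATES[best] if overlap(best) else "I did not understand that."

print(closest_template("Please could you find a job for me near Norfolk?"))
# -> Here are the billets matching your rating.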

As a tougher test for machine consciousness I propose the ‘Reid Test’.   The Scottish Enlightenment philosopher Thomas Reid (1710-1796) defined common sense as “that degree of judgment which is common to men with whom we can converse and transact business.” (Reid 1785).   Consciousness and common sense are not quite the same concept but I believe consciousness is a prerequisite for common sense.  Anything that demonstrates common sense must be conscious, I think. I contend that consciousness requires the abilities to perceive (external objects in 3 dimensions and one’s location in the world), understand (words and abstract concepts), imagine (conceive novel situations), communicate (in some natural language) and act on the environment.   Consciousness without emotion would seem paradoxical (the uncanny valley) for emotion reveals the degree of involvement of the subject with the object.   Clearly the scope of these abilities would be limited in the initial stages of development of any conscious machine.   In addition the machine must be open to internal inspection so that any ‘magic’ is exposed.   History shows that humans can be deceived by determined trickery.

Animals other than humans show abilities in these domains to the point where we believe them to be conscious.   For example dogs can remember the layout of their environment, understand a limited number of commands and the task at hand, imagine certain situations and behaviours (think sheep dogs), and bark and whine in appropriate places.   They obviously display emotions.   The behaviour of trained chimps and orang-utans with sign language is even more impressive.   What we don’t have are scales of achievement in these domains that would allow us to measure the degree of consciousness.   It seems to be an all-or-nothing attribute that is confused with and by other attributes such as intelligence, social context and trained response on the part of the animal, and by empathy on our part.   Consciousness itself must be subject to more investigation before we can definitively ascribe consciousness to lower animals such as bees.

Developing a conscious machine requires a great deal of effort in physics, computer science, psychology, neuroscience, philosophy and so on. It also requires a notable lack of arrogance: demonstrations, not mere assertions. And it requires the good luck of finding that metaphysical reality actually allows machine consciousness.

So to answer Aleksander’s query: Is Machine Consciousness fact or fiction?
For the moment and the foreseeable future: fiction.   But keep trying.

References

Block N (1994) Consciousness in A companion to Philosophy of Mind ed S Guttenplan, Blackwell

Churchland PM (1981) Eliminative materialism and the propositional attitudes Journal of Philosophy vol 78 no 2 section 1

Damasio A (1999) The Feeling of what happens Heinemann p16

Franklin S (2003) IDA: A conscious artifact? Journal of Consciousness Studies  vol 10 no 4-5 p47-66

Gibson JJ (1986) The Ecological Approach to Visual Perception Lawrence Erlbaum Associates

Guzeldere G (1995) Problems of Consciousness: a perspective on contemporary issues, current debates Journal of Consciousness Studies vol 2 no 2 p118

Huxley T  (1912) Method and Results Macmillan : p240, p243 available at http://www.archive.org/details/methodresultsess00huxluoft

Leibniz G  (1714) Monadologie sec 17 transl R Latta available at http://philosophy.eserver.org/leibniz-monadology.txt

Nagel T (1974) What is it like to be a bat? Philosophical Review vol 83, 4 p435 available at http://members.aol.com/NeoNoetics/Nagel_Bat.html

Reid T  (1785) Essays on the Intellectual Powers of Man, Essay 6 Chapter 2  (p229)  available at http://www.earlymoderntexts.com/pdfs/reid1785essay6_1.pdf

Ryle G (1949,2000) The Concept of Mind Penguin

Soon CS, Brass M, Heinze H-J & Haynes J-D  (2008) Unconscious determinants of free decisions in the human brain. Nature Neuroscience  11, p543 – 545

Swartz Foundation (2001) Can a Machine be Conscious? available at http://www.theswartzfoundation.org/banbury_e.asp

Turing A (1950) Computing Machinery and Intelligence Mind 59 (236) p433-460 available at http://www.loebner.net/Prizef/TuringArticle.html

Watson JB  (1913) Psychology as the Behaviorist Views it. Psychological Review, vol 20, p158-177 available at http://psychclassics.yorku.ca/Watson/views.htm

Wegner D & Wheatley T (1999) Apparent Mental Causation Source of the Experience of Will American Psychologist vol 54 no 7 p480-492 available at http://www.wjh.harvard.edu/~wegner/pdfs/Wegner&Wheatley1999.pdf

Weizenbaum J  (1976) Computer Power and Human Reason Penguin p188


Memories of Historical Sexual Abuse

by Michael Davidson

A number of allegations of historical sexual abuse have recently hit the courts in the UK. This follows revelations of sexual abuse on the part of the celebrity DJ Jimmy Savile who died in 2011. Following his death there were 450 complaints against Savile including 34 rapes and 28 sexual assaults on children under 10 years of age at the time, some offences dating back to the 1950’s. Savile was associated with numerous charities and good causes during his lifetime, all of which are now embarrassed by their association with him.

The publicity surrounding this scandal has encouraged people to come forward with allegations concerning sexual misconduct by other celebrities and non-celebrities many years ago, some of which are now proceeding through the courts with attendant publicity. I do not envy the jurors their task of deciding on the guilt or innocence of the defendants in these cases. The defence is often that the alleged perpetrator was not in the stated place at the stated time, or had never met the alleged victim, or the sexual relations were consensual, so it boils down to one person’s word against another’s. Clearly, rape and sexual assault where proved should be punished according to the law. But imagine for the moment that you are suddenly confronted by a policeman who states that an allegation, or worse several allegations, of sexual assault perpetrated perhaps decades ago have been made against you. How to defend yourself? Unless you kept a detailed diary which says where you were and who you were with at the time in question, and can then call on witnesses or other documentation, you are in a sticky situation. You can easily start to doubt your own memory.

The key questions are “How reliable is memory?” and “How can memories be assessed for truth?”

Even if we were able to report precisely what we see, our accounts of what occurred some time ago rely on our memories, which can be subject to error in many ways. Psychologist Daniel Schacter (Ref 1) has listed seven ‘sins’ to which our memories are prone:

  • 1) transience
    forgetting things gradually over time
  • 2) absentmindedness
    forgetting because of not paying attention to things that we should have
  • 3) blocking
    the temporary inability to remember something that is known, when you need it (it may pop into consciousness some time later)
  • 4) misattribution
    assigning a memory to the wrong source, such as attributing someone’s statements to another
  • 5) suggestibility
    developing false memories for events that did not happen
  • 6) bias
    changing past events in support of current attitudes and beliefs
  • 7) persistence
    remembering past episodes that we would prefer to forget.

The phenomenon of false memory is regarded as particularly problematical for any notion of objective reporting (Hall, McFeaters & Loftus Ref 2).   In a well-known experiment into the reliability of eyewitness testimony (Loftus & Palmer Ref 3), 45 students were each shown seven short film clips of car accidents. The students were asked to describe the accident they had just seen and then answer a number of questions. The 45 students were divided into 5 groups, each of which answered a slightly different set of questions. The key difference was the question: “About how fast were the cars going when they xxx into each other?” where xxx was substituted by one of the verbs “smashed”, “collided”, “bumped”, “hit” and “contacted”. When the word “smashed” was used the mean estimated speed was 41 mph and when “contacted” the mean estimated speed was 32 mph, with the other verbs somewhere in between (“hit” was 34 mph). This result could be due to distortion of memory by the verb used, or it could be due to distorted reporting based on what the student thinks is the expected answer.

In a follow-up experiment 150 students were shown a short film with a 4-second multiple car accident. The students were divided into 3 groups and asked several questions: one group was asked the ‘smashed’ question, another the ‘hit’ question, and the third was not queried about the speed. A week later all 150 students were asked a number of other questions, including the critical question: “Did you see any broken glass?” (there was none in the film). Here is the result:

  • Broken glass seen
    smashed: 16 of 50 (32%); hit: 7 of 50 (14%); control: 6 of 50 (12%).
  • Broken glass not seen
    smashed: 34; hit: 43; control: 44.

The difference in response appears to have been produced by one word, in one question among several, asked one week earlier (p < 2.5%), and this calls for some explanation. The ‘reconstructive hypothesis’ is that two types of information go to make up a memory of an event. One is the information obtained from the perception of the event and the other is information supplied after the event (in this case the suggestion of ‘hit’ or ‘smashed’). These may become integrated as one memory. When the question about broken glass is asked, the subject who thought he saw a smash rather than a mere hit reasons that there must have been glass and adds it to the memory.
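
For anyone who wants to check the arithmetic, the significance of the broken-glass result can be recomputed from the counts in the table above with an ordinary chi-square test; nothing here goes beyond what those counts themselves imply.

# Chi-square test on the 2x3 broken-glass table above.
# For 2 degrees of freedom the tail probability is simply exp(-chi2/2),
# so no statistics library is needed.
import math

observed = [[16, 7, 6],     # saw broken glass: smashed, hit, control
            [34, 43, 44]]   # did not
row_totals = [sum(row) for row in observed]          # 29, 121
col_totals = [sum(col) for col in zip(*observed)]    # 50, 50, 50
n = sum(row_totals)

chi2 = sum((observed[i][j] - row_totals[i] * col_totals[j] / n) ** 2
           / (row_totals[i] * col_totals[j] / n)
           for i in range(2) for j in range(3))
p = math.exp(-chi2 / 2)                              # df = (2-1)*(3-1) = 2
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")             # chi2 = 7.78, p = 0.020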

The phenomenon that memory can be changed after the fact is not confined to episodic memory. There is evidence that the brain converts short term memory into long-term memory by a hypothetical process known as ‘memory consolidation’. Experiments with conditioning in different animals as divergent as bees, snails and rats show that consolidated memories are not fixed for all time, but can become malleable when they are reactivated. This is known as ‘reconsolidation’ and it only occurs in a ‘window’ of time after the reactivation of the memory.

The Russian physiologist Ivan Pavlov (1849-1936) developed the idea of ‘conditioning’ around 1900, mainly through his work with the salivary response of dogs to food.   As a physiologist Pavlov was interested in the chemistry of digestion and was collecting saliva from dogs when they were presented with food.   The salivation response is automatic and not learned.   After he had rung a bell just before giving the dog food a few times, he found that the bell on its own would cause the saliva to flow.   Pavlov called the stimulus that caused the physiological response on its own (the presentation of food) ‘the unconditional stimulus’, because it always gave rise to ‘the unconditional response’ or ‘the unconditional reflex’ (saliva).   The ‘conditional stimulus’ was the originally neutral stimulus (the bell) which, through enough pairings with the ‘unconditional stimulus’ (food), came to elicit the same response (the ‘conditional response’ – saliva).   In other words, the animal had learnt a connection between the two stimuli and responded in the same way to either of them.   Pavlovian conditioning of a fear response gradually diminishes when the conditional stimulus is no longer associated with the unconditional stimulus (a process known as ‘deconditioning’), but the fear response is liable to return later (so-called ‘spontaneous recovery’).

An experiment to investigate the reconsolidation window in humans was performed in 2009 in which the unconditional stimulus was an electric shock and the conditional stimulus an image of a coloured square. The ‘fear’ stimulated by the coloured square was measured by the change in the electrical resistance of the palms. The experiment found that if the deconditioning is performed inside the reconsolidation window (say 10 minutes after a ‘reminder’ conditional stimulus) there does not appear to be spontaneous recovery even a year later. On the other hand where the deconditioning occurred outside the reconsolidation window (say 6 hours later) spontaneous recovery occurred the following day and a year later (Schiller et al Ref 4).   It might be thought that the deconditioning process would itself establish a reconsolidation window, but this does not appear to be the case. The conclusion is that the original fear memory has been changed.

In later experiments, Elizabeth Loftus attempted to create completely false memories in 24 individuals aged 18 to 53. The subjects were given a booklet containing three one-paragraph accounts of events in their childhood as recounted by a parent or older sibling. Into the booklet was inserted a plausible but false event that the subject had been lost in a shopping mall for an extended period and had been comforted by an elderly woman before finally being reunited with the family. The subjects were asked what they remembered about each event. About two thirds of the factual events were recalled immediately on reading the accounts and a quarter of the subjects partially or fully ‘remembered’ the false event (p < 1%). “Statistically, there were some differences between the true memories and the false ones: participants used more words to describe true memories and they rated the true memories as somewhat more clear. But if an onlooker were to observe many of our participants describe an event, it would be difficult indeed to tell whether the account was a true or false memory.” (Loftus Ref 5)
As far as I know no attempt has been made to establish any personality or character differences between the people who did not ‘get lost in the mall’ and those who did. The former outnumber the latter by a factor of 3 to 1. Personality may be a factor but it seems that the more trusted the source of the false information the more likely that a false memory is implanted. For instance, a contrived photograph of a childhood flight in a hot-air balloon gave rise to a false memory of such an event in 50% of 20 subjects aged 18 to 28 (Wade et al Ref 6).

In the last decade there have been several attempts to distinguish true memories from false ones, using EEG, PET and MRI but although group differences have been found there is no reason yet to modify Loftus’ 1997 conclusion: “Without corroboration, there is little that can be done to help even the most experienced evaluator to differentiate true memories from ones that were suggestively planted.”

There is every reason to be cautious about first-person reports, particularly reports of satanic and sexual abuse ‘just remembered’, but if we were to dispense with memory altogether, much more would be at risk than mere reminiscence. Indeed, science itself would collapse.

There are some memories that are particularly vivid – so-called ‘flash bulb memories’. These are cases where particularly surprising and emotional events are apparently etched into our memories, such as when you first heard that John F Kennedy had been assassinated (if you are that old), that Princess Diana had died, or that the twin towers of the World Trade Centre in New York had collapsed. The examples mentioned are of international importance, but such memories need not be: a rape, or hearing of the death of a close relative, for instance. But are these memories as accurate and as detailed as we think?

I remember being in a cinema watching a Japanese film, ‘Close to Life’, by the director Kurosawa when the lights went up and the cinema manager came in to say that Kennedy had been assassinated. I only know the name of the film now because I identified it from the plot some years later. I remember that one of my two friends there with me immediately left. I do remember their names. The lights went down and the film continued. I can see the manager now, but not in any photographic detail. I could not say at what point in the film the interruption came, nor what I or my friends were wearing at the time. In my mental image picture there is nothing much to distinguish this cinema from any other, though I have the impression of a number of rows of a certain width in front of and behind me, but I could not count them. Although I knew the cinema manager at the time I do not remember his name, nor could I say with any certainty what he was wearing. Such memories do not appear to be as accurate as we think they are.

Psychologists Neisser and Harsch (Ref 7) interviewed 106 college students less than 24 hours after the Challenger disaster and then again after 2½ years. The later recollections were much less detailed than the original. Neisser thinks that the persistence of the memories and their clarity is due to the frequent consideration the memories receive after the event, rather than to the original impact. The flash bulb events are the link between our own personal life stories and the life stories of our friends and acquaintances; or as Neisser puts it, “with ‘history'”. Flash bulb memories are prone to confusions and omissions and insertion of people who were not there.

People who survive traumatic events sometimes develop the condition known as Post Traumatic Stress Disorder (PTSD), sometimes a considerable time after the traumatic event. PTSD entails such things as disturbing and recurring flashbacks, avoidance of reminders of the event, and high levels of anxiety. Surely these kinds of memory are more reliable? In a 1990’s study, 59 Gulf War veterans were asked about their war experiences a month after their return and again 2 years later (Southwick et al Ref 8). 70% recalled at least one traumatic event after 2 years that they had not mentioned before. This does not necessarily mean the memories were confabulated, but those recounting the most ‘new’ memories also reported the most PTSD symptoms. This suggests to some that the veterans were attributing symptoms of depression and anxiety to a memory given new significance, or even unconsciously fabricated.

Victims of rape and other traumas are often offered psychotherapy (ie counselling and talking cures). There are somewhere between 400 and 800 different brands of ‘psychotherapy’, depending on which list you refer to, and some are of a bizarre nature and dubious efficacy. That there are so many versions is testimony to the lack of good theory based on evidence. Some of these have extensive training periods and/or accreditation procedures and/or are backed by some academic background and/or are government sanctioned and/or are heavily promoted. However, the occasional recommendation that psychotherapies be licensed and validated by the government has little going for it. In view of the wide definition of psychotherapy [HH Strupp defines psychotherapy as “the systematic use of a human relationship to effect enduring changes in a person’s cognition, feelings and behaviour.” (Ref 9)] it is difficult to separate ‘psychotherapy’ from those social interactions which the government currently has, and should have, no business in. See Dawes (Ref 10) for the American experience with the licensing of psychologists, from the point of view of a professor of psychology. The usual rationale for such licensing is the protection of the public from charlatans and quality assurance of the techniques. According to Dawes, licensing is more oriented to protecting the status and income of practitioners and does little to protect the public; rather it sanctions the practice of dubious procedures such as alien abduction ‘therapy’, the application of invalid diagnostic tests such as the Rorschach, and the public recognition of practitioners as ‘expert witnesses’ in court.

Psychoanalytically-oriented therapists think that the reason an incident apparently causing PTSD was not a problem for a period of time is that the individual repressed the memory (pushed it into his or her ‘unconscious mind’) and was unable to recall it. The repression is expressed in unhealthy emotions and behaviour, and when the memory is recovered the individual is restored to ‘health’. This idea gave rise to ‘recovered memory therapy’, in which therapists sought to uncover repressed memories of traumas of all kinds. Memories of ‘sexual abuse’, ‘satanic rites’, ‘ritual murders’, not to mention ‘alien abductions’, were recovered in the 1980’s and 1990’s which owed more to the imagination of the therapists than to the experiences of the individuals. A number of high-profile court cases in which fathers were wrongfully imprisoned for sexual abuse of their daughters, based on memories ‘recovered’ by hypnosis and suggestion, instilled some caution in the courts at least for a while, if not in the therapists.

Furthermore those who were subjected to therapy of this kind were evidently more upset afterwards than they were before and the therapists may well have actually created the PTSD they were trying to prevent. See (McNally Ref 11) for some accounts of these false memory cases. One of the more interesting (for outsiders) was the case of a man whose daughters recovered ‘memories’ of him having abused them. In an “intensive quasi-hypnotic interrogation” of this man he recovered memories of having raped his own children repeatedly, having led a satanic cult for nearly 20 years and been involved in the sacrificial murder of hundreds of babies. He confessed to the crimes and was jailed despite there being no evidence of missing babies or bodies or a satanic cult. Evidently patients who recover memories of ritual abuse often develop PTSD during the course of therapy – rates have ranged from 28% to 100%. Survivors of ‘recovered memory therapy’ seem to be intent on revenge against their alleged abusers. The problem with ‘recovered memory therapy’ was that the ‘memories’ were false.

It is not just ‘psychotherapists’ who can instil false memories. Leading questions by social workers and police can contaminate and corrupt children’s (and adults’) memories. This evidently occurred in the notorious cases of sexual abuse ‘epidemics’ in Scotland in 1991 (BBC News Ref 12) and in England in 1987 (Pragnell Ref 13).

Not all psychological investigation of memory is negative in the sense that it throws doubt on the validity of memory, but care has to be taken not to make inadvertent suggestions. Based on the challenges mentioned above, Geiselman and Fisher (Ref 14) developed a means of improving the validity of recall, for example in forensic investigations, called the ‘cognitive interview’. This technique is based on the idea that a memory consists of many different elements and that the more context that can be recalled or reinstated, the more reliable the recall is likely to be. (Evidently, this is the factual basis of the old Hollywood joke in which a person can only recall certain information when he is drunk. One experiment asked deep-sea divers to recall a list of words given to them either under water or on land; words given under water were best recalled under water, and words given on land were best recalled on land.) Secondly, memory can be retrieved in several ways, so what is not retrieved by one means may be retrieved by another. The technique includes four instructions that interviewers can use to get more reliable accounts from witnesses:

  • (1) Try to picture in your mind the circumstances that surrounded the crime event, including what the environment looked like, and also think about your feelings and reactions to the event.
  • (2) Report everything that you can remember; do not leave anything out of your description, even things you may consider unimportant.
  • (3) Report the events in different orders: forward, backward, or starting from the middle.
  • (4) Try to recall the different perspectives you may have had during the event or think about what some other prominent person at the event would have seen.

In addition to these general instructions, the cognitive interview also contains specific prompts to facilitate recall of particular kinds of information (eg ‘Did he remind you of anyone you know? If so, why?’ … ‘Was there anything unusual about the voice? What were your reactions to what was said?’). Evidently in laboratory experiments this technique produced 25%-30% more facts than standard interviews without increasing the number of false details, and it may decrease the contaminating effect of misleading post-event information. Field studies with police interviewers trained in the technique showed a 47% increase in information gleaned.

Now that police interviews are routinely videoed and the videos shown to the jury, the jury should be able to assess to what degree the interrogation was in line with the cognitive interview technique. Unfortunately juries can only judge cases on the evidence before them and unless evidence such as that discussed here is presented to them the average juror will be ignorant of it.

It is not sufficient to accept an allegation of rape or sexual abuse without obtaining such information as the time and place, the details of how the offence was perpetrated, however embarrassing, the size and shape of the offender’s member, and so on. Is the account consistent from telling to telling, or does it show evidence of successive ‘embroidery’?

The difficulty of assessing the truth of a ‘recalled event’ by witnesses has given rise to the idea that lies can be detected by physiological measurements as used in the ‘lie detector’ or ‘polygraph’. The theory behind the polygraph is that a deceptive answer to a pertinent question causes an emotional response such as fear of detection or heightened arousal which shows up in the physiological recordings. Although the result of a polygraph test appears to be a purely physiological measurement, in fact the result is a product of the examinee’s motivation, the interrogation technique and the interpretation of the physiological measurements, as well as the physiological effects themselves [Orne et al Ref 15].   Accordingly polygraph operators generally demonstrate to the subject how the polygraph can detect emotional responses and instil a belief that the polygraph can detect lies. Faced with an infallible witness such as this many interviewees confess, believing resistance is useless. On the other hand accused persons sometimes volunteer for polygraph tests believing their innocence can be proved by their physiological responses. Physiological responses to the various questions can vary according to whether the examiner is friendly or aggressive, and whether the examiner is acting for the prosecution or the defence. Sometimes the examiner concludes that the subject is deceptive because of suspected “counter-measures” during the session. The subject’s psychological profile such as his attitude to lying or his considerations concerning the alleged offences must be pertinent. The result of a polygraph examination thus depends on a number of factors apart from the actual physiological responses. Results are therefore difficult to replicate. Scientific opinion on the accuracy of the polygraph in detecting lies is generally unenthusiastic [Fienberg et al Ref 16]. There is no direct causative chain that leads from lying to the physiological responses. The physiological responses can be caused by other factors than lying and it is therefore impossible to decide on the basis of the physiological response that a lie has occurred.

In most cases there is no independent measure of deception, so the incidence of false positives and false negatives cannot be ascertained. In one experiment a number of students were given information on a forthcoming test. When wired up to what they believed were lie-detectors but which were in fact dummies, 13 out of 20 confessed to receiving the information, whereas only 1 in 20 confessed when not wired up [Quigley-Fernandez & Tedeschi Ref 17]. Therefore even spectacular true positives do not prove the effectiveness of polygraph testing per se. Also, false confessions do occur [Meyer & Youngjohn Ref 18]. A confession by a naïve subject is no doubt counted as a success for the polygraph, but this success does not automatically carry over to the case where a more sophisticated and possibly trained subject deliberately wants to evade detection. Polygraph evidence is not admissible in UK criminal courts.
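
The importance of base rates can be illustrated with some deliberately invented figures: even a test that were right 90% of the time on both liars and truth-tellers would produce mostly false alarms if genuine liars were rare among those examined.

# Illustrative base-rate arithmetic with invented accuracy figures.
def deceptive_verdict_accuracy(n, liar_rate, sensitivity, specificity):
    liars = n * liar_rate
    truthful = n - liars
    true_pos = liars * sensitivity               # liars correctly flagged
    false_pos = truthful * (1 - specificity)     # honest examinees flagged
    return true_pos / (true_pos + false_pos)

# 1000 examinees, 5% of whom are actually lying, hypothetical 90% accuracy:
print(deceptive_verdict_accuracy(1000, 0.05, 0.9, 0.9))   # about 0.32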

The perceived infallibility of technology is carried over even more convincingly into brain scanning techniques, so-called ‘brain fingerprinting’. In this technique the subject is hooked up to an EEG (electroencephalograph) that records electrical potentials in the scalp. The particular potential that is considered significant is a positive potential that occurs roughly 300 milliseconds after a stimulus that the subject recognises. This P300 potential is thought to occur where the subject recognises the stimulus as familiar or meaningful, but not otherwise. Thus a number of pictures, phrases or words, including some relevant to the enquiry, are shown to the subject and the P300 potential looked for. When a P300 response occurs on a picture of (say) the murder weapon that the subject could not otherwise have known about, this could be taken by a jury as an indication of guilt. But brain fingerprinting only reveals what information is stored in the subject’s brain. It does not show how or why the information got there [Farwell Ref 19]. Therefore the selection of the various phrases and pictures is critical. The degree to which such memory traces are reliably indicated under conditions where memory is subject to the seven sins mentioned above requires investigation. According to Farwell, no questions are asked or statements made during the test, so it is not in any sense detecting ‘lies’. In the case of an alleged rape, the intent of the parties, which may be the vital piece of evidence, is not revealed. Brain fingerprinting therefore has limitations, and is only one more piece of evidence to be weighed by the jury, if indeed such evidence is produced and admissible. The method of producing such evidence must also be subject to scrutiny to prevent abuse and even ‘fitting up’. In addition the reliability of brain fingerprinting has been seriously questioned [Rosenfeld Ref 20].
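
For illustration only, here is a sketch, with simulated data, of the kind of comparison that underlies a P300-based test: EEG epochs time-locked to ‘probe’ items are averaged and compared with epochs for irrelevant items in a window around 300 ms after the stimulus. Nothing in it reflects Farwell’s actual procedure; the sampling rate, amplitudes and analysis window are all invented.

# Sketch of a probe-vs-irrelevant comparison on simulated EEG epochs.
import numpy as np

fs = 250                                             # samples per second (assumed)
window = slice(int(0.30 * fs), int(0.50 * fs))       # 300-500 ms after the stimulus
rng = np.random.default_rng(0)

def simulate_epochs(n, p300_amplitude):
    # n one-second epochs of noise, plus an optional bump at 300-500 ms.
    epochs = rng.normal(0.0, 1.0, size=(n, fs))
    epochs[:, window] += p300_amplitude
    return epochs

probe = simulate_epochs(30, p300_amplitude=2.0)      # items the subject recognises
irrelevant = simulate_epochs(30, p300_amplitude=0.0) # unfamiliar items

probe_mean = probe.mean(axis=0)[window].mean()
irrelevant_mean = irrelevant.mean(axis=0)[window].mean()
print(f"probe {probe_mean:.2f} vs irrelevant {irrelevant_mean:.2f}")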

As far as I know no brain fingerprinting evidence has been produced in a British court, and none is likely to be in the near future. There are also ethical considerations in the use of such techniques, for example: does an individual have a right to their own thoughts? Under what circumstances should such a right be waived? What level of certainty is required before a person’s statement is classified as a lie on the basis of the output of a machine?

At the time of writing a prominent case of alleged rapes and sexual assaults by a celebrity has been resolved with a ‘not guilty’ verdict. There was a high cost to the taxpayer of this and other trials, not to mention the stress to the accused, the witnesses and their families. The onus is therefore on the Crown Prosecution Service to make sure there is a reasonable chance of conviction. It should not be expected that every prosecution brought by the CPS will result in a guilty verdict. However it is up to the CPS to vet the police evidence and assess the reliability of witnesses. There are standard psychological techniques to measure to what degree a witness is suggestible or indeed fantasy-prone [Gudjonsson Ref 21]. The main use of this kind of evidence in court has been in assessing whether a confession by a defendant was the result of suggestion and coercion. Such evidence has resulted in the reversal of ‘guilty’ verdicts in several high-profile cases. It would be too much to suggest that all witnesses be subjected to such tests, but in historical abuse cases with little or no evidence beyond the witness testimony, the CPS should consider using the tests to assess the reliability of witnesses. If the main prosecution witness proves reliable when subjected to such tests, the CPS would strengthen their case. In the contrary eventuality, the CPS would save a great deal of taxpayers’ money and not lose face by bringing a weak case.

What do you think?

 

 

References

[Ref 1] Schacter D L (1999) The seven sins of memory: Insights from psychology and cognitive neuroscience American Psychologist vol 54 p182-203
[Ref 2] Hall DF, McFeaters SJ & Loftus EF (1987) Alterations in Recollection of Unusual and Unexpected Events Journal of Scientific Exploration, Vol 1 (1) p3-10 available at http://www.scientificexploration.org/journal/jse_01_1_hall.pdf
[Ref 3] Loftus EF & Palmer JC (1974) Reconstruction of automobile destruction: An example of the interaction between language and memory Journal of Verbal Learning and Verbal Behaviour vol 13 p585-589
[Ref 4] Schiller D, Monfils M-H, Raio C, Johnson DC, LeDoux JE & Phelps EA (2010) Preventing the return of fear in humans using reconsolidation update mechanisms. Nature vol 463 (8637) p49-53
[Ref 5] Loftus EF (1997) Creating False Memories Scientific American vol 277 no 3 p70-75 available at http://faculty.washington.edu/eloftus/articles/sciam.htm
[Ref 6] Wade KA, Garry M, Read JD & Lindsay DS (2002) A picture is worth a thousand lies. Psychonomic Bulletin and Review vol 9 p597–603 available at http://web.uvic.ca/psyc/lindsay/publications/2002WadGarReadLind.pdf
[Ref 7] Neisser U & Harsch N (1992) Phantom Flashbulbs: false recollections of hearing the news about Challenger in Winograd E & Neisser U (eds) Affect and Accuracy in Recall: studies in Flashbulb memories Cambridge p9-31
[Ref 8] Southwick SM, Morgan CA, Nicolaou AL & Charney DS (1997) Consistency of memory for combat related traumatic events in veterans of Operation Desert Storm American Journal of Psychiatry vol 154 p173-177 abstract available at  http://ajp.psychiatryonline.org/doi/abs/10.1176/ajp.154.2.173
[Ref 9] Strupp HH (1986) The non-specific hypothesis of therapeutic effectiveness: a current assessment American Journal of Orthopsychiatry vol 56 (4) p513-520
[Ref 10] Dawes, RM (1994) House of Cards: Psychology and Psychotherapy Built on Myth New York: The Free Press p133-177
[Ref 11] McNally RJ (2003) Remembering Trauma Belknap Press chapter 8 p240-246
[Ref 12] BBC News (1991) “1991: Orkney ‘abuse’ children go home”. On This Day 4 April 1991. available at http://news.bbc.co.uk/onthisday/hi/dates/stories/april/4/newsid_2521000/2521067.stm
[Ref 13] Pragnell C (2002) The Cleveland Child Sexual Abuse Scandal: An Abuse and Misuse of Professional Power available at http://www.davidlane.org/children/choct2002/choct2002/pragnell%20cleveland%20abuse.html
[Ref 14] Geiselman RE & Fisher RP (1988) The Cognitive Interview: An Innovative Technique for questioning witness of crime Journal of Police and Criminal Psychology vol 4 (2) p2-5
[Ref 15] Orne MT, Thakray RI & Paskewitz DA (1972) On the detection of deception: A model for the study of physiological effects of psychological stimuli in Greenfield NS & Sternbach RA (eds) Handbook of Psychophysiology New York: Holt, Rinehart and Winston
[Ref 16] Fienberg SE, Blascovich JJ, Cacioppo JT, Davidson RJ, Ekman P, Faigman DL, Grambsch PL, Imrey PB, Keeler EB, Laskey KB, McCutchen SR, Murphy KR, Raichle ME, Shiffrin RM, Slavkovic A & Stern PC (2003) The Polygraph and Lie detection National Academies Press p288
[Ref 17] Quigley-Fernandez B and Tedeschi JT (1978) The bogus pipeline as lie detector: Two validity studies Journal of Personality and Social Psychology 36 p247-256
[Ref 18] Meyer RG & Youngjohn JB (1991) Effects of feedback and validity expectancy on responses in a lie detector interview Forensic Reports 4 p235-244
[Ref 19] Farwell LA (2004) PBS Innovation Series – Brain Fingerprinting: Ask the Experts available at http://www.pbs.org/wnet/innovation/experts_qa8.html
[Ref 20] Rosenfeld JP (2005) Brain Fingerprinting: A Critical Analysis The Scientific Review of Mental Health Practice vol 4 no 1
[Ref 21] Gudjonsson GH (1984) Interrogative Suggestibility: Comparison between ‘False Confessors’ and ‘Deniers’ in Criminal Trials. Med Science Law vol 24 no 1 p56-60 available at http://www.roughjusticetv.co.uk/suggestion.pdf
