Saturday 23 November 2019

Thatcher and the rise of New Public Management

The Target Culture arrives

Today we can see the limitations in Simon’s philosophy. But we have to remember that at the time, people were in the grip of a kind of computer worship. I clearly remember meeting otherwise sensible people in the 1970s and 1980s who sincerely believed that data that had passed through a computer was inherently more useful, better and certainly more believable than anything else. It was as if computers produced results that could not be questioned.


I seem to remember an experiment in the late 1970s in which people were shown similar pieces of text in four different formats: hand-written, typewritten, typeset and as output by a dot-matrix printer on fanfold paper. When they were asked which text was more credible, the huge majority said, as one might expect, that the handwritten text was the least credible, followed by typewritten text. But most credible of all – in the 1970s – was invariably the text output by a dot-matrix printer. It seems that the fact that it appeared to have emerged from a computer gave it a special kudos.

I would argue that it was the spirit of the times, as well as Simon’s reputation as an economist and computer scientist, that lay behind the extraordinary and uncritical acceptance and amplification of his ideas by scholars.

The New Public Administrators went on to create a new vocabulary and research methodology, promoting a belief that public services like health, policing and state education suffered and became inefficient because they were not subject to market forces and competition: business was efficient because companies were subject to a Darwinian survival of the most efficient.
The theory went that if you wanted public services to become efficient, you would need to find something that would do for them what competition did for business. The way to do this, went the theory, was to begin by studying what the service did, collecting evidence (looking of course only at what could be measured numerically) and thereby to establish a baseline, a benchmark. Then you could set targets in strategic areas, and create incentives to push public services to meet them, to perform better and to achieve greater efficiencies. This was the beginning of the target culture.

In a later post, I'll look at the target culture in the context of Goodhart's law: “When a measure becomes a target, it ceases to be a good measure.”

Thatcherism Mark I and Mark II


It was under Margaret Thatcher, here in the UK, that these ideas were put into practice for the first time. It is not hard to see why. In his biography of Thatcher, Charles Moore quotes her:
“‘We were Methodist,’ she liked to say of her Grantham childhood, ‘and Methodist means method’ … she was much more proud of being the first prime minister with a science degree than she was to be the first woman prime minister”[1] 
She could claim that all this would lead to less bureaucracy:
“We have moved from large state-owned bureaucracies … to networks of organisations which can operate with a fair degree of autonomy provided they meet specified performance targets.”[2]
Politicians liked the introduction of performance indicators and numerical targets into the public sector. They could be seen to be doing something. It gave them power, and critically, it gave them power without responsibility (“We set the targets: it will be your fault if you can’t keep up”).

As the Audit Commission wrote later,
“Poor public sector performance is a product of poor public sector management and the solution to these problems is the creation of frameworks which mirror the private sector.” [3]
The first step was for Government to measure the work done by the NHS, the police and education and then to devise suitable targets and suitable incentives.

The New Public Managers acted like good disciples of Herbert Simon. Knowing that they couldn't measure all the work that the public services did, they excluded all the ‘soft’ issues of quality, ethics, motivation and cultural context that had no numbers attached.

But now they went further than even Simon had suggested. They set out to transform each area they were studying in such a way that it naturally produced numerical data as the key signifiers. When the very nature of an organisation’s work made it hard to set nationally valid targets (as, for example, when the Schools Inspectorate took into account local situations and contexts), they would reconstruct the organisation so that it would produce national targets – in this case by replacing the Schools Inspectorate with a new inspectorate, Ofsted, which would be required to ignore local situations and contexts.

Even Simon had not suggested that you should change what you measure to make measurement easier.

Not all of this came from central government. The Committee of University Vice-Chancellors and Principals themselves commissioned Alex Jarratt (a businessman with little experience of university management) to chair an investigation into their work. It was almost certainly an attempt to curry favour with Thatcher, and to persuade her to stop some of the cutbacks she was imposing.

The report has been described as mischievous and malevolent, and one of the most damaging inquiries into higher education over the last half-century. [4]

It recommended that universities abandon the idea of education for its own sake and think of themselves as learning-factories, with layers of managers, and customers rather than students, all working towards the numerical targets and performance indicators that would let them take up a role in Thatcher’s New World Order. Amazingly, the universities adopted the recommendations in their entirety.

Later, when Tony Blair came to power, he and his advisors embraced New Public Management so warmly that some people called his policies ‘Thatcherism Mark II’.[5]

The system appealed to New Labour with its claim to be rational, new, and modern, and perhaps also because it centralised power on the one hand, while devolving responsibility on the other.

Here is Blair’s Home Secretary David Blunkett talking the talk in terms of policing: “It is vital to measure crime accurately if we are to be able to tackle it effectively.” [6]

But it was not only the crime figures and the police. New Public Administrators wanted to counter the power of professional groups and trade unions by attacking the practice of self-regulation, bringing in performance assessment and increasing external inspections and audits – particularly in large professions such as health and education.

[Next: The Target Culture: Evidence-based management]

[1]   Runciman, David. "Rat-a-tat-a-tat-a-tat-a-tat" London Review of Books Vol 35 Issue 11, London, 6 June 2013
[2]   idem
[3]   Adcroft, Andy; Willis, Robert. "The (un)Intended Outcome of Performance Measurement in the Public Sector". International Journal of Public Sector Management, Vol 18 Issue 5, University of Surrey, UK 2005
[4]   Alderman, Geoffrey. "A review of Malcolm Tight's 'Higher Education in the United Kingdom since 1945'", http://www.timeshighereducation.co.uk/407560.article, Times Higher Education 2009, accessed March 2015
[5]   Driver, Stephen; Martell, Luke.  "Blair's Britain", Polity Press, London 2002
[6]   Simmons, Jon; Legg, Clarissa; Hosking, Rachel.  "National Crime Recording Standard (NCRS): an analysis of the impact on recorded crime" at http://webarchive.nationalarchives.gov.uk/20110218135832/http:/rds.homeoffice.gov.uk/rds/pdfs2/rdsolr3103.pdf on  Webarchives.nationalarchives.gov.uk, The Home Office 2003, accessed February 2015

Thursday 21 November 2019

The target culture: origins (2)

Herbert Simon and Rational Decision-Making

IBM 360 installation c.1962


In the 1950s computers were at the cutting edge of thinking: a computer installation was the very image of clean, modern rationality. In 1956, Herbert Simon, a young associate dean of the Graduate School of Industrial Administration at Carnegie Mellon University, established the first Computation Center. (Later this would evolve into the School of Computer Science, the first such school devoted solely to computer science in the United States and a model for others that followed.)

Herbert Simon was an eccentric polymath who would later win a Nobel Prize for Economics (20 years after he stopped working in the field) for his work on bounded rationality. He claimed not to watch television, listen to the radio or pay attention to newspaper headlines. “First,” he said, “a lot of what’s in the paper today was in the paper yesterday. Second, most of the things that are in the papers today that weren’t yesterday I can predict, at least in general.”

As a mathematician and computer scientist, Simon was comfortable in a world in which knowledge came from collecting and analysing data. Just as Cochrane had found in medical education and practice, so Simon found in the social sciences that precedent, habit and tradition were trusted more than any evidence that could be gleaned from whatever data was available.

In a 1978 interview for UBS Bank, Simon explained, 
“Before you can have mathematical structures in a science, you have to have data, you have to understand the phenomenon," he said. "Before biology became modern molecular biology, with exact knowledge of genes and of chemistry, many people had to go out and collect countless plants to find out how they were put together. We haven’t done that yet in the social sciences."

Herbert Simon with an IBM 650, c.1958. Courtesy of Carnegie Mellon University

Both in his doctoral dissertation and in his 1947 book, Administrative Behaviour, written when he was in his early thirties, Simon attacked what he described as haphazard decision-making in public administration.
“Our work led us to feel increasingly the need for a more adequate theory of human problem-solving if we were to understand decisions. Allen Newell, whom I had met at the Rand Corporation in 1952, held similar views. About 1954, he and I conceived the idea that the right way to study problem-solving was to simulate it with computer programs. Gradually, computer simulation of human cognition became my central research.” [2]
Simon accused administrators of basing their decision-making on “Inconsistent proverbs drawn from common sense, and handed down expertise, completely lacking in scientific rigour”.[3] He said that public administration must be founded on rigorous and scientific observation and on laws of human behaviour.

But where Cochrane always viewed evidence-based medicine as an adjunct (but a hugely important one) to existing medical expertise, knowledge and common-sense, Simon went further. He insisted that public administration must be founded exclusively on information derived from data gathered for the purpose.

In his writing and his lectures, Simon used the buzzwords of the time (behaviour, decisions, computers, organization) to suggest that decision-makers could achieve the most rational outcomes if – and only if – they thought logically and used the fabulous processing power of computers.[4]

He seems to have treated decision-making as a logical exercise in which you begin by analysing the situation and deciding what outcome you want. You then list all the different directions in which you could go, calculate the consequences of each, and then simply (!) choose the alternative that gets you closest to where you want to be.

Simon realised that it would not be possible to consider all possible courses of action, let alone all their consequences. He accepted, too, that in many cases we are simply unable to gather all the evidence we need to select the outcome we want.

But he argued that this should not stop us from making a rational decision. What we needed to do, he said, was to look only at the options where we do possess all the evidence. And then we should look only at those options where this evidence can easily be expressed as computable numbers.[5] Significantly, in terms of Artificial Stupidity, Simon said we should ignore any ‘messy’ and ‘unreliable’ evidence based on common sense or experience.
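To make that procedure concrete, here is a minimal sketch in Python. It is my own illustration, not Simon's notation or any real policy tool: the option names and numbers are invented, and the point is simply to show how this kind of "rationality" quietly discards anything that is not already a number.

# A minimal sketch of the decision procedure described above.
# Everything here is invented for illustration; Simon never put his method into code.

def rational_choice(options, desired_outcome):
    """Pick the option whose predicted outcome is closest to the desired
    outcome, considering only options with complete, numerical evidence."""
    usable = [
        opt for opt in options
        if isinstance(opt.get("predicted_outcome"), (int, float))
    ]
    # 'Messy' evidence (experience, ethics, context) never enters the score.
    return min(usable, key=lambda opt: abs(opt["predicted_outcome"] - desired_outcome))

options = [
    {"name": "Plan A", "predicted_outcome": 72},
    {"name": "Plan B", "predicted_outcome": 95},
    {"name": "Plan C", "predicted_outcome": None},       # no data, so ignored
    {"name": "Plan D", "predicted_outcome": "unclear"},  # not a number, so ignored
]

print(rational_choice(options, desired_outcome=100)["name"])  # Plan B

The telling line is the filter: Plan C and Plan D may in fact be the wisest courses of action, but because their evidence cannot be expressed as a number they never even reach the comparison.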

He said that by separating facts from value judgments, we would allow objective scientific knowledge to control the social environment. Qualitative data that deals with complicated ideas like ethics and emotions, beliefs and motivation and cultural contexts and obligations (i.e. any data that could not be expressed in numbers) had to be overlooked in the name of rationality.

In this, I would like to suggest, Simon is asking us to behave like the cheat in our version of the Turing Test.

But it helps to see his rather radical position in the context of the social turmoil and talk of revolution in the 1960s.


New Public Administration

In 1968, Dwight Waldo, one of the most respected American political scientists of the time, organised a conference at Syracuse University. No one aged over 35 was allowed to speak. The subject was public administration (i.e. the work of the Civil Service) in the context of the revolutionary feel of the times.
The conference participants raged against a society that was, in their eyes, full of discrimination, injustice, and inequality, and they argued that public administration supported – both practically and theoretically – this unbearable status quo. [1]
They wanted a civil service that was more democratic, accountable, and modern. In particular, they wanted public administrators to have an agenda: to do good for society. The conference was a jolt for political scientists. It was as if, up to then, everyone had accepted that public administration existed like an old boys’ club, dedicated to preserving the status quo. What was important about this conference was that from then on, more and more people began to believe that public administrators should have the goal of serving the people, being transparent about how they did so and, above all, working hard to become more efficient. Articles, books and more conferences followed under the banner of New Public Administration.

For those who thought career administrators old fashioned and inflexible, Herbert Simon became a hero.

But somehow during the recession of the 1970s, the emphasis of New Public Administration shifted subtly. Its proponents still attacked the old guard who just wanted to preserve the status quo; they still wanted public administration to be transparent and ever more efficient, but the idea of striving to do good for society somehow became subsumed into the search for a more rational, business-like and accountable way of running the public sector.

Why and how Simon's ideas (as modified by the New Public Administrators) became the basis for real policy-making in the UK will be the subject of another post.

[Next: Thatcher and the rise of New Public Management]


[1]   Gruening, Gernod. "Origin and theoretical basis of New Public Management" in International Public Management Journal, Elsevier, April 2001
[2]   Simon, Herbert A. "Nobel Lectures, Economics 1969-1980" ed. Lindbeck, Assar. World Scientific Publishing Co., Singapore 1992.
[3]   Simon, Herbert A. "Administrative Behavior: a study of decision-making in administrative organization" The Free Press, New York NY USA 1976
[4]   Simon, H A; Smithburg, D W; Thompson, V A. "Public Administration" Alfred A. Knopf, New York NY USA 1950
[5]   Simon, Herbert A. "A behavioral model of rational choice" in The Quarterly Journal of Economics, Vol 69 Issue 1, Oxford Journals, Oxford 1955

Tuesday 19 November 2019

The problem with problems


[Go to Introduction ]


There is more than one kind of problem



Nearly two decades after Herbert Simon's death, it seems extraordinary that someone of his intelligence should have proposed a methodology that, to our eyes, seems frankly silly. But perhaps we should beware. Are there areas in our lives where we make similar errors?

The aspect of Simon’s work that I’ve been writing about is his use of a logical methodology to help make rational decisions when faced with problems, particularly in the field of public administration. Here there is an unspoken assumption in his thinking, writing and teaching that all problems are of the same type, and that therefore all problems can be successfully navigated by using one methodology and logic.

One of the fundamental arguments of this blog is that not all problems are of the same type. In fact, in our daily lives we use the one word ‘problem’ to identify two entirely different types of issue, which I call Closed Problems and Open Problems.* It may be worthwhile to look at examples of each, together with their characteristics, in order to see that they not only differ in type, but also differ in how they can be successfully navigated.


Closed problems

Let us begin with closed problems.

Here is a typical one:
(If you want to know the answer and how to get it, you can look here).

What is typical in a closed problem like this is that it does not matter

  • who is doing the solving – you, me or anyone else
  • when we are solving it – yesterday or six years ago
  • where we are when we try to solve it.
And typically, it is the kind of problem that
  • has a solution (as we shall see, this is not true of all problems)
  • contains the solution within it – it is a kind of tautology
  • and has a methodology to be applied to reach that solution. This methodology always depends on your analysis of the starting position.
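Whatever the pictured puzzle was, here is a stand-in closed problem of my own choosing (a simple quadratic, not the one shown above), written as a short Python sketch to show what ‘closed’ means in practice: the method is fixed in advance and the answer is the same for anyone, anywhere, at any time.

# A stand-in closed problem (my own example, not the one pictured above):
# solve x**2 - 5*x + 6 = 0 using the quadratic formula.

import math

def solve_quadratic(a, b, c):
    """Return the real roots of a*x**2 + b*x + c = 0."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()                      # no real roots
    root = math.sqrt(disc)
    return ((-b - root) / (2 * a), (-b + root) / (2 * a))

# The answer does not depend on who runs this, or when, or where.
print(solve_quadratic(1, -5, 6))       # (2.0, 3.0)

Run it today or in ten years, in Cardiff or in Canberra, and the output is identical.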

Closed problems are not all algebraic. Sometime in the 1960s, a Russian computer scientist called Mikhail Moiseevich Bongard devised a kind of puzzle game as part of his research into pattern recognition.

His puzzles, now called Bongard problems, typically show you two sets of simple images, say A and B. What you have to do is put into words the characteristics that all the images in A have and the images in B do not – and vice versa.

Here is an example:

And you will probably have found that all the images in B have a dot which is royal blue, and the images in A do not.

If you look back at the characteristics of closed problems listed above, you will see that they apply to Bongard problems just as they do to algebra. Both are what von Bertalanffy would call ‘closed systems’.
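If it helps to see the structure rather than the pictures, here is a toy representation in Python (entirely my own invention; real Bongard problems are drawings, not feature lists). Each ‘image’ is reduced to a couple of named features, and solving the puzzle means finding a rule that is true of every image on one side and false of every image on the other.

# A toy Bongard-style problem (invented for illustration).
side_a = [
    {"shape": "circle",   "filled": True},
    {"shape": "triangle", "filled": True},
    {"shape": "square",   "filled": True},
]
side_b = [
    {"shape": "circle",   "filled": False},
    {"shape": "triangle", "filled": False},
    {"shape": "square",   "filled": False},
]

def separates(rule, group_a, group_b):
    """A candidate rule solves the puzzle if it is true of every image
    in group A and false of every image in group B."""
    return all(rule(img) for img in group_a) and not any(rule(img) for img in group_b)

rule = lambda img: img["filled"]        # "the figures in A are filled in"
print(separates(rule, side_a, side_b))  # True: this rule solves the puzzle

Whoever runs this, and whenever they run it, the same rule either separates the two sides or it does not: nothing about the answer depends on context.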

In this they stand in complete contrast to open problems.

Open Problems

Here are three more problems that I've come across recently:

 - How does one reconcile a desire to travel with a need to reduce one’s carbon footprint?
 - How does one best get home from here?
 - How should one vote in the General Election?

Now in each of these, the response will depend on
  • who is doing the responding – you, me or anyone else
  • when we are responding to it – yesterday or six years ago
  • where we are when we respond
And typically, these are the kind of problems that
  • do not have a solution; rather, they require a response
  • do not contain their own response – they are not tautologies; they need a response which is appropriate to their context
  • insofar as there is a methodology for making that response, it has to do with an understanding of the contexts which you inhabit
The characteristics of open problems listed above exist in what von Bertalanffy would call ‘open systems’.

Closed v. Open

The two kinds of problem are of different types, with absolutely contradictory characteristics:

Closed Problem:
  • Requires an answer
  • There is a method to get the answer
  • The answer does not depend on who/when/where
  • You know the start point; you don’t know the end point

Open Problem:
  • Requires a response
  • There is no one way to respond
  • A useful response depends on who/when/where
  • You don’t know the start point; you do know the end point

Herbert Simon and the New Public Management devotees assumed that all problems were of one kind: closed problems. And it is true that his methodology works for closed problems.

But when faced with such open systems as the National Health Service, the police force and national education, we can see that his methodologies for decision-making will fail.



* In the language of General Systems Theory, closed problems exist in closed systems, open problems in open systems.


Sunday 17 November 2019

The target culture: origins (1)

[Go to Introduction ]

Cochrane and the rise of evidence-based medicine






Only the most paranoid would believe that there are people who deliberately set out to corrupt the nation’s education systems, health service or police force. Even if someone wanted to do this, it would be very hard to plan and carry out. But if we go back to the start of the journey, I hope that we will be able to see how, in almost imperceptible steps, we moved from an inspired and humanitarian experiment to a profoundly damaging instrument for social manipulation.
It began with a very successful strategy to improve the education of doctors.

Archie Cochrane was born in Scotland in 1909. After gaining a Double First at Cambridge and volunteering as a doctor in the International Brigade in the Spanish Civil War (where he got involved in a bar with Ernest Hemingway, “an alcoholic bore”), he joined the British Army at the start of the Second World War.


Cochrane was captured during the fall of Crete in 1941 and found himself one of 8,000 prisoners of war living in overcrowded converted barracks. They were demoralised and hungry, living on a diet of some 650 calories a day (less than a quarter of the calories that most guidelines suggest men need): a mug of ersatz coffee at breakfast, and a bowl of soup and two slices of bread in the evening.[1]

It was in this wretched camp that the Germans appointed Cochrane as chief medical officer – in spite of his lack of qualifications: unlike most of his colleagues, Cochrane had some knowledge of medicine. His sick-bay had just three treatments: aspirin, a weak skin disinfectant and a mild defence against diarrhoea.

Sickness was rife in the camp. In August 1941, Cochrane was faced with an epidemic of oedema, the swelling of the legs that used to be called dropsy. The German camp authorities were reluctant to do anything.

Cochrane had a hunch that the cause of the epidemic was diet, and in particular vitamin deficiencies. He hoped that if he could prove this by providing evidence, the authorities would be moved to make changes.

To set this up, Cochrane bribed a guard to buy some yeast on the black market. He then set up a test with twenty patients in two separate wards. He gave two spoonfuls of yeast to those in one ward and a placebo to those in the other. It was one of the first attempts at a randomised controlled trial.[2] Although the sample was not very large and could not be supervised very closely, the results were conclusive. After four days, the patients who were taking yeast were measurably better, while those in the control group showed no change. Cochrane wrote up the results carefully and presented the evidence with numbers, graphs and tables to the German officers. The response to this presentation was dramatic. As Cochrane says, “The next morning, a large amount of yeast arrived; in a few days the rations were increased to provide about 800 calories a day.”
By mid-September, the epidemic was over.

It is now considered to have been one of the first modern attempts to use scientific methodology to gather evidence to determine whether a particular clinical approach was effective[3]. Although his methodology was far from perfect, the experience opened Cochrane’s eyes. He recognised that he had stumbled on something that had the potential to transform the basis on which doctors took clinical decisions.

Later in the war, a contrasting experience brought this home. By now he was in a prison camp in Germany, working in a tuberculosis ward alongside medical experts who had completed their training. Cochrane wrote, 

“I knew that there was no real evidence that anything we had to offer had any effect on tuberculosis, and I was afraid that I shortened the lives of some of my friends by unnecessary intervention.”[4]
His colleagues were following the standard practices that all doctors were taught at the time. They did not question whether or not they worked.

When the war was over, Archie Cochrane found that the medical profession in general, and medical education in particular, was still a culture that the British Medical Journal would later call “expert-based medicine”. By this they meant that experts taught trainee doctors dogmatic ‘facts’, and the trainees were expected to believe them unquestioningly and regurgitate them in their exams.
The character of Sir Lancelot Spratt in the 1954 film "Doctor in the House" caricatures this very well:



All Cochrane's experience made him challenge this. He welcomed the new National Health Service and believed it had a responsibility to be cost effective and efficient. So he continued to explore his idea of collecting data from meticulously run trials, information which other doctors could then use to support their decision-making. 

His work took him to South Wales to a unit concerned with lung disease among miners. He devoted himself for some twenty years to collecting evidence about the levels of coal dust in the pits and how this related to illnesses suffered by the coalminers. Professor Peter Elwood worked with Cochrane and he told the BBC,
“Archie Cochrane took research methods into the community and he used to refer to the general community as his laboratory. And so he got answers to do with very early disease, to do with the predictors of disease, factors that increased the risks of living with a disease, rather than just helping people live with the disease which is the main area of work in clinical practice.”[5]
So successful was this approach that Cochrane saw no reason why it could not be applied right across medical research. He even began to think of ways it could be used outside medicine. But certainly by the late 1950s and 60s, his work and reputation were making major contributions to a change in culture among doctors. By the late 1960s, fewer graduates in any field were willing to accept expert advice on trust. It was becoming clear that doctors and surgeons were making decisions – in all good faith – that were often questionable[6], and meanwhile modern organisations were beginning to use computers to collect and collate data. The paradigm was changing.

Cochrane continued to campaign for the medical community to adopt a more scientific methodology. His book Effectiveness and Efficiency: Random Reflections on Health Services, published in 1972, highlighted the lack of reliable evidence behind many healthcare interventions. The methodology he was proposing was eventually defined as,

“Evidence collected from a wide range of sources to help make better decisions, used alongside the expertise of the individual doctor, all for the benefit of the individual patient.”[7]
The idea was that doctors could make better decisions as to how best to help their patients by using evidence from many sources, adding to what they had learnt in training and supplementing their own experience. (This idea was so successful in practice that after Cochrane’s death, former colleagues set up the Cochrane Collaboration, with 10,000 people in more than a dozen centres around the world, who continue to this day to prepare and maintain reviews of randomised trials to provide evidence for medical policy-making.) Evidence-Based Medicine became the established paradigm. It was so successful, and became such a buzzword, that there was a tendency to forget that Cochrane’s original aim was for research-based evidence to support doctors’ decision-making by being used alongside their own experience, their knowledge of their patients, and their own common sense.

When the concept was borrowed for evidence-based decision-making in other fields, this omission was just as dangerous.



[Next: The problem with problems ]


[1]   Cochrane, Archie L; Blythe, M. "One Man’s Medicine: an autobiography of Professor Archie Cochrane" BMJ, London 1989
[2]   Holme, Chris. "Archie Cochrane, father of evidence based medicine" The History Company, 2013. http://historycompany.co.uk/
[3]   Although the first reported clinical trial was conducted by James Lind in 1747 to identify a treatment for scurvy, it was not until 1948 that the first paper on a randomised controlled trial was published in a medical journal.
[6]   “Expert opinion, experience, and authoritarian judgment were the foundation for decision making. The use of scientific methodology, as in biomedical research, and statistical analysis, as in epidemiology, were rare in the world of medicine.” (Sur, Roger L; Dahm, Philipp. "History of evidence-based medicine" Indian Journal of Urology, 2011 Vol 27 Issue 4)
[7]   “the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. ... [It] means integrating individual clinical expertise with the best available external clinical evidence from systematic research.” This branch of evidence-based medicine aims to make individual decision making more structured and objective by better reflecting the evidence from research. It requires the application of population-based data to the care of an individual patient, while respecting the fact that practitioners have clinical expertise reflected in effective and efficient diagnosis and thoughtful identification and compassionate use of individual patients’ predicaments, rights, and preferences. (Sackett, D L; Rosenberg, W M; Gray, J A; Haynes, R B; Richardson, W S. "Evidence based medicine: what it is and what it isn't" BMJ 1996 Vol 312 Issue 7023)

Why this blog?



When I was an adolescent, there were three situations for which my parents insisted I wear smart clothes and brush my hair: when going with them to visit the doctor, the bank manager, or my teachers. For us and for everyone we knew, the doctor, the bank manager and teachers were the living, breathing symbols of uprightness.

In 2018 a MORI poll showed that nurses, doctors and teachers were still the most respected professions (the financial crisis had put paid to bankers’ reputations by then), but this may not continue.

Since then we have seen so many stories in the media which undermine their reputations. I will look at them in later posts.

One of these stories was told by a teacher – let’s call her Anne – who works in a secondary school. She has always considered teaching to be a vocation. She thinks of herself as a committed class teacher working for the good of her pupils and of her school. She does not consider herself a cheat. But she tells how on more than one occasion she has artificially raised the marks of every child in her class by a full grade (although she kept them in ability order to avoid suspicion). And Anne goes on to admit that recently, when her less able pupils’ coursework would have given them low grades, she has completed their work for them, passing it off as their own, in order to make sure that they got better grades.

In another story, a Chief Inspector of Police coming up to retirement is being interviewed. He says how proud he is that his work has been used by politicians to claim that crime is falling and that detection rates are rising. He is proud too, that his work has helped his colleagues receive performance-related bonuses. He admits that he did this by getting the policemen and women under his command to fake the figures, but says that that’s what he had to do. He would tell his men and women to downgrade the seriousness of the offences they were recording, so that the number of serious crimes would seem to be going down. As an example, he tells of a villain who shot at another man at close range but missed and broke a window. The police recorded this as ‘criminal damage’ rather than attempted murder. It wasn’t just that, he goes on. His officers would boost their official detection rates by bribing convicted offenders to ‘admit’ to whole series of crimes they had not committed, in return for the charge against them being changed to a lesser offence.

A story from an NHS Hospital Trust told of a respected surgeon who was asked by her managers to postpone an urgently needed but complicated and time-consuming operation on one patient, to free up her timetable to do routine minor operations on four other patients. The reason she was given was that the hospital needed to meet waiting time targets. The surgeon did as she was asked and performed the four minor operations. Her first patient died soon after.

We, the general public, still find these stories shocking. But for those who work in those sectors, the stories are all too familiar. Those in the know may not approve, but they are aware that all this has been going on for a long time.

I started to research and write this blog because I wanted to know why people in professions that we trust so highly are being asked to lie and cheat; why they are being asked to act so anti-socially and to break the modern equivalents of the Hippocratic oath. I wanted to know why so many of those who are asked acquiesce. And I wanted to know why so many policy makers, managers and people on the front line continue to support a system that is clearly failing to achieve the targets for which they are being asked to sacrifice their integrity. Why do they go along with this? Why does no one call a halt?


What do I mean by ‘Cult’?

The reason that the title of this blog is The Cult of Artificial Stupidity is that its subject is not artificial stupidity itself, but rather the irrational belief that policies that result from artificially stupid behaviour, such as those imposed by the target culture – policies that clearly do not work – should be imposed on people who must then be made to follow them unquestioningly.

When I ask what makes a cult, I'm not asking why someone starts a cult. Nor am I qualified to explore how a cult ensnares and holds onto its adherents. In this blog all I need to ask is, "How can we characterise cults? When can we call something a cult rather than a philosophy or a modus vivendi?"

Michael Langone is Executive Director of the International Cultic Studies Association. The ICSA is a not-for-profit network with a mission to “provide information, education, and help to those adversely affected by or interested in cultic and other high-control groups and relationships.” He may or may not have his axes to grind, but he and his colleagues have certainly encountered many different cults in the forty years of ICSA's existence.

He has written a checklist intended as an analytical tool to describe what makes a cult.[1] It runs to some 15 bullet points. I would pick out five to support my case that the proponents of artificial stupidity are a cult:
  • Questioning, doubt, and dissent are discouraged or even punished.
  • The group is elitist, claiming a special, exalted status for itself …(i.e.) the group is on a special mission to save humanity.
  • The group has a polarized us-versus-them mentality, which may cause conflict with the wider society.
  • The group teaches or implies that its supposedly exalted ends justify whatever means it deems necessary. This may result in members’ participating in behaviors or activities they would have considered reprehensible or unethical before they joined the group 
  • The group is preoccupied with bringing in new members.
I hope that later posts in this blog will justify the notion that the only way to explain some of the behaviour and policies we will look at is by understanding their cultish nature.



[1]   Langone, Michael D. "Characteristics Associated with Cultic Groups" at https://www.icsahome.com/articles/characteristics, ICSA 2015. Accessed 25/11/2019

[Next: The target culture – origins (1) ]

Cheating at the Turing test




In the middle of the twentieth century people played a parlour game called the Imitation Game. It was for three people: a Man (A), a Woman (B) and an Interrogator (C). The Interrogator wasn’t allowed to see the other players or know who they were. The point of the game was for the Interrogator to ask the Man and the Woman a series of questions and to work out from the answers which of the people was the Man and which the Woman. The aim for the Man and the Woman was to outwit the Interrogator and stop him or her from guessing correctly. The only form of communication they could use was writing on paper.


In 1950, Alan Turing proposed a variation of the game. In Turing’s version, he wanted to replace the Man and the Woman with a human (of either sex) and a computer. The point of his game was for the Interrogator to ask questions to find out which of the two with whom he was communicating was the human and which the computer.



He argued that if the Interrogator couldn’t tell which was machine and which was human, then you could fairly say that the machine had shown human-like intelligent behaviour, or what we have come to call Artificial Intelligence.

Turing thought of this as a way of avoiding having to define what human intelligence was while still being able to compare it with a computer’s. He thought of it more as a test than a game.

But this assumes that the Interrogator will only be confused if the computer is very clever. What if, instead, the human designs her answers so as to seem unaware of the bigger picture, unaware of the need for a moral compass, and without either a sense of humour or of compassion? The Interrogator would surely be fooled, and the human would have an even chance of winning. The Interrogator might then complain that the human had cheated, by acting with Artificial Stupidity.

[Next: Why this blog ]