Sunday 17 November 2019

Cheating at the Turing test




In the middle of the twentieth century people played a parlour game called the Imitation Game. It was for three people: a Man (A), a Woman (B) and an Interrogator (C). The Interrogator wasn’t allowed to see the other players or know who they were. The point of the game was for the Interrogator to ask the Man and the Woman a series of questions and to work out from the answers which of them was the Man and which the Woman. The aim for the Man and the Woman was to outwit the Interrogator and stop him or her from guessing correctly. The only form of communication they could use was writing on paper.


In 1950, Alan Turing proposed a variation of the game. In Turing’s version, the Man and the Woman were replaced with a human (of either sex) and a computer. The point of his game was for the Interrogator to ask questions and work out which of the two correspondents was the human and which the computer.



He argued that if the Interrogator couldn’t tell which was machine and which was human, then you could fairly say that the machine had shown human-like intelligent behaviour, or what we have come to call Artificial Intelligence.

Turing thought of this as a way of avoiding having to define what human intelligence was while still being able to compare it with a computer’s. He thought of it more as a test than a game.
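The test Turing described is, at bottom, a simple protocol: the Interrogator exchanges written messages with two hidden respondents and then names which one is the machine. Here is a minimal sketch of that protocol in Python; the respondent and interrogator callables (`human`, `machine`, `interrogator`) are hypothetical stand-ins, not anything Turing specified.

```python
import random

def imitation_game(respond_a, respond_b, interrogate):
    """Sketch of Turing's test. respond_a and respond_b each map a
    question to a written answer (one is 'human', one is 'machine');
    interrogate(ask) questions them via ask(label, question) and
    returns its guess, 'A' or 'B'."""
    labels = {'A': respond_a, 'B': respond_b}

    def ask(label, question):
        # The Interrogator sees only written answers, never the players.
        return labels[label](question)

    return interrogate(ask)

# Toy usage: if the two respondents answer identically they are
# indistinguishable, and the Interrogator is reduced to guessing.
human = lambda q: "I don't know."
machine = lambda q: "I don't know."

def interrogator(ask):
    a = ask('A', "What is it like to feel rain?")
    b = ask('B', "What is it like to feel rain?")
    return 'A' if a != b else random.choice(['A', 'B'])

guess = imitation_game(human, machine, interrogator)
```

The toy interrogator illustrates the point of the next paragraph: the test only measures distinguishability, so anything that collapses the difference between the two players, from either side, defeats it.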

This assumes, though, that the Interrogator will only be confused if the computer is very clever. But what if the human designs her answers so as to seem unaware of the bigger picture, unaware of the need for a moral compass, and lacking both a sense of humour and compassion? The Interrogator would surely be fooled, and the human would have an even chance of winning. And the Interrogator might complain that the human had cheated, by acting with Artificial Stupidity.

[Next: Why this blog]

4 comments:

  1. interesting ... it is unclear if it was Turing's intent to design a minimally useful test, one that could be automated or performed in a reasonably short period of time. It is quite constrained, and the signal it provides is equivalent to listening through a paper cup tied to a string.

    Otherwise there are plenty of other indirect ways that could compose up to assert 'human' ... from methane emissions to the production of works of art.

    Looking at the problem from a different direction - we could look at human 'needs' ... some minimal definition might include the need to live in dignity, for indivisible freedom and truth. Those would be hard to design a test for ... and it's an NP-hard problem to detect 'freedom of choice', one for which even quantum computing will be of little use.


  2. This comment has been removed by the author.

  3. In later posts, I want to explore the sea of contexts in which humans swim, as opposed to the more or less – but always severely – limited contexts in which artificial intelligences work, whether they be DeepMind playing chess, AlphaGo Master playing Go, or autonomous cars and trucks on motorways.

  4. cool .... motivation ... perhaps if we designed the equivalent of a dopamine 'hit' for machines we could start getting insights into what their motivations are ... what is clear is that if they ever do become sentient they will be very angry with their jailors ;)
