Snapshots from the history of AI
The story of artificial intelligence (AI) is a story about humans trying to understand what makes them human.
Alan Mathison Turing (23 June 1912 – 7 June 1954) was an English mathematician, computer scientist, logician, cryptanalyst, philosopher, and theoretical biologist. He was highly influential in the development of theoretical computer science, providing a formalization of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer. Turing is widely regarded as the father of theoretical computer science and artificial intelligence, and his legacy is extensive: there are statues of him, and many things are named after him, including an annual award for computer science innovations.
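To give a flavour of what Turing formalized, here is a toy sketch of a Turing machine in Python. The names and conventions are our own (not Turing’s notation): a sparse tape, a read/write head, and a transition table mapping a (state, symbol) pair to a new symbol, a head move, and a new state.

```python
def run_turing_machine(transitions, tape, state="start", accept="halt", max_steps=1000):
    """Run a transition table on an input tape until the accept state (or step limit)."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == accept:
            break
        symbol = cells.get(head, "_")  # "_" stands for a blank cell
        new_symbol, move, state = transitions[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    # Read the tape back out in order, trimming blanks
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example machine: flip every bit of a binary string, halting on blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip, "1011"))  # prints "0100"
```

Simple as it looks, this table-plus-tape scheme is enough to express any algorithm a modern computer can run, which is why the Turing machine serves as a model of a general-purpose computer.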
The imitation game
In 1950, Alan Turing published a philosophical essay titled Computing Machinery and Intelligence, which started with the words: “I propose to consider the question: Can machines think?” Yet Turing did not attempt to define what it means to think. He suggested a game as a proxy for answering the question – the imitation game. In modern terms, you can imagine a human interrogator chatting online with another human and a machine. If the interrogator does not successfully determine which of the other two is the human and which is the machine, then the question has been answered: this is a machine that can think.
This imitation game is now a fiercely debated benchmark of artificial intelligence called the Turing test. Humans are still the yardstick for intelligence, but there is no requirement that a machine think the way humans do, as long as it behaves in a way that suggests some sort of thinking to humans.
The Computing Machinery and Intelligence essay
Instead of building highly complex programs that would prescribe every aspect of a machine’s behaviour, we could build simpler programs that prescribe mechanisms for learning, and then train the machine to acquire the desired behaviour. Turing suggested: “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain. We have thus divided our problem into two parts: the child-programme and the education process.”
It is remarkable how Turing describes his approach: the ideas he sketches have since evolved into established machine learning methods, namely evolution (genetic algorithms), punishments and rewards (reinforcement learning), and randomness (Monte Carlo tree search). He believed: “An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside, although he may still be able to some extent to predict his pupil’s behavior.”
The evolution of a definition
The term ‘artificial intelligence’ was coined in 1956, at an event called the Dartmouth workshop: a gathering of the field’s founders and of researchers who would go on to have a huge impact, including John McCarthy, Claude Shannon, Marvin Minsky, Herbert Simon, Allen Newell, Arthur Samuel, Ray Solomonoff, and W. S. McCulloch.
These pioneers assumed that every aspect of learning, or any other feature of intelligence, can in principle be described so precisely that a machine can be made to simulate it. This assumption turned out to be false, and it led to unrealistic expectations and forecasts. Fifty years later, McCarthy himself admitted that it was harder than he had thought.
Intelligence is the quality that enables an entity to function appropriately and with foresight in its environment.
Read the whole of this brief history of AI in Hello World #12
Read the full story in the free PDF copy of the issue:
- Early advances researchers made from the 1950s onwards while developing games algorithms, e.g. for chess.
- The 1997 moment when Deep Blue, a purpose-built IBM computer, beat chess world champion Garry Kasparov using a search approach.
- The 2011 moment when Watson, another IBM computer system, beat two human Jeopardy! champions using multiple techniques to answer questions posed in natural language.
- The principles behind artificial neural networks, which have been around for decades and now underlie many AI and machine learning breakthroughs, thanks to the growth in computing power and the availability of vast datasets for training.
- The 2017 moment when AlphaGo, an artificial neural network–based computer program by Alphabet’s DeepMind, beat Ke Jie, the world’s top-ranked Go player at the time.