The marketing landscape is ever-changing, with new technologies and innovations introduced every day. From print to broadcast to digital media, the industry has progressed at an unprecedented rate, and today that progress is being shaped by artificial intelligence.

Media consumption and behaviour have shifted from traditional platforms towards online devices thanks to advances in technology and computer science. This has created huge opportunities for businesses to collect and collate information about their customers and tailor their communications accordingly. Winning brands look to personalisation to achieve greater engagement and more relevant messaging.

We’ve put together a brief history of artificial intelligence: a five-minute read summarising a few of the big changes over time to get you up to speed.

When was artificial intelligence invented?

The term ‘artificial intelligence’ was first coined at a conference at Dartmouth College in Hanover, New Hampshire, in 1956. There, John McCarthy laid out the definition of AI as ‘the science and engineering of making intelligent machines’. The definition extends to the development of computer systems capable of performing tasks that require human intelligence, such as decision-making, object detection and solving complex problems.

However, the concept of AI can be traced back as far as ancient Greece, where classical philosophers tried to describe human thinking as a symbolic system, and the idea of intelligent machines was already ingrained in myth. The first real incarnation of AI, it is argued, was created by the British mathematician and computer scientist Alan Turing, often referred to as the father of artificial intelligence. During the Second World War, Turing used his mathematical genius to help develop the Bombe, an electromechanical machine built on an earlier Polish design that helped break the Nazi Enigma code, contributing significantly to Allied victory.

The Turing Test

The Turing Test was devised by Alan Turing as a way to test ‘a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human’.

Originally named the Imitation Game, the test has a human interrogator put a series of questions to both a human being and a computer without knowing which is which. Through a text-only channel, such as a computer screen, both respondents answer the questions, and the interrogator must work out which answers came from the machine. The computer is said to pass the test if its natural language responses are indistinguishable from those of the real human.
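To make the protocol concrete, here is a minimal sketch of the text-only setup in Python. It is purely illustrative rather than any canonical implementation: the callables answer_human, answer_machine, ask and judge are hypothetical stand-ins supplied by the caller.

```python
import random

def turing_test(answer_human, answer_machine, ask, judge, n_rounds=5):
    """Minimal sketch of the Imitation Game's text-only protocol.

    All four callables are hypothetical stand-ins: the two respondents
    map a question to an answer, `ask` produces the interrogator's next
    question, and `judge` inspects the anonymised transcripts and names
    the label it believes belongs to the machine.
    """
    # Randomly assign the hidden labels so the judge cannot rely on order.
    respondents = dict(zip("AB", random.sample([answer_human, answer_machine], 2)))

    transcripts = {label: [] for label in respondents}
    for _ in range(n_rounds):
        question = ask()
        for label, respond in respondents.items():
            # Text-only channel: the judge sees words, never the speaker.
            transcripts[label].append((question, respond(question)))

    accused = judge(transcripts)  # "A" or "B"
    machine = next(l for l, r in respondents.items() if r is answer_machine)
    return accused != machine     # True means the machine passed
```

A judge guessing at random would be fooled half the time; Turing’s point was that a machine passes only when a competent interrogator can do no better than chance.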

Once the gold standard in AI development, the Turing Test is now the subject of debate: given the sophistication of modern AI, it may no longer be up to the task. With the resounding success of ChatGPT, machines can already produce convincingly human-like conversation, and measuring machine intelligence may require more demanding tests.

Post-war

After the war, research and development in computing and AI continued to gain momentum until the so-called ‘AI winter’ of the 1970s, which marked a loss of confidence and a slowdown of investment in AI research. The stagnation is often linked to the publication of Marvin Minsky and Seymour Papert’s Perceptrons, a work which outlined previously unrecognised limitations in the field. From 1974 to 1980, research continued steadily but at a far less aggressive rate than before. Interest was rekindled in the 1980s when the British government began funding research to compete with Japan’s Fifth Generation Computer Systems (FGCS) project, launched in 1982 in an attempt to create a fifth-generation computer. The resurgence was short-lived, however, as another significant ‘winter’ took hold from 1987 to 1993 when the market for early specialised AI hardware collapsed.

1986 saw the first use of the term ‘deep learning’. Deep learning, a type of machine learning, allows an AI system to refine its performance automatically over time by absorbing huge amounts of unstructured data. To do this, it uses neural networks: algorithms, loosely modelled on the human brain, that are designed to identify underlying relationships and patterns in sets of data.
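As a rough illustration of how a neural network learns patterns from data, here is a minimal sketch in Python using NumPy. It is not drawn from any system mentioned above; it simply trains a tiny two-layer network, by gradient descent on squared error, to reproduce the XOR function, a classic pattern a single artificial neuron cannot capture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the XOR function. A lone neuron cannot learn it,
# but a network with one hidden layer can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Randomly initialised weights and biases: 2 inputs -> 4 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

lr = 1.0  # learning rate
for _ in range(10_000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error and nudge every weight
    # a little in the direction that reduces it.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())  # converges towards [0, 1, 1, 0]
```

Deep learning scales this same recipe up: many more layers, millions of weights, and vast quantities of data instead of four rows.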

Since the early-to-mid 1990s, research has continued to accelerate, and the years leading up to the present have been regularly punctuated with landmark achievements. Notable examples include IBM’s ‘Deep Blue’, a computer designed to play chess, beating the then reigning world chess champion Garry Kasparov in 1997, and a later question-answering IBM system named ‘Watson’ beating the reigning champions of the American quiz show Jeopardy! in 2011.

Types of artificial intelligence

There are three main types of artificial intelligence. These are:

  1. Artificial Narrow Intelligence
  2. Artificial General Intelligence
  3. Artificial Super Intelligence

Whilst artificial narrow intelligence is fairly common throughout society today, we are still some way off artificial general intelligence. Artificial super intelligence, meanwhile, remains a distant prospect, if it ever arrives at all.

Artificial Narrow Intelligence

The lowest and most common level of artificial intelligence is known as ‘artificial narrow intelligence’. This type of AI uses machine learning algorithms to complete a single task without the need for human intervention. It is what powers everyday virtual assistants such as Apple’s Siri and Amazon’s Alexa, and self-driving cars rely on the same family of technology.

Artificial General Intelligence

Also known as ‘strong’ AI, this type of artificial intelligence would be able to perform any task a human being is capable of, including finding solutions to unfamiliar problems. However, there is some dispute over the proper definition of human intelligence: psychologists lean towards a definition based on survival and adaptability, whilst computer scientists argue it should be defined by the ability to achieve goals.

Regardless of this ongoing debate, no AI yet exists that can match the vast and complex capabilities of the human mind.

Artificial Super Intelligence

This type of AI would supersede human intelligence and is, for now, purely hypothetical: the stuff of science fiction. Whilst it lies well beyond today’s capabilities, it is not inconceivable that one day, given the rapid advancement of machine learning, a super-intelligent system could come into being, one that would far surpass even the brightest of human minds, with cognitive and reasoning skills entirely its own.

The traits that would come to define super AI are distinctly human, such as the capacity to develop emotions, beliefs and desires.

So, there you have it: Digital Willow’s brief history of artificial intelligence. At Digital Willow, we stay at the cutting edge of the latest technological developments, seeking new ways to improve our clients’ digital marketing campaigns. If you want to work with a team that is always one step ahead of the trends, why not get in touch today?