
A Brief History of Artificial Intelligence

The concept of machines functioning and thinking like humans isn’t new. The roots of our fascination with artificial intelligence (AI) can be traced back to antiquity, with myths, stories, and rumours of mechanical men endowed with human-like intelligence or consciousness.

Eventually, myth turned into inspiration, and the seeds of modern AI were planted by classical philosophers, who attempted to describe human thinking as the mechanical manipulation of symbols.

The result? Their influence enabled the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning.

The beginning of a revolution

In 1950, Alan Turing, a British polymath, pioneered the idea of artificial intelligence and its possibilities. He proposed that machines could use available information and reason about it to solve problems and make decisions. This was the basis of his 1950 paper, ‘Computing Machinery and Intelligence’.

However, even though Alan Turing inspired many others, he was unable to take his research further and build a proof of concept.

So what exactly stopped him?

Firstly, computers, as they existed, needed a fundamental change. Before the 1950s, computers lacked an essential prerequisite for intelligence: they could only execute commands, not store them. Secondly, computing was extremely expensive. Only prestigious universities and big tech companies could afford to gamble on such a new concept.

The conference that pushed the change

Five years after Alan Turing’s paper was published, the proof of concept arrived in the form of Allen Newell, Cliff Shaw, and Herbert Simon’s program: Logic Theorist.

Logic Theorist was designed to mimic the problem-solving skills of a human and was funded by the RAND (Research and Development) Corporation. It was presented at the historic Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hosted by John McCarthy and Marvin Minsky in 1956.

The conference helped align attendees around the belief that AI was achievable, and it catalyzed the next 20 years of AI research.

The booms and the winters

From 1957 to 1974, AI flourished. Computers could store more information and were faster, cheaper, and more accessible than ever before. Machine learning algorithms improved, and researchers developed a better understanding of how to apply them.

Expectations for AI grew stronger. But while a basic proof of concept existed, there was still a long road to natural language processing, abstract thinking, and self-recognition.

Researchers were still working with computers that simply lacked sufficient computing power: they could not store data efficiently or process it fast enough. Eventually, funding receded, and research slowed for ten years.

In the 1980s, research sparked again. John Hopfield and David Rumelhart popularized ‘deep learning’ techniques that allowed computers to learn from experience. Simultaneously, Edward Feigenbaum introduced expert systems, which mimicked the decision-making process of human experts.

However, progress was still slow. Investors and governments pulled funding from AI research because of the high costs, resulting in the second AI winter, from 1987 to 1993.

Ironically, AI thrived in the absence of funding and public hype. After 1993, AI achieved significant landmarks. In 1997, reigning world chess champion and grandmaster Garry Kasparov was defeated by IBM’s Deep Blue, a chess-playing computer. In the same period, speech recognition software developed by Dragon Systems was implemented on Windows. Even displaying human emotion was achievable, as evidenced by ‘Kismet’, the robot developed by Cynthia Breazeal that could recognize and display emotions.

A steady road

It’s important to point out that the way we code AI hasn’t fundamentally changed. What has changed is that our computers can now store far larger quantities of information and process them much faster than they could 30 years ago.

What awaits us

We’re currently experiencing the age of ‘big data’, in which we continuously collect huge amounts of information, far too much for humans to process on their own. The application of artificial intelligence has already started to show its benefits in several industries, such as technology, banking, marketing, and entertainment.

So what is in store for the future? 

In the immediate future, AI language processing is the next big thing. In the long term, the goal is general intelligence: a machine that matches human cognitive abilities across all tasks.