A Complete History of Artificial Intelligence: From Turing to Today

What’s the history of artificial intelligence?

The idea of inanimate objects coming to life has been around for a very long time. Inanimate objects are things that are not alive, such as books, chairs, and machines.

Don’t believe it?

Ask historians or archaeologists and they will tell you that the first “robot” to walk the earth, in ancient Greek mythology, was a bronze giant called Talos. Talos was an animated statue commissioned by Zeus for his son Minos, the legendary first king of Crete.

This bronze giant was created to protect the island of Crete. Historians tend to trace the idea of the automaton back to early craftsmen who built self-moving machines.

However, if we cast our nets back even further, more than 2,000 years in fact, we find that an extraordinary set of ideas about robots, automata, human enhancement, and artificial intelligence arose from mythology long before any real machine existed.

The field of artificial intelligence, however, was not officially founded until 1956, at a conference at Dartmouth College. In this article, we will briefly retrace the history of computers and artificial intelligence.

The history of artificial intelligence

Artificial intelligence, as a discipline, has been around for roughly 60 years. It draws on a set of sciences, theories, and techniques that include computer science, mathematical logic, statistics, and more.

These theories and techniques aim to imitate the cognitive abilities of human beings, and they have allowed computers to perform progressively complex tasks that could previously only be delegated to a human.

As mentioned above, the idea of artificial intelligence first appeared in myths and stories. These tales planted the seed, but philosophers also helped drive the idea forward.

By the 1940s, a programmable digital computer had been created. This device inspired many scientists to begin seriously experimenting with the possibility of building an electronic brain that could think and reason like a human being.

At this time, there was still no field known as artificial intelligence. During the 1940s, one of the greatest thinkers to explore the possibility of intelligent machines was the British scientist Alan Turing.

Today, Alan Turing is regarded as the father of modern computer science. Turing’s work was mainly theoretical, but he also built practical applications, particularly in the field of cryptography. Thanks to his work in cryptography, Turing helped the Allies crack German military codes during the Second World War.

In 1936, Turing wrote a paper titled “On Computable Numbers, with an Application to the Entscheidungsproblem”. This paper laid the foundation for concepts such as computability, calculability, and the Turing machine.

The Turing machine was not a physical machine but a theoretical model that could execute any computable sequence of operations. For the first time, it introduced what we now think of as computer software: a table of instructions that tells a simple device what to write and where to move next.
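To make that idea concrete, here is a minimal, purely illustrative sketch in Python of a Turing machine that increments a binary number. The tape, states, and transition table are hypothetical choices made for this example; they are not anything Turing himself wrote, only one small instance of the general model.

```python
# A minimal sketch of a Turing machine simulator (illustrative only).
# This hypothetical machine increments a binary number written on the tape.

def run_turing_machine(tape, rules, state="scan_right", blank="_"):
    """Run a transition table until the machine reaches the 'halt' state."""
    tape = list(tape)
    head = 0
    while state != "halt":
        # Extend the tape with blanks if the head moves past either end.
        if head < 0:
            tape.insert(0, blank)
            head = 0
        if head >= len(tape):
            tape.append(blank)
        symbol = tape[head]
        # Each rule maps (state, symbol) -> (symbol to write, move, next state).
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Transition table for binary increment: scan to the right end of the number,
# then propagate a carry back toward the left.
rules = {
    ("scan_right", "0"): ("0", "R", "scan_right"),
    ("scan_right", "1"): ("1", "R", "scan_right"),
    ("scan_right", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "R", "halt"),
    ("carry", "_"): ("1", "R", "halt"),
}

print(run_turing_machine("1011", rules))  # prints "1100" (11 + 1 = 12)
```

Everything the machine “knows” lives in that table of rules, which is exactly the sense in which the Turing machine anticipated software: change the table and the same simple device computes something different.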

In 1950, Turing published another paper, titled “Computing Machinery and Intelligence”. In it, Turing described what is now known as the Turing Test. This test, which Turing called the imitation game, was designed to assess the presence or absence of “human” intelligence in a machine.

Turing died four years later, in 1954, of cyanide poisoning. But his work played a significant role in the Dartmouth Conference, which was held two years after his passing.

The birthplace of artificial intelligence

The field of artificial intelligence wasn’t formally founded until the summer of 1956, at a conference at Dartmouth College in Hanover, New Hampshire. The conference was titled the “Dartmouth Summer Research Project on Artificial Intelligence” and was funded by the Rockefeller Foundation.

During this conference, the term “artificial intelligence” was coined for the first time. The conference was organized around the conjecture that every aspect of learning, or any other feature of intelligence, could in principle be so precisely described that a machine could be made to simulate it.

The attendees set out to find ways to make machines use human language, improve themselves, and solve kinds of problems that had until then been reserved for humans.

During this conference, the attendees worked together for two months to make a practical leap toward creating artificial intelligence programs and algorithms. By pooling their expertise across different subjects, the scientists and programmers in attendance hoped to make substantial advances over the course of the workshop.

The researchers who attended the conference included:

  • John McCarthy, a professor at Dartmouth and the main promoter of the conference
  • Marvin Minsky, a researcher in mathematics and neurology at Harvard
  • Nathaniel Rochester, director of information research at an IBM research center
  • Claude Shannon, a mathematician at Bell Telephone Laboratories, already known for information theory
  • Julian Bigelow, a pioneer in computer engineering
  • Professor D.M. Mackay
  • Ray Solomonoff, the inventor of algorithmic probability and a founder of algorithmic information theory
  • John Holland, a pioneer in what became known as genetic algorithms

The people who attended this workshop are widely regarded as the founding fathers of artificial intelligence.

Although the conference surfaced many different questions and ideological positions, the scientists in attendance also discovered that the machines of the time lacked the necessary computational capacity. Still, the conference played an unquestionable historical role in the AI field.

Many of the experts in attendance met at Dartmouth for the first time and became lifelong friends. In the decades after the conference, some of the biggest achievements in the field of AI were obtained by these same scientists or by their students.

This conference helped inspire commercial developers and researchers to start investing in AI research. Between 1957 and 1974, more was invested in the study of AI than ever before. During this period, computers became faster and could store more information, and machine learning algorithms also advanced.

But by 1974, progress in the field wasn’t as fast as some investors had hoped. This caused major funding partners in the U.S. and British governments to stop funding research into artificial intelligence, under ongoing pressure to produce tangible results and continued criticism from naysayers. The period of reduced funding that followed is commonly known as the AI winter.

Seven years after the funding stopped flowing, a visionary initiative by the Japanese government inspired governments and industry to once again pour billions of dollars into AI research.

It is because of these investments that the study of artificial intelligence was able to advance. In the first decades of the 21st century, machine learning began to be applied successfully at scale.

By the 1990s and 2000s, many of the breakthrough goals that the founding fathers of artificial intelligence had envisioned at that small Dartmouth conference back in 1956 had been achieved.

For instance, in 1997, the best chess player in the world, Garry Kasparov, was defeated by IBM’s Deep Blue, a chess-playing computer program. The event was highly publicized, as it was the first time a reigning world chess champion lost a match to a computer.

This match served as a huge step toward building artificially intelligent decision-making programs. That same year, speech recognition software developed by Dragon Systems was released for Windows.

Final Thoughts

Since 2010, the field of artificial intelligence has experienced a new boom, primarily caused by improvements in computing power, as well as access to massive amounts of data.

We now live in the age of “big data,” an age in which we can accumulate amounts of information far too vast for any single person to process. The application of artificial intelligence in this respect has already been quite productive in industries such as technology, banking, marketing, and entertainment.

So, what is in store for the future?

In the immediate future, artificial intelligence will continue to shape most aspects of modern life. In the long term, the goal is to attain artificial general intelligence: a machine that matches or exceeds human intellectual abilities across the board.

Ready to dive deeper into the world of artificial intelligence? Sign up for our free artificial intelligence course for Africa.

Learn and stay up-to-date on the latest trends, technologies, and applications of AI. Whether you’re a business leader, student, or simply curious about the power of AI, our beginner’s course will provide you with valuable insights and resources to help you harness the potential of this transformative technology. Don’t miss out – sign up now and join the AI revolution!
