The Historical Development and Evolution of Artificial Intelligence (AI)

Admin / December 15, 2023

Artificial Intelligence has gone through a massive boom in the past decade, especially in recent years.

After ChatGPT and Bard, we now have Gemini, Google's most capable AI model to date. New AI developments keep arriving, and there is a lot more to come.

AI has taken the world by storm, but it didn't happen all of a sudden. There is a long history of AI development and evolution.

The Birth of AI: 1956

The desire to create machines that can think and act on their own has existed since ancient times.
However, AI was formally born in 1956, when John McCarthy coined the name "Artificial Intelligence" at the Dartmouth Workshop.
Before 1956, there were theoretical foundations of AI, especially in the 1940s and 1950s.
Some of the valuable theoretical contributions before 1956 are:
  • Warren McCulloch and Walter Pitts proposed a mathematical model of artificial neurons in 1943. Their model could perform simple logical functions.
  • Donald Hebb proposed the famous Hebbian Learning Rule in 1949, which is considered a foundational concept of Artificial Neural Networks (see the sketch after this list).
  • In 1950, Alan Turing published his renowned paper "Computing Machinery and Intelligence." He discussed the thinking capability of machines and their intelligent behavior and introduced the Turing test.
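In modern notation (a common textbook formulation rather than Hebb's original wording), the Hebbian rule says that the connection between two neurons is strengthened whenever both are active at the same time:

```latex
\Delta w_{ij} = \eta \, x_i \, x_j
```

Here, w_ij is the weight of the connection between neurons i and j, η is a small learning rate, and x_i and x_j are the two neurons' activations: "neurons that fire together, wire together."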
Two practical implementations of AI before 1956 are:
  • Marvin Minsky, along with Dean Edmonds, created the first neural network machine, SNARC, in 1951. It used 3,000 vacuum tubes to simulate a network of 40 neurons.
  • In 1951, Christopher Strachey wrote a program for checkers. In the same year, Dietrich Prinz wrote a program for chess.

The Era of AI Maturation: 1956-1979

After the field got its name, researchers gained new direction and recognition. That's when AI matured with continuous developments.
1958: The first AI programming language, LISP (LISt Processing), was created by John McCarthy.
1959: Arthur Samuel created the first self-learning program for checkers and described it in his paper "Some Studies in Machine Learning Using the Game of Checkers," coining the term "machine learning" in the process.
1964: Daniel G. Bobrow wrote an AI program named STUDENT that could solve algebra word problems.
1965: Edward Feigenbaum and his team introduced Dendral, an AI project for identifying organic molecules. Dendral became the first expert system in AI history.

What is an Expert System?

An Expert System is a computer system that imitates and emulates the decision-making ability of a human expert. Expert systems were the first truly successful AI software, delivering promising results on complex, specialized problems.
An Expert System is further divided into two subsystems. The first subsystem is the Knowledge Base, which contains the facts and basic rules. The second subsystem is the Inference Engine, which applies the rules to the facts to derive new facts.
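To make the two subsystems concrete, here is a minimal sketch of a forward-chaining Inference Engine in Python. The facts and rules are hypothetical, invented purely for illustration; they are not drawn from Dendral or any real expert system.

```python
# Knowledge Base: known facts plus "if premises, then conclusion" rules.
# These facts and rules are made up for the example.
facts = {"has_fever", "has_rash"}
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles"}, "recommend_lab_test"),
]

# Inference Engine: forward chaining. Keep applying rules to the known
# facts until no rule can derive anything new.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:  # all premises known?
            facts.add(conclusion)  # derive a new fact
            changed = True

print(facts)
# {'has_fever', 'has_rash', 'suspect_measles', 'recommend_lab_test'}
```

Real expert systems like MYCIN worked at far larger scale and often chained backward from a hypothesis, but the core idea, rules applied to facts to produce new facts, is the same.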
1964-1967: Joseph Weizenbaum created a natural language processing program called ELIZA, the first chatterbot (chatbot) in the history of AI. However, its responses were scripted and often vague.
1968-70: Terry Winograd wrote a program called SHRDLU. An incredible invention for its time, it could converse in ordinary English and carry out commands in a simulated blocks world.
Late 1960s: Marvin Minsky and Seymour Papert built a robot arm (the Minsky Arm) that could stack blocks.
1972: WABOT-1, the first intelligent humanoid robot in AI history, was created in Japan. It had limb control, an artificial mouth to speak Japanese, artificial eyes and ears, and more.
1972: MYCIN, an early expert system derived from Dendral, was developed. It used AI to identify infection-causing bacteria and diagnose blood-clotting diseases, and it suggested antibiotics along with the dosage.

First AI Winter: 1974-1980

Machine translation failed badly: the 1966 ALPAC report concluded that it had not lived up to the expectations set by AI researchers and developers, and funding for the whole effort was discontinued.
In 1969, Marvin Minsky and Seymour Papert published the book Perceptrons, highlighting the limitations of single-layer perceptrons, such as their inability to learn the XOR function. The book contributed to a sharp decline in research on single-layer artificial neural networks.
The final blow was the Lighthill report, "Artificial Intelligence: A General Survey," published by James Lighthill in 1973. He evaluated the academic research done in the field of AI and concluded that the results were unsatisfactory and that AI's impact had not matched the expectations and promises of its researchers.
All of these circumstances cut funding for AI research and development and, amid criticism and disappointment with the results, turned people's interest away. This era is therefore called the First AI Winter.

AI Boom: 1980-1987

This period is known as the AI boom because of the rapid pace of development and research in the field. A new AI industry emerged, in which software and hardware companies, including Symbolics, Lisp Machines, and Aion, spent over a billion dollars on in-house AI research and development.
1980: XCON, an expert system, entered commercial use at Digital Equipment Corporation, where it automatically configured customer orders for VAX computers.
1981: The Japanese government allocated a massive $850 million to AI through its Fifth Generation Computer project.
1982: John Hopfield proved that the Hopfield net, a form of neural network, had the ability to learn and process information.
1986: Parallel Distributed Processing was published by David Rumelhart and James McClelland. It extended the perceptron idea to multi-layer networks and helped popularize backpropagation for training them.
1989: HiTech and Deep Thought were incredibly intelligent chess-playing programs that defeated some chess masters.
1989: The book "Analog VLSI Implementation of Neural Systems" was written by Carver Mead and Mohammed Ismail.

Second AI Winter: 1987-1993

Even though some AI inventions, like XCON and Lisp machines, produced promising results, AI soon faced another serious setback: the Second AI Winter.
The main reason was the rise of desktop computers from Apple and IBM, which were cheaper and more powerful than the specialized machines built for AI. With lower cost and easier maintenance, desktop computers gained popularity, and AI systems were left behind.
The massive and quick success of desktop computers led to cuts in AI funding. The main goal of DARPA and other investors shifted to funding advanced computer hardware and its development.
Note: DARPA is a research and development agency of the US Department of Defense. It has funded AI research since the field's early days.
AI had not met expectations, and its uses at the time were limited. Japan's Fifth Generation Computer project also ended in failure in the early 1990s.
All of this led to the shutdown of AI research programs and companies; more than 300 AI companies had closed by 1993.
The period was a serious setback for AI researchers and development: for lack of investment, commercial AI development largely stopped.

Rise of AI: 1993-2011

AI had suffered funding cuts for more than a decade across the two AI winters, which halted progress and innovation in the field. It was difficult for AI systems to compete with existing desktop computers or to offer intelligent, low-cost alternatives.
After all these setbacks, AI started regaining popularity in the mid-1990s. The research and inventions of this era helped AI rise again and laid the foundation for the AI development we have today.
Researchers and AI developers recognized the mistakes that had kept AI from meeting expectations, and they chose to focus on specific, well-defined problems where AI could prove its benefit to mankind. Computers also had much better processing power and capabilities, making it far easier to run complex AI programs.
1997: IBM's Deep Blue became the first computer to defeat a reigning world chess champion, Garry Kasparov. It showed that machines could outplay humans at tasks long thought to require human intelligence.
1999: Sony introduced the first consumer model of AIBO, its series of AI-powered robotic dogs. Sony continued manufacturing and developing new models until 2006.
2000: Kismet, the first AI robot that could recognize and simulate emotions, was developed by Cynthia Breazeal in the late 1990s. It had expressive ears, lips, eyebrows, a jaw, and a movable head.
2002: The first-generation Roomba (an automatic cleaning robot) was introduced to the market. It could detect obstacles and dirty areas on the floor.
2005: Stanley, a Stanford University robot car, drove 131 miles autonomously to win the DARPA Grand Challenge. It required no human intervention and completed a desert course it had never rehearsed.
2006: Facebook, Netflix, and other companies started using AI to attract more users through targeted advertising and to improve the user experience on their platforms.
2007: A CMU-built robot car successfully completed a 55-mile course in an urban environment, winning the DARPA Urban Challenge. It could understand road traffic and drive autonomously while following traffic rules and keeping other vehicles safe.
2011: IBM presented its question-answering machine, Watson, which defeated two former champions of Jeopardy!, an American television game show. The Watson project was led by David Ferrucci.

The Current Era: 2011-Present

AI developed a lot in the 1990s and 2000s, but its successes were still presented as achievements of plain computer science. The reason was developers' fear that the label "AI," with its history of hype and disappointment, could hurt development, funding, and future adoption.
AI entered various industries, including data mining, search engines, social media platforms, banking, and many more. This rise made developers confident enough to call their inventions and technology "AI" again.
In the 21st century, companies began to realize the importance of data and data processing. At the same time, access to affordable computers, faster processors, high-speed internet, large storage devices, and other developments helped AI grow to the next level.
2011: Apple released Siri, which proved to be one of the most successful voice-powered personal assistants. It was later built into a range of devices to understand users' commands and act on them.
2012: Andrew Ng and Jeff Dean introduced a neural network that could recognize cats. The network looked at 10 million random images taken from YouTube videos, covering around 20,000 kinds of objects, and it learned to recognize cats on its own, without being fed any labels.
2013: DeepMind trained a neural network with a deep reinforcement learning algorithm to play Atari video games and maximize its reward. Given little prior knowledge beyond the screen and the score, it learned 49 games on its own and played many of them better than human experts.
In the same year, Tomas Mikolov and his co-authors patented and published Word2vec. It could learn word embeddings on its own from raw text, capturing the relations between words as geometric relations between vectors.
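As a rough illustration of what Word2vec does, here is a minimal sketch using the gensim library's Word2Vec implementation (a later reimplementation, not Mikolov's original code). The toy corpus is invented for the example; real embeddings are trained on billions of words.

```python
from gensim.models import Word2Vec

# Tiny made-up corpus: each sentence is a list of tokens.
sentences = [
    ["king", "rules", "the", "kingdom"],
    ["queen", "rules", "the", "kingdom"],
    ["dog", "chases", "the", "cat"],
    ["cat", "chases", "the", "mouse"],
]

# Train a small skip-gram model (sg=1); vector_size is the embedding dimension.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=200)

# Words used in similar contexts end up with similar vectors.
print(model.wv.similarity("king", "queen"))   # comparatively high similarity
print(model.wv.most_similar("cat", topn=3))   # nearest neighbors in vector space
```

Because "king" and "queen" appear in identical contexts in this toy corpus, their vectors end up close together, which is the relation-as-geometry idea in miniature.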
2014: Facebook introduced DeepFace, its face recognition system for identifying people in images. It could easily tell whether two different photos showed the same person.
Baidu invested in AI and expanded its research in deep learning to compete with Google.
Microsoft started research on its virtual assistant, Cortana.
2015: Over 1,000 AI experts signed an open letter warning of a military AI arms race. They also called for a ban on autonomous weapons.
The original AlphaGo defeated a professional Go player, Fan Hui.
2016: Sophia, a social humanoid robot, was unveiled by Hanson Robotics. It had the ability to understand and imitate emotions, and this breathtaking invention took the world by storm.
2017: DeepMind built AlphaGo Zero, a version of AlphaGo that retrained itself from scratch. Starting from random play and given nothing but the rules of the game, it used reinforcement learning to surpass the original AlphaGo.
Nvidia created an AI program that could generate realistic images of celebrities who don't exist. Trained on a large database of real celebrity photos, it produced convincing images of fake celebrities.
Google introduced AutoML, which could create machine learning software at a low cost and with minimal human involvement.
Governments started using facial recognition systems to detect criminals and wanted persons.
2018: Google introduced Duplex, which could make calls on behalf of users and book appointments.
Google's involvement in developing AI for US military drones (Project Maven) was disclosed. It sparked anger among employees and AI researchers, and Google later declared that it would not renew the contract.
OpenAI introduced OpenAI Five, which could play Dota 2 and beat amateur players. It also competed against professional players but lost.
OpenAI created Dactyl, a robotic hand that learned to manipulate physical objects; it later learned to solve a Rubik's Cube one-handed.
2019: Samsung researchers demonstrated a deepfake system that could create a realistic fake video from just a single picture. The videos looked real and were difficult to identify as fake.
GPT-2 was fully released by OpenAI in late 2019. Given a few sentences as a prompt, it could generate paragraphs of coherent text.
2020: According to a survey, over 50% of respondents reported using AI in at least one business function, including product development, manufacturing, marketing, supply-chain management, service operations, risk, and finance.
AI helped COVID-19 researchers analyze large amounts of data in the fight against the coronavirus.
Baidu demonstrated the automated driving capability of its AI system, showing it could drive safely in busy cities. It also introduced a 5G Remote Driving Service that lets an operator drive the car remotely in case of an emergency.
OpenAI's GPT-3 took the world by storm. Using deep learning, it could write code, prose, poetry, and more.
2021: OpenAI revealed DALL-E 1 early in the year. The powerful AI could generate images from text.
Google released TensorFlow 3D, a library for developing and training AI models on 3D data.
IBM launched a cloud-based AI platform that could invent new molecular structures.
GitHub launched Copilot, an AI pair programming tool to help developers code better and with less hassle.
2022: DALL-E 2 was launched by OpenAI as a much-improved version. It could generate more realistic images with four times the resolution of the original.
DeepMind created AlphaCode, a code-generation system reported to perform better than 72% of human coders, even on complex problems.
OpenAI released the famous chatbot ChatGPT. It surprised the general public and gave everyday users an idea of what AI could do.
2023: OpenAI released GPT-4, which is available through ChatGPT Plus. It is far more capable than GPT-3.
Google launched Gemini, its most capable AI model, which many hope can rival GPT-4.
We have seen the rise of AI tools such as ChatGPT, Jasper.ai, AIaaS (AI-as-a-Service) offerings, AutoML, and more. With powerful models like Gemini arriving at the end of the year, the coming year could be even more exciting for AI development and research.

AI Future: 2024 and The Years to Come

We have taken a deep look at AI's history from its birth. It is evident that progress in recent years has become fast-paced and result-driven.
With companies like IBM, Baidu, Google, Microsoft, and OpenAI pushing the field forward, the future of AI looks bright. There is a lot more to come that can benefit mankind and reach into more sectors of life.
But along with that, there are risks of autonomous weapons, breach of privacy, job loss, deepfakes, social manipulation, socioeconomic inequality, etc.
That is the reason AI researchers and experts want to regulate AI research and development. In 2023, Elon Musk, Steve Wozniak, and others signed an open letter calling for a pause on training the most powerful AI systems, citing risks to society and humanity.
We will see more developments in AI, but it is likely to be regulated by governments in the near future.