History Of Artificial Intelligence

by Josh Biggs in Tech on 6th February 2021

Artificial Intelligence (AI) has been around as a scientific and academic discipline since the 1950s, but the technology has gained enormous traction in the last few years. The present surge in AI research, investment, and real business applications is unprecedented. Market intelligence firm IDC predicts that worldwide spending on cognitive and artificial intelligence systems will reach $77.6 billion by 2022. Over the next five years, AI's industry growth is also expected to explode, reshaping both business and society. From crucial life-saving medical equipment to self-driving vehicles, AI is going to be infused into virtually every application and device. 

Artificial Intelligence can improve the human experience as a whole, creating significant business opportunities and social value. It is the next major force in the high-tech business world. AI powers smart assistants and robo-advisors in insurance, finance, law, journalism, and media. AI also improves efficiency in R&D projects by shortening time to market, monitors supply and transport chain networks, and supports governance through better decision-making. 

Research and innovation led by the best tech organizations are transforming industry verticals such as automotive, healthcare, finance, manufacturing, and retail. Several major companies, including Apple, Amazon, Google, Facebook, IBM, and Microsoft, are working to make AI more accessible to businesses. So it definitely makes sense to know the history of AI. This article first describes the basics of artificial intelligence and then explores its history.

What Is Artificial Intelligence?

Artificial Intelligence (AI) is a technology with the potential to change how humans interact with the digital world through their work and other socio-economic institutions. Traditionally, the term refers to the artificial creation of human-like intelligence that can learn, reason, perceive, plan, and process natural language, capabilities that bring immense socio-economic opportunities along with ethical challenges. Also known as machine intelligence, AI aims to imbue software with the ability to analyze its environment, using either search algorithms and predetermined rules or pattern-recognizing machine learning models, and to make decisions based on those analyses.
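The two decision styles mentioned above, predetermined rules versus a pattern-recognizing learned model, can be sketched in a few lines of Python. This is purely illustrative; the function names, threshold, and weights are invented for the example rather than taken from any real system.

```python
# Minimal sketch (illustrative only) of two classic AI decision styles.

def rule_based_decision(temperature_c):
    """Predetermined rule: act when a hand-written condition fires."""
    return "turn_on_fan" if temperature_c > 30 else "do_nothing"

def learned_decision(features, weights, bias):
    """Pattern recognition: a linear model scores the input; in practice
    the weights would come from training on data, not be set by hand."""
    score = sum(f * w for f, w in zip(features, weights)) + bias
    return "turn_on_fan" if score > 0 else "do_nothing"

print(rule_based_decision(35))                 # hand-coded rule fires
print(learned_decision([35.0], [0.1], -3.0))   # learned rule fires too
```

The hand-written rule encodes the designer's knowledge directly, while the learned variant makes the same kind of decision from weights that would normally be obtained by training on data.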

AI is a science with multiple approaches, but advances in machine learning and deep learning are creating a paradigm shift in virtually every sector. It is a genuinely revolutionary feat of computer science, poised to become a core component of all advanced and modern software over the coming years. The term also refers to computational tools that can substitute for human intelligence in performing certain tasks. AI is presently advancing at a pace comparable to the exponential growth experienced by database technology in the late twentieth century. 

Artificial Intelligence demonstrates some of the human behaviors associated with intelligence, such as learning, planning, problem-solving, reasoning, perception, motion, knowledge representation and manipulation, as well as creativity and social intelligence.

Categories Of Artificial intelligence

There are four distinct categories of AI, mentioned below.

  • Reactive AI- It can react only to existing situations and problems, not to past experiences.
  • Limited Memory AI- It relies on stored data to learn from recent experience and make decisions.
  • Theory Of Mind AI- It has the ability to comprehend conversational speech, non-verbal cues, emotions, and other intuitive elements.
  • Self-Aware AI- It involves human-level consciousness with its own goals, desires, and objectives.

Artificial intelligence is probably the most remarkable and complex creation of human beings yet, and it remains largely unexplored. This means that every amazing AI application we see today represents only the tip of the AI iceberg. Its powerful capabilities and rapid growth have made some people paranoid about the inevitability and proximity of an AI takeover. 

History Of Artificial Intelligence

Artificial intelligence is not a new technology, nor a new term for researchers; it is much older than we imagine. The following stages and milestones in the history of AI trace the journey from its origins to today's developments.

Maturation Of Artificial Intelligence (1943-1952)

  • The first work recognized as AI was done by Warren McCulloch and Walter Pitts in 1943, when they proposed a model of artificial neurons.
  • In the year 1949, Donald Hebb demonstrated a rule called Hebbian learning, for modifying the connection strength between neurons.
  • In the year 1950, English mathematician Alan Turing pioneered machine learning. He published “Computing Machinery And Intelligence”, which proposed a test of a machine's ability to exhibit intelligent behavior equivalent to that of a human, now known as the “Turing Test”.
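Hebb's 1949 rule, often summarized as “neurons that fire together wire together”, strengthens a connection in proportion to the joint activity of the two neurons it links (delta_w = eta * x_pre * x_post). The sketch below illustrates the update; the learning rate and activity values are invented for the example.

```python
# Minimal sketch of Hebbian learning: co-active units strengthen
# the connection between them.

def hebbian_update(weight, pre_activity, post_activity, learning_rate=0.1):
    """Classic Hebbian rule: delta_w = eta * x_pre * x_post."""
    return weight + learning_rate * pre_activity * post_activity

w = 0.0
for _ in range(5):  # both neurons fire together five times
    w = hebbian_update(w, pre_activity=1.0, post_activity=1.0)
print(w)  # connection strength has grown to roughly 0.5
```

Note that the plain rule only ever increases weights when activities are positive; later variants add normalization or decay to keep weights bounded.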

The Birth Of Artificial Intelligence (1952-1956)

  • Year 1955- Herbert A. Simon and Allen Newell created the first artificial intelligence program, named Logic Theorist. It proved 38 mathematical theorems and found new, more elegant proofs for some of them.
  • Year 1956- The term “Artificial Intelligence” was first adopted by American scientist John McCarthy at a conference, and AI was coined as an academic field.

At that time, computer languages such as LISP, FORTRAN, and COBOL were introduced, and enthusiasm for AI was very high.

The Golden Years-Early Enthusiasm (1956-1974)

  • Year 1966- Researchers developed algorithms that could solve mathematical problems. Joseph Weizenbaum created the first chatbot, named ELIZA, in the same year.
  • Year 1972- In this year, the first intelligent humanoid robot, named WABOT-1, was built in Japan.

The First AI Winter (1974-1980)

  • The period between these years (1974-1980) was the first AI winter, a time when computer scientists faced a severe shortage of government funding for AI research.

A Boom Of AI (1980-1987)

  • Year 1980- After the first winter, AI came back with “expert systems”, programs designed to emulate the decision-making ability of a human expert.
  • In that same year, the first national conference of the American Association for Artificial Intelligence took place at Stanford University.

The Second AI Winter (1987-1993)

  • The period between these years (1987-1993) is known as the second AI winter.
  • Once again, governments and investors stopped funding AI research due to high costs; expert systems such as XCON proved too expensive to maintain.

The Emergence of Intelligent Agents (1993-2011)

  • Year 1997- In this year, IBM's Deep Blue beat world chess champion Garry Kasparov, becoming the first computer to defeat a reigning world chess champion.
  • Year 2002- In this year, for the first time, AI entered the home in the form of Roomba, a robotic vacuum cleaner.
  • Year 2006- By this year, AI had entered the business world, and several big companies such as Twitter, Facebook, and Netflix had started using AI.

Big Data, Deep Learning, and Artificial General Intelligence (2011-Present)

Since 2011, we have witnessed several inventions and milestones, such as Google Now, Eugene Goostman, Project Debater, Duplex, and many more, driven by deep learning, big data, and progress toward artificial general intelligence. Nowadays AI has reached a remarkable level and continues to boom. Several market-leading companies are working with AI to create amazing devices. So we can say the future of artificial intelligence is inspiring and will bring higher levels of intelligence.
