
AI’s Past Holds the Key to Its Future

In retelling the origins of artificial intelligence (AI), a story from the Japanese manga series Doraemon comes to mind. It goes like this: Nobita Nobi, the main character, is upset after being scolded by his parents. He approaches Doraemon, complaining, “They’re always watching my every move. I just want to be left alone.” Hearing this, Doraemon pulls a secret gadget, the Pebble Hat, out of his fourth-dimensional pocket and explains: “When you wear this hat, you’ll be like a pebble on the ground—unnoticed.” In other words, you’ll exist, but no one will notice you.

Like the Pebble Hat, AI has always been around; we just never noticed it. As Murray Izenwasser, SVP of Digital Strategy at OZ Digital Consulting, pointed out in his compelling keynote at the recently concluded AI Future Summit 2024, “AI isn’t new. AI started in the 1940s,” and “We’re now in the 80th year of AI research.”

AI Is Not New, So Why Does It Matter Now?

Knowing the history of AI is essential to understanding its current state and where it’s headed; without studying the past, it’s hard to make sense of the present or what the future holds. In this article, we cover all the major milestones in AI, right up to the most recent advancements.

If AI sometimes feels like a recent development in technology, it’s because, as Murray notes, “Before 2015, AI was called Big Data. There were no conversations online, or very few; maybe a couple of academic papers. It was all about big data. In 2015, OpenAI was founded and there started to be a mainstreaming of the concepts that we still kind of talk about today.”

[Watch Murray Izenwasser’s AI Future Summit 2024 keynote: Before We Look to the Future, Let’s Take a Quick Glance at the Past] 

A Brief History of AI

Before we dive into AI’s history, what even is artificial intelligence, or AI? Artificial intelligence is a branch of computer science focused on building systems that simulate human intelligence by ingesting data, processing it, and learning from it.

The idea of “artificial intelligence” goes back thousands of years, to when people first tried to build machines that could work without human intervention. The dream of a machine doing what humans do is as old as mankind itself.

1950–1959: The Birth of AI

  • 1950: The British mathematician Alan Turing published his paper “Computing Machinery and Intelligence,” proposing what eventually came to be known as the Turing Test: a way to assess whether a machine’s behavior is indistinguishable from a human’s.
  • 1952: Arthur Samuel developed a self-learning checkers-playing computer program.
  • 1956: Over the summer of 1956, a small group that included John McCarthy, Marvin Minsky, Claude Shannon, and Herb Simon gathered at Dartmouth College in New Hampshire to discuss how machines could use language and solve problems that, until then, only humans could solve. The term “artificial intelligence” was coined for this workshop.

1959–1979: AI Takes Off

Between 1959 and 1979, artificial intelligence flourished as computers began to store more information and became cheaper and faster. Government agencies such as the Defense Advanced Research Projects Agency (DARPA) began to pour money into AI research.

  • 1961: The first industrial robot, Unimate, began transporting die castings and welding parts on cars on a General Motors assembly line.
  • 1965: The first “expert system” was developed; this form of AI mimicked the thinking and decision-making of human experts.
  • 1966: ELIZA, the first “chatterbot” (later shortened to chatbot), was created using natural language processing (NLP).
  • 1968: A new approach that would later become known as “deep learning” was proposed.
  • 1979: The Stanford Cart, created by James L. Adams, became one of the earliest examples of an autonomous vehicle when it successfully crossed a chair-filled room without human intervention.

1980–1987: AI’s Rapid Growth

AI advanced rapidly in the 1980s as more government funding poured in. Around this time, deep learning and expert systems also became increasingly popular. 

  • 1980: XCON (eXpert CONfigurer), the first commercial expert system, entered the market.
  • 1981: The Japanese government allocated $850 million to the Fifth Generation Computer project, an effort to create computers that could converse and reason like humans.
  • 1986: The first driverless car, a Mercedes-Benz van developed under Ernst Dickmanns, drove at up to 55 mph on roads without obstacles or human drivers.

1987–1993: The AI Winter

By the late 1980s, public interest in AI had waned, funding dried up, and breakthroughs grew scarce. This period eventually came to be known as the “AI Winter.”

1993–2013: AI Picks Up

The 1990s saw rapid strides in AI research, including the first computer system to defeat a reigning world chess champion: IBM’s Deep Blue, which beat Garry Kasparov in 1997. AI also began to sneak into everyday life, helping us with mundane chores like vacuuming carpets. It’s an era that will be remembered for the first Roomba and the first commercially available speech recognition software on Windows computers.

The renewed interest was followed by a surge in research funding, which led to more breakthroughs:  

  • 1997: Speech recognition software developed by Dragon Systems was released for Windows.
  • 2000: Kismet, the first robot built to simulate human emotions, was developed at MIT.
  • 2002: The first Roomba was released.
  • 2004: NASA’s twin rovers, Spirit and Opportunity, landed on Mars and navigated its surface without human intervention.
  • 2006: Companies began to embed AI into their advertising algorithms.
  • 2010: Microsoft launched the Xbox 360 Kinect, the first gaming hardware to track body movement and translate it into gameplay commands.

2012–present: AI Is Here to Stay

This brings us to the most recent developments in AI, where we’re seeing an uptick in commonly used AI tools such as virtual assistants and search engines. This period also marked the rise of Deep Learning and Big Data.

Murray Izenwasser explains, “In 2015, we saw breakthroughs not only in technology—with the founding of OpenAI—but also in consumer integration. We started to get Nests. We started to get Alexas. We started to get Google Homes. We started to get all these devices that had what we would consider today AI components as part of their operating systems.”

He continues, “We started to talk about AI’s technical advancements, consumer products, and the transformation of data. Previously, we used to talk about turning data into information. Now we talk about turning data into knowledge. How can we use data to truly give us knowledge?”  

By 2015, almost everyone in the image-recognition field was using deep learning, and the same approach of mapping one type of thing onto another was spreading to speech recognition (mapping sound to text), face recognition (mapping faces to names), and translation.

In all these applications, access to huge amounts of data was integral to success. And the bigger and deeper the networks got, and the more training data they were given, the more their performance improved.

Key milestones include:  

  • 2012: Two Google researchers, Jeff Dean and Andrew Ng, trained a neural network to recognize cats by showing it millions of unlabeled images with no background information.
  • 2016: Hanson Robotics unveiled Sophia, a humanoid robot that became known as the first “robot citizen.”
  • 2017: The transformer architecture was introduced, paving the way for large-scale self-supervised learning.
  • 2020: OpenAI began beta testing GPT-3, a deep learning language model capable of producing human-like text.
  • 2022: Personalization became a primary focus with the rise of recommendation engines.
  • 2022: In November, ChatGPT was released to the general public. For the first time, we had software that spoke our plain language, not the language of programmers.

What Comes Next?  

So, what’s ahead for AI? Murray says, “What we’re starting to see around AI is strategy,” adding that “organizations are starting to get very serious about strategically implementing AI to enhance customer interactions, business outcomes, and productivity.” “The discussions are all around the customer,” he notes.

Another thing organizations are realizing is that AI is only as good as the data it relies on. As Murray says, “Before you can invite this AI party to your house, you have to go around and clean up a little bit (or a lot).”

There are myriad solutions today, with countless choices and options, but whichever you choose, there’s no escaping that AI will soon become foundational to both business and daily life.

As a business leader, you might be feeling overwhelmed, unsure of where to start or how to kickstart your AI journey. We’re here to help you explore AI’s possibilities, tackle its challenges, and unlock its potential.  

Together, let’s shape the future of business.

Ready to get started with AI?


REACH OUT TODAY FOR A FREE AI READINESS ASSESSMENT