
What is Artificial Intelligence (AI)?





In the world of computer science, the term artificial intelligence (AI) is used when a computer, robot, or other machine displays human-like intelligence. More generally, artificial intelligence refers to a computer or machine that imitates how a human behaves or thinks. This includes very human-like functions such as learning from experience, recognizing objects, listening to someone speak and replying to them, making decisions, and solving problems. These abilities can combine to carry out tasks that are typically performed only by people, such as driving a car or greeting a hotel guest.

Thanks to the development of computer systems that can process large data sets quickly and accurately, combined with access to vast amounts of data, AI’s development has surged.



No longer the stuff of bad science fiction films, AI is already quietly and efficiently at work among us.



From finishing your sentences for you when you search online, to recommending what you might like to buy or watch next, AI technology is very much part of our everyday lives.

Contrary to the fear that AI and machines will replace people’s jobs, AI technology’s primary purpose is to help people do their work faster and with greater success.




Understanding Machine Intelligence (AI)

Artificial intelligence (AI), also known as machine intelligence, is a branch of computer science that seeks to infuse software with the ability to analyze and understand its environment. The software uses predefined rules and search algorithms, or pattern-recognizing machine learning models, to make decisions based on those analyses.


In this way, AI aspires to simulate biological intelligence so that the software application or system can act with varying levels of autonomy. This reduces the need for manual human intervention across a wide range of functions and frees people to focus on more important or meaningful tasks.


The Origins and Milestones of Artificial Intelligence

Where did it all start? The notion of a predictive computer is not a new idea; in fact, it dates back to the ancient Greeks. Here are the key events and milestones in AI’s evolution since the launch of modern electronic computing:

  • 1950: Alan Turing, famous for cracking the Nazis’ Enigma code during WWII, wrote and published Computing Machinery and Intelligence. In this paper, Turing sought to answer the question 'Can machines think?' and introduced the Turing Test. The goal of the test is to determine whether a computer can pass as a human, and the value of the Turing Test has been debated ever since.

  • 1956: We see the first appearance of the term ‘artificial intelligence’ when John McCarthy, the inventor of the Lisp language, coined it at the first-ever AI conference at Dartmouth College. Later that year, Allen Newell, J.C. Shaw, and Herbert Simon created the Logic Theorist, the first-ever AI software program.

  • 1958: Frank Rosenblatt built the Mark 1 Perceptron, the first computer system based on a biological neural network that 'learned' through trial and error. In 1969, Marvin Minsky and Seymour Papert published a book titled Perceptrons, which became a cornerstone work on neural networks.

  • 1980s: Backpropagation, the algorithm that trains a neural network by working errors backwards through its layers, came into more widespread use in AI applications.

  • 1997: IBM's Deep Blue beat the then-reigning world chess champion, Garry Kasparov, in a chess match (and rematch).

  • 2011: IBM Watson beat champions Ken Jennings and Brad Rutter at Jeopardy!

  • 2015: Baidu's Minwa supercomputer identified and categorized images more accurately than the average person.

  • 2016: DeepMind's AlphaGo program beat Lee Sedol, the world champion Go player, in a five-game match. Google had acquired DeepMind two years earlier, in 2014.

  • 2018: We see the emergence of deepfake technology, in which AI is used to alter video and audio files, often to spread fake news. Ironically, AI-enabled applications are among the most effective ways to detect these fakes.

The Evolving Stages of Artificial Intelligence


Narrow or Weak AI

Weak AI, more accurately called Narrow AI as it is far from weak, is capable of performing predetermined tasks and is responsible for most of the AI-enabled applications we use today. Think Apple’s Siri, Amazon’s Alexa, self-driving cars, and the IBM Watson computer that was victorious on Jeopardy!

General or Strong AI

Strong AI, or Artificial General Intelligence (AGI), can be compared to the human mind’s ability to function autonomously in response to a wide range of stimuli. This AI could solve different types or classes of problems and even decide which problem it wants to solve, without human intervention. Strong AI is still only a theoretical technology, with no practical systems in use today.

Super AI

Super AI, or Artificial Superintelligence, is a yet-to-be-invented AI technology with processing power that far surpasses the human mind. This is typically what people imagine when they think about robots or computers taking over the world; however, it is unlikely that this technology will exist any time soon.

The Four Categories of Artificial Intelligence

AI falls into four categories:


  • Reactive AI only responds to immediate, current situations, and makes no reference to past experiences.

  • Limited Memory AI uses stored data to learn from recent experiences to aid decision-making.

  • Theory of Mind AI can understand conversational speech, emotions, non-verbal cues and other intuitive elements.

  • Self-Aware AI displays human-level consciousness with its own desires, goals and objectives.


To understand these categories better, imagine they represent four different personality types sitting around a dinner table.




Reactive AI understands that it is noon, which means it is lunch time and so eats the food in front of it. Limited Memory AI remembers that food can be hot, so it first checks to see if the food is ready to be eaten.

Theory of Mind AI reads the expression of the other diners at the table to determine if the food tastes good before it decides to eat. Self-Aware AI decides it would rather not eat and leaves to do something else.


What is the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning?


When it comes to understanding how AI, machine learning (ML), and deep learning (DL) are connected, it may be easier to think in terms of Russian dolls, with one nested inside the other. Consider artificial intelligence as the entire scope of computer technology that displays any semblance of human intelligence. ML is the subset of AI applications that reprogram themselves, or learn, as they process more data, carrying out the tasks they are designed to do with ever-growing accuracy. DL is a subset of ML applications that learn to perform a task with increasing accuracy, without human intervention. Let's take a closer look at machine learning and deep learning, and how they differ.

Machine Learning (ML)

Machine learning applications, or models, can learn, adjust their decisions, and make predictions based on patterns in historical data and in the new data they process. Many of these applications are built around a neural network.

This is a network of algorithmic calculations that loosely imitates how the human brain processes information. At its most fundamental level, a neural network is made up of the following:

  • Data enters the network at the input layer.

  • Machine learning algorithms exist in at least one hidden layer. This is where inputs are processed and weights, biases, and thresholds are applied to the inputs.

  • The network produces various conclusions with varying degrees of confidence in an output layer.




Machine learning applications that are not deep learning models are typically built on artificial neural networks with just one hidden layer.
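
To make the layer structure concrete, here is a minimal sketch of a single-hidden-layer network as a forward pass, written in Python with NumPy. The layer sizes, random weights, and the ReLU and softmax functions are illustrative assumptions, not a description of any particular system.

    import numpy as np

    # Illustrative sizes: 3 inputs, 4 hidden units, 2 possible conclusions.
    rng = np.random.default_rng(0)
    W_hidden = rng.normal(size=(3, 4))   # weights between the input and hidden layers
    b_hidden = np.zeros(4)               # biases applied in the hidden layer
    W_out = rng.normal(size=(4, 2))      # weights between the hidden and output layers
    b_out = np.zeros(2)

    def forward(x):
        """Send one input vector through the hidden layer to the output layer."""
        hidden = np.maximum(0, x @ W_hidden + b_hidden)   # weighted sums, then a threshold (ReLU)
        scores = hidden @ W_out + b_out                   # weighted sums at the output layer
        exp = np.exp(scores - scores.max())
        return exp / exp.sum()                            # degrees of confidence that sum to 1

    print(forward(np.array([0.5, -1.2, 3.0])))            # prints two confidence values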

ML models are given labeled data, meaning the data comes with identifiers that help the model identify and understand it. Machine learning applications are capable of supervised learning (learning that requires human input or supervision), which means a person will adjust the model's algorithms from time to time in order to aid learning.
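
As a rough illustration of supervised learning on labeled data, the sketch below uses scikit-learn (assumed to be installed); the tiny dataset and its 'young'/'older' labels are invented purely for this example.

    from sklearn.linear_model import LogisticRegression

    # Each row of X is an example; each entry of y is the human-provided label for that row.
    X = [[25, 1], [47, 0], [35, 1], [52, 0], [23, 1], [56, 0]]   # features
    y = ["young", "older", "young", "older", "young", "older"]   # labels (identifiers)

    model = LogisticRegression()
    model.fit(X, y)                    # the model learns how the features relate to the labels

    print(model.predict([[30, 1]]))    # predicts a label for a new, unseen example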


Deep Learning (DL)


Deep learning models form part of the machine learning spectrum but are based on neural networks that have many hidden layers. These are known as deep neural networks. Each of these hidden layers serves to further refine the conclusions made by the previous layer of the network.

The forward movement of calculations through the hidden layers towards the final output layer is known as forward propagation. A complementary process, called backpropagation, identifies errors in the output and pushes them back through the previous layers, adjusting the weights to further refine, or train, the model.
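
The toy training loop below sketches both ideas for a network with one hidden layer, using plain NumPy; the layer sizes, tanh activation, squared-error measure, and learning rate are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(size=3)             # one illustrative input
    target = np.array([1.0, 0.0])      # the output we want the network to learn to produce

    W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # hidden-layer weights and biases
    W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)   # output-layer weights and biases
    lr = 0.1                                        # learning rate

    for step in range(200):
        # Forward propagation: calculations move through the hidden layer to the output layer.
        h = np.tanh(x @ W1 + b1)
        y = h @ W2 + b2

        # Backpropagation: measure the error at the output, then push it back through
        # the earlier layer, adjusting the weights to refine (train) the model.
        error = y - target
        grad_W2, grad_b2 = np.outer(h, error), error
        error_h = (W2 @ error) * (1 - h ** 2)       # error carried back through tanh
        grad_W1, grad_b1 = np.outer(x, error_h), error_h

        W2 -= lr * grad_W2; b2 -= lr * grad_b2
        W1 -= lr * grad_W1; b1 -= lr * grad_b1

    print(np.round(y, 3))   # after training, the output should sit close to the target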




Some deep learning models do work with labeled data; however, many DL applications work with unlabeled data, and vast amounts of it. Additionally, in contrast to ML, DL models are capable of unsupervised learning, meaning they can detect features and patterns in data with very little human supervision.
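
For contrast with the supervised example above, here is a minimal unsupervised-learning sketch using k-means clustering from scikit-learn (assumed installed); the unlabeled points are invented, and the model groups them without any human-provided labels.

    from sklearn.cluster import KMeans

    # Unlabeled data: raw observations only, with no identifiers attached.
    points = [[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],
              [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]]

    model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)

    print(model.labels_)            # the two groups the model discovered on its own
    print(model.cluster_centers_)   # the centre of each discovered group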


The Practical Applications of Artificial Intelligence


AI is currently used successfully to support a wide array of functions across many industries, both in the lab and in commercial and consumer sectors.

Speech Recognition


AI systems are used to convert the spoken word into text or code. Think Amazon’s Alexa, Apple’s Siri, Google’s Google Assistant, and Microsoft’s Cortana. Speech recognition AI promises to reduce paperwork and admin, improve customer experiences, and streamline daily activities. Further applications can be found in healthcare, where fast, hands-free access to important medical records can positively impact a patient’s treatment, health, and wellbeing.


Natural Language Processing


This still-evolving field, closely related to speech recognition, sees computers and people conversing. Natural language processing (NLP) is able to extract meaning from human language - be it written or spoken - so that it can draw conclusions and make decisions based on that information. NLP is already used in everyday life. It determines which emails should go to your spam folder and which should go to your inbox. It is the friendly chatbot on your favorite online shopping site, and it is the power behind the web search functionality that seemingly knows what you want to find before you finish typing it.
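
To illustrate the spam-versus-inbox decision in code, here is a toy NLP sketch with scikit-learn (assumed installed); the example messages and their labels are invented, and real spam filters are far more sophisticated.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    messages = ["win a free prize now", "claim your free reward",
                "meeting moved to 3pm", "lunch tomorrow?"]
    labels = ["spam", "spam", "inbox", "inbox"]

    # Turn each message into word counts, then learn which words signal spam.
    classifier = make_pipeline(CountVectorizer(), MultinomialNB())
    classifier.fit(messages, labels)

    print(classifier.predict(["free prize waiting"]))       # likely 'spam'
    print(classifier.predict(["see you at the meeting"]))   # likely 'inbox'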


Computer Vision


This AI allows computers to ‘see’ the world. Computer vision (CV) technology gives machines the ability to scan an image or video, identify what it shows, and decide what to do next by using comparative analysis. Why is this important? With over 3 billion images shared online every day, there is a vast amount of untapped visual data that can be processed and analyzed. In the healthcare industry alone, CV is successfully used for faster, more accurate diagnosis from patient scans and mammograms, for X-ray analysis, and for early illness detection.
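
As a rough sketch of how a machine can ‘see’ and identify an image, the snippet below uses a pretrained model from torchvision (assumed to be installed, version 0.13 or later); the file name photo.jpg is hypothetical.

    import torch
    from PIL import Image
    from torchvision import models

    weights = models.ResNet18_Weights.DEFAULT          # weights pretrained on ImageNet photos
    model = models.resnet18(weights=weights).eval()
    preprocess = weights.transforms()                  # resize and normalize as the model expects

    image = preprocess(Image.open("photo.jpg")).unsqueeze(0)   # add a batch dimension
    with torch.no_grad():
        scores = model(image).softmax(dim=1)           # confidence for each known category

    best = scores.argmax().item()
    print(weights.meta["categories"][best], scores[0, best].item())   # predicted label and confidence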

Data and Predictive Analytics


Organizations have access to more data than ever before. However, not many know how to put it to good use. Enter data analytics and predictive analytics.