Have you ever had a conversation with Siri, Google Assistant, Cortana, or Alexa? They do not actually understand you: they convert your speech into text, break that text into characters, and match them against a database of terms to produce an answer. Plenty of algorithms, bundled up and applied together, mean that computers can fake listening. Such software is grouped under the common name Artificial Intelligence (AI).

What is Artificial Intelligence (AI)?

Artificial intelligence encompasses a broad range of technologies, from traditional logic to rules-based systems, that enable computers and robots to solve problems in ways that resemble thinking. AI attempts to understand intelligence in humans and other animals and to apply that knowledge to build intelligent entities. The latter is the engineering side of AI, while the former is empirical science. These two diverging fields have brought a lot of change to the way we design and implement things of late. Machines with logic were imagined and created a very long time ago: the formal concepts of logic were incorporated into machines through the works of Gottfried Wilhelm Leibniz and Charles Babbage. However, it was the work of Alan Turing and others in the twentieth century, which built the modern computer, that brought significant progress in computational intelligence.

Hardware is all around us, and the software that controls these machines is everywhere. When you withdraw money from an ATM or use Facebook to share your moments, it is software that works behind the scenes. Software is made of code: sets of instructions to the hardware. Code calls upon functions built from suitable algorithms to perform various mathematical routines. Mathematical models were later developed to imitate human cognition, and the resulting genetic algorithms were used to modify computer algorithms. Turing in Britain and John von Neumann in the United States made significant contributions to formalizing AI as a major branch.

A Brief History of Artificial Intelligence

A balance must be struck between logical reasoning and human behavior when developing AI. The first logical rules of AI were designed by Alan Turing using the humanistic approach, often known as the Turing test. Turing’s 1950 paper, “Computing Machinery and Intelligence,” proposed the Turing test as an operational definition of intelligence: an intelligent machine is one that an interrogating person cannot distinguish from a human. Such a machine must possess the following capabilities:

  • Natural language processing to communicate in the language of choice;
  • Knowledge representation to store available information;
  • Automated reasoning to reply to questions and to arrive at conclusions using the stored information;
  • Machine learning to adjust to new circumstances and to perceive and extrapolate patterns;
  • Computer vision to recognize and distinguish between objects;
  • Robotics to move and manipulate objects.
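The machine-learning capability listed above, adjusting to new circumstances and extrapolating patterns, can be sketched in a few lines. The toy example below (all data, labels, and function names are invented for illustration) classifies a new observation by finding the closest previously seen example, one of the simplest ways a program can "learn" from experience:

```python
# A minimal sketch of learning from examples: a 1-nearest-neighbour
# classifier. Adding more (vector, label) pairs adjusts its behaviour
# without changing any code.

def nearest_neighbour(train, query):
    """Return the label of the training point closest to the query."""
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda item: dist(item[0], query))[1]

# (feature vector, label) pairs: (height_cm, weight_kg) -> species.
examples = [((30, 4), "cat"), ((60, 25), "dog"), ((28, 5), "cat")]

print(nearest_neighbour(examples, (32, 6)))   # prints "cat"
print(nearest_neighbour(examples, (55, 20)))  # prints "dog"
```

Real machine-learning systems use far richer models, but the principle is the same: behavior is derived from data rather than hand-written rules.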

Further, Gödel’s incompleteness theorem denied the possibility of completeness in formal systems, and other objections were raised about the possibility of consciousness in computing systems. Turing managed to separate intelligent behavior from consciousness, which remains a mystery even today. Thus, intelligent behavior in computing systems remains the pillar of AI.

A formal meeting organized by John McCarthy at Dartmouth College in 1956 was the breakthrough moment for AI. Four papers presented at the conference established AI as a formidable field of study. Allen Newell and Herbert Simon presented the most important work at this meeting, a paper on symbolic cognitive modeling that became one of the principal influences on cognitive psychology and information-processing psychology. Their IPL languages were the first symbolic programming languages. McCarthy’s LISP language, developed slightly later, became the standard programming language of the AI community. Noam Chomsky’s work on linguistics led to the mathematical modeling of mental structures. Norbert Wiener developed the field of cybernetics, which provided mathematical tools for the analysis and synthesis of physical control systems.

However, the development of artificial intelligence entered a dormant state by the 1980s. AI was reborn in the 1990s through the academic revival of interest in neural networks and commercial interest in fields such as data mining. Applied machine learning and expert system technologies that relied on probabilistic inference and the Bayesian approach gave new impetus to AI. Today, AI has adopted the scientific method and relies on strong statistical tools to make its decisions more powerful. Most systems now follow the intelligent agent architecture.
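The probabilistic, Bayesian approach mentioned above can be illustrated with a minimal sketch. The example below (all probabilities are invented purely for illustration) applies Bayes' rule to update the probability that an email is spam after observing a single word in it:

```python
# Bayes' rule with two hypotheses (spam or ham):
#   P(spam | word) = P(word | spam) P(spam) / P(word)
# All probabilities below are invented for illustration.

def posterior_spam(p_spam, p_word_given_spam, p_word_given_ham):
    """Probability that a message is spam, given the word was seen."""
    p_ham = 1.0 - p_spam
    numerator = p_word_given_spam * p_spam
    evidence = numerator + p_word_given_ham * p_ham  # P(word)
    return numerator / evidence

# Prior: 40% of mail is spam; the word appears in 60% of spam
# but only 5% of legitimate mail.
p = posterior_spam(0.4, 0.6, 0.05)
print(round(p, 3))  # prints 0.889
```

Chaining such updates over many words is the essence of the probabilistic spam filters and expert systems that helped revive AI.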

Scientists have differed over the evolutionary course and priorities of AI development. Some argue that AI should put less emphasis on creating ever-improved versions of applications suited to a single task, such as driving a car, playing chess, or recognizing speech, and more on machines that think, learn, and create. The latter emphasis has led to human-level AI, or HLAI, an effort that requires large knowledge bases. A related idea is the subfield of Artificial General Intelligence, or AGI, which looks for a universal algorithm for learning in any environment. This field also attempts to ensure that the AI we create is a friendly AI. The emergence of big data has given a boost to AI research.

Fields that benefit

Over the past sixty years, AI has grown from a field of hobbyists and enthusiasts to a fully functional field of study. AI functions as the heart and brain of current pursuits such as data mining, database management, networking, geometric computing, computational biology, language computing, and robotics. Computational intelligence draws its core principles from cognitive modeling, logic, philosophy, mathematics, computer engineering, economics, neuroscience, psychology, linguistics, control theory, and cybernetics. AI research, which uses a potent mix of science, engineering, and mathematics, has led to the growth of several branches, notably:

  • Robotic vehicles: Some studies predict that all cars will be electric within the next ten years. Almost every carmaker in the world, along with the technology giants, is working on autonomous navigation, in which AI is the brain.
  • Speech recognition: AI-enabled personal assistants are now available on all digital platforms, and with the arrival of the internet of things, there are no limits to where AI can go.
  • Autonomous planning and scheduling: This is becoming a big thing in space programs and remote station management. AI systems will manage future manned or unmanned missions to Mars and beyond.
  • Gaming: The power of AI became visible when IBM’s Deep Blue beat Garry Kasparov in an exhibition chess match in 1997. Today computers can beat any human player not only in chess but also in several other board games.
  • Cyber security: Each day, learning algorithms filter out over a billion spam messages, which would comprise most of all messages if not classified away. Artificial intelligence systems are essential for adapting to the ever-changing tactics of hackers and cyber bullies.
  • Logistics planning: Worldwide transportation and logistics businesses, both civilian and defense, are benefitting from the strategic planning tools provided by AI systems.
  • Robotics: Robots are now everywhere, from home services to corporate customer care and industrial production. They can handle missions not possible for humans, ranging from simple delivery to the handling of hazardous materials, the securing of explosives, and the locating of snipers.
  • Machine translation: Computer programs can now translate between many languages using statistical models built, with the help of machine learning, from examples of translations.

Machine Learning and Deep Learning

One area that deserves special mention in the family of artificial intelligence (AI) techniques is deep learning, which most scientists call by its original name: deep neural networks. Deep learning is a subset of machine learning, the collective term for a whole toolbox of mathematical and statistical techniques that enable computers to improve at performing tasks with experience. Deep learning is composed of algorithms that permit software to train itself to perform tasks, such as speech and image recognition, by exposing multilayered neural networks to vast amounts of data. Deep learning, in principle, can transform almost any industry. However, deep learning algorithms are not substitutes for human reason.
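As a rough illustration of what "exposing multilayered neural networks to data" means, the sketch below trains a tiny one-hidden-layer network on the XOR function using plain gradient descent. The layer sizes, learning rate, and iteration count are arbitrary illustrative choices, not a recipe for real deep learning systems, which stack many more layers and train on vastly larger datasets:

```python
import numpy as np

# Toy two-layer neural network trained on XOR by gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))  # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: compute hidden activations and network output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error, layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

# After training, the four outputs approach the XOR targets 0, 1, 1, 0.
print(out.ravel())
```

The network is never told the rule of XOR; repeated exposure to examples nudges its weights until the pattern emerges. Deep learning applies the same idea with many hidden layers and millions of examples.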

Machine translation has become very convincing, with all the leading internet companies, such as Google, Facebook, Microsoft, and China’s Baidu, offering plenty of new tools every month. Google Translate can convert spoken sentences for 32 pairs of languages while providing text translations for 103 tongues. Google’s Inbox app offers three instant replies for many incoming emails. The advances in image recognition promise to read X-rays, MRIs, and CT scans more rapidly and accurately than radiologists and to diagnose cancer earlier and less invasively. Better image recognition is also important in robotics and in autonomous vehicles. Deep learning, armed with big data, has become the most powerful tool of all the technology companies that matter today. Much of the headline-making AI news, from IBM’s Watson, which beat two champions at Jeopardy!, to DeepMind’s AlphaGo, which beat the world champion at the game of Go, was powered by deep learning algorithms. If the recent signs are correct, we are going to witness more immersive platforms, such as Google Lens, where machine learning blends with augmented and virtual reality experiences. For business houses, it is going to be ‘AI first.’


Machines can outperform human beings in the general workflow required for several skilled jobs: data gathering, analysis, interpretation, and deciding a course of action on that basis. AI and machine learning will quickly surpass our abilities in data gathering and analysis. The lives of workers will be transformed by artificial intelligence, as they will need to acquire new skills to stay in these careers. Social networking, people development, coaching, and collaboration are fields where machines will not match humans anytime soon. These areas, which are essential to administrative coordination and control in any organization, are the ones where humans will retain their superiority in an AI-dominated future. Human values such as creativity, empathy, collaboration, and judgment will gain the upper hand in such a world.

About the author

Jijo P Ulahannan, Physicist, Educator, and TED Fellow