Artificial Intelligence - When Machine Learning Unites Human Expertise

 


In today's era, it is hard to talk about technology without artificial intelligence, or AI, entering the conversation. It influences our lives directly or indirectly, and even though we may not always be aware of it, AI is everywhere.

But what exactly is Artificial Intelligence (AI)? Simply put, artificial intelligence is intelligence displayed by machines, as opposed to the natural intelligence shown by humans. Machines are programmed to think and behave like humans and to mimic their actions. AI can refer to any machine that exhibits traits associated with the human mind, such as learning and problem-solving. Leading AI textbooks call it the study of "intelligent agents": any machine that perceives its environment and takes actions to maximize its chance of successfully achieving its objectives.


The World of Artificial Intelligence - When Machines Think Like Humans

One can see various examples of AI around us - from chess-playing computers to self-driving cars - all of which rely heavily on deep learning and language processing. An interesting everyday example of machine learning appears when you check your mail: messages get automatically sorted into categories such as social, public, private and so on. In the same way, a search engine weighs your query and the keywords you use and returns the websites it judges best match your search.
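The mail-sorting step in that example is a plain text-classification problem. The snippet below is a minimal sketch of the idea, assuming scikit-learn is available; the folder names, sample messages and labels are invented purely for illustration and are not taken from any real mail service.

```python
# A minimal sketch of how a mail service might sort incoming messages into
# folders with machine learning. The training data below is made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny hand-labelled training set: message text -> folder
train_messages = [
    "Your friend tagged you in a photo",
    "You have a new follower on your profile",
    "Community newsletter: upcoming events this month",
    "Announcement: the library opens late on Friday",
    "Can we move our call to 3pm tomorrow?",
    "Here are the holiday photos you asked for",
]
train_labels = ["social", "social", "public", "public", "private", "private"]

# Bag-of-words features feeding a naive Bayes classifier, chained in a pipeline
sorter = make_pipeline(CountVectorizer(), MultinomialNB())
sorter.fit(train_messages, train_labels)

# Unseen mail is routed to whichever folder the model finds most probable
print(sorter.predict(["A friend commented on your photo"]))        # likely 'social'
print(sorter.predict(["Newsletter: events happening this week"]))  # likely 'public'
```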

With businesses using it to operate more efficiently, AI is resulting in safer and more useful products, and it allows us to personalize our worlds as devices learn our choices and preferences.




The origin and evolution of artificial intelligence have been an exciting journey, with its share of ups and downs.

At the outset, thought-capable artificial beings were discussed for the first time in an 1818 novel by the English author Mary Shelley (1797–1851), which tells the story of Victor Frankenstein, a young scientist who creates a sapient creature in an unorthodox scientific experiment.

The study of "formal" reasoning started with philosophers and mathematicians in antiquity. The study of mathematical logic is directly linked to Alan Turing, a young British polymath who explored the mathematical possibility of artificial intelligence. His theory of computation, which explained that a machine, by shuffling symbols as simple as "0" and "1", could simulate any possible act of mathematical deduction. This insight, that machines can simulate any process of formal reasoning, is called as the Church–Turing thesis.

This was the main thought behind the logical framework of his 1950 paper, Computing Machinery and Intelligence, in which he discussed how to build intelligent machines and how to test their intelligence. He essentially asked the golden question: can machines think like humans, or, alternatively, can a machine imitate a human in a test now famously known as the imitation game? What stopped Turing from getting to work right then was the fact that computers first needed to fundamentally change. Before 1949, computers lacked a basic prerequisite for intelligence: they could not store commands, only execute them. In other words, they could be told what to do but could not remember what they did. Second, computing was extremely expensive.


Artificial Intelligence - When Fiction Meets Reality

Six years later, in 1956, Dartmouth College hosted the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), a workshop considered by many to be the first artificial intelligence program and the birthplace of the field itself. The event's organizer, John McCarthy, a scientist at Dartmouth, coined the term AI around that time. At this historic event, McCarthy brought together top researchers and scientists from various fields for an open-ended discussion of the different ways machines could potentially exhibit intelligence. Sadly, the conference fell short of McCarthy's expectations, and he felt the participants failed to agree on standard methods for the field. Despite this, everyone shared the sentiment that AI was achievable.

The importance of this conference cannot be overstated, as it catalyzed the next twenty years of AI study and research.

In the years that followed, AI witnessed a sudden wave of optimism, followed by discontent and the loss of funding (a period known as an "AI winter"), followed in turn by new approaches, success and renewed funding.

In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. Around the same time, Japan's fifth-generation computer project inspired the U.S. and British governments to restore funding for academic research. However, beginning with the collapse of the Lisp machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.

In the late 1990s and early 21st century, AI returned to the bigger picture and began to be used for logistics, data mining, medical diagnosis and other areas. In 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion and grandmaster, Garry Kasparov. In the same year, Dragon Systems released NaturallySpeaking, the first publicly available speech recognition system, implemented on Windows.

The last few years have seen an absolute explosion in data and computational ability. Renowned examples include Microsoft's Skype system that can automatically translate from one language to another and Facebook's system that can describe images to blind people. Some of the most mind-boggling developments in AI, such as Apple introducing Siri, machines recognizing cats in images, and AlphaGo, an AI-powered system that went on to beat the world champion at the game of Go, have completely revolutionized the way we work. In a survey conducted in 2017, one in five companies reported that they had successfully "incorporated AI in some offerings or processes".

With its eyes set on new heights, artificial intelligence is going to change every industry, but we still have to understand its limits.

The principal limitation of AI remains that it learns only from the data it is fed; there is no other way for it to generate knowledge by itself. That means any inaccuracies or glitches in the data will be reflected in the results, and the stakes remain high in expensive, data-driven projects.
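A quick way to see this limitation is to train the same model twice, once on clean labels and once on partly corrupted ones, and compare the results on identical held-out data. The sketch below does exactly that; it assumes scikit-learn and NumPy are installed, and the dataset is synthetic, used only to illustrate the "garbage in, garbage out" point.

```python
# The same classifier, trained on clean labels and on deliberately corrupted
# labels, then evaluated on the same held-out test set: errors in the input
# data show up directly as errors in the model's output.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Flip roughly 30% of the training labels to mimic inaccurate input data
rng = np.random.default_rng(0)
noisy = y_train.copy()
flip = rng.random(len(noisy)) < 0.3
noisy[flip] = 1 - noisy[flip]

for name, labels in [("clean labels", y_train), ("noisy labels", noisy)]:
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    score = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {name}: test accuracy = {score:.2f}")
```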


Artificially Intelligent Machines Augmenting Human Capacities


There is one interesting myth that continues to prevail about AI. Hollywood movies and science fiction novels depict artificial intelligence as human-like robots that take over the world and create chaos, but the current generation of AI technologies is not that scary, or that smart. Some people consider AI a danger to humanity if it progresses unabated, and the use of AI could also be more insidious. Prominent figures like Stephen Hawking and Elon Musk have warned about the inevitable and imminent risks of AI for years. Their concern stems from the fact that AI systems could soon become super-intelligent and find they no longer have a need for us humans. Others, however, argue the biggest threat from AI will be humans themselves and the way they continue to use it.

Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment. We can already see AI-based systems and chatbots taking every industry by storm. The concern that AI will take over everyone's jobs is becoming more urgent as recent AI breakthroughs and research funding attract public attention.

But one cannot deny that AI has come to provide many specific benefits in every industry. In the 21st century, AI experienced a revival thanks to advances in computing power and theoretical understanding, and AI techniques became a crucial and essential part of the technology-driven industry, helping humans solve many tough problems in computer science. Hence, artificial intelligence is not here to replace us; it augments our abilities and makes us perform better at what we do. Companies have also been making huge investments in AI to explore new markets and grow their businesses.

Many of the AI applications we use in our everyday lives, including virtual personal assistants such as Siri and Alexa and predictive traffic navigation apps, belong to this subset of AI.




In the immediate future, AI looks like the next big thing. It has the capability to completely transform our world and provide a cleaner, smarter and more advanced way of life, along with exciting new business opportunities. Billions in investment, coupled with rapid technological breakthroughs and adequate research funding, make such a future seem plausible. Artificial intelligence will continue to enhance users' lifestyle choices through search algorithms that provide targeted information.

It is hence safe to say that artificial-intelligence-driven technology has many benefits and endless applications, and it is here to stay for good. There is no doubt that this combination of humans and machines is going to be unstoppable.
