AI, or Artificial Intelligence, is an emerging technology that has caught the attention of society. Tech leaders such as Elon Musk and Mark Zuckerberg have weighed in on the subject. While Musk has called AI the biggest existential threat, Zuckerberg firmly believes in the benefits AI will bring to humankind. In this article, we look at what AI is and whether we should fear it.
What is AI?
John McCarthy, the father of AI, described AI, or Artificial Intelligence, as the science of making intelligent machines, where intelligence means the ability to perform tasks that a human can do.
There are three types of AI, classified by level of intelligence:
1. Narrow AI is an AI built to accomplish a single task. It has specific knowledge of, or is good at, a particular area, such as AlphaGo for playing Go, a system that classifies spam emails, or a virtual assistant. Even a self-driving car is considered narrow AI (it consists of several narrow AIs working together). Narrow AI is what researchers have achieved so far.
2. General AI is more sophisticated than narrow AI. It can learn by itself and solve problems as well as or better than a human. General AI is what society has been talking about and anticipating. However, we are still far from building a machine with this level of intelligence, because the human brain is very complicated and researchers still don't fully understand how it works. It is therefore challenging to develop an AI that can interpret and connect knowledge from various areas in order to plan and make decisions.
3. Super AI, or superintelligence, is an AI that is more intelligent than all geniuses across all domains of knowledge. It also has creativity, wisdom, and social skill. Some researchers believe we will achieve super AI soon after we achieve general AI.
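To make the narrow AI idea concrete, here is a minimal sketch of one of the examples above, a spam-email classifier. It uses a tiny naive Bayes model; the training emails and vocabulary are made up for illustration, and a real filter would be far more elaborate.

```python
# A toy "narrow AI": a naive Bayes spam filter trained on a few example emails.
# The training data below is invented purely for illustration.
from collections import Counter
import math

def train(emails):
    """emails: list of (text, is_spam) pairs. Returns per-class word counts."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_spam in emails:
        for word in text.lower().split():
            counts[is_spam][word] += 1
            totals[is_spam] += 1
    return counts, totals

def is_spam(text, counts, totals):
    """Classify with naive Bayes and add-one (Laplace) smoothing."""
    vocab = len(set(counts[True]) | set(counts[False]))
    scores = {}
    for label in (True, False):
        score = 0.0
        for word in text.lower().split():
            p = (counts[label][word] + 1) / (totals[label] + vocab)
            score += math.log(p)  # sum of logs avoids numeric underflow
        scores[label] = score
    return scores[True] > scores[False]

emails = [
    ("win free money now", True),
    ("claim your free prize", True),
    ("meeting agenda for monday", False),
    ("lunch with the project team", False),
]
counts, totals = train(emails)
print(is_spam("free money prize", counts, totals))    # True
print(is_spam("monday team meeting", counts, totals)) # False
```

The point of the sketch is the "narrowness": this program does one thing, and knows nothing outside of the word statistics it was trained on.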
From these types, we can see that the definitions of general AI and super AI are too vague to measure whether a machine achieves that level of intelligence. One reason may be that it is still difficult to describe what human intelligence is. However, there is a test, called the Turing test, designed to check whether a machine can think like a human.
The Turing Test
The Turing test was outlined by Alan Turing, so let's get to know him a little. Alan Turing was an English mathematician and computer scientist, born in 1912, and is considered the father of computer science. He proposed the concept of a universal computing machine, now known as the Turing machine, which is regarded as a model of the modern computer.
Turing played a significant role during World War II, when he and his colleagues at the Government Code and Cypher School at Bletchley Park built an electromechanical machine called the Bombe to decode German Enigma ciphers. (Colossus, a later Bletchley Park machine, was built by Tommy Flowers to break a different cipher.) The success of this code-breaking work helped the Allies defeat the Nazis, and it has been estimated that it saved over 14 million lives. His work and life during World War II are the subject of The Imitation Game (2014).
After the war, Turing worked on machine intelligence and proposed a method to measure the intelligence of a machine in terms of its ability to think like a human. He named this test "the imitation game"; it is now known as the Turing test.
Turing described the test as a party game involving three players sitting in separate rooms. Player A is a machine, Player B is a human, and Player C is a human judge. The judge talks to Players A and B through a text-only chat and must decide which one is the machine. By the commonly used criterion, derived from a prediction in Turing's paper, a machine passes the test if it convinces more than 30% of judges that it is human.
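The three-player setup can be sketched as a small simulation. Everything here is a toy stand-in, not from Turing's paper: the "machine" is a canned-reply bot, the "human" gives varied answers, and the judge uses a crude repetitiveness heuristic.

```python
# A minimal sketch of the imitation game: a judge chats with two anonymous
# players (labels A/B) and must name which label hides the machine.
# Both players and the judging rule are toy stand-ins for illustration.
import random

def machine_player(question):
    # A trivial "chatbot": the same deflection every time.
    return "That is an interesting question. What do you think?"

def human_player(question):
    return f"Honestly, I'd say it depends on {question.split()[-1]}"

def run_session(judge, questions, seed=0):
    """Hide the players behind labels, collect transcripts, and return True
    if the judge correctly names the label hiding the machine."""
    rng = random.Random(seed)
    players = {"A": machine_player, "B": human_player}
    if rng.random() < 0.5:  # shuffle which label hides the machine
        players = {"A": human_player, "B": machine_player}
    transcripts = {label: [(q, players[label](q)) for q in questions]
                   for label in ("A", "B")}
    guess = judge(transcripts)
    machine_label = "A" if players["A"] is machine_player else "B"
    return guess == machine_label

def naive_judge(transcripts):
    # Toy heuristic: a player who gives the identical reply to every
    # question is probably the machine.
    for label, turns in transcripts.items():
        if len({reply for _, reply in turns}) == 1:
            return label
    return "A"

questions = ["Do you like music?", "What did you eat today?"]
print(run_session(naive_judge, questions))  # True
```

A machine "passes" when judges like this one can no longer do better than chance; the sketch only shows the mechanics of blinding the judge to who is who.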
Turing predicted that a machine would pass this test by the year 2000. However, developing AI turned out to be more complicated than he thought. The first machine claimed to have passed the test, in 2014, was a chatbot called Eugene Goostman, portrayed as a 13-year-old boy from Ukraine. Some researchers believe that Eugene's persona, a young, non-native English speaker, led the judges to forgive his grammatical mistakes and gaps in knowledge.
Should we fear AI?
Even though we are still far from general AI, many tech leaders have voiced concerns about potential AI threats. Elon Musk, head of Tesla and SpaceX, commented during an interview at MIT's AeroAstro Centennial Symposium that AI could be the greatest existential threat to the human race.
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”
Bill Gates also expressed his thoughts on the potential risk of AI in the future.
“First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern.”
Even the late Stephen Hawking, who depended on AI technology to communicate with the world, warned that "[superintelligent] AI could spell the end of the human race." His reasoning was that AI can improve at an exponential rate, while human evolution is much slower; eventually, AI would be smarter than humans and beyond our control.
Nevertheless, let's see what actual AI experts have to say on the topic. No one doubts that Elon Musk or Bill Gates is intelligent, but on a specialized subject such as AI, we should also listen to the experts. Yann LeCun, the director of Facebook AI Research, said:
“We would be baffled if we could build machines that would have the intelligence of a mouse in the near future, but we are far even from that.”
Andrew Ng, a professor at Stanford University and a leader in AI research, said something similar:
“I don’t work on preventing AI from turning evil for the same reason that I don’t work on combating overpopulation on the planet Mars.”
Steve Wozniak, the Apple co-founder, once warned that AI could "turn humans into their pets," but he is no longer scared of AI. His reasoning is that we still don't completely understand how the human brain works, so it is very challenging to build a machine that can think like us. As a result, we shouldn't waste our time worrying that a super AI will take over the world.
Judging from the opinions above, I think these tech leaders are not against AI itself, because AI has shown its potential in many applications that benefit society, for example, robotic assistants in operating rooms or systems that detect cancer cells in medical images. Because of these potential benefits, Musk co-founded OpenAI, a research company aiming to develop safe, ethical AI and to guard against the dangers of AI being misused.
I think we are still very far from general AI, so we should not worry about AI taking over the world any time soon. Nevertheless, we should be aware of AI threats that could emerge at some point in the future. The relevant organizations should start discussing a global framework to regulate AI development and usage, as well as creating actionable plans, so that we are ready when the time comes.
What is AI? Should we fear it? was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.