Do We Need To Regulate AI Before It Becomes A Danger To Humanity?

Around 10 years ago, the threat from artificial intelligence existed only in movies; Skynet from Terminator is just one example. We all knew that AI would eventually become a thing of the future, but no one could have guessed that the threat from AI would be a real concern in 2017.
Scientists are divided on whether AI will become a danger to humanity. Some believe it will and others don't, and each side has arguments to support its position.
But when someone nicknamed the real-life Iron Man compares working on AI to "summoning the demon," we should really get serious.

That someone is Elon Musk, and his opinion regarding AI is pretty well known at this point. The Tesla CEO believes that AI could become a danger to humanity if not regulated.
Musk suggests that governments start regulating AI as soon as possible.
“I have exposure to the very cutting edge AI, and I think people should be really concerned about it,” said Elon Musk.
“I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”

“AI is a rare case where we need to be proactive about regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.”
Many corporations, such as Google, Uber, and Microsoft, already use some form of AI. But that's not the kind of AI we should be worrying about. According to Musk, the real threat comes from artificial general intelligence. What is artificial general intelligence?
Well, to keep it short, artificial general intelligence is similar to what you see in sci-fi movies.
“With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out.”
However, many scientists disagree with Musk. They find his theory absurd, even amusing.
François Chollet, the creator of the deep learning library Keras, is more concerned about the ability to mask unethical human activities behind machine learning.
“Arguably the greatest threat is mass population control via message targeting and propaganda bot armies. [Machine learning is] not a requirement though,” said Chollet.
The threats posed by machine learning are far less extreme than what Musk is describing, which means we shouldn't worry so much about a Skynet scenario coming to life, but rather about real and immediate problems.
You can watch Musk's interview in the video below (the AI discussion starts at around 48 minutes):