In 2017, artificial intelligence attracted $12 billion in venture capital investment. We are only beginning to discover useful applications of AI. Recently Amazon introduced a retail store, if one can call it that, in which cashiers and cash registers have been replaced by computer vision, sensors, and deep learning. Even without the investment, the breathless press, and the radical innovation, “artificial intelligence” has long been on everyone’s lips. But does it actually exist?
Speaking at the World Economic Forum, Dr. Kai-Fu Lee, the Taiwanese venture capitalist and founding president of Google China, said: “I think every entrepreneur is trying to position his company as an AI company, and every venture capitalist would gladly say that he invests in AI.” But some AI bubbles may burst before the end of 2018, namely “startups that pitched an impossible story and attracted venture capitalists by deception, because the investors do not understand the technology.”
However, Dr. Lee firmly believes that AI will continue to progress and to take jobs away from people. So what distinguishes AI, with all its pros and cons, from overhyped stories?
If you look into a few stories about supposed AI, you will quickly discover a significant difference in how people define artificial intelligence: the line between emulating intelligence and applying machine learning is blurred.
Experts in the field of AI have struggled to reach a consensus, and the issue only raises more questions. For example, when is it important to be precise about the original definition of a term, and when should one stop refining that definition? It is not clear. The hype often gets in the way of clarification, and that hype is backed by $12 billion of invested money.
This conversation is necessary in part because world leaders have begun to publicly discuss the dangers that AI entails. Facebook CEO Mark Zuckerberg suggested that skeptics who try to “come up with doomsday scenarios” are highly irresponsible and gloomy. Elon Musk, the creator of OpenAI and owner of many other companies, shot back at Zuckerberg, saying that he does not fully understand the subject. In February, Musk said the same thing about Harvard professor Steven Pinker, writing that Pinker does not understand the difference between functional, narrow AI and general AI.
Considering the fears surrounding artificial intelligence among the wider public, it is important to understand the differences between the levels of AI in order to realistically assess its potential benefits and threats.
Smarter than man?
Eric Cambria, an expert in the field of natural language processing, believes that “today nobody is actually doing artificial intelligence, but everyone says they are, because it sounds cool. A few years ago the same thing happened with big data.”
Cambria notes that AI, as a term, originally referred to the emulation of human intelligence. “But today there is nothing that comes even close to being as smart as the dumbest person on Earth. So, strictly speaking, nobody is doing artificial intelligence, nobody is building it, if only because we do not know how the human brain works,” he says.
The term “AI” is often used to refer to powerful data-classification tools. These tools are impressive, but they operate in a completely different spectrum from human cognition. As Cambria also notes, people claim that neural networks are part of a new wave of AI. That is strange, because the technology has existed for fifty years.
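To see how old the underlying idea is, here is a minimal sketch of a single perceptron, the neural-network building block dating back to the late 1950s, trained on the logical AND function. The function names and training data are invented for this illustration; integer inputs and a learning rate of 1 are used so the arithmetic stays exact.

```python
# A single perceptron (Rosenblatt, late 1950s): a weighted sum, a threshold,
# and a simple error-driven update rule. Nothing here resembles cognition;
# it is a small piece of curve-fitting machinery.

def train_perceptron(samples, epochs=10, lr=1):
    """Learn weights w and bias b with the classic perceptron update rule."""
    w = [0, 0]
    b = 0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            predicted = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - predicted          # -1, 0, or +1
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Logical AND: output 1 only when both inputs are 1 (linearly separable,
# so the perceptron is guaranteed to converge).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predictions = [1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
               for (x1, x2), _ in data]
print(predictions)  # → [0, 0, 0, 1]
```

Today's deep networks stack many such units and train them on vastly more compute, but the basic mechanism is this same decades-old idea.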
First and foremost, what AI has gained is access to greater computing power. All these advances are welcome, but it would be wrong to assume that machines have learned to emulate the complexity of our cognitive processes.
“Companies simply use tricks to create behavior that looks like intelligence, but it is not real intelligence, only a reflection of it. Such an expert system can be very good in one particular area and very bad in others,” he says.
This mimicry of intelligence has captured the public imagination. Systems operating in narrow domains have breathed life into a wide range of startups. But has this helped rid the world of confusion? On the contrary.
Assisted, augmented, or autonomous
When it comes to scientific integrity, the question of exact definitions cannot be left on the sidelines. In a 1974 address at Caltech, Richard Feynman said: “The first principle is that you must not fool yourself, and you are the easiest person to fool.” And further: “You should not fool the layman when you are talking as a scientist.” He suggested that scientists must bend over backwards to think of ways in which they could be wrong. “If you are representing yourself as a scientist, then you should explain to the layman what you are doing, and if he decides not to support you under those circumstances, then that is his decision.”
In the case of AI, this could mean that professional scientists have an obligation to make clear that they are developing extremely powerful, controversial, profitable, and even dangerous tools that do not represent intelligence in any familiar or comprehensive sense.
The term “artificial intelligence” may be misleading, but attempts to clarify it are already under way. A recent PwC report draws a distinction between “assisted intelligence”, “augmented intelligence”, and “autonomous intelligence”. Assisted intelligence is exemplified by the GPS navigation programs that run in cars. Augmented intelligence “allows people and organizations to do what they otherwise could not”. Autonomous intelligence “enables machines to act on their own”, as in the case of self-driving cars.
Roman Yampolskiy, an AI safety researcher, believes that “intelligence (artificial or natural) continues to evolve, and with it the potential problems of this technology. We tend to think that AI will one day master the full range of human capabilities and become artificial general intelligence, and then a superintelligence. But today we mostly use narrow AI. The problem is not the terminology but the complexity of such systems, even at their current levels.”
Should people be afraid of AI? “As new capabilities continue to emerge, all sorts of associated problems will appear along with them,” says Yampolskiy. There will be more and more incidents involving AI.
According to Brian Decker, the founder of Encom Lab, machine-learning algorithms currently work solely to satisfy the needs of their programmers. “A marketer will tell you that a photodiode-controlled porch light has artificial intelligence because ‘it knows when it is dark outside’, while a good engineer will point out that not a single bit in the entire history of computing has ever changed unless that change was conceived in accordance with the logic of prior programming.”
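Decker's porch-light example can be made concrete with a few lines of code. This is a hypothetical sketch; the threshold value and function name are invented for the illustration, but it shows that the device's entire "knowledge" of darkness is one comparison a programmer wrote in advance.

```python
# The "smart" porch light reduced to its actual logic: a fixed threshold
# rule. Every behavior of the device was conceived by its programmer;
# nothing in it observes, learns, or decides anything on its own.

DARKNESS_THRESHOLD = 20  # photodiode reading below which we call it "dark"

def porch_light_on(photodiode_reading: int) -> bool:
    """Return True exactly when the sensor reading falls below the
    threshold chosen by the programmer."""
    return photodiode_reading < DARKNESS_THRESHOLD

print(porch_light_on(5))   # dusk-level reading  → True
print(porch_light_on(80))  # midday-level reading → False
```

Calling this "artificial intelligence" is the marketer's move Decker describes; the engineer sees a single hard-coded comparison.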