Can artificial intelligence destroy humanity by 2035?

British theoretical physicist Stephen Hawking believed that the creation of artificial intelligence (AI) would be "either the worst or the best event in human history." Back in 2016, the scientist advocated the creation of a scientific organization whose main task would be to study the prospects of artificial intelligence as "a critical issue for the future of our civilization and our species." Recently, crime researchers identified the top 18 AI threats that we should worry about over the next 15 years. While science fiction and popular culture depict the death of mankind at the hands of intelligent robots, the research shows that the main threat actually has more in common with us than meets the eye.

Deepfakes are the main threat posed by AI

Is AI a threat?

Today, artificial intelligence that poses a threat to humanity may seem like the stuff of science fiction writers and films such as "The Matrix" or "I, Robot". Admittedly, it is hard to imagine an almighty and terrifying AI when Siri cannot even give an accurate weather forecast. But Shane Johnson, director of the Dawes Centre for Future Crime at University College London (UCL), explains that potential threats will grow and become ever more complex, intertwined with our daily lives.

According to Johnson, quoted by Inverse, we live in an ever-changing world that creates new opportunities, both good and bad. This is why it is so important to anticipate future threats, including a rise in crime, so that policymakers and other stakeholders with the power to act can identify crimes before they happen. Yes, just like in the movie "Minority Report" with Tom Cruise in the lead role.

The authors of the work, published in the journal Crime Science, admit that their findings are inherently speculative and depend on the current political environment and the pace of technological development, but they argue that technology and policy will increasingly go hand in hand.


Still from the film "Minority Report"

Artificial intelligence and crime

To arrive at these futuristic findings, the researchers assembled 14 academics in related fields, seven experts from the private sector and 10 experts from the public sector. These 31 experts were divided into groups of four to six people and given a list of potential AI-enabled crimes, ranging from physical threats (such as autonomous drone attacks) to digital ones. To make their judgments, the teams rated each attack on four main criteria:

  • Harm
  • Criminal profit
  • Achievability
  • Defeatability

Harm, in this case, can be physical, mental or social. The study authors further note that these crimes can work either by defeating an AI system (for example, evading facial recognition) or by using AI as a tool (for example, blackmailing people with deepfake videos).

Although these factors cannot really be separated from one another, the experts were asked to weigh each criterion on its own. The teams' ratings were then aggregated to rank the most dangerous AI threats of the next 15 years.
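As a rough illustration of that aggregation step, here is a minimal Python sketch that ranks threats by averaging scores across the four criteria. The threat names, the numeric scores and the unweighted average are illustrative assumptions, not the study's actual data or method.

```python
# Hypothetical sketch: ranking AI-enabled threats by averaging expert scores
# across the four criteria. All names and numbers below are made up.

threats = {
    "deepfake audio/video": {"harm": 5, "criminal_profit": 4, "achievability": 5, "defeatability": 5},
    "driverless vehicle as a weapon": {"harm": 5, "criminal_profit": 2, "achievability": 3, "defeatability": 4},
    "burglar bots": {"harm": 2, "criminal_profit": 2, "achievability": 2, "defeatability": 1},
}

def overall_score(scores: dict) -> float:
    # Simple unweighted mean; the real expert panels judged each criterion qualitatively.
    return sum(scores.values()) / len(scores)

# Sort from most to least dangerous under this toy scoring.
ranked = sorted(threats.items(), key=lambda item: overall_score(item[1]), reverse=True)

for name, scores in ranked:
    print(f"{name}: {overall_score(scores):.2f}")
```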

Diagram of AI threats

Unlike a robotic threat that can cause physical harm or damage property, deepfakes can rob us of our trust in people and in society itself. Assessing the threats against the criteria above, the research team determined that deepfakes, a technology that already exists and is spreading, represent the highest level of threat.

It is important to understand that the threats posed by AI will undoubtedly be a force to be reckoned with in the coming years.

Comparing 18 different types of AI threats, the group determined that deepfake video and audio manipulation was the biggest threat.

"People have a strong tendency to believe their own eyes and ears, so audio and video evidence has traditionally been given a lot of credibility (and often legal force), despite a long history of photographic deception," the authors explain. "But recent developments in deep learning (including deepfakes) have greatly expanded the possibilities for generating fake content."


The study authors believe the potential impact of these manipulations ranges from scammers impersonating family members to defraud older people, to videos designed to sow distrust in public and government figures. They also add that such attacks are difficult for individuals (and, in some cases, even for experts) to detect, which makes them hard to stop. Changes in the behavior of citizens may therefore be the only effective defense.

Deepfakes are the main threat posed by AI

Other top threats included driverless vehicles used as remote weapons, much like the vehicle-ramming terrorist attacks we have seen in recent years, and AI-generated fake news. Interestingly, the group rated burglar bots (small robots that can slip through small openings to steal keys or let burglars in) as one of the least serious threats.


Are we doomed?

No, but we have some work to do. Popular imagery of the AI threat suggests there will be one red button we can press to make all the nefarious robots and computers stop. In reality, the threat lies not so much in the robots themselves as in the way we use them to manipulate and harm each other. Understanding this potential harm and getting ahead of it, through information literacy and community building, can be a powerful defense against this more realistic robot apocalypse.
