Could humans suffer the same fate as the dinosaurs?
While there’s still a lot of uncertainty around the risk posed by artificial intelligence, some experts are concerned that AI could trigger humankind’s downfall.
But even more alarming: scientists believe there’s a 50% chance AI will outperform humans in the next 20 years or so.
According to a survey of 2,778 AI researchers, a majority of the scientists (58%) believe there is a 5% chance of human extinction or other catastrophic outcomes from AI.
“It’s an important signal that most AI researchers don’t find it strongly implausible that advanced AI destroys humanity,” paper co-author Katja Grace, of the Machine Intelligence Research Institute in Berkeley, California, told New Scientist. “I think this general belief in a non-minuscule risk is much more telling than the exact percentage risk.”
Respondents put the tech’s chances of successfully completing certain sample tasks within a decade, such as writing songs of the caliber of Taylor Swift compositions or coding a payment processing site from scratch, at about 50% or higher.
However, more complicated tasks, such as installing electrical wiring or solving mathematical mysteries, are predicted to take longer to achieve, but could still happen in our lifetime.
The odds of AI outperforming humans on every task by 2047 were estimated at 50%, as were the odds of all human jobs becoming fully automated by 2116. Both estimates are earlier than those given in a 2022 version of the same survey.
But even though the estimates have shifted against us, scientists said it’s not exactly cause to ring the alarm just yet.
The survey, the largest of its kind to date, asked the researchers for their opinions on the timeline for future AI advancements and milestones, as well as on the technology’s societal consequences, good and bad.
The authors gave the caveat that AI researchers are not experts in artificial intelligence forecasting.
Plus, these kinds of surveys “don’t have a good track record” for predicting AI achievements, so there’s no need for immediate worry, Émile Torres of Cleveland’s Case Western Reserve University told the outlet.
In fact, previous research has shown that long-run predictions from AI experts were no more accurate than those from the general public.
“A lot of these breakthroughs are pretty unpredictable. And it’s entirely possible that the field of AI goes through another winter,” Torres said.
Meanwhile, the researchers did have more immediate worries regarding AI.
Of those surveyed, 70% or more expressed substantial or extreme concern over scenarios such as deepfakes, manipulation of public opinion, engineered weapons, authoritarian control, deepening economic inequality, the spread of disinformation and the erosion of democratic governance.
“We already have the technology, here and now, that could seriously undermine [the US] democracy,” says Torres. “We’ll see what happens in the 2024 election.”