Artificial intelligence threatens humanity with “extinction”, warn experts and entrepreneurs in this booming sector, who are calling for greater awareness. But is this disaster scenario, still seemingly distant, credible?
Paperclips of death
The nightmare, inspired by countless sci-fi movies, begins when the machines see their abilities surpass those of humans and spin out of control.
“As soon as we have machines trying to survive, we will have problems”, recently asserted Canadian researcher Yoshua Bengio, one of the fathers of machine learning.
According to a variant imagined by the Swedish philosopher Nick Bostrom, the decisive moment will come when machines know how to make other machines themselves, causing an “explosion of intelligence”.
According to his “paperclip” thought experiment, if an AI were given the ultimate goal of maximizing the production of this stationery item, it would end up covering “first the Earth, then ever larger portions of the Universe, with paperclips”, he illustrates.
Nick Bostrom is a controversial figure, having asserted that humanity could be living in a computer simulation and having supported theories close to eugenics. He also had to apologize recently for a racist message he sent in the 1990s, which had resurfaced.
Yet his views on the dangers of AI remain highly influential, and inspired both Elon Musk, the billionaire boss of Tesla and SpaceX, and physicist Stephen Hawking, who died in 2018.
Terminator
The image of the red-eyed cyborg from Terminator, sent from the future by an AI to put an end to all human resistance, particularly marked the collective unconscious.
But according to experts from the “Stop Killer Robots” campaign, writing in a 2021 report, this is not the form autonomous weapons will take in the years to come.
“Artificial intelligence will not give machines the desire to kill humans,” reassures robotics specialist Kerstin Dautenhahn, from the University of Waterloo in Canada, interviewed by AFP.
“Robots aren’t evil,” she asserts, while conceding that their developers could program them to do harm.
Chemical Weapons
A less obvious scenario involves using artificial intelligence to create toxins or new viruses, with the aim of spreading them around the world.
A group of scientists who used AI to discover new drugs conducted an experiment in which they modified it to look for harmful molecules instead.
In less than six hours, they managed to generate 40,000 potentially toxic agents, according to an article in the journal Nature Machine Intelligence.
With these technologies, someone could eventually find a way to spread a poison such as anthrax more quickly, said Joanna Bryson, an AI expert at the Hertie School in Berlin.
“But it’s not an existential threat, just a terrible weapon,” she told AFP.
An obsolete species
In apocalypse films, disaster happens suddenly and everywhere at once. But what if humanity gradually disappeared, replaced by machines?
“In the worst case, our species could go extinct without a successor”, anticipates the philosopher Huw Price in a promotional video for the Centre for the Study of Existential Risk at the University of Cambridge.
There are, however, “less bleak possibilities”, where humans augmented with advanced technology could survive. “The purely biological species then ends up going extinct,” he continues.
In 2014, Stephen Hawking argued that our species would one day no longer be able to compete with machines, telling the BBC that this could “spell the end of the human race”.
Geoffrey Hinton, a researcher who has spent his career trying to build machines resembling the human brain, most recently at Google, has spoken in similar terms of “superintelligences” superior to humans.
On the American channel PBS, he recently affirmed that it was possible that “humanity is only a passing phase in the evolution of intelligence”.