I am one of those who are both fascinated and troubled by the rapid development and potential of artificial intelligence. Troubled enough to wonder whether we are collectively ready for such rapid progress.
What uses in the nuclear and military spheres?
You may have read, on Tuesday or Wednesday morning, the reflections of an AI pioneer, Geoffrey Hinton. He is leaving Google and says he regrets his contribution to the field.
The researcher's words are not very reassuring: "future versions of this technology could be a risk for humanity".
As it happens, Hinton's regrets crossed paths with my reading about the concerns of many authorities, several of them American, over the development of nuclear power and its military applications. Is AI a good or a bad thing on these questions?
As in the vast majority of industries, AI offers potential for research and analysis that can be beneficial. If you are passionate about this subject, I am sharing here an article from the International Atomic Energy Agency.
While there is no doubt that AI can have a beneficial effect in meeting the challenges of the 21st century, I believe that it is urgent to pause and reflect on the roles and responsibilities that we will delegate to AI.
Because tensions between Ukraine and Russia are not easing, and because nuclear powers fairly regularly raise the prospect of using a particularly destructive weapon, analysts and legislators are wondering what place AI should have in the deployment of a military strategy.
Is it conceivable that, after an analysis and an advanced risk calculation, an AI would recommend the use of nuclear weapons? Could leaders be tempted to leave it a choice that, for a human, involves a good deal of stress and anguish?
You may be thinking that we are not there yet, but the Hoover Institution, attached to Stanford University, has already run a number of scenarios: simulations in which the American authorities, cut off from normal communications, had to defer to artificial intelligence.
A constitutional vacuum
You might think to yourself that an American president would never delegate his commander-in-chief authority to AI.
And yet, no current law would prohibit a president and a cabinet torn between hawks and doves from leaving the ultimate decision to AI.
We sometimes react very late to the negative impacts of an invention. In the case of AI, we should do everything possible to avoid such mistakes.
You may be reassured to learn that American lawmakers want to introduce a bill. Rare in 2023, it is bipartisan, and it is called the Block Nuclear Launch by Autonomous Artificial Intelligence Act.
It only remains for us to hope that it does not get lost in the maze of the American political system.