“Hello, it’s Rémy Buisine… or rather, Rémy Buisine’s avatar.” Facing the camera against a gray background, a journalist from the online media outlet Brut delivers an Agence France-Presse report on the latest strike against the pension reform. His tone is monotone and, if you look closely, he blinks very rarely. But apart from these few details, the whole thing is credible, if not thrilling.
The image, like the sound, is nevertheless entirely synthetic. Produced with artificial intelligence (AI) tools and with the consent of the “real” Rémy Buisine, a thirty-something journalist, the video was posted on social media on January 17, 2023. An eternity, in technological terms. “Today it’s much less shocking: we get used to one innovation, then it is eclipsed by another, more impressive one,” says Nicolas Nerrant, deputy editor-in-chief in charge of Brut IA.
Videos translated by AI
As the year draws to a close, say goodbye to fake presenters and make way for voice cloning for translation purposes. “It was unimaginable six months ago, but we can now easily translate a video into another language while retaining the journalist’s voice, and without any lag in the movement of his lips!” enthuses Nicolas Nerrant. A boon for a media outlet as internationally oriented as Brut, especially since social networks are hardly fond, far from it, of short videos that are visibly dubbed.
Brut is one of the first French media outlets to have taken up AI, both to showcase the technology’s progress and to “facilitate the work” of its staff. But its co-founder Laurent Lucas insists on the red lines: no content generated with the help of AI may be published without having been checked by a journalist. And while Brut does not rule out producing generic images for illustration purposes, it does not generate any “fake” news images that could be confused with reality (a speech, an accident, etc.).
Newsrooms set their limits
Drawing such red lines is at the heart of the thinking of many French media outlets, which are venturing cautiously into this area. In the spring, the Les Échos-Le Parisien group was the first to publish a charter on the subject; such charters are now multiplying.
Le Monde, which is expected to adopt its own in January, has already been using the DeepL software for years to translate around forty articles into English per day. The evening daily will soon experiment with another tool, this time to automatically “clean” agency dispatches of their clunkiest phrasing and bring them into line with Le Monde’s typographic conventions.
“On generative AI, on the other hand, there was no need for debate: it was a no from the start, for us as for the management,” says journalist Raphaëlle Bacqué, president of the editors’ society. Apart from rare exceptions, such as generating images of black holes or distant planets for the astronomy section, Le Monde prohibits any use of these technologies to produce content, starting with articles.
For now, this refusal is widely shared in France, and has no doubt been fueled by some resounding failures abroad in recent months. The American site BuzzFeed, for example, drew harsh criticism after it began generating articles with AI in January: the texts were deemed clumsy and off-putting.
Added to this is the “discomfort” some journalists feel at delegating part of their work to software. “My readers don’t pay for a machine to write in my place,” as some have already said.
Don’t get left behind by the technology
However, some would like to see this “taboo” fall. “I advise journalists to take hold of these tools: that is how they will avoid being replaced! Otherwise, they risk being outdone by those who have taken the plunge,” argues Benoît Raphaël, founder of the media-monitoring application Flint.
When asked how a system like ChatGPT can be useful to the profession, this journalist launches into a long list: finding original angles on a current topic, drafting a set of questions for an interview, identifying counter-arguments, changing the style or register… and even writing a first draft of an article after feeding your notes to the software.
“These technologies save time on preparatory work, but they can waste time on the writing itself,” concedes Benoît Raphaël. “Because they make mistakes.”
“By delegating the choice of words to an algorithm, we risk automating certain biases,” warns Arthur Grimonpont, AI project manager at the organization Reporters Without Borders (RSF). “But a single word is enough to convey an intention! If you compare crime to a ‘ferocious beast,’ for example, you elicit a coercive response; if you call it an ‘epidemic,’ you elicit a preventive, social response.”
At L’Est républicain, automation that doesn’t go down well
“We cannot let the wolf into the fold,” insists Éric Barbier, a journalist at L’Est républicain. A delegate of the National Union of Journalists (SNJ), he opposes the experiment recently launched at the Nancy-based regional daily with its copy editors (the “SRs,” journalists responsible for editing and laying out articles).
“With summarization assistance and automated proofreading, AI is supposed to save the SRs time: but they will lose it, because if the first result is unsatisfactory, they will have to repeat the operation!” says Éric Barbier. “Moreover, the more we ask an AI to generate texts, the more the raw material provided by journalists becomes ‘diluted’ in the vastness of the Internet.”
In his eyes, there is therefore a real risk of a decline in the quality of the information provided to readers, not to mention the harmful effects on the SRs, who would be reduced to mere “AI assistants.” The journalists of L’Est républicain, pioneers in these debates and backed by a strong union tradition, are keen to position themselves as “whistleblowers” for the entire profession.
62% of French people would not trust a media that uses AI
A media outlet that used artificial intelligence to write its articles would arouse the distrust of 62% of French people, according to the latest barometer of trust in the media. That distrust drops slightly (to 59%) when the task delegated to the AI is preparing the writing of an article, or choosing the subjects to cover.