In November 2022, a digital earthquake shook the media world: ChatGPT, a generative artificial intelligence whose version 4, more “creative and collaborative” than ever though still offline, was presented on Tuesday, March 14. Very quickly, journalists and citizens around the world tested the conversational agent designed by the American company OpenAI, to probe its abilities, its limits, and the potential dangers it poses.
When it comes to news, however, artificial intelligence did not wait for this newcomer to show off its powers. For several years now, driven by the major digital platforms, it has been profoundly changing the way we follow the news.
Recommendation algorithms, already masters of how information circulates
More insidious than the now-famous ChatGPT, recommendation algorithms decide which information reaches us and which escapes us. Each platform has its own and guards its secrets, but they all operate on the same principle: “Exploit data to capture our attention and capitalize on it,” explains Asma Mhalla, a specialist in the political and geopolitical stakes of tech.
At a time when 36% of French people (52% of those under 35) say they get their news from social networks every day, according to the latest barometer of trust in the media published by La Croix, their impact on how information is disseminated and consumed is indisputable. “Each algorithm is designed to make us spend as much time as possible on the platform it serves. It takes into account neither the veracity of the messages nor the reliability of the source,” says engineer Arthur Grimonpont, author of Algocracy: Living Free in the Age of Algorithms (Actes Sud, 2022).
Their main criterion: the viral potential of each new piece of content. “Unfortunately, in this race for attention, fair and nuanced information is often at a disadvantage against emotionally charged content,” underlines Bruno Patino, president of Arte France and professor at the Sciences Po Paris School of Journalism.
Especially since digital platforms can offer us ultra-personalized content. “Without realizing it, we produce a great deal of data that helps them: we spend more or less time on a video or an article, we like, we share, we comment,” explains Asma Mhalla. The algorithms thus lock us “in a digital microcosm,” adds Arthur Grimonpont.
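To make the mechanism concrete, here is a deliberately simplified sketch, in Python, of the engagement-driven ranking the experts describe. The signals and weights are invented for illustration; no platform publishes its real formula. Note that the reliability of the source appears nowhere in the score.

    from dataclasses import dataclass

    @dataclass
    class Post:
        watch_seconds: float       # time users spend on the content
        likes: int
        shares: int
        comments: int
        source_reliability: float  # known to fact-checkers, but deliberately unused below

    def engagement_score(post: Post) -> float:
        # Shares and comments, the most "viral" signals, weigh the most;
        # whether the post is true or false plays no role at all.
        return (0.5 * post.watch_seconds
                + 1.0 * post.likes
                + 3.0 * post.shares
                + 2.0 * post.comments)

    posts = [
        Post(40, 12, 1, 3, source_reliability=0.9),    # sober, well-sourced article
        Post(55, 90, 60, 45, source_reliability=0.2),  # outrage-bait rumor
    ]
    feed = sorted(posts, key=engagement_score, reverse=True)  # the rumor ranks first

Under such a scoring rule, the emotionally charged rumor systematically outranks the sober article, which is exactly the disadvantage Bruno Patino describes.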
These are the “filter bubbles” that the researcher accuses of contributing to grave events, such as the storming of the US Capitol in 2021 or the invasion of the seats of power in Brasília by several thousand supporters of Jair Bolsonaro in early 2023. “Filter bubbles do not create distrust, they amplify it,” qualifies Asma Mhalla, “by serving only partial and biased information, and by creating communities.” In other words, the problem is less being exposed to certain content than not being exposed to other content.
Those who produce information cannot ignore this revolution in usage, insists Bruno Patino: “Training journalists in the use and understanding of these new technologies is essential.” Artificial intelligence has, moreover, already established itself in their work. “We use it to collect and organize election or sports results, to translate, but also to check or sort data for an investigation,” the journalist explains.
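As an illustration of that last use, here is a minimal sketch of the sort of routine check a newsroom might script on election night: sorting constituency results and flagging implausible rows for a human to verify. The districts, figures and field names are all invented.

    # Hypothetical election-night results; a real newsroom would load these
    # from an official feed or a spreadsheet.
    results = [
        {"district": "Nord-1", "votes_cast": 41205, "registered": 68340},
        {"district": "Sud-3", "votes_cast": 75010, "registered": 70512},
        {"district": "Est-2", "votes_cast": 38977, "registered": 59184},
    ]

    # Sort by turnout and flag anything a journalist should double-check.
    for row in sorted(results, key=lambda r: r["votes_cast"] / r["registered"], reverse=True):
        turnout = row["votes_cast"] / row["registered"]
        flag = "  <- verify: turnout above 100%" if turnout > 1 else ""
        print(f"{row['district']}: {turnout:.1%} turnout{flag}")

The machine sorts and flags; deciding what the anomaly means remains the journalist’s job.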
A new era of information production, and disinformation
If artificial intelligence has revolutionized how information is disseminated and ranked in recent years, ChatGPT opens a new era: “That of production,” says Bruno Patino. “It is dizzying, both in its possibilities and in its dangers.” Because one of the talents of this new robotic darling is imitating human language, along with the equally human capacity to be wrong.
An ability which, according to Asma Mhalla, could make this type of artificial intelligence “a major disinformation tool, capable of producing not only a text but also an avatar, a video, a voice, and of combining them into highly convincing content,” the researcher explains. “It can become a powerful weapon in a context of information warfare.”
At the same time, these high-performing technologies should “quickly become tools for news organizations, which could delegate repetitive, low-value-added tasks to them, such as writing briefs or knocking wire dispatches into shape,” estimates Nicolas Gaudemet, an artificial intelligence specialist at the consulting firm Onepoint.
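As a rough illustration of how a newsroom might delegate such a task, here is a minimal sketch assuming an OpenAI API key and the official openai Python package; the model name and the prompt are illustrative, and the output is a draft for human review, not copy ready to publish.

    from openai import OpenAI

    client = OpenAI()  # reads the key from the OPENAI_API_KEY environment variable

    dispatch = "..."  # full text of an incoming wire dispatch

    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a copy editor. Condense the wire dispatch into "
                        "a three-sentence brief. Add nothing that is not in it."},
            {"role": "user", "content": dispatch},
        ],
    )
    draft_brief = response.choices[0].message.content  # to be reviewed by a journalist

Even in this optimistic scenario, the model drafts and the human decides.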
Former media executive Antoine de Tarlé, author of The End of Journalism? (Les Éditions de l’Atelier, 2019), even worries about “seeing the emergence of news sites whose content is almost entirely produced by generative artificial intelligence. Veritable taps of efficient, polished text.” With significant economic consequences for the sector: “By meeting the algorithms’ requirements, such sites would be able to capture a substantial audience at lower cost.”
Nicolas Gaudemet, however, tempers the fear of seeing the production of information handed over to computer programs: “Artificial intelligence, however efficient it may be, will not replace the fieldwork of a reporter, the stance of a columnist, the eye of a press photographer or the analysis of an investigative journalist.”
It is up to editorial teams to devote the time and resources freed up by artificial intelligence to this higher-value content, adds the consultant, who advises several media outlets. “We have a responsibility to offer quality content that cannot be confused with AI output, and there is an audience for it,” confirms Bruno Patino.
Technological speed versus legal sluggishness, the challenge of regulation
Disinformation, plagiarism, protection of personal data, platform accountability… The number and scale of legal problems linked to AI keep growing, and lawmakers are struggling to keep pace. They do, however, have tools for punishing the bad-faith production and distribution of information (see the accompanying sidebar). But this is where the shoe pinches: the line between malicious disinformation and clumsy misinformation, often tenuous, must be respected, or freedom of expression risks being curbed.
“There is no legal vacuum,” stresses Samir Merabet, professor of private law at the Université des Antilles. “But the number of potential vectors of fake news is such that control becomes impossible. And what is the point of tracking down the account that published a fake news item if it has already spread like wildfire? The challenge now is to stem their proliferation.”
To achieve this, the Council of the European Union has adopted a regulation on digital services, the Digital Services Act, applicable from 2023 for very large platforms. Among other things, it requires platforms to distinguish between content the user has chosen to subscribe to and content the platform recommends. This feature, which relies on user empowerment, is already available on TikTok, Twitter and Instagram. “It is a way out of the filter bubbles. Yet we do not use it, because our recommendation feed is extremely comfortable,” analyzes Samir Merabet.
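In code terms, the distinction the regulation requires is simple to picture. Here is a minimal sketch of a feed with such a toggle; the data model is invented for illustration.

    # Hypothetical feed items: some from accounts the user follows,
    # others injected by the recommendation engine.
    feed = [
        {"author": "friend_a", "recommended": False},
        {"author": "viral_page", "recommended": True},
        {"author": "newspaper_x", "recommended": False},
    ]

    def visible_items(feed, following_only: bool):
        # The DSA-style toggle: when enabled, recommended items are filtered
        # out, leaving only content the user explicitly subscribed to.
        if following_only:
            return [item for item in feed if not item["recommended"]]
        return feed

    print(visible_items(feed, following_only=True))  # subscribed content only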
The same regulation also requires platforms to disclose the algorithms behind their interfaces to the European Commission and the competent national authorities. For Imane Bello, a lawyer at the Paris Bar specializing in the ethics of artificial intelligence, this transparency obligation must now extend to generative AI. “What matters is that there can be no confusion with human content, thanks to an explicit notice,” the lawyer submits. A necessity the European Union had not yet taken into account when drafting its regulation on artificial intelligence, the AI Act, which is still a work in progress. The text intends to make platforms more accountable, notably by banning certain practices it deems “unacceptable,” such as “the unconscious manipulation of behavior.” But the new technical possibilities in text, video and voice generation should prompt fresh thinking, and no doubt significant amendments.
Around intellectual property, in particular. For we must not forget that ChatGPT, agile as it is, invents nothing. To produce coherent discourse, it must draw on the data it was trained on, without always citing it and without ever paying its authors. “We could conceivably imagine a system for excluding certain websites, such as a newspaper’s or a magazine’s, from the database of these generative AIs,” suggests Samir Merabet. “But then we run the risk of them relying only on dubious sources…”
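One plausible mechanism for such an opt-out already exists for ordinary search engines: the long-standing robots.txt convention, which crawlers may honor when collecting pages. The sketch below shows how a training-data crawler could check it before ingesting a page; the “AICrawler” user-agent and the URLs are invented, and the scheme only works if crawlers cooperate.

    from urllib.robotparser import RobotFileParser

    # Hypothetical user-agent of a crawler gathering AI training data.
    AGENT = "AICrawler"

    # A news site wishing to opt out would publish, in its robots.txt:
    #   User-agent: AICrawler
    #   Disallow: /
    def may_collect(page_url: str, robots_url: str) -> bool:
        parser = RobotFileParser()
        parser.set_url(robots_url)
        parser.read()  # fetch and parse the site's robots.txt
        return parser.can_fetch(AGENT, page_url)

    if not may_collect("https://newspaper.example/article",
                       "https://newspaper.example/robots.txt"):
        print("Site opted out: page excluded from the training corpus.")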
There remains the delicate question of liability in the event of disinformation. As the law currently stands, the creator of an AI can be held liable only if it is proven that he pursued a malicious aim, and the platforms only if they have knowledge of illegal content. “The European Union is actively working to resolve these problems,” reassures Samir Merabet. Asma Mhalla, for her part, invites us not to rely on a coercive system alone. “The law is as essential as it is insufficient,” says the researcher. “The startling arrival of ChatGPT set off a wave of panic, but we will learn to swim in these troubled waters and, little by little, to live with it.”