Artificial intelligence, human intelligence: the double enigma
by Daniel Andler
Gallimard, 432 p., 25 €
Like electricity before it, artificial intelligence resembles a “universal technology”, gradually penetrating almost every sphere of activity. Like the automobile, its arrival threatens to shape a world in which, one day, we will no longer be able to do without it. But here is the difference: neither electricity nor the automobile ever claimed to measure itself against humanity, to equal its intelligence, let alone surpass it.
Achieving an AI that is intelligent in the human sense is, however, the goal of most researchers in the field, who have never abandoned the ambition of the founders of the 1950s. And after half a century of scaled-back expectations, the recent successes of “deep learning” seem to revive this prospect.
In this philosophical essay, Daniel Andler tackles the problem head-on. One might expect this professor emeritus at Sorbonne University to lean toward a skepticism verging on “ChatGPT bashing”: this statistical text-generating machine does not “know” what it is “talking about”; it merely regurgitates what it finds on the Internet; it cannot handle cases too far removed from its training data…
Aware of the limits of systems that sometimes produce aberrant results, the author nevertheless argues that failing to take the measure of their success is “to risk missing out on a technoscientific event of primary importance”, and to be left trying, too late, to contain serious damage. Starting with the stranglehold of large private companies on “crucial decisions for society”.
Daniel Andler therefore embarks on an in-depth, sometimes arduous exploration of the many properties of intelligence and of the seven decades of AI's history. While he is betting on the automation of “the greatest possible number of cognitive functions”, he concludes that a “Promethean” AI is impossible to achieve. Such an AI can solve problems, but cannot deal with situations. Its skills, moreover, rest on induction, in other words on the ability to predict the future from observation of the past.
These systems are therefore not genuinely “intelligent”. But are they “autonomous”? That, again, is the ambition of their designers. An ambition that Daniel Andler considers as dangerous as it is useless: what we need are docile tools, he writes, “not pseudo-people equipped with an inhuman form of cognition”.