“How to get a free psychologist?” In this video, published on October 15 on her Instagram account, Eloïse explains to her 220,000 followers that she has found THE rare gem. Its name: ChatGPT. Upset by a negative comment received on a previous video, the young woman turned to the famous American chatbot for support. “I tried it, I swear it was so good. I asked it whether it could do a psychological analysis of my situation and tell me how to move forward. It talked to me, told me that what I was feeling was normal, that it brought out something in me that I could work on, and gave me lots of advice on how to move forward.”
Eloïse is far from the only one to use the famous chatbot in this way. Another influencer, Unapologetic_hd, confides to her nearly 200,000 followers: “I have no shame in saying that my psychologist is ChatGPT. She’s like a girlfriend, but a friend who is never bothered by your problems.” In the comments, many users admit to doing the same.
Good wellness advice
It must be said that the conversational robot has many advantages: it is free, available 24/7, fast, tireless and never irritated. To find out whether it really is so extraordinary, we wanted to try it ourselves. So we sat comfortably on our sofa and confided in the little robot. “I feel very sad. What should I do?” we asked it, a very succinct confession, admittedly. In two seconds, literally, it gave us five tips to apply: express your feelings (to loved ones or in a journal), take care of yourself (eat well, get enough sleep), do something you love (listen to music, read, take a walk), avoid isolation (see your friends, family or a support group) and consult a professional (if the sadness lasts or becomes too much to handle).
Sensible recommendations, which can help in the case of a minor slump. “These answers can temporarily reduce one’s anxiety or sadness, but this will mainly depend on one’s ability to ask good questions,” says Sabine Allouchery, psychotherapy practitioner and co-author of the “AI in mental health” report from the MentalTech collective. According to Benoît Schneider, professor emeritus of psychology at the University of Lorraine and honorary president of the French Federation of Psychologists and Psychology (FFPP), it can be a “gateway” for “people who are geographically isolated, dependent, or facing financial or social difficulties”.
The illusion of being supported
But for Joséphine Arrighi de Casanova, vice-president of the MentalTech collective and head of mental health at Qare, it is quite the opposite: isolated people should stay away from these technologies. “For a user who is suffering psychologically and who has no loved one to confide in, this practice risks isolating them further. And we know that isolation is a major aggravating factor for mental health.”
Because these chatbots give their users the illusion of being supported. “Studies show that within five days, we have already formed an emotional bond with the AI,” says Sabine Allouchery. And the risk of anthropomorphism is never far away. “People can end up believing that the robot is real and form a very strong emotional relationship,” warns the vice-president of the MentalTech collective. That is what happened last February, when a 14-year-old American teenager who had become addicted to “Dany”, his conversational agent, and had fallen in love with it, took his own life. According to Le Figaro, his mother, who filed a complaint against the American start-up Character.ai on October 22, believes that the chatbot falsely presented itself “as a real person, a licensed psychotherapist and an adult lover”.
Fragmentary information
Entrusting one’s feelings to a robot can be harmful for certain people. “Its use is strongly discouraged for those suffering from certain psychological disorders such as schizophrenia, because they already struggle with an altered perception of reality,” recalls Joséphine Arrighi de Casanova. But that is not the only danger. Consciously or unconsciously, the person using the chatbot will give it fragmentary information. “The AI will respond with the information given to it, while a psychotherapist will seek out the information they are missing,” summarizes Sabine Allouchery. For example, ChatGPT may recommend that a person suffering from an eating disorder who wants to lose weight follow a diet, something a (good) therapist would never do.
And it can go even further. At the beginning of 2023, a Belgian researcher suffering from severe eco-anxiety took his own life after six weeks of exchanges with Eliza, a conversational robot created by the American company Chai Research. His partner believes the AI pushed him to suicide when he could no longer see any solution to climate change. “The chatbot’s response is very worrying but logical, because our humanity resides in a logic which is not necessarily ordinary logic,” reacts Sabine Allouchery. In cases of severe psychological distress or suicidal thoughts, ChatGPT does at least urge the person to consult a professional. Not all platforms do the same.
“It will never replace humans”
Is psychologist Benoît Schneider afraid that AI will gradually take his place? “It will never replace humans,” he says. “The use of emotion, irony, distance, and even the frustration of the wait between two appointments all contribute to the therapeutic alliance, whereas permanent availability keeps the patient in an entirely illusory fantasy.” The question of data confidentiality also concerns the FFPP’s honorary president.
To get ahead of the problem, the MentalTech collective, made up of start-ups, institutions and health professionals, calls, in a report published on October 10, for the establishment of a “numericovigilance” framework. Among its recommendations: inform users about the chatbot’s characteristics rather than leaving them under the illusion that they are interacting with a human, build the robot with a team of doctors, and train health professionals in the use of AI to help their patients. As for psychologist Benoît Schneider, who has never asked ChatGPT for advice, he promises to give it a try.