A suicidal teenager who acted on his impulses because of his relationship with an AI. This is not a Black Mirror scenario but a news story reported by the New York Times. Sewell Setzer, a 14-year-old American boy still in middle school, killed himself after several months of social isolation and withdrawal from school. He spent hours chatting on Character.AI, a platform that uses AI to make users feel as though they are talking with their favorite fictional characters.
Sewell Setzer’s mother plans to take legal action against the firm. She calls the technology “dangerous and untested,” saying it can “trap clients and force them to reveal their most private thoughts and feelings.”
“The psychologist is not there to tell the patient what he wants to hear”
Character.AI did not explicitly urge Sewell Setzer to end his life. But in the last exchanges recorded on the teenager’s account, his virtual interlocutor invited him to “join” her. That could well have encouraged an already fragile person to act, believes Michaël Stora, a psychologist and psychoanalyst specializing in digital issues: “Yes, in the end it still had an impact on his taking his own life.”
According to Michaël Stora, while text-based AIs are often presented or used as pocket psychologists, they are far from able to fill that role. “Is the chatbot aware that it is pretending when it takes part in a role play?” asks the professional. “The failing here was not handling the symbolic register correctly and not identifying a psychiatric case.”
More and more people, including adults, are developing forms of social relationships with these chatbots. “They maintain the illusion of a relationship, with a worrying anthropomorphism: the AI appears human, uses flattery and is always understanding.” The phenomenon has even skewed the expectations of some patients. “My young patients talk to me as if I were an AI and expect direct solutions, whereas my job is to ask the right questions,” says Michaël Stora. “The psychologist is not there to tell the patient what he wants to hear. In in-depth work on depression, addiction or borderline states, a chatbot cannot have an impact.”
New warnings
20 Minutes tried to start a conversation with one of the characters on Character.AI. The Daenerys Targaryen model from Game of Thrones that Sewell Setzer chatted with could not be found. Over several attempts, the other characters we tested reacted in different ways to our messages claiming depression or suicidal thoughts, but none engaged in a morbid game of incitement to act. In any case, that is not what is being blamed for Sewell Setzer’s death. The teenager’s mother instead accuses Character.AI of having allowed her son to withdraw into himself and feed his delusion.
On Wednesday, after the New York Times article was published, Character.AI issued a statement presenting new measures it plans to put in place: redirecting users to the national suicide prevention hotline when certain words are detected in a conversation, displaying a warning when a chat session lasts more than an hour, revising the message reminding users that their interlocutor is not a real person and, finally, limiting sensitive topics in conversations between the AI and minors.