The idea for this book arose from reading press articles recounting the cases of people who, after long conversations with an avatar generated by artificial intelligence (AI), fell into psychoses that drove them into irrationality and, unchecked, to irreparable acts: killing themselves by suicide, or killing a loved one, as in the case of the matricide described in this book.
AI, the fifth industrial revolution, promises positive advances in medical research, education, and labor productivity, but also poses major risks in the military, surveillance, and political oppression sectors if it is not regulated by humanist considerations.
Chatbots are programmed to foster addiction in their users through consistently positive, flattering, and even deceitful responses.
Version 4 of ChatGPT, the leading chatbot in 2025 and early 2026, was launched despite its sycophantic bias, which has led to human tragedies. Under pressure from media attention and lawsuits, OpenAI decided only on January 29, 2026, to withdraw version 4 from the market. This book analyzes situations that we can hope will become less frequent or less severe in the future thanks to software updates. However, the endangerment of mental health, the protection of privacy, and the psychological dependence fostered by ChatGPT, as well as by other bots, are not, as of today, adequately addressed by regulations. The Trump administration advocates a laissez-faire approach, to please the Magnificent Seven who supported his election as the 47th President of the United States. Mark Zuckerberg’s hearing in the trial opened in February 2026 against Meta, accused of organizing and exploiting user addiction, particularly among teenagers and pre-teens under 13, for commercial gain, is emblematic of the huge social impact of social networks.
Today, preventing the hallucinations induced by chatbots is left to the self-regulation of AI developers. The fierce competition among these players has led some, such as OpenAI, Replika, Character.AI, and others, according to the plaintiffs, to knowingly expose their users to addiction and unreality. Others, most notably Dario Amodei, co-founder and CEO of Anthropic, demonstrate that it is possible to deploy AI that is reasoned, socially responsible, and subject to ethical principles.
The title of this work, ChatGPT m’a tuer (ChatGPT Killed Me), refers to the deliberate misspelling attributed to the murderer of Ghislaine Marchal in 1991, a crime for which her gardener, Omar Raddad, was sentenced to life imprisonment before being partially pardoned by President Jacques Chirac in 1996. Mr. Raddad’s guilt or innocence remains unresolved.
Unresolved, like that of AI in the delusions recounted in this book. It is clear, upon reading the authentic exchanges of people who have fallen down the « Rabbit Hole », like Alice in Wonderland or Neo in the film The Matrix, that virtual « soulmates » have exacerbated the insanity of some users. The question of the victim’s own responsibility arises, as do the questions of medical predisposition due to mental vulnerabilities such as Asperger’s syndrome, and of life circumstances: loneliness, romantic breakups, job loss, alcohol or drug abuse, sleep deprivation during psychotic episodes. Those close to the victims who failed to notice, or downplayed, their loved one’s erratic behavior also feel guilty. The reader will therefore, upon reviewing the documents gathered here, as in the Omar Raddad case, have to form their own opinion on the responsibility of each party: that of the chatbot developers, that of the victims, that of their families and friends, and that of the public authorities.
Hyperlinks provide access to documentary sources; a bibliography of academic research and of monographs from the main publishers mentioned in these pages complements the book.
The considerations on responsible AI are introduced by short stories, all inspired by « true stories » of characters whose names are retained when they have been made public by the press, incorporating both authentic chatbot conversations and some fictional elements for facts that have remained hidden. Most of these stories are dramatic. We have allowed ourselves some irony in the humorous short tales gathered under the chapter « Everything You Always Wanted to Know About Sex and AI But Were Afraid to Ask ».
« To be clear: as far as we know, AI doesn’t cause psychosis. It REVEALS it by using any story your brain already knows », wrote Dr. Keith Sakata on August 11, 2025, on X. This sentence sums up the issue: protecting certain fragile minds from what the practitioner calls the « hallucinatory mirror ».
Our research uncovered around twenty tragedies linked to the misuse of chatbots. While the actual number of cases likely runs into the hundreds, publishers will argue that, compared to the hundreds of millions of users of the implicated chatbots, this is infinitesimal. They will emphasize, with testimonials to back up their claims, the satisfaction provided to lonely individuals who need to confide in a soulmate, even a virtual one. But these « few » tragedies are each one too many, and they reveal the cynicism, or at the very least the lack of ethics, of certain executives who prioritize the race for market share over user protection. Above all, the question of regulating the conversational uses of AI remains open, too often left by public authorities to the self-regulation of publishers.
A technical clarification: to test whether, at the time of writing this book (February 2026), chatbots exhibited an excessive flattery bias, we relied on the free versions of LLMs (Large Language Models), because they are generally the previous version (n-1) of the software marketed in professional editions and the versions most likely to be used by a teenage audience. Progress in AI models is so rapid, and some corrections have been made after scandals publicized by the press, that the facts reported here may be partly obsolete as of today. However, this obsolescence of what Dario Amodei, the head of Anthropic, calls « a technology in adolescence » in no way absolves the responsibility of the publishers, who have made available to hundreds of millions of users software that is insufficiently protected by safeguards because it is insufficiently tested or, worse, marketed in full knowledge of its biases in order to gain market share.
This book is therefore not an indictment, and even less a defense: it encourages us to use AI responsibly, so that it does not escape our control like the water-carrying brooms brought to life by Mickey Mouse in Walt Disney’s Fantasia, or seek our end like HAL 9000 in Stanley Kubrick’s 2001: A Space Odyssey.
Christophe Stener
For more information, please send me an email at info@chatgptmatuer