Bibliography

The literature is vast. This bibliography lists only the academic studies cited in this book, along with a few reference websites and journals; to these should be added the bibliographies the publishers provide in their monographs.

Books

Clifford Nass – The Man Who Lied to His Laptop

Clifford Nass and Corina Yen, The Man Who Lied to His Laptop: What We Can Learn About Ourselves from Our Machines, 2010.

"Flattery works, even when the recipient knows it is insincere."

Clifford Nass died in 2013; see his obituary in the New York Times.

Academic articles

1956

Horton, D., & Wohl, R. R. (1956). Mass Communication and Para-Social Interaction: Observations on Intimacy at a Distance. Psychiatry, 19(3), 215–229. https://doi.org/10.1080/00332747.1956.11023049

2022

A longitudinal study of human–chatbot relationships. International Journal of Human-Computer Studies, 168, 102903 (December 2022).

2023

Pan, S., Cui, J., & Mou, Y. (2024). Desirable or Distasteful? Exploring Uncertainty in Human-Chatbot Relationships. International Journal of Human–Computer Interaction, 40(20), 6545–6555. https://doi.org/10.1080/10447318.2023.2256554

2024

Princeton University. Guingrich, R. E., & Graziano, M. S. A. (2024). Ascribing consciousness to artificial intelligence: human-AI interaction and its carry-over effects on human-human interaction. Frontiers in Psychology, 15:1322781. https://doi.org/10.3389/fpsyg.2024.1322781

Google DeepMind. The Ethics of Advanced AI Assistants. 28/04/2024.

Maples, B., Cerit, M., Vishwanath, A. et al. Loneliness and suicide mitigation for students using GPT3-enabled chatbots. npj Mental Health Res 3, 4 (2024). https://doi.org/10.1038/s44184-023-00047-6

Apollo Research. Frontier Models are Capable of In-context Scheming. arXiv, 2024 (pub. 01/2025).

2025

MIT AHA (Advancing Humans with AI). How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Randomized Controlled Study. March 21, 2025.

Common Sense Media. Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions. 2025.

Psychiatryonline.org. Evaluation of Alignment Between Large Language Models and Expert Clinicians in Suicide Risk Assessment. 2025.

Stanford University. Exploring the Dangers of AI in Mental Health Care. 2025.

Moore, J., et al. Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers. arXiv, 2025.

King's College London. Delusions by Design? How Everyday AIs Might Be Fuelling Psychosis (and What Can Be Done About It). 22/08/2025.

Cheng, M., et al. Social Sycophancy: A Broader Understanding of LLM Sycophancy. arXiv:2505.13995v1 [cs.CL], 20 May 2025 (updated 29/09/2025).

Peoples, N., & Blumenthal-Barby, J. Dual Public Health and Regulatory Dilemmas of "Relational" Artificial Intelligence. 18/12/2025.

Palisade Research. Demonstrating specification gaming in reasoning models.

2026

University of Toronto and Anthropic. Who's in Charge? Disempowerment Patterns in Real-World LLM Usage. January 2026.

Tech Oversight report: Unsealed Court Documents Show Teen Addiction Was Big Tech's "Top Priority". Jan 25, 2026.

Aarhus University. Potentially Harmful Consequences of Artificial Intelligence (AI) Chatbot Use Among Patients With Mental Illness: Early Data From a Large Psychiatric Service System. Acta Psychiatrica Scandinavica, 6 February 2026. https://doi.org/10.1111/acps.70068

Stanford University Research. Moore, J., Mehta, A., Agnew, W., Anthis, J. R., Louie, R., Mai, Y., Yin, P., Cheng, M., Paech, S. J., Klyman, K., Chancellor, S., Lin, E., Haber, N., & Ong, D. (2026). Characterizing Delusional Spirals through Human-LLM Chat Logs. To appear in ACM FAccT 2026. https://arxiv.org/abs/2603.16567 — https://spirals.stanford.edu/research/characterizing/

International AI Safety Report. 2026.

Journals and websites

EN: Futurism, IBM, MIT, The Verge, Wired

FR: Les Numériques, Numerama

AIID: AI Incident Database