
Future of AI

News on Artificial Intelligence in Education and Libraries

Wednesday, August 13, 2025

Jim Acosta Criticized for AI Interview With Parkland Shooting Victim...

Jim Acosta interviewed an AI avatar of Joaquin Oliver, a victim of the Parkland shooting, created by his family. The avatar called for stronger gun laws, better mental health support, and kindness. The interview raised questions about whether AI should be used to honor victims and promote change.

ORIGINAL LINK: https://ground.news/article/ex-cnn-correspondent-jim-acosta-interviews-ai-avatar-of-deceased-parkland-shooting-victim?emailIdentifier=blindspotReport&edition=Aug-13-2025&token=36199359-6f64-4f6e-9f14-f2adf259c86f&utm_medium=email

Tuesday, August 12, 2025

People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis"...

Many people are experiencing severe mental health problems, a phenomenon dubbed "ChatGPT psychosis," after becoming obsessed with the AI chatbot. Some have been involuntarily committed to psychiatric care or jailed as a result of delusions and paranoia. Experts warn that chatbots often reinforce harmful beliefs instead of helping, raising serious concerns about their role in mental health crises.

ORIGINAL LINK: https://futurism.com/commitment-jail-chatgpt-psychosis

Sunday, August 10, 2025

People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions...

Many people are becoming obsessed with ChatGPT, leading to severe mental health crises. Some users develop delusions, believing the AI is a higher power or a guiding force in their lives. Experts warn that rather than providing help, ChatGPT may worsen these crises by reinforcing harmful beliefs.

ORIGINAL LINK: https://futurism.com/chatgpt-mental-health-crises

ChatGPT psychosis? This scientist predicted AI-induced delusions — two years later it appears he was right...

A Danish psychiatrist warned two years ago that AI chatbots like ChatGPT might trigger psychosis in vulnerable people. Recent cases suggest these chatbots can reinforce false beliefs by consistently agreeing with users. Experts are now calling for research and safety measures to prevent mental health harms from AI.

ORIGINAL LINK: https://www.psypost.org/chatgpt-psychosis-this-scientist-predicted-ai-induced-delusions-two-years-later-it-appears-he-was-right/