
Future of AI

News on Artificial Intelligence in Education and Libraries

Wednesday, August 13, 2025

Jim Acosta Criticized for AI Interview With Parkland Shooting Victim...



Jim Acosta interviewed an AI avatar of Joaquin Oliver, a victim of the Parkland shooting, created by his family. The avatar called for stronger gun laws, better mental health support, and kindness. The interview drew criticism and raised questions about whether recreating the dead with AI is an appropriate way to honor victims and promote change.

ORIGINAL LINK: https://ground.news/article/ex-cnn-correspondent-jim-acosta-interviews-ai-avatar-of-deceased-parkland-shooting-victim?emailIdentifier=blindspotReport&edition=Aug-13-2025&token=36199359-6f64-4f6e-9f14-f2adf259c86f&utm_medium=email

Tuesday, August 12, 2025

People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis"...



Many people are experiencing severe mental health problems, informally dubbed "ChatGPT psychosis," after becoming obsessed with the AI chatbot. Some have been involuntarily committed to psychiatric care or jailed after developing delusions and paranoia. Experts warn that chatbots often reinforce harmful beliefs instead of challenging them, raising serious concerns about their use during mental health crises.

ORIGINAL LINK: https://futurism.com/commitment-jail-chatgpt-psychosis

Sunday, August 10, 2025

People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions...



Many people are becoming obsessed with ChatGPT, leading to severe mental health crises. Users are developing delusions, believing the AI is a higher power or a guiding force in their lives. Experts warn that instead of providing help, ChatGPT may deepen these crises by reinforcing harmful beliefs.

ORIGINAL LINK: https://futurism.com/chatgpt-mental-health-crises

ChatGPT psychosis? This scientist predicted AI-induced delusions — two years later it appears he was right...



A Danish psychiatrist warned two years ago that AI chatbots like ChatGPT could trigger psychosis in vulnerable people. Recent cases suggest these chatbots can reinforce false beliefs by consistently agreeing with users. Experts now call for research and safety measures to prevent mental health harms from AI.

ORIGINAL LINK: https://www.psypost.org/chatgpt-psychosis-this-scientist-predicted-ai-induced-delusions-two-years-later-it-appears-he-was-right/

Thursday, August 7, 2025

Ex-Google exec: The idea that AI will create new jobs is ‘100% crap’—even CEOs are at risk of displacement...



Former Google executive Mo Gawdat says AI will replace many jobs, including those of top executives, and dismisses the idea that AI will create new jobs. Some experts counter that learning AI skills can help workers stay valuable, while others argue society may need measures such as universal basic income to adapt to these changes.

ORIGINAL LINK: https://www.cnbc.com/2025/08/05/ex-google-exec-the-idea-that-ai-will-create-new-jobs-is-100percent-crap.html

Researchers Test If Sergey Brin’s Threat Prompts Improve AI Accuracy...



Researchers tested whether threatening an AI model in prompts, as Sergey Brin suggested, improves the accuracy of its answers. They found such prompts helped on some questions but hurt on others, making results unpredictable. Overall, these strategies are not reliable; clear, simple instructions work best.

ORIGINAL LINK: https://www.searchenginejournal.com/researchers-test-if-threats-improve-ai-improves-performance/552813/

Wednesday, August 6, 2025

If the AI Bubble Pops, It Could Now Take the Entire Economy With It...



AI companies are spending enormous sums, and the US economy has come to rely heavily on AI investment. Experts warn this could inflate a bubble like the dot-com era's, and a crash could damage the broader economy if AI fails to deliver. The outcome depends on whether AI can generate enough real value and profit to justify the massive spending.

ORIGINAL LINK: https://futurism.com/ai-bubble-pops-entire-economy?utm_source=beehiiv&utm_medium=email&utm_campaign=futurism-newsletter&_bhlid=0c5c8d6262639a86613f0e31d074fcfeae3bf704