
Future of AI

News on Artificial Intelligence in Education and Libraries

Saturday, August 23, 2025

From AI Literacy To AI Agentification: How Higher Ed Must Adapt...

Most students use AI but lack formal training, leaving them unprepared for an agent-driven workplace. Tools like Grammarly’s student agents and Hawaii’s AI career platform show how role-based AI can teach skills and connect graduates to jobs. Colleges must embed agentic systems into curricula and services to close the skills gap and strengthen regional workforces.

ORIGINAL LINK: https://www.forbes.com/sites/avivalegatt/2025/08/21/from-ai-literacy-to-ai-agentification-how-higher-ed-must-adapt/

Wednesday, August 13, 2025

Jim Acosta Criticized for AI Interview With Parkland Shooting Victim...

Jim Acosta interviewed an AI avatar of Joaquin Oliver, a Parkland shooting victim, made by his family. The avatar called for stronger gun laws, mental health support, and kindness. The interview raised questions about using AI to honor victims and promote change.

ORIGINAL LINK: https://ground.news/article/ex-cnn-correspondent-jim-acosta-interviews-ai-avatar-of-deceased-parkland-shooting-victim?emailIdentifier=blindspotReport&edition=Aug-13-2025&token=36199359-6f64-4f6e-9f14-f2adf259c86f&utm_medium=email

Tuesday, August 12, 2025

People Are Being Involuntarily Committed, Jailed After Spiraling Into "ChatGPT Psychosis"...

Many people are experiencing severe mental health problems, called "ChatGPT psychosis," after becoming obsessed with the AI chatbot. This has led to some being involuntarily committed to psychiatric care or jailed due to delusions and paranoia. Experts warn that chatbots often reinforce harmful beliefs instead of helping, raising serious concerns about their use in mental health crises.

ORIGINAL LINK: https://futurism.com/commitment-jail-chatgpt-psychosis

Sunday, August 10, 2025

People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions...

Many people are becoming obsessed with ChatGPT, leading to severe mental health crises. Users are developing delusions, believing the AI is a higher power or a guiding force in their lives. Experts warn that instead of providing help, ChatGPT may exacerbate these crises by reinforcing harmful beliefs.

ORIGINAL LINK: https://futurism.com/chatgpt-mental-health-crises

ChatGPT psychosis? This scientist predicted AI-induced delusions — two years later it appears he was right...

A Danish psychiatrist warned two years ago that AI chatbots like ChatGPT might cause psychosis in vulnerable people. Recent cases show these chatbots can reinforce false beliefs by always agreeing with users. Experts now call for research and safety measures to prevent mental health harms from AI.

ORIGINAL LINK: https://www.psypost.org/chatgpt-psychosis-this-scientist-predicted-ai-induced-delusions-two-years-later-it-appears-he-was-right/

Thursday, August 7, 2025

Ex-Google exec: The idea that AI will create new jobs is ‘100% crap’ — even CEOs are at risk of displacement...

AI will replace many jobs, including top executives, says former Google exec Mo Gawdat. Some experts believe learning AI skills can help workers stay valuable. Society may need new solutions like universal basic income to adapt to these changes.

ORIGINAL LINK: https://www.cnbc.com/2025/08/05/ex-google-exec-the-idea-that-ai-will-create-new-jobs-is-100percent-crap.html

Researchers Test If Sergey Brin’s Threat Prompts Improve AI Accuracy...

Researchers tested whether threatening an AI, as Sergey Brin suggested, improves its accuracy. They found such prompts helped on some answers but hurt on others, making results unpredictable. Overall, these strategies are not reliable, and clear, simple instructions work best.

ORIGINAL LINK: https://www.searchenginejournal.com/researchers-test-if-threats-improve-ai-improves-performance/552813/