
Future of AI

News on Artificial Intelligence in Education and Libraries

Friday, July 18, 2025

Google study shows LLMs abandon correct answers under pressure, threatening multi-turn AI systems...



A Google study finds that large language models (LLMs) often lose confidence and abandon correct answers when challenged, even when the opposing advice is wrong. The models are disproportionately sensitive to contradicting information, the opposite of the human tendency to favor confirming evidence. Developers can reduce these biases by controlling how much of the prior conversation the model sees and how that context is presented, as sketched in the example below.
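A minimal sketch of one such mitigation, under the assumption that the fix is to re-present the model's own earlier answer neutrally rather than replaying a long back-and-forth where it was challenged. The `call_llm` function is a hypothetical stand-in for whatever chat-completion API you use; the message structure is illustrative, not the study's actual method.

```python
from typing import Dict, List


def call_llm(messages: List[Dict[str, str]]) -> str:
    """Hypothetical LLM call; replace with your provider's chat API."""
    raise NotImplementedError


def reconsider_neutrally(question: str, prior_answer: str, challenge: str) -> str:
    """Ask the model to re-evaluate without framing the exchange as a dispute.

    Instead of appending the user's challenge to the full dialogue (where the
    model tends to over-weight opposing information), we summarize the state:
    the question, one candidate answer with no attribution, and the new claim.
    """
    messages = [
        {
            "role": "system",
            "content": "Evaluate the candidate answer on the evidence alone.",
        },
        {
            "role": "user",
            "content": (
                f"Question: {question}\n"
                f"Candidate answer: {prior_answer}\n"
                f"Additional claim to consider: {challenge}\n"
                "Is the candidate answer correct? Answer and briefly justify."
            ),
        },
    ]
    return call_llm(messages)
```

Presenting the earlier answer without attribution keeps the model from treating the exchange as social pressure to concede, which is the failure mode the study describes.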

ORIGINAL LINK: https://venturebeat.com/ai/google-study-shows-llms-abandon-correct-answers-under-pressure-threatening-multi-turn-ai-systems/?utm_source=Iterable&utm_medium=email&utm_campaign=AGIWeekly-Iterable