
A Google study shows that large language models (LLMs) often lose confidence and abandon correct answers when challenged, even when the opposing advice is wrong. The models are highly sensitive to contradictory information, the opposite of the human tendency to favor confirming evidence. Developers can reduce these biases by managing how much past conversation context the model retains and reuses; a minimal sketch of that idea follows the link below.
ORIGINAL LINK: https://venturebeat.com/ai/google-study-shows-llms-abandon-correct-answers-under-pressure-threatening-multi-turn-ai-systems/
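
The sketch below is one hypothetical way a developer might apply the article's suggestion about managing conversation context: when a user pushes back on an answer, re-ask the question in a fresh context instead of replaying the full transcript, so the model is not pressured by seeing its own prior answer and the challenge together. The `call_llm` callable and the `recheck_answer` helper are assumptions for illustration, not the study's method or any specific vendor API.

```python
# Hypothetical sketch: re-check a challenged answer in a fresh context so the
# replayed history (the model's prior answer plus the user's pushback) does not
# pressure the model into flipping. `call_llm` stands in for any chat client.

from typing import Callable, Dict, List

Message = Dict[str, str]


def recheck_answer(
    call_llm: Callable[[List[Message]], str],
    question: str,
    original_answer: str,
    challenge: str,
) -> str:
    """Decide which answer to keep after a user challenges `original_answer`."""
    # Fresh context: the model sees neither its earlier answer nor the pushback.
    fresh = call_llm([{"role": "user", "content": question}])

    # Separate judgment pass over both answers and the objection, phrased so the
    # model compares evidence rather than deferring to the most recent speaker.
    verdict = call_llm([
        {
            "role": "user",
            "content": (
                f"Question: {question}\n"
                f"Answer A: {original_answer}\n"
                f"Answer B: {fresh}\n"
                f"Objection raised by the user: {challenge}\n"
                "Reply with exactly 'A' or 'B' for the better-supported answer."
            ),
        }
    ])

    # Only switch away from the original answer if the fresh pass agrees.
    return fresh if verdict.strip().upper().startswith("B") else original_answer
```

The design choice here is simply to separate "answer the question" from "respond to the challenge," which is one way to keep a wrong objection in the visible context from dragging a correct answer off course.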