
A new study shows that AI models such as ChatGPT can make decisions that reflect human cognitive biases, including overconfidence and risk aversion. While these models perform well on clear, logical problems, they often mirror irrational human preferences in subjective situations. The researchers warn that AI used in decision-making needs oversight to avoid amplifying flawed human reasoning.
ORIGINAL LINK: https://www.livescience.com/technology/artificial-intelligence/ai-is-just-as-overconfident-and-biased-as-humans-can-be-study-shows