For a long time, people have worried about whether AI will surpass human intelligence, but recent research reveals an interesting twist: top AI models such as ChatGPT and Claude actually "idealize" human rationality. These models often assume that people make decisions with a high degree of logic and strategy, when in reality they frequently do not.

Researchers tested AI using a classic game-theory experiment, the "Keynesian Beauty Contest" (a number-guessing game in which the winner is the player whose guess comes closest to a fraction of the group's average). Winning therefore requires predicting what others will choose. In theory this demands deep iterated reasoning, but real human players rarely reach that ideal. Interestingly, when AI models play against humans, they can adjust their strategies to the opponent's background (such as students versus experts), yet they still tend to assume that humans will make the most rational choice. This bias toward "overestimating human intelligence" leads the models to frequently mispredict what real people actually decide.
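To make the gap concrete, here is a minimal sketch (not the researchers' actual setup) of the textbook "guess 2/3 of the average" version of the beauty contest, using bounded "level-k" reasoning; the 2/3 fraction, the 0-100 range, and the level-0 anchor of 50 are illustrative assumptions.

```python
# Illustrative "guess 2/3 of the average" beauty contest.
# Assumptions for illustration: guesses range 0-100, the target is
# 2/3 of the group average, and a naive "level-0" player anchors at 50.

FRACTION = 2 / 3   # multiplier applied to the group average
ANCHOR = 50        # a level-0 player guesses roughly the midpoint

def level_k_guess(k: int) -> float:
    """A level-k player assumes everyone else reasons at level k-1
    and best-responds by guessing FRACTION times that value."""
    guess = float(ANCHOR)
    for _ in range(k):
        guess *= FRACTION
    return guess

if __name__ == "__main__":
    for k in range(6):
        print(f"level-{k} guess: {level_k_guess(k):.1f}")
    # Fully rational players iterate this reasoning all the way down,
    # driving the guess toward the Nash equilibrium of 0. Human players
    # in lab experiments typically stop after one or two steps, so their
    # guesses land well above 0. A model that assumes unbounded
    # rationality will therefore systematically underpredict them.
```

Running the sketch shows level-1 and level-2 guesses around 33 and 22, close to typical lab results, while the fully rational limit is 0: exactly the gap between what an "idealizing" AI predicts and what humans do.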

The study is a reminder that even though AI can convincingly simulate human personality traits, calibrating its model of human irrationality will be a key challenge for applications that hinge on real human behavior, such as economic forecasting and strategic modeling.
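One standard way behavioral economists handle this calibration problem, offered here as an assumed approach rather than anything proposed in the study, is a cognitive-hierarchy-style prediction: instead of assuming infinitely deep reasoning, the predictor assumes reasoning depth follows a distribution (for example, Poisson) and reports the weighted mixture of level-k guesses. The sketch below reuses the simplified level-k ladder from above; the average-depth parameter `TAU` is a hypothetical value one would fit to observed human data.

```python
import math

# Simplified cognitive-hierarchy-style prediction (assumed parameters):
# weight level-k guesses by a Poisson distribution over reasoning depth
# and report the mixture instead of the fully rational answer.

FRACTION = 2 / 3
ANCHOR = 50
TAU = 1.5  # assumed average reasoning depth; calibrated from human data

def level_k_guess(k: int) -> float:
    return ANCHOR * FRACTION ** k

def predicted_human_guess(tau: float, max_k: int = 10) -> float:
    weights = [math.exp(-tau) * tau ** k / math.factorial(k)
               for k in range(max_k + 1)]
    total = sum(weights)
    return sum(w * level_k_guess(k) for k, w in enumerate(weights)) / total

print(f"predicted average guess with tau={TAU}: {predicted_human_guess(TAU):.1f}")
```

With a modest average depth, the prediction lands in the 20s-30s rather than at 0, which is far closer to how people actually play.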

Key Points:

  • 🧠 Cognitive Bias Exists: Models like ChatGPT tend to assume humans are fully rational decision-makers, overlooking the irrational factors that shape real behavior.

  • 🎯 Prediction Accuracy Suffers: By "overthinking," AI overestimates its opponents' depth of reasoning and often misses the choices real humans actually make in game experiments.

  • ⚠️ Potential Application Risks: Experts warn that this cognitive disconnect could hurt AI's performance in fields that depend on accurately predicting human behavior, such as economic modeling.