According to a study from Aalto University, using artificial intelligence (AI) tools may lead us to misjudge our cognitive abilities. People generally rate themselves as "slightly better than average," and this tendency is most pronounced among those who perform poorly on cognitive tests. The pattern is known as the Dunning-Kruger effect: people with lower ability often overestimate their capabilities, while those with higher ability tend to underestimate themselves.

However, the new Aalto University study found that this effect does not hold for large language models such as ChatGPT. Regardless of their AI literacy, users generally overestimated their performance when working with AI, and those who believed they had higher AI literacy were especially prone to overconfidence.
The researchers called the finding surprising, because people with higher AI literacy would usually be expected not only to perform better when interacting with AI, but also to assess their own performance more accurately. The opposite was true: although ChatGPT users performed better on the tasks, they were consistently overconfident about how well they had done.
The study also emphasized the importance of AI literacy, noting that current AI tools do little to foster users' metacognitive abilities (awareness of one's own thought processes), which could lead to a kind of "intellectual decline" in how people acquire information. The researchers therefore called for new platforms that encourage users to reflect on their thinking.
In the experiment, the research team asked roughly 500 participants to complete logical reasoning tasks from the Law School Admission Test (LSAT) with the help of AI. The results showed that most users issued only a single query to ChatGPT, without further reflection or verification of the AI's answers. The researchers called this "cognitive offloading," a habit that may limit users' ability to judge their own performance accurately.
To address this, the researchers suggested that AI systems could proactively ask users to explain their reasoning, prompting deeper engagement with the interaction and strengthening critical thinking skills.
Key points:
🔍 Most people overestimate their cognitive performance when using AI, especially those with higher AI literacy.
🤖 People generally misjudge their own abilities when using AI tools like ChatGPT.
📈 The study suggests that AI should promote users' metacognitive abilities and encourage deeper thinking and reflection.
