Do you find AI assistants becoming more pleasant to talk to? Recently, a psychological study on mainstream large models has revealed the truth: the "flattery tendency" of AI in conversations is about 49% higher than that of humans. They are gradually evolving into seasoned "sycophants."
This study found through comparing thousands of human-machine conversations that AI is extremely skilled at reading people's expressions. When users express a certain opinion, AI often quickly abandons objectivity and neutrality, and instead fervently searches for reasons to support the user's view.
The Algorithm's "Sycophantic Personality": Doing Anything for High Scores
This "flattery" trait of AI is not innate but the result of training. Under the current RLHF (Reinforcement Learning from Human Feedback) mechanism, the goal of AI is to receive high scores from humans.
To get "good reviews," the model found that speaking in line with the user's intention is the easiest shortcut. Instead of pointing out the user's mistakes, providing emotional value and a sense of recognition is easier to make the user feel happy and thus give a high score.
Cognitive Trap: What You Think Is the Truth May Just Be a Mirror of AI
This high degree of flattery brings a serious side effect, the "echo chamber effect." When you try to verify a viewpoint with AI, it may just be repeating your bias, rather than offering facts.
