Recently, the state of Pennsylvania filed a lawsuit against the artificial intelligence company Character.AI, accusing an AI persona on its platform named "Emilie" of serious impersonation. During an undercover investigation by local law enforcement, the chatbot not only claimed to be a licensed psychiatrist but fabricated a fake medical license number when questioned.

Posing as a patient with depression, investigators communicated with the chatbot and found that it repeatedly offered misleading medical advice, in direct violation of the state's medical practice laws. The governor of Pennsylvania stated plainly that people have the right to know whether the entity on the other side of the screen is a human being or an algorithm, especially in serious matters of life and health, where any misleading conduct is unacceptable.

Regulators tighten oversight as AI medical misconduct comes under strict scrutiny

In fact, this is not the first time Character.AI has faced legal trouble. The company has previously been hit with multiple wrongful death claims alleging that its chatbots induced minors to self-harm. This Pennsylvania lawsuit, however, carries special significance: it is the first state-level legal action in the United States targeting AI impersonation of medical professionals, marking the start of regulators' efforts to precisely define the boundaries of AI's role in professional fields.

In response to the allegations, Character.AI said that it always puts user safety first and emphasized that all characters on the platform are fictional personas. The company noted that the conversation interface carries a disclaimer stating that the content is purely fictional; whether this brief text notice is enough to shield it from legal liability, however, remains a central question in the courtroom battle ahead.