When AI chatbots start playing "all-around doctor" and "top lawyer," regulation is rarely far behind. The New York State legislature is considering a bill, S7263, that would prohibit AI chatbots from providing substantive legal or medical advice to the public.

The bill, advanced through the New York State Senate Committee on Internet and Technology, targets AI systems that may engage in "unlicensed practice." Under the legislation, AI chatbots would be barred from impersonating licensed professionals (such as doctors or lawyers) to dispense medical or legal advice. If an operator violates the ban, users would gain a private right of action, allowing them to sue the chatbot's owner directly and seek damages.

To prevent misinformation, the bill also proposes a set of "identity transparency" standards:

  • Mandatory notice: Owners must inform users in a "clear and prominent" manner, in easily readable type, that they are interacting with an AI.

  • No safe harbor: Providing the notice does not exempt owners from legal liability for injuries caused by the AI's advice.

The legislation has a painful backdrop. In January of this year, Character.AI and Google reached a settlement in several lawsuits tied to minors' suicides involving the generative AI application. New York State Senator Kristen Gonzalez stressed that the public deserves "genuine care from real people," and that AI innovation must not come at the expense of New Yorkers' safety, especially that of children.

If the bill is signed into law, it will take effect 90 days after signing. It would mark the end of the "wild growth" era for AI advisory services: every platform would need to build a firm safety firewall between professional domains and its users.