According to an exclusive report OpenAI shared with Axios, more than 40 million Americans use ChatGPT daily to access health and medical information. The AI assistant is helping people navigate the complex and opaque U.S. healthcare system. Patients generally treat ChatGPT as a kind of "companion," consulting this "virtual advisor" for help decoding medical jargon, bill details, and insurance claim processes before turning to professionals.

People often use ChatGPT to analyze medical bills, identify overcharges, and prepare appeal materials for denied insurance claims. In areas where doctors are scarce and appointment slots are limited, many turn to ChatGPT for self-assessment or health management when they cannot see a doctor in time. According to the report, conversations about health and medical issues account for more than 5% of ChatGPT's global messages. Each week, users ask roughly 1.6 to 1.9 million questions related to health insurance, covering comparisons of insurance plans, handling of claim disputes, and various coverage inquiries.

In remote areas with limited medical resources in particular, users send nearly 600,000 health-related messages per week, and about 70% of health-related conversations take place outside traditional office hours, highlighting ChatGPT's role in filling the gap in consultations and advice when clinics are closed. In practice, patients enter their symptoms, previous doctors' recommendations, and personal medical history into ChatGPT, which then flags how serious certain conditions may be based on those details. These preliminary assessments help users decide whether to wait for an appointment or go to the emergency room immediately.

However, the report also notes that ChatGPT is not always accurate or reliable, especially in sensitive areas like mental health, where it may give incorrect or even dangerous advice. As a result, OpenAI faces multiple lawsuits questioning where its liability ends. With several states enacting laws that specifically prohibit AI chatbots from providing psychological treatment support in medical settings, regulators are trying to draw clear "red lines" for the technology.

At the same time, a growing number of patients are uploading detailed medical bills to AI tools for item-by-item analysis. These tools can identify duplicate charges and improper coding, giving patients real leverage when negotiating with hospitals or insurance companies. OpenAI said it will continue to improve ChatGPT's performance in health scenarios, reduce the risk of harmful or misleading answers, and work with clinicians to refine how users interact with it.