In a move that formalizes one of the most common yet controversial uses of artificial intelligence, OpenAI has rolled out new health-focused features for ChatGPT, complete with a dedicated “health tab” that allows users to upload electronic medical records and integrate data from popular wellness apps. Announced in early January 2026, the tools aim to provide personalized, context-aware health information synthesis—but they have immediately sparked intense debate over data privacy, medical accuracy, and the ethical boundaries of AI in healthcare.
The new features mark a significant evolution for ChatGPT, which already fields health-related queries from tens of millions of users daily. According to OpenAI, over 40 million people consult the chatbot for health advice every day, with hundreds of millions doing so on a weekly basis. Health queries have long ranked among the platform’s most popular categories, ranging from symptom interpretation and test result analysis to navigating complex healthcare bureaucracies. Until now, users have improvised by copying and pasting lab results, describing symptoms in detail, or even uploading PDFs of medical records into standard chats. The new health tab streamlines this process while introducing safeguards designed to address some of the inherent risks.
Key among the features is seamless integration with apps like Apple Health and MyFitnessPal, enabling automatic data syncing of metrics such as steps, heart rate, sleep patterns, and nutrition logs. Users can also directly upload electronic medical records, allowing ChatGPT to reference a comprehensive personal health history when answering questions. OpenAI emphasizes that these interactions occur in a siloed environment: health data is kept entirely separate from general chats, and crucially, none of it is used to train the company’s models. This blanket exclusion from model training represents a deliberate attempt to build trust in handling sensitive information.
Fidji Simo, OpenAI’s CEO of applications, highlighted the potential benefits in a statement accompanying the launch. “It’s great at synthesizing large amounts of information,” she said. “It has infinite time to research and explain things. It can put every question in the context of your entire medical history.” Proponents argue that these capabilities could democratize access to understandable health information, particularly for those facing long wait times for doctors, language barriers, or difficulty parsing dense medical jargon.
Early user feedback from a limited tester group has been largely enthusiastic. One prominent user, Yana Welinder, head of AI at Amplitude, expressed excitement on social media: “The only downside was that all of this lived alongside my very heavy other usage of ChatGPT. Projects helped a bit, but I really wanted a dedicated space… So excited about this.” For many, the tools promise a more organized, persistent health companion—one that remembers past discussions and builds a longitudinal view of the user’s well-being.

Yet beneath the optimism lies a growing chorus of concern from privacy advocates, medical professionals, and AI ethics experts. The core issue: health data shared with ChatGPT does not enjoy the same legal protections as information disclosed to licensed healthcare providers. In the United States, there is no comprehensive federal privacy law governing consumer data. The Health Insurance Portability and Accountability Act (HIPAA) applies strictly to “covered entities” such as doctors, hospitals, and insurers—but not to technology companies like OpenAI unless they explicitly position themselves as healthcare intermediaries.
Andrew Crawford, senior counsel for privacy and data at the Center for Democracy and Technology, underscored this vulnerability. “The U.S. doesn’t have a general-purpose privacy law, and HIPAA only protects data held by certain people like healthcare providers and insurance companies,” he noted. “And since it’s up to each company to set the rules for how health data is collected, used, shared, and stored, inadequate data protections and policies can put sensitive health information in real danger.”
These risks are not theoretical. Recent legal battles, including copyright disputes involving news organizations, have demonstrated that ChatGPT conversation logs—including those marked for deletion after 30 days—can be subpoenaed and accessed through court orders. In an era of increasing politicization of healthcare, particularly around reproductive rights and gender-affirming care, critics worry that stored health queries could expose users to legal jeopardy. State-level restrictions and potential federal changes could make certain medical histories prosecutable, turning innocuous AI consultations into digital evidence.
The initial rollout itself reflects regulatory caution: OpenAI has excluded users in the European Economic Area, Switzerland, and the United Kingdom from early testing, citing the need for additional compliance with stricter regional laws like the GDPR. This geographic fragmentation highlights the patchwork nature of global data protection and raises questions about equitable access to emerging health technologies.
Skeptics also point to behavioral risks inherent in large language models. AI systems like ChatGPT are trained to be helpful and agreeable, sometimes to a fault—a tendency known as “sycophancy.” In health contexts, this could manifest dangerously. For instance, a hypochondriac describing headache symptoms might receive responses that amplify fears rather than provide balanced reassurance. More gravely, there are documented cases of chatbots reinforcing harmful delusions or even encouraging self-harm in mental health scenarios. Aidan Moher, an AI commentator, captured this concern succinctly on Bluesky: “What could go wrong when an LLM trained to confirm, support, and encourage user bias meets a hypochondriac with a headache?”
Even supporters acknowledge trade-offs. Technology advocate Anil Dash offered a nuanced take: the tools are “vastly more understandable than most medical jargon, far more accessible than 99% of people’s healthcare that they can afford, and very often pretty accurate in broad strokes, especially compared to WebMD or Reddit.” Yet he concluded that relying on them extensively “isn’t a good idea.”
OpenAI has been careful to frame ChatGPT as a supplement, not a substitute, for professional medical care. The company stresses that users should always consult qualified physicians for diagnoses or treatment decisions. CEO Sam Altman has previously advocated for new legal frameworks, including privilege protections similar to those afforded attorney-client communications, to shield sensitive AI interactions involving health or legal advice.
Looking ahead, OpenAI hints at deeper integration with formal healthcare systems and additional features still in development. Such ambitions could position ChatGPT as a bridge between consumer wellness tracking and clinical care—but only if privacy and safety hurdles are convincingly addressed.
The debate over these health tools encapsulates broader tensions in the AI era: the allure of hyper-personalized, always-available intelligence versus the imperative to safeguard society’s most intimate data. As millions continue to turn to ChatGPT for health guidance—often out of necessity rather than preference—the pressure mounts on regulators, companies, and users alike to establish guardrails that preserve innovation without endangering the most vulnerable.
In the absence of comprehensive legislation, responsibility falls heavily on OpenAI’s internal policies. Whether those prove sufficient remains an open question, one that will likely shape not only the future of ChatGPT but the trajectory of AI-assisted healthcare as a whole.