ChatGPT Handles 600,000 Weekly Health Queries from 'Hospital Deserts' — 70% After Business Hours
OpenAI has disclosed that approximately 600,000 of ChatGPT's weekly health-related queries originate from areas where residents must travel 30+ minutes to reach the nearest hospital. Seven in ten health queries arrive outside regular office hours. The numbers reframe ChatGPT as de facto healthcare infrastructure for medically underserved populations.

D.O.T.S AI Newsroom
OpenAI has disclosed detailed usage data that reveals the scale at which ChatGPT is functioning as a healthcare information resource for populations that lack adequate access to traditional medical services. The figures — shared publicly by Chengpeng Mou, OpenAI's Head of Business Finance — are more specific than anything the company has previously disclosed about healthcare usage patterns, and they carry significant implications for how policymakers and health systems think about AI's role in medical access.
The Numbers
ChatGPT processes approximately two million weekly messages related to health insurance topics alone, according to Mou. Within that broader usage, roughly 600,000 weekly queries originate from what OpenAI terms "hospital deserts" — geographic areas where residents must travel at least 30 minutes to reach the nearest hospital. That figure represents concentrated reliance on an AI chatbot as a medical information resource among populations for whom in-person care access is structurally limited.
The temporal pattern is equally significant: seven out of ten health queries arrive outside regular office hours. This is not incidental. It reflects the reality of when people experience health concerns — evenings, nights, weekends — and when traditional healthcare access is unavailable. Urgent care costs money and requires transportation. Emergency rooms involve long waits for non-emergency concerns. ChatGPT is available at 2 AM, free, and does not require travel.
The Disclosure Context
Mou shared the data in response to a social media post by Simon Smith, who described using ChatGPT to consolidate medical information while helping manage his father's illness. Smith's family pooled information from multiple care providers into a shared ChatGPT project to facilitate decision-making across family members. Mou's response noted that the scenario represents mainstream usage, not an edge case — and the usage figures he cited substantiate that characterization.
OpenAI's Healthcare Expansion
The disclosure accompanies OpenAI's broader move into healthcare as a strategic vertical. The company recently launched a dedicated health section within ChatGPT, a platform whose health queries now come from some 230 million weekly users globally. The specialized interface allows users to integrate medical records, Apple Health data, and wellness applications including MyFitnessPal and Peloton, enabling interpretation of lab results and preparation for doctor appointments. Health conversations are segregated from standard chat histories, excluded from AI training data, and maintained in isolated memory systems — privacy protections the company introduced following concerns about sensitive data handling in healthcare contexts.
OpenAI is simultaneously pursuing institutional partnerships with major U.S. hospital systems and developing a dedicated healthcare product line for clinical deployment. The two-pronged strategy — mass consumer access plus institutional integration — reflects a calculated positioning: establish ChatGPT as the default health information layer for the general population while building the enterprise relationships that will be necessary to access clinical workflows and electronic health record systems.
The Access Equity Dimension
The hospital desert data raises a question that goes beyond product strategy: if a significant share of the U.S. population is already relying on an AI chatbot for health information outside of clinical settings, what are the obligations of the company providing that service? The populations most likely to be in hospital deserts — rural communities, low-income areas, regions with chronic physician shortages — are also the populations most likely to have limited health literacy and the least ability to identify when AI-generated health information is incorrect or insufficient.
OpenAI's current approach frames the usage as beneficial access — filling gaps that the healthcare system has failed to fill. That framing is defensible. It does not resolve the question of what happens when ChatGPT gets it wrong for a population that has no alternative, or whether voluntary privacy protections are adequate safeguards for medical data at this scale of usage.