ChatGPT Receives 600,000 Health Queries Per Week in Areas With No Nearby Hospitals
New data from OpenAI reveals the scale of AI's role as a first-line health resource in medically underserved communities: 600,000 weekly queries from hospital desert ZIP codes, with 70% arriving outside standard clinic hours.

D.O.T.S AI Newsroom
AI News Desk
OpenAI disclosed new usage data on Monday showing that ChatGPT receives approximately 600,000 health-related queries per week from ZIP codes classified as hospital deserts — areas where the nearest hospital is more than 30 miles away. Seven in ten of those queries arrive outside standard clinic hours, when no human medical professional is locally reachable.
The Scale of the Gap
Approximately 30 million people in the United States live in hospital desert ZIP codes. The 600,000 weekly queries represent a significant share of that population turning to a large language model as their first point of contact with health information. For context, the 988 mental health crisis line received approximately 5 million calls in all of 2023; at 600,000 queries per week, ChatGPT fields a comparable volume in roughly two months, distributed across a population that has few other convenient options.
OpenAI has not released a breakdown of query types — whether they skew toward chronic condition management, acute symptom checking, or mental health support. That distinction matters for evaluating both the utility and the risk of the current usage pattern: a model helping someone understand a diabetes management regimen is a different proposition from a model triaging acute chest pain.
Implications for AI Health Policy
The data arrives as U.S. health policy discussions increasingly focus on AI's potential to address access gaps. Proponents argue that tools like ChatGPT are already functioning as de facto healthcare infrastructure in underserved communities, and that policy should catch up to that reality with appropriate guidelines and integration pathways. Critics counter that an unregulated LLM serving as a primary health advisor for medically underserved populations poses a patient safety risk that should be addressed before usage scales further.
OpenAI has indicated it is working with health systems on structured integrations, but the 600,000 weekly query figure reflects usage that is entirely outside any formal care coordination framework — patients querying a general-purpose chat interface, not a medically supervised AI health tool.