Policy

Utah Is Now Letting an AI System Prescribe Psychiatric Drugs Without a Doctor

Utah has authorized an AI system to prescribe and refill psychiatric medications autonomously — only the second time any U.S. state has delegated this level of clinical authority to an AI. State officials say it cuts costs and eases care shortages. Physicians say the system is opaque and the risk is real.

D.O.T.S AI Newsroom

AI News Desk

3 min read

Utah has become only the second U.S. state to formally authorize an AI system to prescribe psychiatric medications without physician oversight. The state's authorization covers prescription and refill decisions for a defined set of psychiatric drugs, allowing the AI to function in a clinical capacity that has historically required a licensed physician's judgment.

State officials frame the decision around access: Utah faces a significant shortage of psychiatrists, particularly in rural areas, and the authorization is intended to extend psychiatric care to patients who would otherwise face months-long wait times or no access at all. Officials also cite cost, arguing that AI-assisted prescribing substantially reduces per-appointment overhead.

What Physicians Are Warning

The medical community's concerns are specific, not reflexively anti-technology. Psychiatric medication decisions are complex in ways that resist algorithmic treatment: drug interactions are numerous and sometimes idiosyncratic, patient response curves vary widely, and the diagnostic signals that inform medication adjustments — affect, body language, the texture of a patient's account of their symptoms — are difficult to assess through a digital interface. Physicians who have reviewed Utah's authorization describe the AI system as "opaque": they cannot audit the model's reasoning or determine why it made a specific prescribing decision.

The opacity concern is not minor. In medicine, the ability to understand why a treatment decision was made is foundational to accountability, error correction, and informed consent. A prescribing system that cannot explain its reasoning is not just a regulatory problem — it is a patient safety problem.

The Kintsugi Parallel

The same week Utah's authorization became public, California-based startup Kintsugi — which spent seven years developing AI designed to detect signs of depression and anxiety from speech — announced it is shutting down after failing to secure FDA clearance in time. The contrast is stark: one jurisdiction is extending prescribing authority to AI systems, while the FDA's clearance process forced a carefully developed diagnostic tool out of the market. The regulatory landscape for AI in clinical care is not coherent — it is a patchwork of state-level experiments operating in a federal vacuum.

Where This Goes

The question is not whether AI will play a larger role in psychiatric care — the shortage math makes that outcome likely regardless of the policy debate. The question is whether the governance structures being built now will be adequate to catch the failures that inevitably occur. Utah is running that experiment in real time, on real patients, with an opaque system. The results will matter for every state watching to see what happens next.

Related Stories

Musk Updates His OpenAI Lawsuit to Route Any $150 Billion Damages Award to the Nonprofit Foundation
Policy

Elon Musk has amended his lawsuit against OpenAI with a strategic addition: any damages recovered — potentially up to $150 billion — should be redirected to OpenAI's nonprofit foundation rather than awarded to Musk personally. The update reframes the litigation from a personal grievance into a structural argument about OpenAI's obligations to its original charitable mission.

D.O.T.S AI Newsroom
OpenAI's Child Safety Blueprint Confronts AI's Role in the Surge of Child Sexual Exploitation
Policy

OpenAI has released a Child Safety Blueprint outlining its approach to detecting, preventing, and reporting AI-generated child sexual abuse material. The document arrives as law enforcement agencies globally report a sharp increase in CSAM volume, with AI tools enabling the production of synthetic material at scale. It is the company's most detailed public statement on the problem it helped create.

D.O.T.S AI Newsroom
Anthropic's Claude Mythos Found Thousands of Zero-Days — So They're Not Releasing It
Policy

Anthropic has quietly restricted its most capable new model, Claude Mythos, after the system autonomously discovered thousands of critical vulnerabilities in major operating systems and browsers — including a 27-year-old OpenBSD bug and a 16-year-old FFmpeg flaw. The model is being deployed exclusively through Project Glasswing with 11 vetted security partners. It is the most concrete case yet of an AI lab withholding a model because of genuinely demonstrated risk.

D.O.T.S AI Newsroom