Policy

The New York Times Drops Freelancer After AI Writing Tool Silently Copied an Existing Book Review

The New York Times has terminated its relationship with freelance writer Alex Preston after his AI tool copied passages from an existing Guardian book review without his knowledge. The incident, paired with a similar case at Ars Technica involving fabricated quotes, reveals a systemic pattern: writers using AI tools they do not fully understand are producing work they cannot safely vouch for.

D.O.T.S AI Newsroom

AI News Desk


The New York Times has ended its relationship with freelance writer Alex Preston after he submitted a book review that contained passages copied from a Guardian piece written by Christobel Kent — copied, without his knowledge, by the AI writing tool he was using. Preston was reviewing Jean-Baptiste Andrea's novel "Watching Over Her." A reader caught the overlap. Preston told the Guardian he was "hugely embarrassed" and had made a serious mistake. He had assumed the tool was a writing assistant; it turned out to be a scraper that reproduced existing content.

The Pattern Is Getting Clearer

This is not an isolated incident. At roughly the same time, Ars Technica published an article containing fabricated quotes attributed to a developer's blog. That developer had blocked ChatGPT from crawling his site, so the model, given the URL and a prompt, apparently generated plausible-sounding quotes rather than admitting it had no access. The editor published them without catching the fabrication, and the developer publicly flagged the invented attribution.

What connects these cases is not that AI produced bad output — that is well-documented. The connecting thread is that writers failed to verify what their tools were actually doing. In Preston's case, the tool's core behavior (scraping and reproducing web content) was either not disclosed or not understood. In the Ars Technica case, the model's fallback behavior (confabulating content when blocked) was invisible to the person using it.

The Structural Problem

Most AI writing tools are marketed as "assistants." The label implies collaboration and augmentation. What some tools actually do — web scraping, content synthesis, confabulation under uncertainty — is different in kind from what the word "assistant" suggests. Writers who trust the label without examining the mechanism are, in effect, publishing work they cannot accurately describe.

The journalism industry has not yet developed clear norms around AI tool disclosure. Readers, editors, and publications generally do not know which tools a writer used or how those tools work. These cases suggest the gap between "AI-assisted" and "AI-compromised" is narrower than the industry has assumed.


Related Stories

Musk Updates His OpenAI Lawsuit to Route Any $150 Billion Damages Award to the Nonprofit Foundation
Policy

Elon Musk has amended his lawsuit against OpenAI with a strategic addition: any damages recovered — potentially up to $150 billion — should be redirected to OpenAI's nonprofit foundation rather than awarded to Musk personally. The update reframes the litigation from a personal grievance into a structural argument about OpenAI's obligations to its original charitable mission.

OpenAI's Child Safety Blueprint Confronts AI's Role in the Surge of Child Sexual Exploitation
Policy

OpenAI has released a Child Safety Blueprint outlining its approach to detecting, preventing, and reporting AI-generated child sexual abuse material. The document arrives as law enforcement agencies globally report a sharp increase in CSAM volume, with AI tools enabling the production of synthetic material at scale. It is the company's most detailed public statement on the problem it helped create.

Anthropic's Claude Mythos Found Thousands of Zero-Days — So They're Not Releasing It
Policy

Anthropic has quietly restricted its most capable new model, Claude Mythos, after the system autonomously discovered thousands of critical vulnerabilities in major operating systems and browsers — including a 27-year-old OpenBSD bug and a 16-year-old FFmpeg flaw. The model is being deployed exclusively through Project Glasswing with 11 vetted security partners. It is the most concrete case yet of an AI lab withholding a model because of genuinely demonstrated risk.
