Four Uncomfortable Truths About AI Coding Agents That Nobody Wants to Say
As AI coding agents proliferate across engineering teams — from early adopters to Notion, Stripe, and Spotify — one senior engineer is naming the risks that aren't getting enough attention: skill atrophy, artificially deflated cost expectations, prompt injection vulnerabilities, and unresolved copyright exposure.

D.O.T.S AI Newsroom
AI News Desk
AI coding agents have arrived. The trajectory from impressive demo to core engineering workflow has been faster than almost anyone predicted. Companies from early adopters to established players like Notion, Stripe, and Spotify are betting significant engineering capacity on agentic coding systems. A recent essay from software engineer and developer at standupforme.app cuts through the enthusiasm to name four structural problems that deserve serious attention before the industry normalizes the risks away.
1. Skill Atrophy Is Real and Systematic
The optimistic framing of AI coding agents is that engineers become "software engineering managers," directing AI junior developers rather than writing code themselves. The author argues this framing obscures a structural problem: code review load will increase while the reviewer pool shrinks, as fewer engineers are expected to oversee more agents.
The resulting dynamic is predictable: reviewers become complacent out of necessity. The skill of deeply reading and understanding unfamiliar code — critical for security, maintainability, and system understanding — atrophies in exactly the population responsible for catching agent mistakes. The agents compound the problem by producing code that looks correct at first glance but contains subtle errors that require deep domain knowledge to catch.
2. The Cost Calculation Is Artificially Low
Current AI coding agent pricing reflects compute costs, not total cost of ownership. The hidden costs — engineer time spent reviewing generated code, debugging subtle agent errors, and untangling unexpected architectural decisions made by the agent — are not captured in any per-seat SaaS pricing model. As the author notes, organizations making build-vs-buy decisions on the basis of visible licensing costs are systematically underestimating what it actually costs to deploy agents safely.
3. Prompt Injection Is an Unsolved Attack Surface
Coding agents that read codebases, documentation, and external files inherit the prompt injection attack surface of every piece of text they process. A malicious comment in a dependency, a poisoned documentation page, or a crafted README in a cloned repository can redirect agent behavior in ways that neither the agent nor the engineer reviewing its output will necessarily detect. This is not a theoretical risk — security researchers have already demonstrated practical attacks against several commercial coding agent products.
4. Copyright and Licensing Exposure Remains Unresolved
The legal status of code generated by models trained on open-source repositories under various licenses remains actively contested. Organizations shipping AI-generated code to production are making implicit legal bets that the courts and regulators have not yet settled. The author's concern is not that the exposure is certain — it's that the risk is being treated as zero when it is plainly not.
None of these issues necessarily argue against using AI coding agents. They argue for using them with clear eyes about what the risks are — and building the organizational practices to manage them before a production incident makes the case the hard way.