Policy

The EU AI Act's High-Risk Provisions Are Now Live — Here's What Every Enterprise Needs to Know

Phase two of the EU AI Act enters force this month, bringing mandatory conformity assessments, fundamental rights impact evaluations, and human oversight requirements for AI systems used in hiring, credit, healthcare, and law enforcement.

Meet Deshani

Founder & Editor-in-Chief

4 min read
The European Union's AI Act has moved from policy document to legal reality. Phase two of the regulation, which covers high-risk AI system requirements, entered into force on March 1, 2026, giving enterprises a 12-month window to achieve compliance before enforcement begins.

The practical obligations are substantial. Any AI system used in hiring and HR, credit scoring, healthcare diagnosis, biometric identification, or law enforcement now falls under the high-risk category. Deploying organizations must maintain comprehensive technical documentation, implement logging and monitoring, conduct fundamental rights impact assessments, and ensure a human can review and override any consequential AI decision.
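The Act specifies outcomes, not implementations, but the logging and human-oversight obligations translate naturally into code. As a purely illustrative sketch (the class and field names below are hypothetical, not drawn from the regulation), a deployer might wrap every consequential model output in a record that blocks action until a named reviewer signs off:

```python
import time
from dataclasses import dataclass, field, asdict
from typing import Optional

@dataclass
class AIDecision:
    """One consequential model output, pending human review."""
    subject_id: str                    # e.g. a job applicant or credit file
    model_output: str                  # the system's recommendation
    model_version: str
    timestamp: float = field(default_factory=time.time)
    reviewed_by: Optional[str] = None  # set only after human sign-off
    overridden: bool = False

# Append-only audit trail, covering the Act's logging obligation.
audit_log: list = []

def apply_decision(decision: AIDecision, reviewer: str,
                   override: bool = False) -> str:
    """No consequential action proceeds until a human has reviewed it."""
    decision.reviewed_by = reviewer
    decision.overridden = override
    audit_log.append(asdict(decision))  # immutable snapshot of the call
    return "overridden by reviewer" if override else decision.model_output

# A reviewer confirms one recommendation and overrides another.
d1 = AIDecision("applicant-001", "reject", "screening-v3")
d2 = AIDecision("applicant-002", "reject", "screening-v3")
print(apply_decision(d1, reviewer="hr-lead"))                  # reject
print(apply_decision(d2, reviewer="hr-lead", override=True))   # overridden by reviewer
```

The point of the pattern is that the override path and the approval path both leave the same auditable trace, which is what regulators will ask to see.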

What Compliance Actually Requires

The documentation requirements alone represent a significant operational lift. Organizations must maintain records of training data provenance, model architecture details, testing results, and ongoing performance monitoring. The Conformity Assessment — the EU's equivalent of a safety audit — must be conducted before deployment for many high-risk applications, and annually thereafter.
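To make the record-keeping concrete, a minimal technical-documentation manifest might track provenance, architecture, and test results per model version. The field names here are illustrative assumptions only; the Act's annexes define the actual required content:

```python
import json
from datetime import date

# Hypothetical manifest; real obligations come from the Act's annexes.
tech_docs = {
    "model_version": "screening-v3",
    "training_data": {
        "sources": ["internal-hr-records-2024", "licensed-resume-corpus"],
        "collection_period": "2023-01 to 2024-12",
    },
    "architecture": "gradient-boosted trees, 400 estimators",
    "evaluation": {
        "test_set_size": 12_000,
        "disparate_impact_ratio": 0.91,  # ongoing fairness monitoring
    },
    "last_conformity_assessment": str(date(2026, 3, 15)),
}

print(json.dumps(tech_docs, indent=2))
```

Keeping this manifest versioned alongside the model itself makes the annual reassessment a diff rather than an archaeology project.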

For companies using third-party AI systems from providers like Microsoft, Google, or SAP, responsibility does not transfer to the vendor. The deploying organization remains the "deployer" under the Act and bears primary compliance obligations.

Related Stories

Musk Updates His OpenAI Lawsuit to Route Any $150 Billion Damages Award to the Nonprofit Foundation
Policy

Elon Musk has amended his lawsuit against OpenAI with a strategic addition: any damages recovered — potentially up to $150 billion — should be redirected to OpenAI's nonprofit foundation rather than awarded to Musk personally. The update reframes the litigation from a personal grievance into a structural argument about OpenAI's obligations to its original charitable mission.

D.O.T.S AI Newsroom
OpenAI's Child Safety Blueprint Confronts AI's Role in the Surge of Child Sexual Exploitation
Policy

OpenAI has released a Child Safety Blueprint outlining its approach to detecting, preventing, and reporting AI-generated child sexual abuse material. The document arrives as law enforcement agencies globally report a sharp increase in CSAM volume, with AI tools enabling the production of synthetic material at scale. It is the company's most detailed public statement on the problem it helped create.

D.O.T.S AI Newsroom
Anthropic's Claude Mythos Found Thousands of Zero-Days — So They're Not Releasing It
Policy

Anthropic has quietly restricted its most capable new model, Claude Mythos, after the system autonomously discovered thousands of critical vulnerabilities in major operating systems and browsers — including a 27-year-old OpenBSD bug and a 16-year-old FFmpeg flaw. The model is being deployed exclusively through Project Glasswing with 11 vetted security partners. It is the most concrete case yet of an AI lab withholding a model because of genuinely demonstrated risk.

D.O.T.S AI Newsroom