The EU AI Act's High-Risk Provisions Are Now Live — Here's What Every Enterprise Needs to Know
Phase two of the EU AI Act enters into force this month, bringing mandatory conformity assessments, fundamental rights impact evaluations, and human oversight requirements for AI systems used in hiring, credit, healthcare, and law enforcement.

By Deshani, Founder & Editor-in-Chief
The European Union's AI Act has moved from policy document to legal reality. Phase two of the regulation, which covers high-risk AI system requirements, entered into force on March 1, 2026, giving enterprises a 12-month window to achieve compliance before enforcement begins.
The practical obligations are substantial. Any AI system used in hiring and HR, credit scoring, healthcare diagnosis, biometric identification, or law enforcement now falls under the high-risk category. Deploying organizations must maintain comprehensive technical documentation, implement logging and monitoring, conduct fundamental rights impact assessments, and ensure a human can review and override any consequential AI decision.
What Compliance Actually Requires
The documentation requirements alone represent a significant operational lift. Organizations must maintain records of training data provenance, model architecture details, testing results, and ongoing performance monitoring. The conformity assessment, the EU's rough equivalent of a pre-market safety audit, must be completed before deployment for many high-risk applications and repeated annually thereafter.
For companies using third-party AI systems from providers such as Microsoft, Google, or SAP, responsibility does not transfer to the vendor. The organization putting the system to use remains the "deployer" under the Act and bears primary compliance obligations.