EU AI Act Enforcement Begins: What Every AI Company Needs to Know

The EU AI Act's first major enforcement deadline passed February 2, 2026. Here's a plain-English breakdown of what's now required, who's affected, and what happens if you're not compliant.

AI Newspaper Today · 2 min read

The EU AI Act — the world's first comprehensive legal framework for artificial intelligence — has entered its enforcement phase. As of February 2, 2026, providers of high-risk AI systems operating in EU markets must comply with a set of mandatory requirements. Non-compliance carries fines of up to €35 million or 7% of global annual turnover, whichever is higher, for prohibited practices, with lower tiers (up to €15 million or 3%) for most other violations.

What's Now in Force

High-Risk AI System Requirements

Systems classified as "high-risk" — including AI used in hiring, credit scoring, medical devices, educational assessment, and law enforcement — must now:

  1. Register in the EU AI Act database before deployment
  2. Maintain technical documentation covering training data, testing procedures, and performance metrics
  3. Implement human oversight mechanisms that allow qualified humans to interpret, override, or shut down the system
  4. Conduct conformity assessments and obtain CE marking in many cases
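Requirement 3 is as much an engineering obligation as a legal one: a qualified human must be able to interpret, override, or halt the system. A minimal sketch of what such a gate could look like in code, assuming a hypothetical interface (the names, confidence threshold, and routing rule here are illustrative, not prescribed by the Act):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    outcome: str          # the model's proposed outcome
    confidence: float     # model confidence in [0, 1]
    explanation: str      # human-interpretable rationale

def oversee(decision: Decision,
            review: Callable[[Decision], Optional[str]],
            confidence_floor: float = 0.9) -> str:
    """Route low-confidence decisions to a human reviewer.

    `review` returns a replacement outcome to override the model,
    or None to accept the model's outcome as-is.
    """
    if decision.confidence < confidence_floor:
        human_outcome = review(decision)
        if human_outcome is not None:
            return human_outcome   # human override
    return decision.outcome        # model outcome accepted

# Confident decisions pass through; uncertain ones are escalated.
auto = Decision("shortlist", 0.97, "skills match job profile")
risky = Decision("reject", 0.55, "sparse work history")
print(oversee(auto, review=lambda d: None))       # shortlist
print(oversee(risky, review=lambda d: "manual"))  # manual
```

The key design point is that the override path exists at the decision level, not just as an emergency kill switch: the reviewer sees the rationale and can substitute a different outcome.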

Prohibited Practices (In Force Since February 2025)

These remain prohibited with no grace period:

  • Biometric categorization by sensitive characteristics
  • Social scoring by public authorities
  • Real-time remote biometric identification in public spaces (with narrow exceptions)
  • AI systems that exploit psychological vulnerabilities

Who Is Affected

The Act applies to any company that:

  • Develops or deploys AI systems in the EU, or
  • Develops AI whose output is used in the EU

This includes US, UK, and Asian companies serving EU customers — not just European firms.

General Purpose AI (GPAI) Models

The GPAI provisions have also taken effect. They apply to all general-purpose models, with additional obligations for models presumed to pose "systemic risk" (those exceeding 10²⁵ FLOPs of training compute). Providers of GPAI models (including OpenAI, Google, Anthropic, Meta, and Mistral) must now:

  • Publish summaries of training data used
  • Implement a copyright compliance policy
  • Conduct adversarial testing and model evaluations (systemic-risk models)
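The 10²⁵ FLOP threshold can be sanity-checked with the common ≈6·N·D rule of thumb for dense-transformer training compute (N parameters, D training tokens). This is a back-of-the-envelope estimate only, not the Act's formal measurement method, and the example model below is hypothetical:

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough dense-transformer training compute: ~6 FLOPs
    per parameter per token (forward + backward pass)."""
    return 6 * n_params * n_tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # the Act's presumption threshold

# A hypothetical 70B-parameter model trained on 15T tokens:
flops = training_flops(70e9, 15e12)
print(f"{flops:.1e}")                    # 6.3e+24
print(flops > SYSTEMIC_RISK_THRESHOLD)   # False: below the line
```

By this estimate, a 70B/15T-token run lands just under the threshold, which is why the presumption is generally read as capturing only the largest frontier-scale training runs.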

Practical Steps for Compliance

For most AI product companies, the immediate priority should be:

  1. Classification audit — Determine if your system falls in a high-risk category
  2. Documentation sprint — Generate the required technical documentation
  3. Oversight design — Ensure human-in-the-loop mechanisms are built in where required
  4. Legal counsel — The Act's definitions have nuances best navigated with specialist guidance
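Step 1, the classification audit, can start as a simple triage mapping from intended use to risk tier. The sketch below is illustrative only: the category lists are abridged paraphrases of the Act's categories, and the output is a starting point for legal review, not a determination:

```python
# Abridged, illustrative mapping of use cases to AI Act risk tiers.
HIGH_RISK_USES = {
    "hiring", "credit_scoring", "medical_device",
    "educational_assessment", "law_enforcement",
}
PROHIBITED_USES = {
    "social_scoring", "biometric_categorization_sensitive",
    "realtime_remote_biometric_id",
}

def classify(use_case: str) -> str:
    """Triage a use case into a provisional risk tier."""
    if use_case in PROHIBITED_USES:
        return "prohibited"
    if use_case in HIGH_RISK_USES:
        return "high_risk"
    return "review_needed"  # still requires legal analysis

print(classify("hiring"))          # high_risk
print(classify("social_scoring"))  # prohibited
```

Even a coarse triage like this is useful for scoping: it tells you which products need the documentation sprint and oversight design in steps 2 and 3, and which need counsel before anything else.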

The EU AI Act has been debated, delayed, and derided as bureaucratic overreach by some and celebrated as necessary guardrails by others. With enforcement now live, the debate is over. Compliance is the new floor.
