
The European AI Act: Everything You Need to Know in 2025

Guillaume Hochard
2025-03-10
5 min

Key takeaways:

  • The European AI Act, the world's first comprehensive AI regulation, classifies AI systems into four risk levels: unacceptable (banned), high risk (strictly regulated), limited risk (transparency required), and minimal risk (unrestricted).
  • Banned systems include social scoring, real-time public biometric identification, and behavioral manipulation.
  • High-risk categories cover critical infrastructure, education, employment, credit scoring, and justice, and require high-quality data, detailed documentation, transparency, human oversight, and security.
  • Most companies using standard AI tools like Copilot or ChatGPT face only transparency obligations, but organizations developing or deploying AI for HR, banking, or healthcare likely fall under high-risk requirements.
  • Penalties reach up to 35 million euros or 7% of global turnover.
  • Ikasia recommends three immediate actions: mapping all AI systems in use, classifying each by risk level, and appointing an internal AI compliance referent to manage governance.

A Global First

The European Union is the first major power to legislate comprehensively on artificial intelligence. The AI Act aims to classify AI systems according to the risk they pose to the fundamental rights of users.

The 4 Risk Levels

  1. Unacceptable Risk (Banned): Social scoring, real-time biometric identification in public spaces (with exceptions), manipulation of human behavior. These systems are simply prohibited.
  2. High Risk (Strictly Regulated): AI in critical infrastructure, education, employment (CV sorting), credit scoring, justice.
    • Obligations: High-quality data, detailed documentation, transparency, human oversight, robustness, and security.
  3. Limited Risk (Transparency): Chatbots, emotion recognition systems, deepfakes.
    • Obligation: Users must be informed that they are interacting with a machine or that the content is artificially generated.
  4. Minimal Risk (No Restrictions): Spam filters, video games. The vast majority of AI systems fall into this category.

Impact on Companies

For most companies using standard AI tools (Copilot, ChatGPT), the impact is limited to transparency obligations. However, if you develop or deploy AI systems for HR, banking, or healthcare, you are likely in the "High Risk" category.

What to Do?

  • Map your AI systems: Identify all AIs used or developed internally.
  • Classify risks: Determine the category of each system.
  • Governance: Appoint an AI compliance referent.
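The first two steps (mapping and classifying) can be sketched as a simple internal inventory script. The sketch below is purely illustrative: the domain and practice lists are a loose paraphrase of the categories named in this article, not the Act's legal definitions, and any real classification must be confirmed by legal counsel.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strictly regulated
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no restrictions

# Illustrative subsets of the categories mentioned in this article.
BANNED_PRACTICES = {"social scoring", "real-time public biometric id",
                    "behavioral manipulation"}
HIGH_RISK_DOMAINS = {"critical infrastructure", "education", "employment",
                     "credit scoring", "justice"}
LIMITED_RISK_KINDS = {"chatbot", "emotion recognition", "deepfake"}

@dataclass
class AISystem:
    name: str
    domain: str  # where the system is used
    kind: str    # what the system does

def classify(system: AISystem) -> RiskLevel:
    """First-pass classification; not a legal determination."""
    if system.kind in BANNED_PRACTICES:
        return RiskLevel.UNACCEPTABLE
    if system.domain in HIGH_RISK_DOMAINS:
        return RiskLevel.HIGH
    if system.kind in LIMITED_RISK_KINDS:
        return RiskLevel.LIMITED
    return RiskLevel.MINIMAL

# A hypothetical inventory, as produced by the mapping step.
inventory = [
    AISystem("CV screening tool", "employment", "ranking model"),
    AISystem("Support assistant", "customer service", "chatbot"),
    AISystem("Spam filter", "IT", "classifier"),
]
for s in inventory:
    print(f"{s.name}: {classify(s).value}")
```

Even a rough inventory like this gives the AI compliance referent a starting list of systems to review, ordered by regulatory exposure.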

Sanctions

Penalties can go up to 35 million euros or 7% of global turnover, whichever is higher. Compliance is therefore not an option.


Enjoyed this article? Check out our AI Ethics & Governance Training — 2 days to drive AI strategy across your organisation.

Tags

Regulation · AI Act · Europe

Want to go further?

Ikasia offers AI training designed for professionals. From strategy to hands-on technical workshops.