As of February 2, 2025, the European Union has officially banned AI systems that regulators deem to pose an “unacceptable risk” to individuals or society. This marks the first compliance deadline of the EU’s landmark AI Act, a comprehensive regulatory framework that the European Parliament approved in March 2024 and that entered into force on August 1, 2024.
Defining Risk Levels in the AI Act
The AI Act categorizes AI systems into four risk levels:
- Minimal Risk – Systems like spam filters, which require no regulatory oversight.
- Limited Risk – AI applications such as customer service chatbots, subject to light transparency obligations.
- High Risk – AI used in healthcare, hiring, and legal decisions, facing stringent regulatory scrutiny.
- Unacceptable Risk – AI applications deemed harmful, now prohibited outright.
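To make the tiering concrete, here is a minimal Python sketch of the four categories. This is our own illustration, not official tooling: the tier names come from the Act, but the mapping of example systems to tiers is an assumption drawn from the examples above.

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers defined by the EU AI Act."""
    MINIMAL = "minimal"            # e.g. spam filters: no obligations
    LIMITED = "limited"            # e.g. chatbots: light transparency duties
    HIGH = "high"                  # e.g. hiring tools: stringent scrutiny
    UNACCEPTABLE = "unacceptable"  # prohibited outright as of Feb 2, 2025

# Hypothetical mapping for illustration, mirroring the examples above
EXAMPLE_SYSTEMS = {
    "spam filter": RiskLevel.MINIMAL,
    "customer service chatbot": RiskLevel.LIMITED,
    "hiring screening tool": RiskLevel.HIGH,
    "social scoring system": RiskLevel.UNACCEPTABLE,
}

def is_banned(system: str) -> bool:
    """True if the example system falls in the prohibited tier."""
    return EXAMPLE_SYSTEMS[system] is RiskLevel.UNACCEPTABLE

for name in EXAMPLE_SYSTEMS:
    print(f"{name}: {EXAMPLE_SYSTEMS[name].value} (banned: {is_banned(name)})")
```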
AI Applications Now Banned
Under Article 5 of the AI Act, the following AI applications are now illegal in the EU:
- Social scoring systems, such as AI that ranks individuals based on behavior.
- Manipulative AI, which subliminally or deceptively influences decision-making.
- AI exploiting vulnerabilities, including those related to age, disability, or socioeconomic status.
- Predictive policing AI, which forecasts the risk of individuals committing crimes based solely on profiling or personality traits.
- Emotion recognition in workplaces and schools, except for safety or medical purposes.
- Real-time biometric surveillance in public spaces for law enforcement, with limited exceptions.
- Facial recognition databases built through untargeted scraping of facial images from the internet or CCTV footage.
Enforcement and Penalties
Companies that violate these bans, regardless of where they are headquartered, could face fines of up to €35 million (~$36 million) or 7% of their worldwide annual revenue, whichever is higher.
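Since the cap is simply the larger of the two figures, a quick back-of-the-envelope sketch shows how the 7% prong dominates for large firms (the revenue figure below is hypothetical):

```python
def max_fine_eur(worldwide_annual_revenue_eur: float) -> float:
    """Upper bound on a fine for violating the Article 5 bans:
    EUR 35 million or 7% of worldwide annual revenue, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_revenue_eur)

# Hypothetical: a company with EUR 2 billion in annual revenue
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # 7% of EUR 2B = EUR 140,000,000
```

For any company with worldwide revenue above €500 million, the 7% prong exceeds the €35 million floor.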
Voluntary Compliance and Industry Response
Even before this deadline, many tech firms had begun aligning with the Act. Over 100 companies, including Amazon, Google, and OpenAI, signed the EU AI Pact in September 2024, committing to early compliance. However, major players like Apple, Meta, and the French AI startup Mistral opted not to sign. While these companies remain legally bound by the Act, their reluctance underscores lingering concerns about regulatory clarity.
Exceptions and Future Guidelines
The AI Act does allow exceptions for law enforcement in cases of national security, missing persons investigations, or imminent threats—subject to strict oversight. Similarly, emotion recognition AI may be used in schools or workplaces for therapeutic or safety reasons.
Further guidance is expected in early 2025, as the European Commission finalizes additional implementation details following stakeholder consultations. With enforcement now underway, the coming months will reveal how effectively the EU can balance innovation with ethical AI governance.