The European Commission has released new guidelines to help companies operating AI models deemed to pose systemic risks comply with the European Union’s AI Act.
These guidelines are intended to ease concerns among AI developers and businesses about the regulatory demands of the AI Act, while offering greater clarity on how to meet the law's requirements. Companies that fail to comply face significant penalties: from €7.5 million or 1.5% of annual revenue up to €35 million or 7% of global turnover.
The AI Act, adopted into law last year, begins to apply on August 2 to AI models considered to pose systemic risk, including general-purpose foundation models developed by companies such as Google, OpenAI, Meta, Anthropic, and Mistral. These companies have until August 2, 2026, to fully comply with the legislation.
According to the Commission, AI systems with systemic risks are those with advanced computational power that could significantly affect public health, safety, fundamental rights, or the broader society.
To comply, these models will be required to:
- Undergo technical evaluations
- Identify and mitigate risks
- Conduct adversarial testing
- Report serious incidents to the Commission
- Maintain strong cybersecurity protections to guard against misuse or data theft
Additionally, general-purpose AI (GPAI) or foundation models must meet transparency obligations. These include:
- Creating technical documentation
- Implementing copyright compliance policies
- Sharing summaries of training data sources
“With today’s guidelines, the Commission supports the smooth and effective application of the AI Act,” said Henna Virkkunen, the EU’s Commissioner for Digital Affairs.