Key Points
- The EU has begun enforcing the AI Act with strict compliance requirements.
- For violations, companies face fines of up to €35 million or 7% of global revenue.
- Banned AI applications include social scoring, real-time facial recognition, and manipulative AI tools.
- The EU AI Office is developing detailed compliance guidelines.
The European Union has officially begun enforcing its groundbreaking Artificial Intelligence (AI) law, marking a significant step in global AI regulation. The EU AI Act, which first came into force in August 2024, now requires companies to comply with strict guidelines or face substantial penalties.
As of Sunday, February 2, 2025, prohibitions on specific AI applications and requirements for staff training on AI literacy are fully enforceable. Companies violating the rules could be fined up to €35 million ($35.8 million) or 7% of global annual revenue, whichever is higher. These penalties exceed those of the General Data Protection Regulation (GDPR), which caps fines at €20 million or 4% of annual turnover.
The AI Act prohibits AI systems that pose an “unacceptable risk” to citizens. These include:
- Social scoring systems similar to those used in China.
- Real-time facial recognition and biometric identification based on sensitive attributes like race or sexual orientation.
- Manipulative AI tools that exploit vulnerabilities in human behavior.
The law aims to ensure AI development remains ethical, safe, and transparent, focusing on protecting European citizens from potential misuse.
The AI Act is not yet in full effect, as further secondary legislation and guidelines will follow. Tasos Stampelos, head of EU public policy at Mozilla, argues the regulation is necessary despite its imperfections. “Compliance will depend on how future standards, guidelines, and derivative instruments shape its enforcement,” he stated.
The newly created EU AI Office has begun refining compliance frameworks, including a draft Code of Practice for General-Purpose AI (GPAI) models, such as OpenAI’s GPT. Developers of high-impact AI models will face rigorous risk assessments and accountability measures.
While some tech leaders worry that strict regulations may stifle innovation, others believe the rules could position Europe as a leader in trustworthy AI development. Critics, including Prince Constantijn of the Netherlands, argue that Europe focuses too much on regulation rather than innovation.
However, Diyan Bogdanov, an AI expert at fintech firm Payhawk, sees the regulation as an opportunity. “The AI Act’s requirements aren’t limiting innovation; they’re defining what responsible AI should look like.”