The EU AI Act, proposed by the European Commission in April 2021, aims to regulate the use of AI in the EU by protecting users from AI-related harm and prioritizing human rights.
Using a risk-based approach, the EU AI Act imposes obligations that are proportional to the risk posed by an AI system or a general purpose AI model.
After a lengthy consultation process that saw several amendments proposed, the European Parliament adopted its negotiating position in June 2023, marking the start of the trilogue negotiations.
At the end of this process, a provisional agreement was reached in December 2023, and Coreper I (the Committee of the Permanent Representatives) reached a political agreement in February 2024.
The European Parliament voted to approve the AI Act on 13 March 2024.
This landmark legislation is set to become the global gold standard for AI regulation and will have important implications for organizations both within and outside of the EU due to its extraterritorial scope.
The core objectives of the EU AI Act are to:

- Ensure that AI systems placed on the EU market are safe and respect existing law on fundamental rights and Union values
- Ensure legal certainty to facilitate investment and innovation in AI
- Enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems
- Facilitate the development of a single market for lawful, safe, and trustworthy AI applications and prevent market fragmentation
These principles aim to create a balanced approach that promotes innovation while protecting users.
The EU AI Act includes several key provisions that businesses must adhere to:
| Risk Level | Examples |
|---|---|
| Minimal Risk | Spam filters, AI-enabled video games |
| Limited Risk | Chatbots, deepfakes |
| High Risk | AI in healthcare, employment screening tools |
| Unacceptable Risk | Social scoring systems, real-time biometric ID |
...
Minimal Risk: Systems such as spam filters and AI-enabled video games face no additional obligations under the Act, though voluntary codes of conduct are encouraged.
Limited Risk: Systems such as chatbots and deepfakes are subject to transparency obligations, so users must be informed that they are interacting with AI or viewing AI-generated content.
High Risk: Systems in areas such as healthcare and employment screening must meet strict requirements, including risk management, data governance, human oversight, and conformity assessment before being placed on the market.
Unacceptable Risk: Systems such as social scoring and real-time biometric identification in public spaces (subject to narrow exceptions) are prohibited outright.
The EU AI Act affects a broad range of entities involved in the AI lifecycle.
These include:
The Act has an extraterritorial scope, meaning it applies not only to EU-based entities but also to companies outside the EU if their AI systems interact with EU residents. This broad reach ensures comprehensive compliance across the global AI ecosystem.
Providers: Entities that develop an AI system or general-purpose AI model and place it on the market or put it into service under their own name or trademark.
Deployers: Entities that use an AI system under their authority, except where the system is used in the course of a personal, non-professional activity.
Importers: Entities located in the EU that place on the EU market an AI system bearing the name or trademark of an entity established outside the EU.
Distributors: Entities in the supply chain, other than providers or importers, that make an AI system available on the EU market.
Transparency and human oversight are key to building trust and ensuring compliance.
Clear Communication: Ensure clear communication about AI system usage to all stakeholders.
Training Requirements: Implement training and competency requirements for human oversight so that personnel can effectively monitor AI systems.
Operational Transparency: Maintain transparency in AI operations by providing clear instructions and information about AI capabilities and limitations.
The Act applies to providers, deployers, distributors, and importers of AI systems that are placed on the market or put into service within the European Union. The level of preparedness required under the Act differs for each operator. For providers and deployers, the Act may also apply extraterritorially: providers and deployers of AI systems may need to prepare for the EU AI Act even if they are based outside the European Union.
There are seven key design-related requirements for high-risk AI systems under the EU AI Act:

1. A risk management system maintained throughout the system's lifecycle
2. Data and data governance practices for training, validation, and testing datasets
3. Technical documentation demonstrating compliance
4. Record-keeping through automatic logging of events
5. Transparency and provision of information to deployers
6. Human oversight measures
7. Accuracy, robustness, and cybersecurity
Non-compliance with the provisions of the EU AI Act is sanctioned with hefty administrative fines: up to EUR 35 million or 7% of worldwide annual turnover (whichever is higher) for prohibited AI practices, up to EUR 15 million or 3% for most other violations, and up to EUR 7.5 million or 1% for supplying incorrect information to authorities.
...