On Wednesday, March 13, 2024, the European Parliament approved the regulation laying down harmonized rules on artificial intelligence (AI), known as the AI Act.
Given its global reach, stakeholders both inside and outside the EU will have to comply with the AI Act when it takes effect this year.
Global scope — The AI Act will apply to all providers, manufacturers, importers, distributors and deployers of systems integrating AI that are established in the EU or that, if established outside the EU, place their AI system or model on the EU market.
Each AI system or model will have to comply with the AI Act; a company using multiple AI systems or models will therefore have to review each of them separately for compliance.
Time frame — The AI Act has been formally adopted by the European Parliament and must still be endorsed by the Council of the EU. It will enter into force 20 days after its publication in the Official Journal of the EU. As a regulation, it will be directly applicable in all Member States and will not have to be transposed into national law.
The provisions of the AI Act will apply progressively after entry into force, according to the following timetable:
- prohibitions on certain AI practices: six months after entry into force;
- rules on general-purpose AI models and governance: 12 months;
- most remaining obligations, including those applicable to high-risk AI systems listed in Annex III: 24 months;
- obligations for high-risk AI systems embedded in products covered by existing EU harmonization legislation (Annex I): 36 months.
New requirements — As detailed in our client alert of Dec. 19, 2023, the obligations vary according to the level of risk of the AI system:
The deployment of high-risk AI systems is strictly regulated by the AI Act.
Such systems will have to undergo a conformity assessment, be covered by an EU declaration of conformity, be registered in an EU database and bear the CE marking before being placed on the EU market.
Fines for noncompliance can reach 15 million euros or 3% of the company’s total worldwide annual turnover, whichever is higher.
For transparency reasons, users must be informed that the content they are accessing was generated by AI.
Fines for noncompliance can reach 7.5 million euros or 1% of the company’s total worldwide annual turnover, whichever is higher.
Next steps — To provide some flexibility in the regulatory process and to take technological developments into account, certain provisions of the AI Act will be clarified at a later stage, notably the designation of the national authorities responsible for monitoring the correct application of the regulation and for imposing sanctions.
The European Commission is expected to issue guidance and delegated acts on various topics within five years, particularly on the definition of AI systems, the criteria and use cases for “high-risk” AI, the thresholds for general-purpose AI models presenting systemic risk, technical documentation requirements for general-purpose AI, conformity assessments and EU declarations of conformity.
AI regulatory sandboxes will also be implemented at the national level and will be operational 24 months after entry into force, with the notable aim of providing guidance on regulatory expectations and on how to fulfill the requirements and obligations set out in the AI Act.
The EU AI Office is to provide advice on the implementation of the new rules, in particular as regards general-purpose AI (GPAI) models, and to develop codes of practice to support the application of the AI Act. All of these further developments will have to be closely monitored to ensure full compliance with this new regulatory framework.