Artificial Intelligence Regulated: The EU AI Act Has Arrived!
The European Union has reached a milestone in the regulation of artificial intelligence (AI). Three years after the proposal was submitted, the EU legislators adopted the act by an overwhelming majority. The legislation, known as the AI Act (Regulation (EU) 2024/1689), is the first comprehensive framework regulating the development and use of AI within the EU. Its goal is to ensure that AI evolves and shapes society in an ethical and responsible manner. The act forms part of the EU Digital Strategy, which aims to achieve the digital infrastructure development goals set for 2030.
The AI Act's primary aim is to set out clear requirements and obligations for AI developers and users, tailored to the specific applications of artificial intelligence.
Risk-Based Approach
The AI Act, like many EU regulations, was originally designed as a consumer protection law. It takes a risk-based approach, defining four risk levels for AI systems: unacceptable, high, limited, and minimal or no risk.
AI systems that pose an unacceptable risk, endangering people’s safety and rights, will be banned; the prohibited practices are listed exhaustively in the legislation. High-risk systems, which present significant risks to health, safety, and fundamental rights, such as AI technologies used in critical infrastructure, education, healthcare, and justice, will be subject to strict obligations and control mechanisms.
The higher the risk an AI application poses, the more rigorous the oversight it must undergo. Most AI systems, such as content recommendation engines or spam filters, are expected to fall into the minimal-risk category.
Transparency Requirements
Generative artificial intelligence, such as ChatGPT, does not fall into the high-risk category, but it must comply with certain transparency requirements and with EU copyright law:
- It must be disclosed that the content was generated by AI.
- The model must be designed to avoid generating illegal content.
- Summaries of copyrighted data used for training must be published.
High-impact, general-purpose AI models, such as advanced versions of GPT-4 or Google Gemini, may pose systemic risks. These models must undergo thorough evaluation, and any serious incidents must be reported to the European Commission.
Content generated or modified by AI—such as images, audio, or video files (including 'deepfakes')—must be clearly labeled to ensure users are aware when they encounter such content.
Implementation and Supervision
The implementation of the legislation will be overseen by the European Artificial Intelligence Office, established in February 2024. The office aims to ensure the ethical and sustainable development of AI technologies and promote cooperation, innovation, and research related to AI. It also engages in international dialogue to achieve global alignment.
Entry into Force
Although the legislation entered into force on 1 August 2024, its provisions will apply gradually:
- The ban on AI systems posing unacceptable risk will take effect six months after the legislation’s entry into force.
- Codes of practice must be applied nine months after the entry into force.
- Rules for general-purpose AI systems (such as those underlying chatbots), including their transparency requirements, will apply 12 months after the entry into force.
- High-risk systems will have more time to comply, with their obligations taking effect 36 months after the entry into force.
By mid-2026, most of the regulatory framework will be in effect, with the remaining high-risk obligations following in 2027.
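Since all of these deadlines are defined as fixed offsets from the entry-into-force date, they can be derived with simple month arithmetic. The short Python sketch below illustrates this; the milestone labels and the add_months helper are our own illustration, and the authoritative application dates remain those fixed in the Regulation itself, which this naive calculation may miss by a day.

```python
from datetime import date

# Entry into force of the AI Act, as stated above.
ENTRY_INTO_FORCE = date(2024, 8, 1)

# Transition periods in months after entry into force, per the timeline above.
# The labels are illustrative shorthand, not the Regulation's wording.
MILESTONES = {
    "Ban on unacceptable-risk AI systems": 6,
    "Codes of practice": 9,
    "Rules for general-purpose AI systems": 12,
    "High-risk system obligations (longest transition)": 36,
}

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping to the month's last day."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days_in_month[month - 1]))

for label, months in MILESTONES.items():
    print(f"{add_months(ENTRY_INTO_FORCE, months):%d %B %Y}: {label}")
```

Running the sketch dates the ban at 1 February 2025, the codes of practice at 1 May 2025, the general-purpose AI rules at 1 August 2025, and the longest high-risk transition at 1 August 2027.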
The professional team of CLM Bitai & Partners is fully versed in the latest regulations. If you have any questions, do not hesitate to contact us!