PREPARING FOR COMPLIANCE WITH THE AI REGULATIONS AND PROHIBITED AI PRACTICES

Introduction


Most people have likely encountered an artificial intelligence (AI) system, as the use of such systems continues to grow worldwide. Beyond individual use, many companies are adopting or planning to adopt these systems for various purposes. The rapid advancement of the technology and the growing importance of AI systems underscore the need for regulation to ensure their ethical, safe, and transparent operation.

The EU AI Regulation (Regulation (EU) 2024/1689 of June 13, 2024) aims to establish a unified legal framework for AI systems within the European Union (EU).

The regulation applies to all individuals or organizations that develop, deploy, market, or use AI systems within the EU. This includes those located outside the EU if the outcomes produced by their AI systems are used within the EU. Consequently, any party engaging in AI-related activities must comply with the rules set out in the AI Regulation.

Some provisions of the regulation will come into force in phases, meaning that individuals and organizations must achieve compliance incrementally. Generally, the requirements set out in the AI Regulation must be adhered to starting from August 2, 2026.

However, certain provisions take effect earlier or later than this general deadline. Rules concerning prohibited AI practices will be enforceable from February 2, 2025. Provisions on general-purpose AI models, along with governance and enforcement measures such as sanctions and the establishment of EU-level AI bodies, will apply from August 2, 2025. Finally, from August 2, 2027, the rules for classifying certain high-risk AI systems, and the corresponding obligations, will take effect.


Prohibited AI Practices


As mentioned, the earliest compliance deadline is February 2, 2025, when the rules on prohibited AI practices take effect. Before that date, affected parties must review their AI systems to ensure they do not engage in any prohibited practice and, where necessary, discontinue non-compliant systems.

The regulation identifies the following as prohibited AI practices:

  • Systems employing manipulative or deceptive techniques
  • Systems exploiting vulnerabilities
  • Systems used for social scoring
  • Systems predicting the likelihood of criminal behavior
  • Creation of facial recognition databases
  • Emotion recognition systems used in workplaces or education
  • Systems categorizing individuals based on biometric data
  • Real-time remote biometric identification systems used in public spaces for law enforcement

1. Systems Employing Manipulative or Deceptive Techniques


Prohibited systems include those that use subliminal elements (e.g., sounds, images, videos) that individuals cannot consciously perceive but that significantly influence or impair their ability to make informed decisions without their awareness.

These systems also include manipulative or deceptive techniques that distort behavior or compel unwanted actions beyond individual control. For the prohibition to apply, such practices must cause or reasonably be expected to cause significant harm to individuals or groups.

Virtual reality systems, which can tightly control stimuli presented to individuals, may facilitate these manipulative techniques. For instance, pop-ups and urgent messages urging customers to make quick purchasing decisions to secure limited-time offers can exert undue pressure.

The regulation clarifies that legitimate practices used in medical treatments, compliant with legal and health standards, are exempt.


2. Systems Exploiting Vulnerabilities


This category includes systems exploiting individuals' age, disabilities, social or financial status, or other vulnerabilities to distort their behavior. The goal is to protect vulnerable groups from discrimination and manipulative tactics.

For example, an AI system evaluating job applications or employee performance might unlawfully base decisions on individuals' vulnerabilities, such as age or economic status, leading to discrimination against older adults, women, or those living in poverty.


3. Systems Used for Social Scoring


Prohibited systems evaluate or classify individuals based on personal characteristics, inferred attributes, or behavior over time and use these assessments in contexts beyond the original data collection purpose. For example, drawing unrelated inferences about creditworthiness or employability from religious or political affiliations is banned due to the risk of adverse treatment and discrimination.


4. Systems Predicting the Likelihood of Criminal Behavior


The regulation prohibits systems that evaluate an individual's profile, characteristics, or traits (e.g., nationality, car type, debt levels) to determine the likelihood of general or specific criminal behavior.

However, systems conducting risk analysis based on other criteria, such as detecting suspicious transactions to prevent financial fraud, are not prohibited.


5. Creation of Facial Recognition Databases


The regulation bans creating or expanding facial recognition databases through untargeted "web scraping" (automated software collecting data from websites) of facial images or through footage from closed-circuit cameras. Facial recognition databases are not universally banned; only those built through these untargeted methods are prohibited.


6. Emotion Recognition Systems in Workplaces or Education


Emotion recognition systems analyze individuals' biometric data (e.g., facial expressions, gestures, tone of voice) to infer emotions, moods, or intentions. The regulation prohibits their use in workplaces or educational settings but allows exceptions, such as systems detecting pilot fatigue to prevent accidents.


7. Systems Categorizing Individuals Based on Biometric Data


The regulation bans systems categorizing individuals based on biometric data to infer race, political views, religious beliefs, or sexual orientation. However, lawful screening or categorization of biometric data, such as sorting images by hair or eye color for law enforcement, is permitted.


8. Real-Time Remote Biometric Identification Systems in Public Spaces for Law Enforcement


These systems identify individuals remotely and in real time by comparing their biometric data against a database. Their use for law enforcement in publicly accessible spaces is prohibited unless strict conditions are met, such as the targeted search for victims of certain serious crimes or the prevention of a specific, imminent threat.


Beyond Prohibited Practices: Additional Compliance Steps


Even entities whose AI systems comply with the prohibition rules must address broader regulatory requirements, especially for systems categorized as high-risk under the AI Regulation.

Organizations failing to meet obligations under the AI Regulation, beyond the prohibitions, face penalties of up to €15 million or 3% of their global annual turnover, whichever is higher.

The next major compliance deadline is August 2, 2025, for general-purpose AI models, defined as models capable of performing a wide range of tasks. Developing and deploying these models will involve evolving legal obligations to ensure compliance.


Conclusion


Proactive legal and information security involvement during AI development and use is critical to achieving compliance with the AI Regulation, saving time, resources, and costs in the long run. Non-compliance with the prohibition rules or broader obligations can result in significant penalties, including fines of up to €35 million or 7% of global annual turnover, whichever is higher.