AI Legal Framework: Prohibited AI Practices and Fundamental Regulatory Issues

Most people have probably already encountered some form of artificial intelligence system, as their use is becoming increasingly widespread worldwide. In addition to individual use, numerous companies apply AI systems for various purposes or plan to introduce them. The rapid pace of technological development and the growing importance of AI systems make it clear that their application must be regulated to ensure that these systems operate in an ethical, safe and transparent manner.

The AI Regulation, adopted by the European Parliament on 13 June 2024, establishes a uniform legal framework for artificial intelligence systems within the territory of the European Union.

Personal scope of the AI Regulation

The AI Regulation applies to all persons and organisations that develop, place on the market, put into service or use AI systems within the territory of the European Union. It also covers persons and organisations established or resident outside the European Union where the output produced by the AI system is used within the Union. Accordingly, anyone carrying out professional activities related to an artificial intelligence system must comply with the AI Regulation.

Entry into force

The provisions of the AI Regulation become applicable in several stages, so the persons and organisations concerned must reach full compliance in several steps. The provisions on AI literacy and on prohibited AI practices have applied since 2 February 2025, so compliance with them must already be ensured. If a company has not yet taken steps in this regard, it is advisable to do so without delay.

Beyond verifying compliance with the provisions that already apply, 2 August 2026, the date from which the AI Regulation applies generally, is approaching, so the persons and companies concerned must also prepare for further compliance steps.

Importance of related areas of law

In addition to the AI Regulation itself, the persons and companies concerned must comply with numerous other legal provisions and take further aspects into account during the development, placing on the market, putting into service and use of artificial intelligence systems. These include, for example, requirements of intellectual property law, labour law and the protection of personal data, the prohibition of unfair commercial practices towards consumers and of misleading business partners, and compliance with competition law.

Preparation for compliance with the AI Regulation and with the connected areas above is particularly important: in the event of non-compliance, companies may face significant sanctions, including fines of up to 7% of their annual global turnover, and an unlawfully operated artificial intelligence system, or any incident it causes, may also pose reputational risks.

Prohibited AI practices

As indicated above, the provisions on prohibited AI practices already apply; all companies and persons that use or develop artificial intelligence systems must therefore review those systems to determine whether they implement any practice prohibited by the AI Regulation. If a given AI system is found to implement a prohibited practice, its operation must be terminated without delay.

With regard to prohibited AI practices, it is important to note that the AI Regulation prohibits the development, placing on the market, putting into service and use of AI systems implementing such practices alike. The exception is real-time remote biometric identification AI systems, in the case of which only their use is prohibited.

Furthermore, it is important to clarify that prohibited AI practices are implemented not only by systems expressly developed or used for a purpose prohibited by the AI Regulation, but also by systems developed or used for a lawful purpose that are nevertheless capable of implementing a prohibited practice during their operation.

Below we present the AI practices prohibited by the AI Regulation, taking into account the guidelines issued by the European Commission.

Manipulative or deceptive systems

The prohibition covers, on the one hand, systems that use subliminal (that is, subconscious) elements or stimuli (sound, image or video, for example flashing windows or captions) that individuals cannot consciously perceive but that nevertheless significantly influence and weaken their ability to make informed decisions, without the persons concerned being aware of it.

In addition to systems influencing individuals subconsciously, systems applying manipulative or deceptive techniques with the purpose or effect of materially distorting individuals’ behaviour are also prohibited. The distortion must noticeably impair the ability of the persons concerned to make informed, well-founded decisions and thereby lead to a decision that the person or group would not otherwise have made.

A further condition of the prohibition is that the practice causes, or can reasonably be expected to cause, significant harm to the person concerned or to a specific group of persons.

Virtual reality may facilitate the weakening of informed decision-making, since it allows a greater degree of control over the stimuli displayed. Such a technique may occur, for example, when online platforms draw attention to limited-time offers through pop-up windows or attention-grabbing messages that pressure the customer into making a purchase decision as soon as possible, lest they miss the favourable conditions.

AI systems exploiting vulnerability

This category includes techniques that distort the behaviour of a given person or a specific group of persons by exploiting their age, disability, or social or economic situation and the vulnerability arising from that situation. The purpose of the provision is to protect vulnerable groups from discrimination and manipulative techniques.

A system may fall under this prohibition where, for example, artificial intelligence is used for performance evaluation, recruitment or CV screening, but instead of evaluating employees or screening CVs on objective criteria, it also bases its decisions on one of the above-mentioned characteristics or vulnerabilities of the applicant, thereby discriminating, for example, against older persons, women or persons living in poverty. Another example is a system that persuades elderly persons unfamiliar with modern technology to make harmful investments.

Systems used for social scoring

This category covers systems that evaluate or classify individuals over certain periods based on data on their behaviour displayed in different contexts, or on their known or inferred personal characteristics or personality traits, and use such evaluations in contexts beyond those in which the data were originally collected.

The prohibition applies where the purpose of the artificial intelligence system, or the manner of its application, is to evaluate or classify natural persons or groups of persons over a certain period, particularly on the basis of their social behaviour or their known, presumed or inferred personal or personality traits, and the resulting score leads to detrimental or unfavourable treatment of the persons concerned.

In other words, conclusions are drawn from individuals’ behaviour or personality that are unrelated to the classified trait or behaviour, for example inferring creditworthiness or employability from religious affiliation or political views, which may lead to disadvantageous or unfavourable treatment and may violate the right to dignity and the prohibition of discrimination.

Systems intended to assess the probability of criminal offences

The AI Regulation prohibits AI systems that, based on a given individual’s profile, characteristics or personality traits (for example nationality, type of vehicle or level of debt), determine, assess or predict the probability of that individual committing a criminal offence, whether in general or a specific offence.

The prohibition does not apply to AI systems that do not rely on the evaluation of individuals’ profiles, personality traits or characteristics but perform risk analysis on other grounds, for example assessing the risk of financial fraud by undertakings on the basis of suspicious transactions, where the evaluation is not carried out to predict whether a person will commit a specific criminal offence.

Creation of certain facial recognition databases

This prohibition forbids the creation or expansion of facial recognition databases through so-called “web scraping”, that is, the untargeted collection of facial images from the internet or from closed-circuit (CCTV) camera footage. “Web scraping” generally means the use of automated software that collects, extracts or copies information from the internet for later use. Such a case may arise, for example, where software automatically collects images from social media platforms (e.g. Facebook, Twitter) in an untargeted manner and builds a facial recognition database from them.
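To illustrate what “automated collection” means in practice, the following minimal Python sketch (a hypothetical illustration using only the standard library, not taken from the AI Regulation or the guidelines) extracts every image URL from an HTML page. Run at scale over many pages, and applied to facial images without a targeted purpose, this kind of untargeted harvesting is exactly what the prohibition addresses.

```python
from html.parser import HTMLParser


class ImageCollector(HTMLParser):
    """Collects the src attribute of every <img> tag encountered in a page."""

    def __init__(self):
        super().__init__()
        self.image_urls = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.image_urls.append(src)


# A hard-coded sample page stands in for a fetched web page.
sample_page = (
    '<html><body>'
    '<img src="/photos/a.jpg"><p>some text</p><img src="/photos/b.jpg">'
    '</body></html>'
)
collector = ImageCollector()
collector.feed(sample_page)
print(collector.image_urls)  # ['/photos/a.jpg', '/photos/b.jpg']
```

The sketch deliberately stops at extraction; it is the untargeted accumulation of facial images into an identification database that the AI Regulation prohibits, not HTML parsing as such.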

In general, the creation of a facial recognition database is not in itself prohibited under the AI Regulation; only its creation in the manner and for the purpose described above is identified as a prohibited practice.

Use of emotion recognition systems in the workplace or in education

The purpose of such a system is to identify, or draw conclusions about, the emotions, mood or intentions of a given individual on the basis of biometric data such as facial expressions, reactions, gestures, movements or voice.

Biometric data means any personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as a facial image or fingerprint.

The AI Regulation does not establish a general prohibition with regard to such systems, but expressly prohibits their use in workplaces or educational institutions. An emotion recognition system may, for example, be one that infers sadness or anger from an employee’s voice or facial expression, or analyses the emotions of a job applicant during an interview.

A system detecting physical condition, such as fatigue, applied in the case of pilots in order to prevent accidents does not fall under the scope of the prohibition.

Systems used for the biometric categorisation of persons

The AI Regulation also prohibits systems that categorise individuals on the basis of their biometric data in order to draw conclusions about their racial origin, political opinions, religious or philosophical beliefs, sex life or sexual orientation. Such a system might, for example, attempt to determine religious affiliation from facial features or tattoos, or display different advertisements based on gender or skin colour.

This prohibition does not extend to the lawful filtering or categorisation of biometric datasets or biometric data obtained in accordance with Union or national law, for example the sorting of images according to hair colour or eye colour, which may be used, for example, in the field of law enforcement.

AI systems used for real-time remote biometric identification in publicly accessible places for law enforcement purposes

The essence of remote biometric identification systems is that they identify individuals from a distance, without their active participation, by comparing their biometric data with data contained in a database. “Real-time” operation means that the comparison and identification take place simultaneously with the capture of the biometric data, or with only a short delay.

This part of the AI Regulation prohibits the use of real-time biometric identification AI systems for law enforcement purposes in publicly accessible places, a practice that may produce discriminatory effects or distorted results.

The AI Regulation defines several exceptions, such as the targeted search for missing persons or victims of kidnapping, human trafficking or sexual exploitation, for which purpose the use of real-time AI systems for biometric identification is permitted.

Summary and compliance considerations

Overall, it is advisable to evaluate all conditions in detail and to examine a given system’s characteristics, technical features, the purpose of its creation and its impact, in order to ensure that no system implementing a prohibited AI practice is developed, placed on the market or put into use.

To ensure compliance with the AI Regulation, it is advisable to involve legal and information security experts before an AI system is taken into use or, where the AI system is developed specifically for a company’s needs, at the very beginning of the development process, so that compliance is ensured from the initial phase, saving time, energy and financial resources.

In the event of non-compliance with the provisions on prohibited AI practices, those concerned may expect serious sanctions: the fine that may be imposed can reach EUR 35,000,000 (approximately HUF 14 billion) or 7% of the total global annual turnover of the organisation concerned in the preceding financial year, whichever is higher.
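The “whichever is higher” rule can be expressed as a simple calculation. The sketch below is illustrative only (the function name and sample turnover figures are ours, not from the AI Regulation):

```python
def maximum_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper limit of the fine for prohibited AI practices:
    EUR 35 million or 7% of total worldwide annual turnover for the
    preceding financial year, whichever is higher."""
    return max(35_000_000.0, global_annual_turnover_eur * 7 / 100)


# Hypothetical turnover figures:
print(maximum_fine_eur(200_000_000))    # 7% is 14m, so the 35m cap applies: 35000000.0
print(maximum_fine_eur(1_000_000_000))  # 7% exceeds 35m: 70000000.0
```

As the second example shows, for large undertakings the turnover-based limit quickly overtakes the fixed EUR 35 million ceiling.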

In addition, the persons and organisations concerned may face further legal consequences and fines due to non-compliance with other legislation, for example Regulation (EU) 2016/679 (hereinafter: GDPR) or the Labour Code.

 

The author of the article is Dr. Putnoki Poppea, Associated Attorney-at-law of CLM Bitai & Partners Law Firm.

The article was first published on the website of Grant Thornton Hungary: https://grantthornton.hu/en/audit-tax-valuation-accounting-digitax
