AI Red Flags: Navigating Prohibited Practices under the AI Act

May 3, 2024
Authored by
Osman Gazi Güçlütürk
Legal & Regulatory Lead in Public Policy at Holistic AI
Bahadir Vural
Legal Researcher at Holistic AI

As the European Union’s landmark Artificial Intelligence Act (AI Act) nears the end of its legislative journey following the European Parliament’s approval last month, navigating its restrictions and red lines becomes increasingly important. The AI Act introduces a risk-based framework for classifying AI systems, categorizing them as prohibited, high-risk, or low (or minimal) risk.

Notably, the prohibited systems are those associated with an unacceptable level of risk. Instead of imposing specific development and deployment standards to mitigate the associated risks, as it does for high-risk systems, the AI Act opts for an outright ban. While the Act refrains from defining 'unacceptable risk,' it identifies certain AI practices as inherently unacceptable due to their potential to significantly conflict with core Union values and fundamental rights, including human dignity, freedom, equality, and privacy.

Article 5 of the AI Act, which lists the prohibited AI practices, more commonly known as AI systems with unacceptable risk, is one of the most significant parts of the Act, not only because it will mark the end of the road for some AI systems within the EU but also because it will be the first provision to apply in the Act’s gradual application timeline, which extends up to 36 months from entry into force.

To assist providers, deployers, users, and other stakeholders in understanding the red lines of the AI Act, this blog outlines the prohibited practices under the Act and their implications.

Key Takeaways:

  • The EU AI Act identifies specific AI systems as posing intolerable risks to foundational European values due to their potential negative impacts and prohibits the placing on the market or the putting into service of these systems.
  • The prohibited practices are not absolute; many of them have exceptions and should therefore be examined on a case-by-case basis.
  • The rules on prohibited practices do not apply to AI models directly, but their application may be triggered when an AI model is used to create an AI system.
  • Unlike other requirements or obligations under the Act, rules on prohibited practices are operator-agnostic and do not change depending on the operator’s role or identity.
  • Non-compliance with these prohibitions can result in significant administrative fines of up to €35,000,000 or, for undertakings, up to 7% of global annual turnover, whichever is higher.
  • Prohibitions will be the first part of the Act to start applying, 6 months after the Act’s entry into force.

Which systems are prohibited under the EU AI Act?

Eight key AI practices are prohibited in the EU under the EU AI Act, as can be seen in the figure below.

Prohibited AI Practices

Subliminal, manipulative and deceptive AI techniques with the risk of significant harm

The AI Act establishes strict prohibitions against AI systems that use subliminal techniques, manipulation, or deception to alter human behavior, pushing individuals into decisions they would not otherwise make, especially when such actions could lead to significant harm. By influencing decisions and actions, these AI systems potentially undermine personal autonomy and freedom of choice, often without individuals being consciously aware of or able to counteract these influences. Such manipulations are considered highly risky, potentially leading to detrimental outcomes for an individual’s physical or psychological health or financial well-being.

AI technologies might employ subtle cues through audio, imagery, or video that, while undetectable to the human senses, are potent enough to sway behavior. Examples include streaming services embedding unnoticed messages in videos or films, or social media platforms that algorithmically promote emotionally charged content to manipulate user feelings, aiming to extend their platform engagement. These practices can subtly influence users’ subconscious, altering thoughts or actions without their realization, or exploiting emotions for undesirable ends.

The Act, however, does not ban the use of AI in advertising but draws a line between permissible AI-enhanced advertising and forbidden manipulative or deceptive techniques. This distinction is not always straightforward and requires careful examination of the specific context on a case-by-case basis, ensuring that the use of AI in advertising respects consumer autonomy and decision-making.

AI systems that exploit the vulnerabilities of persons in a way that can cause significant harm

The AI Act also prohibits AI systems that exploit human vulnerabilities to significantly distort behavior, deeming such practices to carry unacceptable risks. The Act emphasizes the protection of individuals particularly susceptible due to factors like age, disabilities (as defined by EU accessibility legislation, which includes long-term physical, mental, intellectual, or sensory impairments), or specific social or economic situations, including severe financial hardship or belonging to ethnic or religious minorities.

Again, advertising activities may be relevant to this type of prohibited practice. For instance, these AI systems might deploy advanced data analytics to generate highly personalized online ads. By leveraging sensitive information, such as a person's age, mental health status, or employment situation, these systems aim to exploit vulnerabilities, thereby influencing individuals' choices or the frequency of their purchases. This relentless targeting not only invades privacy but gradually erodes individuals' sense of autonomy, leaving them feeling powerless to manage their online shopping behaviors and choices.

AI systems used for the classification or scoring of people based on behavior or personality characteristics leading to detrimental treatment

The AI Act bans social scoring AI systems that assess or categorize individuals or groups over time based on their social behavior or known, inferred, or predicted personal traits, where the resulting social score leads to either or both of the following:

  1. Adverse treatment of individuals or groups in social contexts unrelated to the contexts in which the data was originally generated or collected; or
  2. Adverse treatment of individuals or groups that is unjustified or disproportionate to their social behavior or its gravity.

Specifically, the EU AI Act recognizes that these systems, when used by both public and private entities, could result in discriminatory consequences and the marginalization of specific demographics. Such systems may infringe on the right to dignity and non-discrimination, along with fundamental values like equality and justice. For example, employers using AI systems to analyze job applicants’ social media activity and make hiring decisions based on factors unrelated to job performance, such as political views, religious beliefs, or membership in specific groups, would be engaging in a prohibited practice.

However, it is important to note that this prohibition does not impede lawful assessment practices carried out for specific purposes in accordance with national and Union law. For example, the lawful deployment of AI algorithms by financial institutions to assess individuals' creditworthiness based on their financial behavior, such as payment history, debt levels, and credit utilization, helps them determine whether to approve loans or credit cards without posing any unacceptable risk in the context of the prohibitions.

Predictive policing based solely on AI profiling or AI assessment of personality traits

AI systems that evaluate individuals' potential for criminal behavior based solely on profiling or personality traits are also banned under the EU AI Act. This provision upholds the principle of the presumption of innocence, affirming that all individuals should be considered innocent until proven guilty. It highlights the necessity for evaluations within the EU to rely on concrete actions rather than predictions of behavior derived from profiling, personality characteristics, nationality, or economic standing, absent any reasonable suspicion supported by objective evidence and human review.

However, the Act carves out exceptions for AI tools that support human decision-making in assessing an individual's engagement in criminal activities, provided these assessments are grounded in factual and verifiable evidence directly related to criminal conduct. Additionally, AI systems focusing on risk assessments unrelated to individual profiling or personality traits—such as analyzing anomalous transactions to prevent financial fraud or using trafficking patterns to locate illegal narcotics or contraband for customs purposes—remain permissible under the Act. This distinction ensures that while protecting individual rights and the presumption of innocence, the legislation does not impede the use of AI in legitimate and evidence-based law enforcement activities.

Untargeted scraping of facial images to create AI facial recognition databases

The AI Act prohibits AI systems designed to create or expand facial recognition databases through the untargeted scraping of facial images from online sources or footage from closed-circuit television (CCTV) systems. CCTV systems, characterized by networks of video cameras that transmit signals to specific, non-publicly accessible monitors, are often used for surveillance and security. This prohibition is a critical measure within the AI Act aimed at preventing the spread of a culture of mass surveillance and practices that infringe upon fundamental rights, with a particular focus on the right to privacy. By banning such practices, the Act intends to protect individual autonomy and guard against the risks associated with uncontrolled data collection, emphasizing the importance of privacy and personal freedom in the digital age. The prohibition responds to concerns arising from concrete examples of untargeted scraping, and it complements the EU’s General Data Protection Regulation (GDPR) in protecting privacy where the processing of personal data by or for AI is involved; Clearview AI, for instance, has faced multiple penalties under the GDPR for the non-consensual scraping of images from the internet to build its facial recognition database.

AI systems for inferring emotions in workplaces and education

AI technologies aimed at inferring or interpreting individuals' emotional states in workplaces and educational settings will be banned under the EU AI Act. This measure stems from concerns over the scientific validity of these AI applications, which attempt to analyze human emotions. Indeed, given the diversity of emotional expressions across different cultures and situations, there is a significant risk that such AI systems could lead to inaccurate assessments and biases. These technologies often suffer from issues of reliability, accuracy, and applicability, leading to potential discriminatory practices and violations of personal rights. In environments like offices or schools, where there's a notable power differential, the use of emotion-detecting AI could result in unfair treatment—such as employees being sidelined based on assumed negative emotions or students being unfairly judged as underperforming due to perceived disengagement.

However, the AI Act specifies exceptions for AI applications designed for health or safety reasons, such as in medical or therapeutic settings, underscoring the Act’s nuanced approach to balancing technological advancement with ethical considerations and human rights protections.

Biometric categorization AI systems to infer sensitive personal traits

Another AI practice prohibited by the EU AI Act is categorizing individuals by analyzing biometric data, such as facial characteristics or fingerprints, to deduce their race, political leanings, trade union membership, religious or philosophical beliefs, sexual orientation, or details about their sex life. The use of AI in this manner risks enabling discriminatory practices across various sectors, including employment and housing, thus reinforcing societal disparities and infringing on fundamental rights like privacy and equality.

For example, when landlords or housing managers employ these AI tools for screening prospective tenants, there's a tangible risk of biased decisions against people from specific racial or ethnic backgrounds, or discrimination based on sexual orientation or gender identity. Such practices not only undermine fairness but also contravene principles of nondiscrimination and personal dignity.

Nevertheless, the AI Act acknowledges exceptions for activities that are legally permissible, including the organization of biometric data for specific, regulatory-approved purposes. Lawful uses might involve organizing images by attributes such as hair or eye color for objectives provided by law, including certain law enforcement activities, provided these actions comply with EU or national legislation. This nuanced approach aims to balance the benefits of AI technologies with the imperative to protect individual rights and prevent discrimination.

AI systems for real-time remote biometric identification in publicly accessible spaces for law enforcement purposes

Finally, the AI Act forbids AI systems for real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes. RBI refers to the process where capturing, comparing, and identifying biometric data occur almost instantaneously, without notable delay. Publicly accessible locations are described as areas, whether publicly or privately owned, that can be entered by an undetermined number of people, regardless of any conditions of access or capacity restrictions.

The Act recognizes such AI applications as profoundly infringing upon individuals' rights and freedoms, highlighting substantial privacy and civil liberties concerns. The potential for these technologies to intrude into private lives, foster a ubiquitous surveillance environment, and deter the exercise of essential freedoms, such as the right to peaceful assembly, is particularly troubling. Moreover, the propensity for technical shortcomings, including biases and inaccuracies within these systems, could lead to incorrect detentions or the disproportionate targeting of certain groups, undermining public confidence in law enforcement and intensifying societal divides. The immediate effects of deploying such systems, combined with the limited scope for subsequent oversight, amplify the risk of adverse outcomes.

Are there any exemptions for AI systems used for real-time biometric identification in the EU AI Act?

The AI Act specifies certain exceptions under precisely delineated and narrowly interpreted conditions where the use of such AI systems is deemed critical to protect a significant public interest that outweighs the potential risks involved. These exceptional use case scenarios, in which the utilization of real-time RBI systems in publicly accessible spaces for law enforcement is allowed, can be listed as follows:

  • The targeted search for specific victims of abduction, trafficking in human beings, or sexual exploitation, as well as the search for missing persons;
  • The prevention of a specific, substantial, and imminent threat to the life or physical safety of natural persons, or of a genuine and present or genuine and foreseeable threat of a terrorist attack;
  • The localization or identification of a person suspected of having committed one of the serious criminal offences listed in the Act, punishable by a custodial sentence or detention order for a maximum period of at least four years.

In introducing these exceptional use cases, the AI Act imposes various requirements and obligations on law enforcement authorities and Member States to mitigate the risks posed by real-time RBI systems (a simplified checklist sketch follows the list):

  1. Real-time RBI can only be used for verification of identity: The deployment of real-time RBI systems in publicly accessible areas for law enforcement purposes is restricted by the AI Act solely to confirming the identity of the specifically targeted individuals. Despite this limited scope, the effects of using this technology on the rights and freedoms of all affected stakeholders, as well as the seriousness, probability, and scale of the harm that would be caused if the system were not used, must be taken into account. Therefore, law enforcement authorities may only use real-time RBI systems for a specified period and in designated areas where there is evidence or indication of criminal activity. Furthermore, these deployments must be targeted at specific individuals: potential victims, individuals posing a threat to public safety, or suspected perpetrators.
  2. The law enforcement authority must conduct a fundamental rights impact assessment (FRIA) prior to using RBI. A FRIA is an evaluation process analyzing the potential impacts on human rights that may arise from the use of AI systems and determining the appropriate measures to mitigate these risks.
  3. The law enforcement authority must register the real-time RBI system in the EU database. However, in cases of urgency supported by justifications, the operation of these AI systems is allowed to commence without prior registration, on the condition that registration is completed without delay thereafter.
  4. Prior authorization from either a judicial authority or an independent administrative body is required for the use of RBI. In the case of an independent administrative body, only bodies whose decisions are binding on the Member State where the deployment is intended can give valid authorization. Exceptions to the requirement for prior approval are only permitted by the Act in situations of justified urgency, where it is practically impossible to obtain authorization before using the systems, in which cases the authorization must be requested without delay, within 24 hours of use. If the authorization request is denied, the usage must be ceased immediately, and all related data must be discarded and deleted.
  5. National authorities must be notified of each use of RBI systems. The Act provides that each instance of deploying a real-time RBI system in publicly accessible spaces for law enforcement purposes must be promptly notified to the relevant market surveillance authority and the national data protection authority. This notification must contain, at a minimum, the number of decisions taken by judicial or independent administrative authorities and the information requested in the template to be issued by the Commission pursuant to Article 5(6) of the Act.
  6. Member States may introduce detailed rules on the use of these systems, as well as on the authorization procedure, in their national law. In that case, they must inform the Commission of these rules within 30 days of their adoption. Member States may also choose to provide for only some of these exceptional use cases in their national law, or for none at all.
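
To make the sequence of safeguards above more concrete, here is a minimal Python sketch of a pre-deployment checklist. It is our own simplification, not a construct from the Act: the class, field names, and urgency logic are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class RealTimeRBIDeployment:
    """Hypothetical record of the safeguards above; all names are illustrative."""
    targets_specific_individuals: bool  # item 1: victims, threats, or suspects only
    fria_completed: bool                # item 2: fundamental rights impact assessment
    registered_in_eu_database: bool     # item 3: EU database registration
    prior_authorization_obtained: bool  # item 4: judicial or independent body approval
    urgency_justified: bool             # narrow urgency exception for items 3 and 4
    authorities_notified: bool          # item 5: market surveillance + data protection

def may_proceed(d: RealTimeRBIDeployment) -> bool:
    """Simplified gate: every safeguard is in place, or the narrow urgency
    exception temporarily covers registration/authorization (both of which
    must then follow without delay under the Act)."""
    registration_ok = d.registered_in_eu_database or d.urgency_justified
    authorization_ok = d.prior_authorization_obtained or d.urgency_justified
    return (d.targets_specific_individuals and d.fria_completed
            and registration_ok and authorization_ok and d.authorities_notified)

# Illustrative use: urgency temporarily covers the pending database registration.
deployment = RealTimeRBIDeployment(
    targets_specific_individuals=True, fria_completed=True,
    registered_in_eu_database=False, prior_authorization_obtained=True,
    urgency_justified=True, authorities_notified=True,
)
print(may_proceed(deployment))  # True
```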

Do EU AI Act prohibitions apply to AI models?

Rules on prohibited AI practices do not directly apply to AI models. The Act draws a subtle distinction between AI systems and AI models, introducing specific rules for the latter only when they are general-purpose models, the key building blocks of generative AI. The rules on prohibitions primarily target AI systems: what the Act prohibits is the placing on the market or the putting into service of AI systems that engage in the prohibited practices.

Hence, prohibitions do not directly apply to AI models. However, when an AI model, whether general-purpose or specific, is used to create an AI system, the prohibitions under the Act may be triggered.

Who do EU AI Act prohibitions apply to?

Rules on prohibited AI practices are operator-agnostic. The AI Act distinguishes between various actors involved with AI systems, assigning distinct responsibilities based on their specific roles in relation to the AI system or model. This differentiation is particularly evident in the context of AI systems and general-purpose AI models, where the most significant responsibilities are allocated to the providers. This approach ensures that those who have the most control over the development and deployment of AI technologies are held accountable to the highest standards. In contrast to these tailored obligations for different actors, the rules regarding prohibited AI practices are designed to be operator-agnostic.

This means that the prohibitions apply universally, regardless of the actor's specific role. Whether it involves providing, developing, deploying, distributing, or utilizing AI systems that engage in prohibited practices, such actions are uniformly forbidden within the EU. This broad application underscores the Act's commitment to preventing practices that could undermine fundamental rights or pose unacceptable risks, emphasizing a comprehensive approach to regulation that encompasses all forms of interaction with AI technologies deemed harmful.

When will the EU AI Act prohibitions start applying?

The Act has a gradual application timeline that spans up to 36 months, starting from its entry into force, which will occur on the 20th day following the Act’s publication in the EU Official Journal. However, the rules on prohibitions will be the first to apply, with a 6-month grace period after the Act’s entry into force. Given that the Act is expected to be officially adopted at the end of May 2024, the rules on prohibited practices are likely to start applying before the end of the year.
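
To illustrate how these deadlines cascade from the publication date, here is a minimal Python sketch. The publication date used is a placeholder assumption, since the actual date was unknown at the time of writing; the 6-, 24-, and 36-month offsets mirror the timeline described above.

```python
import calendar
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months, clamping the day."""
    month_index = d.month - 1 + months
    year, month = d.year + month_index // 12, month_index % 12 + 1
    return date(year, month, min(d.day, calendar.monthrange(year, month)[1]))

# Placeholder assumption: the real publication date was not yet known
# when this article was written.
publication = date(2024, 6, 1)

# Entry into force: the 20th day following publication in the Official Journal.
entry_into_force = publication + timedelta(days=20)

print("Entry into force:             ", entry_into_force)
print("Prohibitions apply (6 mo):    ", add_months(entry_into_force, 6))
print("Most provisions apply (24 mo):", add_months(entry_into_force, 24))
print("Latest provisions (36 mo):    ", add_months(entry_into_force, 36))
```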

What are the implications of using a prohibited AI system under the EU AI Act?

The Act provides hefty penalties for non-compliance with its provisions, and the heftiest fines are triggered by non-compliance with the rules on prohibited practices. Non-compliance with the prohibited AI practices is subject to administrative fines of up to EUR 35,000,000 or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher. Union institutions, bodies, offices, and agencies, on the other hand, are subject to administrative fines of up to EUR 1,500,000 for non-compliance with the prohibitions.
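
As a quick illustration of the "whichever is higher" rule for undertakings, here is a minimal Python sketch; the turnover figures are hypothetical.

```python
def max_fine_prohibited_practice(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for breaching the Article 5 prohibitions:
    EUR 35,000,000 or 7% of total worldwide annual turnover for the
    preceding financial year, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical undertakings; turnover figures are illustrative only.
print(max_fine_prohibited_practice(100_000_000))    # 35,000,000.0 (fixed amount is higher)
print(max_fine_prohibited_practice(1_000_000_000))  # 70,000,000.0 (7% of turnover is higher)
```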

Stay within the safe zone with Holistic AI!

Preparing for the implementation of a detailed regulatory framework like the AI Act is a considerable undertaking that requires time and careful planning. While most of the Act's provisions come into effect after a 24-month grace period, the rules concerning prohibited practices are set to be enforced first. Affected entities must now prioritize establishing the processes and practices needed to comply with the regulation.

Find out how the EU AI Act impacts your business by using our EU AI Act Risk Calculator, and schedule a call to learn more about how Holistic AI can help you get ahead with your AI Act preparedness.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
