Prohibited Applications of AI Systems

The European Commission's proposal for the Artificial Intelligence Act (AIA) assesses and treats AI systems differently based on their application. Earlier, we wrote about "what the EU means by AI". Once it is clear that your project is an AI system, the second question is: what risk category does my system fall under? The AIA distinguishes three categories: prohibited applications (Title 2 AIA), high-risk applications (Title 3 AIA) and systems that interact with humans (Title 4 AIA). All systems that do not fall within one of these categories are considered low risk. In this article, we list the applications that are prohibited.
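The categorization above is essentially a decision procedure: check the prohibitions first, then the high-risk criteria, then the transparency obligations, and only then conclude that a system is low risk. As a minimal sketch, the names `RiskCategory` and `classify` and the three boolean inputs are hypothetical simplifications (each flag stands in for a full legal assessment under the corresponding Title of the AIA):

```python
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited (Title 2 AIA)"
    HIGH_RISK = "high-risk (Title 3 AIA)"
    TRANSPARENCY = "interacts with humans (Title 4 AIA)"
    LOW_RISK = "low risk"

def classify(is_prohibited: bool, is_high_risk: bool,
             interacts_with_humans: bool) -> RiskCategory:
    # Check in the order the AIA presents the categories:
    # prohibitions (Title 2) take precedence over high-risk rules
    # (Title 3), which take precedence over transparency duties (Title 4).
    if is_prohibited:
        return RiskCategory.PROHIBITED
    if is_high_risk:
        return RiskCategory.HIGH_RISK
    if interacts_with_humans:
        return RiskCategory.TRANSPARENCY
    return RiskCategory.LOW_RISK
```

A system that matches none of the flags falls through to low risk, mirroring how the Act treats everything outside the three named categories.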

Article 5 of the AIA lists four specific applications that are prohibited; only the last of these has exceptions. AI systems with these applications may not be placed on the market, made available or used. The prohibited applications are:

  1. Systems that, through subliminal techniques a person does not consciously perceive, materially distort that person's behaviour in a way that causes or is likely to cause physical or psychological harm to that person or to another person.
  2. Systems that exploit the vulnerabilities of specific groups of people due to their age or disability, in a way that causes or is likely to cause physical or psychological harm to a person in that group or to another person.
  3. Systems that are used by or on behalf of public authorities to assign a trustworthiness score to a person based on social behaviour or personality traits, where that score leads to:
    1. Detrimental treatment of the person or an entire group in a social context unrelated to the context in which the underlying data was generated or collected; or
    2. Detrimental treatment of the person or an entire group that is unjustified or disproportionate to the social behaviour or its severity.
  4. Systems that perform real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, except when strictly necessary for one of the following purposes:
    1. Targeted search for specific potential victims of crime, including missing children;
    2. Preventing a specific and immediate threat to a person's life or physical safety or preventing a terrorist attack;
    3. Locating or identifying a suspect of a crime that carries a maximum sentence of at least three years.

Whenever such a system is used under one of these exceptions, you must be able to justify that the use was necessary and that it was weighed against the infringement of the rights of everyone affected, including innocent bystanders. Each use must also be authorised in advance by a judge or another independent authority, which sets the conditions under which the system may be used.

In short, these four uses of AI systems are either prohibited outright or, in the case of real-time biometric identification, permitted only with the authorisation of a judge or independent authority. If you think your product falls into one of these categories, please contact us for advice on changes and improvements.

