High-risk AI Systems under the new AI Act

In the European Commission's proposal for the Artificial Intelligence Act (AIA), AI systems are assessed and treated differently based on their application. Earlier, we wrote about "what the EU means by AI". Once it is clear that your project is an AI system, the second question is: what risk category does my system fall under? There are three categories in the AIA: prohibited applications (Title 2 AIA), high-risk applications (Title 3 AIA) and systems that interact with humans (Title 4 AIA). All systems that do not fall within one of these categories are considered low risk. In this article, we list the applications that fall under the high-risk category.

Annex III of the regulation lists the applications of AI systems that fall under the high-risk category. The list is divided into eight application areas, each containing one or more specific applications. In our overview below, we describe what is meant by high-risk for each application area and, where possible, give an existing implementation that would fall under it. For all of these specific applications, however, it is important to use AI systems only for the purpose for which they are intended.

  1. BIOMETRIC IDENTIFICATION OR CATEGORIZATION OF PEOPLE

AI systems that:

  • Remotely, either in real time or after the fact;
  • attempt to identify or categorize people;
  • based on their biometric data (e.g. your face or the way you move).

In fact, most forms of real-time remote identification using these methods are prohibited by Article 5 AIA, but there are exceptions, which then fall under the high-risk category.

Example: A camera searching for shoplifters.

  2. CONTROL AND OPERATION OF CRITICAL INFRASTRUCTURE

AI systems that:

  • Act as safety components in;
  • the management and operation of critical infrastructure, such as road traffic and utilities (water, gas, heating and electricity).

Example: Automatically monitoring water quality in a water treatment plant.

  3. EDUCATION

AI systems that:

  1. Determine access to education or assignment to courses;
  2. Assess students during their education or upon admission to a program.

Example: Screen students for admission to a university.

  4. WORK, HR AND SELF-EMPLOYMENT

AI systems that:

  1. Assist in selecting people for jobs or assignments, and in filtering and assessing applications.
  2. Assist in internal HR decisions such as promotions, layoffs and employee reviews.

Example: Systems that search for and approach candidates for job openings.

  5. ACCESS TO ESSENTIAL PUBLIC AND PRIVATE SERVICES

AI systems that:

  1. Assess whether people are entitled to benefits, allowances or other public services, or support the enforcement and implementation of these services.
  2. Determine people's creditworthiness or credit score, excluding internal systems of SMEs.
  3. Are used to dispatch emergency services or to establish priority in their dispatching.

Example: The SyRI system to detect benefit fraud.

  6. LAW ENFORCEMENT

AI systems used by the police to:

  1. Assess for an individual the risk of (re)committing a crime, or the risk to potential victims of crime.
  2. Determine a person's emotional state or recognize lies.
  3. Recognize deep fakes.
  4. Assess the reliability of evidence during a criminal investigation.
  5. Predict the likelihood of crimes, or estimate characteristics of individuals or groups, based on profiling of individuals.
  6. Profile people during the detection, investigation and prosecution of crimes.
  7. Conduct crime analysis involving people, searching through large amounts of data from various sources to find patterns or relationships.

Example: Risk scores for citizens indicating the likelihood of future crimes.

  7. MIGRATION AND BORDER CONTROL

AI systems used by government agencies to:

  1. Determine a person's emotional state or recognize lies.
  2. Assess immigration-related risks of people already on the territory of a member state or seeking to enter a member state.
  3. Determine the authenticity of travel documents and supporting documents, and recognize forgeries based on their security features.
  4. Assess asylum, visa and residence permit applications and handle complaints about them.

Example: Automated decisions on asylum applications.

  8. ADMINISTRATION OF JUSTICE

AI systems that:

  • Assist a judge in researching and interpreting facts and the law, and in applying the law to a concrete case.

Example: Pilot projects in Estonia with robot judges.

SUMMARY

These eight areas cover a broad spectrum of applications, but they can all be linked back to fundamental rights. These rights are the justification for the increased requirements placed on these systems. If you meet the requirements of the AIA for high-risk applications, you may still use these systems in practice. Know, however, that the obligations of monitoring and maintenance remain with you throughout the life of the system.

More questions?

If you were not able to find an answer to your question, contact us via our member-only helpdesk or our contact page.
