Requirements for high-risk AI systems in the new AI Act

The European Commission's draft Artificial Intelligence Act (AIA) introduces several risk categories for AI systems. The three main categories are prohibited, high-risk and low-risk; which category an AI system falls under depends on the system's intended purpose. Which systems qualify as high-risk can be read in our earlier article. In this article, we give an overview of the requirements for high-risk AI systems and how to meet them. In the remainder of this article, ‘AI system’ refers exclusively to a high-risk AI system.

  1. RISK MANAGEMENT SYSTEM (ART. 9)

AI systems must have an accompanying risk management system. The purpose of the risk management system is to identify the (foreseeable) risks of the AI system, assess them and then take measures to minimize them. Any residual risks for which no measures are taken should be acceptable under normal use of the AI system, and these risks must be communicated to the user.

In addition to a system that keeps track of risks, attention must be paid to the users' knowledge of the system and to the suitability of the risk management system for the specific AI system. Testing of the AI system is also covered by this article: the AI system must be suitable for its intended purpose, and this suitability must be tested.

  2. DATA AND DATA GOVERNANCE (ART. 10)

For an AI system that uses data, there are quality requirements for the data and for how it is managed. First, the data must be managed properly, with attention to how it was collected, how it has already been processed, and possible defects in the data. All data must be accurate, fit for the intended purpose of the AI system and representative. Furthermore, this article includes a legal basis for processing sensitive personal data in order to counteract bias in the results of the AI system.
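To make this concrete, the kind of data-quality gate Art. 10 calls for could be sketched as follows. This is a minimal illustration, not a normative check; the field names, example records and the missing-value threshold are all hypothetical assumptions.

```python
# Minimal sketch of a data-quality check in the spirit of Art. 10 AIA.
# Field names, records and the 5% threshold are illustrative assumptions.

def check_dataset(records, required_fields, max_missing_ratio=0.05):
    """Return human-readable findings about defects in the dataset."""
    findings = []
    for field in required_fields:
        missing = sum(1 for r in records if r.get(field) in (None, ""))
        ratio = missing / len(records)
        if ratio > max_missing_ratio:
            findings.append(f"field '{field}': {ratio:.0%} missing values")
    return findings

records = [
    {"age": 34, "income": 52000},
    {"age": None, "income": 48000},
    {"age": 29, "income": None},
]
print(check_dataset(records, ["age", "income"]))
```

In practice such checks would also cover collection provenance and representativeness, but even a simple automated gate like this documents that defects in the data were looked for.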

For AI systems that are not trained on data, similar safeguards should apply to the techniques that are used instead.

  3. TECHNICAL DOCUMENTATION (ART. 11)

Before an AI system is placed on the market, technical documentation must be prepared. The documentation must demonstrate that the AI system meets the requirements of the AIA. Annex 4 lists the topics that must at least be covered in the documentation. The documentation must at all times reflect the system currently in use and must therefore be updated whenever the system changes.

  4. RECORD-KEEPING (ART. 12)

AI systems should be designed so that their use is automatically recorded in the system's logs. Using these logs, it should be possible to assess the functioning of the system over time. In particular, the system should be able to record its functioning in situations that may lead to risk.
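The automatic logging described above could be sketched as follows. This is an illustrative design, not a format prescribed by the AIA; the event fields (`input_id`, `score`, `model_version`) are hypothetical examples.

```python
# Sketch of automatic event logging in the spirit of Art. 12 AIA.
# The event fields shown are illustrative assumptions, not a normative format.
import datetime
import json

class AuditLog:
    """Append-only log of system events with UTC timestamps."""

    def __init__(self):
        self.entries = []

    def record(self, event, **details):
        self.entries.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "event": event,
            **details,
        })

log = AuditLog()
log.record("prediction", input_id="req-42", score=0.87, model_version="1.3.0")
print(json.dumps(log.entries[-1], indent=2))
```

A production system would write such entries to tamper-evident, durable storage so that the functioning of the system can be reconstructed over time.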

For systems that use biometric data to identify or categorize individuals, logging is mandatory at the level of individual results. It must be clear when the system was used, which reference database the input data was compared against, the input data that led to a match with the reference database, and which employees were involved in verifying the identification.

  5. TRANSPARENCY AND INFORMATION FOR USERS (ART. 13)

AI systems must be designed and developed in such a way that their operation is sufficiently transparent to users. Users must be able to interpret and explain the outcomes and use them effectively. For this purpose, the AI system must come with instructions for use that describe the system and its capabilities. A precise list of requirements is included in Art. 13 AIA.

  6. HUMAN OVERSIGHT (ART. 14)

AI systems must be designed and developed to allow effective human oversight. This oversight is intended to reduce or even prevent the risks of the AI system. To enable it, there must be an oversight mechanism that is appropriate for the AI system in question. This mechanism must allow the human supervisor to use, assess, interpret and override the outcomes of the AI system.

For AI systems that use biometric data for the identification or categorization of people, there is the additional requirement that an identification may not be acted upon by the AI system until it has been verified by two humans.
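The two-person verification rule above could be implemented as a simple gate in front of the identification result. This is a sketch under assumed names (`IdentificationResult`, the reviewer identifiers); the AIA does not prescribe any particular implementation.

```python
# Sketch of a two-person verification gate for biometric identification
# results (Art. 14 AIA). Class and reviewer names are illustrative assumptions.

class IdentificationResult:
    """A biometric match that is released only after two distinct reviewers confirm it."""

    def __init__(self, subject_id):
        self.subject_id = subject_id
        self.verified_by = set()  # a set deduplicates repeat confirmations

    def verify(self, reviewer):
        self.verified_by.add(reviewer)

    @property
    def usable(self):
        # The result may only be acted on after two distinct humans confirm it.
        return len(self.verified_by) >= 2

result = IdentificationResult("match-017")
result.verify("alice")
print(result.usable)  # one reviewer is not enough
result.verify("bob")
print(result.usable)
```

Using a set of reviewer identifiers ensures that the same person confirming twice does not count as two verifications.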

  7. ACCURACY, ROBUSTNESS AND CYBER SECURITY (ART. 15)

AI systems must be designed and developed to ensure accuracy, robustness and cybersecurity, and to deliver consistent results throughout their use for the intended purpose. To allow accuracy to be assessed, the system's accuracy metrics and scores are included in the instructions for use. Robustness means that the system is resistant to errors and ambiguities. To ensure cybersecurity, the AI system must be resistant to attempts to misuse it in order to alter its purpose or capabilities.

CONCLUSION

Thus, there are seven main requirements that every high-risk AI system must meet. Almost all of these requirements concern the structure and management of the project, which makes it easier to meet them on a second or third project. Although this regulation will not take effect for some time, it does not hurt to look at how these requirements can be incorporated into existing projects. After all, they provide oversight, can give users confidence in the system and can help the developer detect and analyse errors.

If you have any questions about the AI Act as a result of this article, please contact us.
