On April 21, 2021, the European Commission presented a proposal for new rules on Artificial Intelligence (AI).

WHY NEW RULES?

AI is already widely used, often without us even realizing it. Most AI systems pose no risk to users, but that is not true for all AI systems. Existing regulations are insufficient to ensure user safety and safeguard fundamental rights.

WHAT RISK CATEGORIES WILL THERE BE?

The European Commission proposes a "risk-based approach" with four levels of risk:

  1. Unacceptable risk

A very limited number of AI applications will be identified as posing an unacceptable risk. These go against fundamental rights and are therefore banned. As examples, the Commission cites the social scoring of citizens by governments and remote biometric identification in public spaces. There are a few narrow exceptions to the latter use case.

  2. High risk

A slightly larger number of AI applications pose a high risk; these are listed in the proposal. They are classified as high risk because they are likely to have an impact on fundamental rights. The list of these AI applications may be updated periodically.

These AI applications must meet a number of mandatory requirements, including quality requirements for the datasets used, technical documentation, transparency and disclosure to users, human oversight, and robustness, accuracy and cybersecurity. National supervisory authorities are given investigative powers here.

  3. Limited risk

Limited risk applies to a larger group of AI applications. For these, transparency obligations are sufficient. The Commission cites chatbots as an example: users must know that they are communicating with a chatbot.
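
As a purely illustrative sketch (not taken from the proposal), a chatbot could meet this transparency obligation by disclosing its automated nature before the conversation starts; the wording and function name below are hypothetical.

```python
# Purely illustrative: a chatbot that discloses its automated nature
# up front, so users know they are communicating with a machine.
# The wording and flow are hypothetical, not taken from the proposal.

DISCLOSURE = (
    "Hi! I am an automated chatbot, not a human agent. "
    "Type 'human' at any time to be transferred to a person."
)

def start_conversation() -> None:
    print(DISCLOSURE)  # disclose before any exchange takes place
    while True:
        message = input("> ").strip()
        if message.lower() == "human":
            print("Transferring you to a human agent...")
            break
        print(f"Bot: you said '{message}'. How else can I help?")

if __name__ == "__main__":
    start_conversation()
```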

  4. Minimal risk

For all other AI applications, existing laws and regulations are sufficient. The vast majority of current AI applications fall into this category.

HOW DO YOU CLASSIFY AI SYSTEMS?

The Commission is developing a methodology for classifying AI applications into one of the four risk levels. Its purpose is to provide legal certainty for businesses and others. Risk is assessed based on the intended use of the system. This means looking at the following factors (a simplified sketch of such an assessment follows the list):

  • the intended purpose
  • the number of people potentially affected
  • the degree to which those affected depend on the outcome
  • the irreversibility of any damage
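
To make the assessment concrete, here is a minimal sketch of how such a classification could be structured in code. It is purely illustrative: the criteria fields mirror the list above, but the thresholds, scoring rules, and example purposes are hypothetical assumptions, not part of the proposal.

```python
# Hypothetical sketch of classifying an AI application along the four
# criteria above. Thresholds, rules, and example purposes are
# illustrative assumptions; the proposal prescribes no numeric formula.

from dataclasses import dataclass

@dataclass
class IntendedUse:
    purpose: str               # the intended purpose of the system
    people_affected: int       # number of people potentially affected
    outcome_dependence: bool   # do those people depend on the outcome?
    damage_irreversible: bool  # would resulting damage be irreversible?

BANNED_PURPOSES = {"social scoring by governments"}  # illustrative

def classify(use: IntendedUse) -> str:
    if use.purpose in BANNED_PURPOSES:
        return "unacceptable risk"           # banned outright
    # Illustrative rule: broad reach combined with dependence on the
    # outcome or irreversible damage points to a high-risk system.
    if use.people_affected > 10_000 and (
        use.outcome_dependence or use.damage_irreversible
    ):
        return "high risk"
    if use.purpose == "chatbot":
        return "limited risk"                # transparency obligations
    return "minimal risk"                    # existing law suffices

print(classify(IntendedUse("credit scoring", 500_000, True, True)))
# -> high risk
```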

WHAT ARE THE IMPLICATIONS FOR HIGH-RISK AI SYSTEMS?

Before these AI systems may be placed on the market, a conformity assessment must be carried out. That assessment must show that the AI system complies with the requirements on data quality, documentation and traceability, transparency, human oversight, and accuracy and robustness. For some AI systems, a "notified body" will need to be engaged. For these AI systems, the provider must also set up a risk management system.

WHO WILL ENFORCE THE RULES?

Member States will have to designate a national supervisory authority to monitor compliance.

CODES OF CONDUCT

Providers of AI systems that are not high-risk can draw up voluntary codes of conduct for the safe application of AI systems. The Commission encourages industry to come up with these.

WHO IS LIABLE WHEN IMPORTING AI SYSTEMS?

The EU importer of an AI system is responsible for the imported AI system. The importer must ensure that the manufacturer complies with EU regulations and that the system bears a CE marking.

WHAT IS THE SANCTION?

Violations of the regulation can be fined up to 6% of total worldwide annual turnover in the preceding financial year.
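
As a back-of-the-envelope illustration of how the cap works (the turnover figure is hypothetical):

```python
# Hypothetical example: the 6% turnover cap applied to an
# illustrative company with EUR 100 million annual turnover.
ANNUAL_TURNOVER_EUR = 100_000_000  # preceding financial year (assumed)
MAX_FINE_RATE = 0.06               # 6% cap from the proposal

max_fine = ANNUAL_TURNOVER_EUR * MAX_FINE_RATE
print(f"Maximum fine: EUR {max_fine:,.0f}")  # Maximum fine: EUR 6,000,000
```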

If you have any questions, please contact Jos van der Wijst (wijst@bg.legal)
