Pre-final AI Act: updates
Last week, a preview of the final text of the AI Act arrived. This blog gives you an overview of what is new, what the potential drawbacks are, and what you should do next to prepare for the implementation of the AI Act.
BUT FIRST: A RECAP OF THE BASICS
For businesses new to the AI Act, the two most important things to start with are:
(1) Understanding the definition of AI: the AI Act provides a broad definition of AI. It is important to know whether your systems fall under this definition and whether you need to comply with the AI Act. The literal definition in the AI Act is:
“An AI system is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
(2) Identify possible high-risk AI systems: high-risk AI systems are subject to stricter regulations. Determine if your AI systems are classified as high-risk and understand the specific obligations that apply to them.
The (pre-)finalised version of the AI Act brings several developments and changes. Some interesting points are explained below.
(1) Refined definitions and classifications
The (pre-)final text offers more precise definitions and classifications for AI systems, particularly clarifying what constitutes a high-risk AI system. Some AI systems will not be considered high-risk, even if used in critical sectors. If an AI system only performs a narrow procedural task, prepares an assessment, improves the result of previous human work, or supports a decision without replacing human judgment, it is no longer considered high-risk. This exemption ties the high-risk classification to the actual risk a system poses. It is meant to be proportionate, but companies must be careful not to wrongly claim their AI is low-risk.
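To make the exemption logic above concrete, here is a minimal, purely illustrative sketch of a first-pass triage. The role names are hypothetical labels paraphrasing the exemption criteria, not terms from the Act, and this is not a substitute for a proper legal assessment.

```python
# Illustrative sketch only: the role labels below paraphrase the pre-final
# text's exemption criteria and are NOT a legal classification tool.

# An AI system in a critical sector may still fall outside the high-risk
# category if it only performs one of these limited roles:
EXEMPT_ROLES = {
    "narrow_procedural_task",      # performs a simple, narrow task
    "preparatory_task",            # merely prepares an assessment
    "improves_prior_human_work",   # refines a result a human already produced
    "supports_human_decision",     # informs but does not replace human judgment
}

def provisional_risk_label(in_critical_sector: bool, role: str) -> str:
    """Rough first-pass triage of an AI system's risk category."""
    if not in_critical_sector:
        return "not high-risk (on this criterion alone)"
    if role in EXEMPT_ROLES:
        return "possibly exempt - document the justification"
    return "treat as high-risk until assessed otherwise"

print(provisional_risk_label(True, "preparatory_task"))
# -> possibly exempt - document the justification
```

The point of the sketch is the burden of proof: a system in a critical sector defaults to high-risk, and the exemption only applies if you can document which limited role the system performs.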
The text also defines a ‘general purpose AI model’ as “an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications”.
In other words, a type of AI model that is trained on large amounts of data to work across various tasks. It is a flexible model that can do many different things well, no matter how it is placed on the market, and it can be integrated into many different systems or applications. The definition does not cover AI models that are used for research, development and prototyping activities before they are placed on the market.
(2) Provider responsibility
Where a product contains an artificial intelligence system, the provider is deemed responsible for ensuring that the product fully complies with all applicable requirements under the Union harmonisation legislation. This means providers must check and confirm that their AI-integrated products meet all established EU standards and regulations, covering aspects such as safety, privacy, and ethical use.
A 'provider' in the context of AI refers to a person, company, or organization that creates an AI system or model, or has one created for them. They're responsible for introducing these AI products to the market or starting to use them, whether they sell them or offer them for free. This includes products released under their own brand or name.
Furthermore, for SMEs some exceptions have been further elaborated such as a simplified technical documentation requirement. This means that SMEs won't have to follow as complex or detailed processes for documenting their AI systems as larger companies do. This aims to make compliance with the Act more manageable for smaller businesses, reducing the administrative and financial burdens they face.
There are certain drawbacks of the (pre-)final version of the AI Act.
The biggest drawback is that the AI Act could impose significant compliance costs, which are especially burdensome for small and medium-sized enterprises developing smaller AI models. For example, the specific requirement for a training data summary is too vague, which can lead to expensive and impractical levels of detail.
The broad exemptions for powerful open-source AI models are debatable on the ethical front; tighter controls to ensure public safety may be preferable. Another ethical concern is that the threshold for categorising systemic-risk models is set too high, potentially excluding models that do pose significant risks. There is also insufficient regulation of content moderation in foundation models, raising concerns about the spread of hate speech and fake news.
Within the AI Act, no strong cybersecurity requirements are laid down for general purpose AI models. This could lead to vulnerabilities in the AI value chain. We therefore advise going a step further.
To prepare for the implementation of the AI Act, businesses should:
- Conduct an AI system assessment. Review your AI systems to determine if they fall under the high-risk category and understand the specific obligations that apply;
- Update data handling policies. Revise your data privacy and protection policies to align with the AI Act’s requirements, especially for systems involving personal data. It is also advisable to implement an AI policy;
- Consider consulting legal and technical experts to ensure full compliance and understand the implications for your business;
- Develop a compliance strategy. This means to plan and implement a comprehensive strategy for compliance, including technical adjustments and staff training;
- Check sector-specific implications. Depending on your industry, there may be specific implications or additional requirements under the AI Act. Understand how it intersects with sector-specific regulations; and
- Stay informed about ongoing updates, interpretations, and best practices related to the AI Act.
SOME ETHICAL TAKEAWAYS
In the context of the AI Act, ethical considerations are essential to encourage a responsible AI environment. The Act emphasizes user autonomy, ensuring that individuals have control over their data and the decisions influenced by AI. It highlights the critical need to mitigate biases, striving for AI systems that are fair and do not discriminate. Transparency and accountability are underscored as well, promoting trust in AI by making its workings and decisions understandable and responsible. The Act also strengthens the commitment to privacy preservation, protecting individuals from invasive AI technologies. Lastly, human oversight remains a cornerstone, ensuring that AI does not operate in a vacuum but under the guidance and intervention of human judgment. Keeping these ethical principles in mind will contribute hugely to building a trustworthy AI future.
There are different deadlines for the different types of rules in the AI Act. Here is a brief summary of the compliance timeline for the various rules after the official and final AI Act is published (note that the Act is not yet officially finalised and published).
- 6 months: the ban on unacceptable AI systems is applicable;
- 1 year: obligations for general purpose AI systems need to be fulfilled;
- 2 years: most other requirements need to be fulfilled;
- 3 years: obligations for high-risk systems need to be fulfilled.
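Since the publication date is not yet known, the deadlines above can only be computed relative to an assumed start date. The short sketch below does exactly that; the entry-into-force date used is a placeholder, to be replaced once the Act is officially published.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (clamping the day if needed)."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    # Clamp the day for shorter months (e.g. 31 Jan + 1 month -> 28/29 Feb).
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days_in_month[month - 1]))

# ASSUMED placeholder date - replace with the real entry-into-force date
# once the final AI Act is published.
entry_into_force = date(2024, 8, 1)

milestones = {
    "ban on unacceptable AI systems applies": 6,
    "general purpose AI obligations apply": 12,
    "most other requirements apply": 24,
    "high-risk system obligations apply": 36,
}

for rule, months in milestones.items():
    print(f"{add_months(entry_into_force, months)}: {rule}")
```

With the placeholder date above, the 6-month prohibition deadline would fall on 2025-02-01 and the 36-month high-risk deadline on 2027-08-01; shifting `entry_into_force` shifts every milestone accordingly.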
Do you want to get ready for these upcoming AI Act deadlines and are you looking for customised advice on compliance? Feel free to get in touch with us.