Key developments from October: AI and law

The latest EU AI Act trilogue sessions, held in October, marked a significant step in the European Union's journey to regulate Artificial Intelligence (AI). This blog outlines the key takeaways from these sessions and other relevant developments in the field outside the EU. 

EU AI Act updates 
HRAIS, exemptions, sandboxes and surveillance 

In the first session of October there was a breakthrough for providers of high-risk AI systems (HRAIS), who were given a clearer set of requirements and obligations. Furthermore, regulatory sandboxes were approved, although real-world testing was excluded. Compromises were also reached on penalties and fines. 

The most significant outcome was an agreement on Article 6, which focuses on the classification of high-risk AI systems. The AI Act plans to introduce a certification regime for such systems, to ensure that they meet predefined safety and ethical standards. 

Interestingly, the trilogue introduced exemptions for AI systems that perform "purely accessory" tasks. Four conditions were defined for these exemptions; to qualify, an AI system should: 

  1. Perform narrow procedural tasks; 
  2. Detect deviations from decision-making patterns; 
  3. Not influence decisions; and 
  4. Solely improve work quality.  

The question of who decides if an AI system is high-risk and which systems are exempted remains a contentious issue and a critical part of the negotiations. Consumer and privacy advocates are wary of companies being allowed to self-classify their AI systems. Questions about the burden of proof for these exemptions are still on the table and there is a great need for real-life examples. 

Unresolved issues 

The use of AI in law enforcement was another area of focus, but the discussions failed to produce agreement on specific text. While European Parliament negotiators are cautious about the use of AI tools like facial recognition, the European Council and law enforcement agencies view them as helpful for public safety. 

Several other issues remain unresolved, including: 

  • The definition of AI and its alignment with international standards; 
  • The prohibited AI practices (e.g., should biometric surveillance fall into this category?); 
  • The criteria for categorizing high-impact foundation models and generative AI (like ChatGPT); 
  • Concerns around the use of copyrighted content by AI. 

Looking Ahead

With the EU Council aiming for a full agreement on the AI Act by the end of 2023, the next trilogue session on December 6 becomes a high-stakes meeting. Failure to achieve consensus, as happened during the previous sessions, could push the timeline into 2024. This would complicate the lawmaking process further, given the upcoming European Parliament elections in June 2024. Lawmakers are nevertheless careful not to rush into decisions before fully understanding the technical implications. 

Worldwide updates 
G7 leaders: Hiroshima AI Process  

The G7 leaders (from Canada, France, Germany, Italy, Japan, UK and US) have endorsed the Hiroshima AI Process Comprehensive Policy Framework, aimed at guiding responsible AI development and governance. The framework focuses on four key areas:  

  1. The analysis of risks, challenges and opportunities of generative AI; 
  2. The establishment of guiding principles for the AI ecosystem;
  3. A voluntary international code of conduct for AI developers of advanced AI systems; and
  4. Project-based cooperation for responsible AI. 

While emphasizing the potential of advanced AI systems, the leaders also stressed the importance of risk management and democratic safeguards. Although not legally binding, this high-level endorsement serves as a political commitment that could influence future policy and legislative directions. The framework is intended to be dynamic, with ongoing updates and multi-stakeholder input to adapt to evolving technologies. 

President Biden: Landmark Executive Order 

In a landmark Executive Order, President Biden aims to position the United States as a leader in responsible AI development and governance. The AI strategy encompasses new safety and security standards for powerful AI systems, privacy protections, and measures for equity and civil rights. Developers of high-risk AI are required to share safety test results and other critical information with the U.S. government. The National Institute of Standards and Technology and the Department of Homeland Security will establish rigorous safety protocols.  

Additionally, the Executive Order promotes innovation, aims to protect consumers, and seeks to understand the labour-market impacts of AI. While emphasizing bilateral and multilateral international engagements, the Executive Order also focuses on responsible and effective government use of AI. The Executive Order is a direct binding directive to federal agencies to take certain actions, such as creating new rules. Once these rules go through the necessary administrative procedures they could become binding regulations for private parties, e.g. AI developers, providers and users. 

Start preparing for compliance 

AI system providers, along with importers, distributors, and users, should prepare for compliance. The definition of AI remains under debate but generally involves systems operating with elements of autonomy. Currently, prioritizing preparations for compliance with the EU AI Act is crucial, as it is likely to be the first legislation to be finalized. 

The applicable obligations and responsibilities under the EU AI Act depend on how the AI system is classified. The current risk classifications are:   

  • Unacceptable risk: these AI systems, e.g. those used for biometric identification, are prohibited; 
  • High-risk: these systems are used in specific sectors, such as education and law enforcement, and are subject to Articles 8-15; 
  • Low-risk: these AI systems are primarily governed by voluntary codes of conduct. 

Given the complexity and potential impact of the EU AI Act, with fines of up to 7% of worldwide annual turnover, it's crucial for businesses that use AI to start preparing for compliance. This will involve financial costs, legal expertise, and procedural adjustments. Moreover, monitoring the developments on this subject could offer a competitive advantage, especially since the EU AI Act will most likely become the standard for AI regulations worldwide. 

Do you have questions about the ongoing developments in regulating AI and how they might impact your business or sector? Feel free to reach out for specialized advice, and stay tuned for more updates on this subject. 

  • Created 01-11-2023
  • Last Edited 01-11-2023
  • Subject Legislation