AI Act Compliance
On July 12, 2024, the AI Act was published in the Official Journal of the European Union. Its obligations come into effect gradually over the following years. The AI Act assigns different obligations depending on an organization's role in the AI value chain (e.g., provider, deployer, importer, or distributor).
How do we determine whether an AI system is compliant?
- AI Mapping: We identify which AI systems an organization uses. For each, we assess whether it meets the AI Act’s definition of an AI system (Article 3(1)). If it does, the system falls within the scope of the AI Act and must comply with its obligations.
- AI Classification: The AI Act categorizes AI systems into four risk categories: prohibited (unacceptable risk), high-risk, limited-risk, and minimal-risk. Each category carries its own set of obligations. We determine which risk category an AI system falls into.
- AI Gap Analysis: We assess whether an AI system, given its risk category, meets the obligations of that category. Where obligations are not met, we specify what is needed to achieve compliance and draw up a roadmap to compliance; a simplified sketch of this classification-and-gap logic follows this list.
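To make the classification and gap-analysis steps concrete, here is a minimal, purely illustrative Python sketch of how an internal tool might record an AI system’s risk category and list its open obligations. All names (RiskCategory, AISystem, gap_report) and the obligation labels are hypothetical simplifications; the actual assessment is a legal analysis of the AI Act’s text, not an automated check.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskCategory(Enum):
    """The four risk categories of the AI Act (labels are ours)."""
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"


# Heavily simplified, illustrative obligations per category; the real
# obligations are set out in the AI Act and require legal interpretation.
OBLIGATIONS: dict[RiskCategory, set[str]] = {
    RiskCategory.PROHIBITED: {"withdraw from the market"},
    RiskCategory.HIGH: {
        "risk management system",
        "technical documentation",
        "human oversight",
        "conformity assessment",
    },
    RiskCategory.LIMITED: {"transparency towards users"},
    RiskCategory.MINIMAL: set(),
}


@dataclass
class AISystem:
    name: str
    category: RiskCategory
    obligations_met: set[str] = field(default_factory=set)


def gap_report(system: AISystem) -> set[str]:
    """Return the obligations the system does not yet meet (the 'gap')."""
    return OBLIGATIONS[system.category] - system.obligations_met


# Example: a limited-risk system that has not yet met any obligation.
chatbot = AISystem("customer service chatbot", RiskCategory.LIMITED)
print(gap_report(chatbot))  # {'transparency towards users'}
```

In practice, each open obligation in such a gap report would map to a concrete action item on the roadmap to compliance.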
All of this is documented in a report that can be used internally and/or shared with stakeholders. If our findings show that an AI system is compliant, the organization receives a certificate of compliance.
As these tasks can require legal, ethical, or technical expertise, they are carried out by a custom-assembled team.
AI Act Timeline:
August 1, 2024: The AI Act enters into force, 20 days after publication
February 2, 2025: Prohibited AI applications must be off the market
February 2, 2025: Providers and deployers of AI systems must ensure a sufficient level of AI literacy among their staff
May 2, 2025: Codes of practice for providers of General Purpose AI Models (e.g., the models behind ChatGPT) must be ready
August 2, 2025: Obligations for providers of General Purpose AI Models come into effect
August 2, 2026: Obligations for high-risk AI systems listed in Annex III come into effect
August 2, 2027: Obligations for high-risk AI systems that are safety components of products covered by the EU harmonisation legislation listed in Annex I come into effect