The AI Act is almost finished. The main focus of the Act is on regulating high-risk AI systems to ensure that they comply with European norms and values. A good starting point for compliance is this checklist:

  1. Risk management system: Take appropriate and targeted risk management measures to address identified risks.
  2. Data and data governance: Use high-quality training data, document its origin, employ appropriate data governance practices and make sure your data is relevant and unbiased.
  3. Technical documentation: Prepare technical documentation that covers all of the requirements listed in Annex IV.
  4. Traceability: Ensure that the AI system keeps records of its processing throughout its entire lifespan, so that individual decisions can be checked afterwards. Design this record-keeping for transparency and traceability; a logging sketch follows this list.
  5. Human oversight: Include tools in the AI system to enable human oversight when necessary. Enable users to understand what is happening and give them the confidence to take action.
  6. Accuracy, robustness and security: Take measures to ensure accuracy, robustness and security in every part of the AI system. Define performance metrics and monitor errors, robustness and bias; a monitoring sketch follows this list.
  7. Quality management system: Create a quality management system that covers all of the other compliance subjects.
  8. Declaration of conformity: Prepare a signed declaration of conformity for each high-risk AI system which asserts compliance with the AI Act.
  9. CE marking: High-risk AI systems need to be marked with the CE logo, either on the physical product or in the documentation of a purely software product. 
  10. Registration: Both your company and each high-risk AI system need to be registered in an EU-wide database.
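
For item 4, here is a minimal sketch of how decision logging could be implemented in practice. It is only an illustration under assumptions of our own (a JSON-lines log file, the field names, and the example credit-scoring call are all hypothetical); the actual record-keeping requirements for high-risk systems are defined by the Act itself.

```python
# Minimal, illustrative decision log for traceability (checklist item 4).
# File name, field names and the example decision are assumptions, not
# requirements taken from the AI Act.
import json
import logging
import uuid
from datetime import datetime, timezone

logger = logging.getLogger("ai_decision_log")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("decisions.jsonl"))


def log_decision(model_version: str, inputs: dict, output, confidence: float) -> str:
    """Append one decision record as a JSON line and return its ID."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "confidence": confidence,
    }
    logger.info(json.dumps(record))
    return record["decision_id"]


# Example: record one hypothetical credit-scoring decision.
decision_id = log_decision(
    model_version="credit-scorer-1.4.2",
    inputs={"income": 42000, "employment_years": 6},
    output="approved",
    confidence=0.87,
)
print(f"Logged decision {decision_id}")
```

Records like these make it possible to reconstruct, for any past decision, which model version produced it and on what input.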
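For item 6, the sketch below shows one way to compute an accuracy metric together with a simple per-group error breakdown as a bias check. The metric choice, the group labels and the 5% gap threshold are illustrative assumptions; which metrics are appropriate depends on the system's intended purpose.

```python
# Illustrative performance monitoring for checklist item 6: overall
# accuracy plus per-group error rates as one possible bias indicator.
# Metric choice and the gap threshold are assumptions for this sketch.
from collections import defaultdict


def monitor(predictions, labels, groups, max_gap=0.05):
    """Return accuracy, per-group error rates and a simple bias alert."""
    accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)

    counts = defaultdict(lambda: [0, 0])  # group -> [wrong, total]
    for p, y, g in zip(predictions, labels, groups):
        counts[g][0] += int(p != y)
        counts[g][1] += 1
    error_rates = {g: wrong / total for g, (wrong, total) in counts.items()}

    gap = max(error_rates.values()) - min(error_rates.values())
    return {
        "accuracy": accuracy,
        "error_rates": error_rates,
        "bias_alert": gap > max_gap,  # flag a large gap between groups
    }


# Example with made-up predictions for two groups "A" and "B".
report = monitor(
    predictions=[1, 0, 1, 1, 0, 1],
    labels=[1, 0, 0, 1, 0, 0],
    groups=["A", "A", "A", "B", "B", "B"],
)
print(report)
```

Running a check like this on a schedule, and keeping the resulting reports, also feeds back into the risk management and quality management items of the checklist.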
More questions?

If you were not able to find an answer to your question, contact us via our member-only helpdesk or our contact page.
