The development and deployment of AI technologies raise a range of concerns and challenges. To support responsible and ethical AI practices, several measures have been introduced, including AI QuickScans, impact assessments, ethics training, governance advice, and audits and certification. These measures aim to mitigate the risks and biases associated with AI systems, to promote transparency, accountability, and trust, and to ensure that AI is developed and used in line with ethical principles and legal requirements.
On July 12, 2024, the AI Act was published; its obligations take effect in stages. The AI Act assigns different obligations depending on an organization's role in the AI value chain, such as provider, deployer, importer, or distributor.
How do we determine if an AI system is compliant?
AI Mapping: We identify which AI systems an organization uses. For each system, we assess whether it meets the AI Act's definition of an AI system. If it does, the system falls within the scope of the AI Act and must comply with its obligations.
AI Classification: The AI Act categorizes AI systems into four risk categories: prohibited, high-risk, limited-risk, and minimal-risk. Each risk category has its own set of obligations. We determine which risk category an AI system falls into.
AI Gap Analysis: We assess whether an AI system, given its risk category, meets the obligations of that category. Where obligations are not met, we specify what needs to be done to achieve compliance and draw up a roadmap to compliance (a simplified sketch of these two steps follows below).
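As a rough illustration of how classification and gap analysis can be structured in practice, consider the sketch below. The four category names follow the AI Act; the obligation lists, the function, and all other names are simplified assumptions for illustration only, not a complete statement of the law.

```python
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"      # banned practices (Art. 5)
    HIGH_RISK = "high-risk"        # e.g. Annex III use cases
    LIMITED_RISK = "limited-risk"  # transparency duties (Art. 50)
    MINIMAL_RISK = "minimal-risk"  # no specific obligations

# Simplified, non-exhaustive obligation sets per category (illustrative only).
OBLIGATIONS = {
    RiskCategory.PROHIBITED: ["discontinue the practice"],
    RiskCategory.HIGH_RISK: [
        "risk management system",
        "data governance",
        "technical documentation",
        "human oversight",
        "conformity assessment",
    ],
    RiskCategory.LIMITED_RISK: ["transparency notice to users"],
    RiskCategory.MINIMAL_RISK: [],
}

def gap_analysis(category: RiskCategory, measures_in_place: set[str]) -> list[str]:
    """Return the obligations not yet covered: the basis for a compliance roadmap."""
    return [o for o in OBLIGATIONS[category] if o not in measures_in_place]

# Example: a high-risk system that already has documentation and human oversight.
print(gap_analysis(RiskCategory.HIGH_RISK, {"technical documentation", "human oversight"}))
# -> ['risk management system', 'data governance', 'conformity assessment']
```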
All of this is documented in a report that can be used internally or shared with stakeholders. If we conclude, based on our findings, that an AI system is compliant, the organization receives a certificate of compliance.
As these tasks may require legal, ethical, or technical expertise, they are carried out by a custom-assembled team.
As an organization, you want to comply with laws and regulations. Ideally, you also want to demonstrate this compliance through certifications or quality marks. Sometimes, you must prove it because clients and stakeholders demand it.
AI is integrated into products, services, and processes, and is applied at every stage of development. Compliance therefore needs to be addressed throughout the entire lifecycle; otherwise, significant work may have to be redone later, particularly where documentation requirements have not been met.
We evaluate AI systems against established standards (e.g. ISO/IEC 42001) and EU guidelines to verify that they meet ethical and legal requirements. The audits examine the AI system's design, development, and deployment to identify potential risks and biases. Certification provides assurance that the system meets the established standards and guidelines, and gives stakeholders a level of trust and confidence in its performance.
For this, we offer the following AI compliance advice.
Organizations deploy AI to varying degrees: sometimes deliberately, sometimes embedded in products or services they use or offer. The AI Act imposes obligations on organizations to promote the responsible use of AI, including educating and training staff. In some cases, failure to meet these obligations can expose executives or internal supervisors to personal liability.
AI governance advice refers to the guidance and recommendations provided to organizations (and their boards) on how to develop and deploy AI systems in a responsible and ethical manner. It covers governance structures, policies, procedures, and staff training that ensure AI systems are developed and used in line with ethical principles and legal requirements. This advice helps organizations establish clear guidelines and standards for AI development and deployment, ensure that AI systems are transparent, accountable, and fair, and comply with the AI literacy obligation of the AI Act (training and educating staff).
This includes advice on drafting an AI policy for organizations.
An AI QuickScan is a self-assessment tool for gauging the risks and impacts of AI systems. It involves a rapid evaluation of a system's design, development, and deployment to identify potential biases, risks, and ethical concerns. The QuickScan helps organizations pinpoint areas that require further attention and develop strategies to mitigate potential harms. It is often used as a preliminary step to ensure that ethical considerations are integrated into the design and development process from the start.
For certain high-risk AI systems, the AI Act requires a Fundamental Rights Impact Assessment (FRIA). This obligation applies to public bodies and private entities providing public services, as well as to deployers of AI systems used for assessing creditworthiness or for risk assessment and pricing in life and health insurance for individuals.
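As a rough sketch of how this applicability test can be expressed, the following mirrors the criteria of Article 27 of the AI Act; the function, field names, and use-case labels are our own illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Deployer:
    is_public_body: bool              # body governed by public law
    provides_public_services: bool    # private entity providing public services

# Use cases that trigger the FRIA duty for any deployer (illustrative labels).
FRIA_USE_CASES = {
    "creditworthiness assessment",
    "life and health insurance risk assessment and pricing",
}

def fria_required(deployer: Deployer, use_case: str) -> bool:
    """Illustrative check: does the FRIA obligation apply to this high-risk system?"""
    return (
        deployer.is_public_body
        or deployer.provides_public_services
        or use_case in FRIA_USE_CASES
    )

# Example: a private insurer pricing health insurance must conduct a FRIA.
insurer = Deployer(is_public_body=False, provides_public_services=False)
print(fria_required(insurer, "life and health insurance risk assessment and pricing"))  # True
```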
A FRIA partially overlaps with a DPIA (Data Protection Impact Assessment). Therefore, we recommend conducting the DPIA before proceeding with the FRIA.
We provide support in conducting a FRIA / DPIA.