There is increasing public debate about the use of AI by companies and governments. The Dutch childcare benefit scandal, among others, has made it clear that automated decisions can lead to unacceptable situations for those about whom the decisions are made. When AI systems are developed, the interests of those affected by the consequences of the decisions are often not (sufficiently) taken into account. To change this, researchers at Utrecht University have developed the Impact Assessment Human Rights and Algorithms (IAMA). The Dutch House of Representatives adopted a motion calling on all government bodies to carry out an IAMA when using algorithms. We can help you do this.

What is the IAMA?

The IAMA is a structured way of looking at how a system will work and by what means. The IAMA is divided into four parts:

  1. Why: Why is an AI system being developed and what is its purpose?
  2. What: What data will be used and what should the system deliver?
  3. How: How will the system work?
  4. Human rights: What risks are involved and which rights of data subjects might be affected?

Based on these four components, specific choices can then be made during development, thus reducing the risks of the system.

How does an IAMA work?

In an introductory meeting, we will discuss your project and the steps of the IAMA. Based on this conversation, we prepare the IAMA. We also pay particular attention to the different roles that are needed to carry out an IAMA successfully.

We carry out the IAMA together with you. In doing so, we consider:

  • The legal aspects applicable to your project; and
  • The risks to your project and its outcome.

As a final product, we deliver a report based on the IAMA.

More information?

You can contact Jos van der Wijst with questions and about the costs involved.


  • Created 11-04-2024
  • Last Edited 04-06-2024
  • Subject (General) AI