How can equality be concretized when assessing a model?

Lawfulness and fairness are core principles for processing personal data under the GDPR, so any model that processes personal data must also be fair and lawful. Fairness involves, among other things, the reasonable expectations of the data subject, and a reasonable expectation of a model is that its outcomes are fair. "Fair" can be an abstract concept, but there are legal standards that provide a concrete elaboration of what is unfair. EU non-discrimination law prohibits discrimination based on a number of characteristics, such as religion, race and sex, almost all of which are also special categories of personal data. If a model gives different results for two people who differ only in these characteristics, then that model is almost certainly unfair. However, it can be permissible to process special categories of personal data in very specific circumstances, for example when the data is used to pursue equality (Art. 25 of the Dutch implementation act of the GDPR (UAVG)).
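The check described above can be made concrete as a counterfactual test: flip only a protected characteristic and see whether the model's output changes. The sketch below is purely illustrative; the `score` function is a deliberately unfair toy model, and all field names are assumptions, not part of any real system.

```python
# Counterfactual check: the prediction should not change when only a
# protected characteristic (here the hypothetical "sex" field) is flipped.
# `score` is a deliberately unfair toy model used only to illustrate the test.

def score(applicant: dict) -> float:
    """Toy credit-scoring model; the penalty on 'sex' makes it unfair."""
    base = applicant["income"] / 1000
    if applicant["sex"] == "female":  # discriminatory rule, for illustration
        base -= 5
    return base

def counterfactual_gap(applicant: dict, attribute: str, alternative) -> float:
    """Return how much the score changes when only `attribute` is changed."""
    twin = dict(applicant, **{attribute: alternative})
    return abs(score(applicant) - score(twin))

applicant = {"income": 40000, "sex": "male"}
gap = counterfactual_gap(applicant, "sex", "female")
print(f"Score difference due to sex alone: {gap}")  # non-zero gap signals unfairness
```

A non-zero gap means the protected characteristic alone changed the outcome, which is exactly the situation the paragraph above calls almost certainly unfair.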

In general, equality means that people who hardly differ from each other in their relevant qualities should also not be treated differently by a model. This means that the developer of a model may have to check whether their model is actually fair. Various tools, such as HolisticAI and Plot4AI, can help with checking your models.
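This notion of equality corresponds to individual fairness: near-identical inputs should receive near-identical outcomes. The sketch below illustrates one possible check; the model, the 5% similarity tolerance and the allowed score gap are all illustrative assumptions, not legal thresholds.

```python
# Individual-fairness sketch: applicants whose qualities are nearly identical
# should receive nearly identical scores. The model, tolerance and gap
# are illustrative assumptions only.

def score(applicant: dict) -> float:
    """Toy model scoring on income and years of employment only."""
    return applicant["income"] / 1000 + 2 * applicant["years_employed"]

def similar(a: dict, b: dict, tol: float = 0.05) -> bool:
    """Treat applicants as 'hardly different' if every feature is within 5%."""
    return all(abs(a[k] - b[k]) <= tol * max(abs(a[k]), abs(b[k]), 1)
               for k in a)

def fair_for_pair(a: dict, b: dict, max_gap: float = 1.0) -> bool:
    """If a and b are similar, their scores should differ by at most max_gap."""
    if not similar(a, b):
        return True  # the check only constrains near-identical applicants
    return abs(score(a) - score(b)) <= max_gap

alice = {"income": 40000, "years_employed": 10}
bob = {"income": 40500, "years_employed": 10}
print(fair_for_pair(alice, bob))
```

In practice a developer would run such pairwise checks over many sampled pairs, or use the metrics built into tools like those mentioned above, rather than hand-picking two applicants.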

More questions?

If you were not able to find an answer to your question, contact us via our member-only helpdesk or our contact page.