What are the contractual risks involved in AI?

A model developed for a client may turn out to be unsuitable for its intended use or to produce incorrect results. There are no specific legal standards for establishing liability in such a case, but the parties can allocate liability contractually. For instance, a developer that wants to present itself as reliable can accept full liability as long as no serious human error is involved.

It is useful to lay down what warranties the developer gives and who guarantees that the dataset is fit for purpose. As a developer you must, of course, deliver what you offer, but how do you account for the inherent imperfection of your model? That has to be covered in the assignment agreement. The parties can, for example, set an acceptable margin of error, or agree on a warranty scheme under which the developer pays a penalty if the model produces more incorrect outcomes than agreed.
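To make such a clause workable, the agreed margin of error and the penalty need to be expressed as measurable quantities. The sketch below is purely illustrative, not legal advice: the 5% margin, the penalty rate, and the way errors are counted are hypothetical assumptions that the agreement itself would have to define precisely.

```python
# Illustrative sketch of a contractual error-margin check.
# All figures (margin, penalty rate) are hypothetical examples;
# the real values and counting rules belong in the agreement itself.

def warranty_penalty(total_outcomes: int,
                     incorrect_outcomes: int,
                     agreed_error_margin: float = 0.05,      # e.g. 5% agreed in contract
                     penalty_per_excess_error: float = 100.0) -> float:
    """Return the penalty owed if the observed error rate exceeds the agreed margin."""
    if total_outcomes == 0:
        return 0.0
    observed_rate = incorrect_outcomes / total_outcomes
    if observed_rate <= agreed_error_margin:
        return 0.0  # within the agreed margin: no penalty owed
    # Penalty scales with the number of errors beyond what the margin allows.
    allowed_errors = int(total_outcomes * agreed_error_margin)
    excess_errors = incorrect_outcomes - allowed_errors
    return excess_errors * penalty_per_excess_error

# Example: 10,000 decisions with 620 incorrect. A 5% margin allows 500 errors,
# so 120 excess errors give a penalty of 12,000.0 under these hypothetical terms.
print(warranty_penalty(10_000, 620))
```

A fixed penalty per excess error is only one possible scheme; parties could equally agree on a capped penalty, a tiered rate, or a right to remediation before any penalty falls due.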

For systems that make decisions about people, the contract should also settle whether the affected person has a right to a human assessment. That raises the questions of who performs that assessment and who controls the explanation of how the model reached its decision.
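In practice, such a right translates into a design requirement: the system must be able to route a decision to a named reviewer and record who made the final call. The sketch below is a hypothetical illustration of that idea; the field names and the review mechanism are assumptions, not a prescribed design.

```python
# Hypothetical sketch: an automated decision that can be escalated
# to a human reviewer, recording who made the final assessment.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    subject_id: str
    model_outcome: str               # what the model decided
    model_confidence: float          # model's confidence in that outcome
    reviewer: Optional[str] = None   # human who reviewed, if any
    final_outcome: Optional[str] = None

    def record_human_assessment(self, reviewer: str, outcome: str) -> None:
        """Record a human assessment that confirms or overrides the model."""
        self.reviewer = reviewer
        self.final_outcome = outcome

decision = Decision("applicant-42", model_outcome="reject", model_confidence=0.61)
# The affected person exercises their right to a human assessment:
decision.record_human_assessment(reviewer="case-officer-7", outcome="approve")
print(decision.final_outcome, "reviewed by", decision.reviewer)
```

Keeping the reviewer and the final outcome alongside the model's output also gives both parties the audit trail they need if liability is later disputed.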
