How can the expectations of an AI model be legally defined?

First, it is important to determine a good way to measure a model's performance. One can look at how often the model chooses the correct option (accuracy) or at how far, on average, its output deviates from the true value (the mean error). In the case of detecting a disease, a false-positive test is less harmful than a false-negative test, so it also makes sense to look at the false-positive and false-negative results separately. One could, for example, determine that a false-negative result is twice as bad as a false-positive result and, based on that weighting, define a specific score for the model. All these measures are quantifiable and can be determined well at development time.
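As a minimal sketch of such measures, the snippet below computes accuracy, counts false positives and false negatives, and combines them into a weighted score in which a false negative counts twice as heavily as a false positive. The labels, the weighting factor of 2 and the example data are illustrative assumptions, not values prescribed by any regulation.

```python
# Minimal sketch: accuracy and a weighted false-positive/false-negative score
# for a binary disease-detection setting (1 = disease present). The factor of
# 2 on false negatives is the illustrative weighting from the text above.

def evaluate(y_true, y_pred, fn_weight=2.0, fp_weight=1.0):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    accuracy = (tp + tn) / len(y_true)               # share of correct predictions
    weighted_cost = fp_weight * fp + fn_weight * fn  # false negatives count double

    return {"accuracy": accuracy, "false_positives": fp,
            "false_negatives": fn, "weighted_cost": weighted_cost}

# Example: six patients, one missed case (false negative) and one false alarm.
truth = [1, 0, 1, 0, 1, 0]
prediction = [1, 1, 0, 0, 1, 0]
print(evaluate(truth, prediction))
# accuracy = 4/6 ≈ 0.67, weighted_cost = 1*1 + 2*1 = 3
```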

It is also important that the model continues to perform well after it is put into use. It is therefore advisable to evaluate the model periodically and, as part of that evaluation, to review and check a sample of its outcomes. Should the model perform worse than expected, in most cases the developer will have to be given an opportunity to improve it. It is useful to include a provision for this in the agreement in case the developer refuses to do so. One could, for instance, stipulate that the developer has a duty to improve the model if its performance in practice falls more than a certain margin below the value measured at development time. Should the developer fail to fulfil that obligation, it may be possible to claim damages and/or dissolve the agreement.
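A minimal sketch of such a contractual trigger is shown below, assuming a hypothetical agreement in which the parties recorded the accuracy measured at development time and an agreed tolerated margin; both values are assumptions for illustration only.

```python
# Sketch of the contractual trigger described above: the duty to improve
# applies when production performance drops more than the agreed margin
# below the value measured at development time. Values are assumed.

DEVELOPMENT_ACCURACY = 0.95   # accuracy recorded at delivery (assumed)
AGREED_MARGIN = 0.05          # tolerated drop agreed in the contract (assumed)

def improvement_duty_triggered(production_accuracy: float) -> bool:
    """True if production performance falls below the contractual floor."""
    floor = DEVELOPMENT_ACCURACY - AGREED_MARGIN
    return production_accuracy < floor

print(improvement_duty_triggered(0.93))  # False: still within the agreed margin
print(improvement_duty_triggered(0.88))  # True: duty to improve applies
```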

Details
  • Created 29-06-2023
  • Last Edited 29-06-2023
  • Subject Using AI