This case study is based on the second case from the Princeton Dialogues on AI and Ethics Case Studies. It was initially translated to Dutch and has now been translated back to English.

  1. BACKGROUND INFORMATION

Type 2 diabetes is a condition in which the body's production and use of insulin are disrupted. This leads to large fluctuations in patients' blood sugar levels. In addition to the discomfort of unstable blood sugar, patients are also at risk of long-term complications such as deteriorating eyesight and nerve damage. Type 2 diabetes is treatable through a combination of a controlled diet, lifestyle changes and, if necessary, the administration of insulin.

A group of medical researchers and computer scientists have developed a program called Charlie, which uses AI to improve the treatment of diabetes. Using a smartwatch, Charlie measures a patient's blood sugar level and, based on that measurement, prescribes a personalized amount of insulin. What sets Charlie apart from other AI systems is that all measurements are combined on a proprietary data platform, which allows Charlie to learn from the effects of administering insulin. The app also reminds users to take their insulin regularly and to maintain a healthy lifestyle. In addition, the developers have added a forum to the app where users can share information about diabetes and their experiences. Activity on the forum is analysed and the results are added to users' profiles.
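The case study does not describe how the data platform is organised. Purely as an illustration, the kind of record it combines for each patient might look like the sketch below, in which every field name is a hypothetical assumption.

    from dataclasses import dataclass, field
    from datetime import datetime

    # Hypothetical sketch of a record on Charlie's data platform: smartwatch
    # glucose readings, the insulin that was administered, and tags derived
    # from the analysis of the user's forum activity.
    @dataclass
    class PlatformRecord:
        patient_id: str
        measured_at: datetime
        blood_glucose_mmol_l: float          # smartwatch measurement
        insulin_units_administered: float    # dose recommended and taken
        forum_activity_tags: list[str] = field(default_factory=list)  # from post analysis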

  2. WHAT ROLE DOES AI PLAY IN THIS CASE STUDY? AND HOW IS THIS AI PUT TOGETHER?

Charlie's AI, which personalizes patients' insulin doses, is the core of the product. Whereas predictions could initially only be improved on the basis of an individual's own data, the data platform allows Charlie to see the effects of different doses on different patients. Thanks to the platform's measurements, Charlie keeps getting better at smoothing out blood sugar levels. Charlie also sends notifications reminding patients to exercise, eat well and take their insulin regularly, as further ways of keeping their blood sugar levels under control.
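The case study gives no technical detail about Charlie's model. As a rough sketch of how pooled platform data could drive personalized dose recommendations, the example below fits a simple least-squares model on records from several patients and then picks the candidate dose with the best predicted effect. All features, numbers and the target value are invented for illustration.

    import numpy as np

    # Each training row: [age, weight_kg, pre-dose glucose, insulin units] from the
    # data platform; the target is the blood sugar measured after the dose.
    X = np.array([
        [54, 82.0, 11.2, 6.0],
        [61, 95.5, 13.4, 8.0],
        [47, 70.3, 9.8, 4.0],
        [58, 88.1, 12.1, 7.0],
    ])
    y = np.array([7.9, 8.6, 7.1, 7.5])

    # Ordinary least squares with an intercept, standing in for Charlie's real model.
    A = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def recommend_dose(patient_features, candidate_doses, target=6.5):
        """Return the candidate dose whose predicted glucose is closest to the target."""
        best_dose, best_gap = None, float("inf")
        for dose in candidate_doses:
            row = np.append(np.append(patient_features, dose), 1.0)
            gap = abs(row @ coef - target)
            if gap < best_gap:
                best_dose, best_gap = dose, gap
        return best_dose

    print(recommend_dose(np.array([50, 78.0, 10.5]), candidate_doses=[4.0, 6.0, 8.0]))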

In addition to the AI that prescribes insulin doses, Charlie uses AI to moderate and analyse posts on the forum. This relies on language analysis that determines the emotion and content of each post. The results are used to decide which posts are shown first and to add further information to users' profiles.
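The underlying language analysis is not specified in the case study. As a toy illustration only, the sketch below scores posts with a few hand-picked keywords, ranks them by that score, and produces a result that could be attached to the author's profile; the keyword lists, post texts and field names are all assumptions.

    # Toy keyword-based stand-in for the language analysis described above.
    NEGATIVE = {"worried", "scared", "pain", "worse"}
    POSITIVE = {"better", "stable", "improved", "happy"}

    def analyse_post(text: str) -> dict:
        words = set(text.lower().split())
        sentiment = len(words & POSITIVE) - len(words & NEGATIVE)
        return {"sentiment": sentiment, "mentions_charlie": "charlie" in text.lower()}

    posts = [
        "Feeling much better since my levels became stable",
        "Charlie's advice made things worse and I am worried",
    ]

    # Higher-scoring posts are shown first; the analysis result is what the case
    # describes being added to the author's profile.
    ranked = sorted(posts, key=lambda p: analyse_post(p)["sentiment"], reverse=True)
    print(ranked)
    print([analyse_post(p) for p in posts])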

  3. IN WHAT WAYS WERE AGREEMENTS MADE BETWEEN THE RELEVANT PARTIES IN THIS CASE STUDY?

While users know that Charlie is trying to make diabetes treatment easier through personalized insulin doses and treatments, they do not know what goes on behind the scenes. They have given permission to share their data, but they are not informed that all of Charlie's actions that affect blood sugar levels are compared and analysed on the data platform.

  4. WHAT ARE THE RISKS INVOLVED IN THIS CASE?

A number of risks can be identified in this case study, such as:

  • Experimenting on patients

The goal of the researchers is to learn more about the effects of different treatments on different patients. For this reason, they will not always make the optimal recommendations, but will also vary the treatments for a time. The researchers decide to start testing multiple treatments: they start with random treatments and as more data comes in, they will increasingly choose the treatment that best suits the particular patient. The risk of this is that some of the patients will receive non-optimal recommendations in the initial period.
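The approach described above, starting with random treatments and gradually favouring the one that works best for a patient, resembles a classic explore/exploit scheme. A minimal epsilon-greedy sketch, with entirely hypothetical treatment names and outcome scores, could look like this:

    import random

    treatments = ["dose_schedule_a", "dose_schedule_b", "dose_schedule_c"]
    totals = {t: 0.0 for t in treatments}   # summed outcome scores per treatment
    counts = {t: 0 for t in treatments}     # how often each treatment was tried

    def choose_treatment(round_number: int) -> str:
        # Exploration probability decays as more data comes in.
        epsilon = 1.0 / (1 + round_number)
        if random.random() < epsilon:
            return random.choice(treatments)  # explore: pick a random treatment
        # Exploit: pick the treatment with the best average observed outcome so far.
        return max(treatments, key=lambda t: totals[t] / counts[t] if counts[t] else 0.0)

    def record_outcome(treatment: str, outcome_score: float) -> None:
        totals[treatment] += outcome_score
        counts[treatment] += 1

In the early rounds the exploration term dominates, which is exactly where the risk of non-optimal recommendations described above sits.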

  • Discrimination

In a study on the effects of Charlie, treatment outcomes do not appear to be equal across all populations. In fact, among ethnic minorities, Charlie appears to be less effective than average. As a researcher, you want your treatment to work well for everyone, so the effectiveness of Charlie must be improved in these groups.
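A disparity like this typically surfaces when treatment outcomes are broken down per population group. The sketch below, with invented group labels and improvement numbers, illustrates that kind of per-group comparison:

    # Hypothetical evaluation records: improvement in blood sugar control per patient.
    records = [
        {"group": "group_a", "glucose_improvement": 2.1},
        {"group": "group_a", "glucose_improvement": 1.8},
        {"group": "group_b", "glucose_improvement": 0.9},
        {"group": "group_b", "glucose_improvement": 1.1},
    ]

    by_group: dict[str, list[float]] = {}
    for record in records:
        by_group.setdefault(record["group"], []).append(record["glucose_improvement"])

    for group, values in by_group.items():
        # A noticeably lower average signals reduced effectiveness for that group.
        print(group, sum(values) / len(values))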

  • Privacy

There are also major privacy risks in this case study. The developers do not adequately inform patients about what happens to their data. For example, they do not tell patients how their personal data is processed, on what basis they receive certain recommendations from Charlie, or what happens to their data during and after treatment.

  • Influence

In analysing the effects of Charlie, a group of patients was identified with a high likelihood of poor compliance with the recommendations. When the researchers compared Charlie's effects with the messages these patients posted on the forum, they saw a correlation: the patients with poor compliance were also the ones sharing specific types of information on the forum. It could be that patients are being influenced by what they read about Charlie on the forum.
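The comparison the researchers made can be illustrated with a simple correlation check between a forum signal and compliance; the numbers below are invented purely for illustration.

    import numpy as np

    # Per patient: how many critical posts about Charlie they wrote on the forum,
    # and how often they followed Charlie's recommendations (0..1).
    critical_posts = np.array([0, 1, 3, 5, 2, 4])
    compliance_rate = np.array([0.95, 0.90, 0.60, 0.40, 0.70, 0.50])

    correlation = np.corrcoef(critical_posts, compliance_rate)[0, 1]
    print(f"correlation: {correlation:.2f}")  # a strongly negative value matches the pattern described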

  • Liability

The fifth risk is in the area of liability. Charlie gives medical advice to patients, making it critical that this advice is correct. If a treatment prescribed by Charlie subsequently causes great harm to one or more patients, Charlie's developers could be held liable.

  5. HOW ARE THESE RISKS MITIGATED IN THIS CASE STUDY?

In practice, it is important to mitigate (major) risks as much as possible. In this case study, the developers of Charlie did this in the following ways:

  • Experimenting on patients

In the context of experimenting with treatments for patients, the prohibition on automated decision-making (Article 22 GDPR) is important. Users of Charlie have the right not to be subject to automated decisions that significantly affect them. Also, some of the patients will temporarily receive worse treatment as a result of the experiments, which may affect their health. Explicit patient consent is required to conduct such experiments. In the case study, however, this consent was not sought.

  • Discrimination

To even out the effectiveness of Charlie, the researchers are collecting a more balanced dataset that includes all populations. This will serve as a new starting point for Charlie's recommendations for all users.
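The case study does not say how the researchers check that the new dataset is indeed balanced. One simple check, sketched below with hypothetical group labels and an assumed minimum share, is to compute each group's proportion of the records:

    from collections import Counter

    def representation_report(records: list[dict], min_share: float = 0.10) -> dict:
        """Report each group's share of the dataset and flag underrepresented groups."""
        counts = Counter(r["group"] for r in records)
        total = sum(counts.values())
        return {
            group: {"share": n / total, "underrepresented": n / total < min_share}
            for group, n in counts.items()
        }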

Charlie's developers should take care to ensure that they have a legitimate basis for processing data about patients' ethnicity. Information about a person's ethnicity falls under the special categories of personal data (Article 9 GDPR), and strict rules apply to its processing. In the case study, it is not clear whether the developers considered this.

  • Privacy

The developers of Charlie start processing more and more (new) data. Examples of new forms of processing are analysing the posts on the forum and sharing the data on the data platform. A new processing purpose usually requires its own processing ground, with informed consent being the most appropriate one in this case. Failure to comply with this requirement would violate both the privacy of Charlie's users and the GDPR.

To mitigate the privacy risk, Charlie's developers should seek patients' consent for any processing of their personal data. Consent for a (new) processing purpose can be given through a clear statement to which patients expressly agree. In this case study, such consent was not explicitly requested.

  • Influence

Although the developers are trying to help patients stay healthy, they use tactics that can be perceived as manipulative. For one thing, they process patients' posts for a purpose for which consent was probably not given.

In the case study, no clear information is given to users about how Charlie is trying to motivate them. If developers want to avoid being accused of manipulation, they will have to inform patients more and think more carefully about what means they want to use to help patients.

  • Liability

In this case, the situation could arise in which the developers are held liable for the consequences of incorrect advice given to patients. It is not clear from the facts of the case how the relationship between developers and users is regulated. Although the first participants in Charlie, who took part in the testing phase of the AI system, signed an agreement, this is most likely not the case for subsequent users, who may have only accepted the general terms and conditions. Although the developers could try to exclude all liability in those terms, there is a good chance that such a provision would not be valid.

To minimize liability for damages, it is very important to comply with all laws and regulations, like the GDPR and the Medical Devices Directive (Directive 93/42/EEC).

  6. WHAT SHOULD ONE CONTINUE TO PAY ATTENTION TO?

The main concern in this case is the explicit consent of users for Charlie's various developments. Under the GDPR, much is possible, but only with a proper explanation. The developers of the AI must therefore ensure that they are transparent about the processing of users' personal data and about their grounds for that processing.

Transparency and clarity are not only relevant to the freedom of choice and the privacy of AI users, but are also important for user trust. It is very important for the future of Charlie that the developers are open about the AI system, their intentions and the methods used, so that users know where they stand.
