AI in education

The CITO Foundation, known for various testing methods, is conducting research into the ethical aspects of AI in testing at schools. The research is not yet complete, but it is important at a time when schoolchildren and students must take tests at home, possibly under the eye of a digital invigilator.

In what ways is AI currently being used in examinations?

TEST SURVEILLANCE WITH A CAMERA

Proctoring is a form of remote surveillance used to deter fraud during tests. However, many students object to this measure: they consider the digital surveillance an invasion of their privacy, since they are filmed in their own rooms and their typing behaviour is sometimes tracked. Viewed from the perspective of the EU's draft AI Act, this system of fraud prevention is probably one subject to transparency obligations (Article 52 of the AI Act). Among the types of AI systems that fall into this category are systems that categorize biometric data (Article 52(2) of the AI Act). Proctoring uses biometric data, namely students' viewing and typing behaviour, which allows students to be categorized as potentially fraudulent or non-fraudulent. Users of these systems must therefore be transparent about how they work.
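
To make the categorization concrete, below is a minimal, purely hypothetical sketch in Python of how a proctoring system might turn biometric signals into a fraud category. The feature names, thresholds and two-class output are invented for illustration and do not describe any real proctoring product.

    from dataclasses import dataclass

    # Hypothetical sketch only: illustrative features and thresholds,
    # not a description of any actual proctoring system.

    @dataclass
    class BiometricSample:
        gaze_offscreen_ratio: float   # fraction of test time gaze left the screen
        keystroke_deviation: float    # deviation from the student's typing baseline

    def categorize(sample: BiometricSample) -> str:
        """Categorize a student as potentially fraudulent or non-fraudulent."""
        suspicious = (
            sample.gaze_offscreen_ratio > 0.30   # assumed threshold
            or sample.keystroke_deviation > 2.5  # assumed z-score cut-off
        )
        return "potentially fraudulent" if suspicious else "non-fraudulent"

    print(categorize(BiometricSample(gaze_offscreen_ratio=0.42,
                                     keystroke_deviation=1.1)))
    # -> potentially fraudulent

Even in this toy form, the transparency obligation is easy to motivate: without knowing which signals are used and where the cut-offs lie, a student cannot understand why the system flagged them.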

The GDPR already contains rules on automated decisions that can have a major impact on an individual. Being labelled a fraudster could fall under these rules, as it can have major consequences for the rest of a student's education. If the advice of a proctoring system is followed blindly, the controller must comply with the rules of Article 22 of the GDPR on automated decision-making. The controller must then take appropriate measures to safeguard the rights, freedoms and legitimate interests of the data subject, including at least the data subject's right to express his or her point of view, the right to contest the decision and the right to human intervention.
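
As an illustration of what such measures could look like in practice, here is a small, hypothetical sketch of a human-in-the-loop safeguard: the automated flag is treated as advisory only, the student's point of view is recorded, and an examiner must confirm or overturn the flag before anything happens. The names and decision flow are assumptions, not a description of any existing system.

    from typing import Optional

    # Hypothetical Article 22-style safeguard: no automated decision is final.

    def final_decision(flag: str, student_statement: str,
                       examiner_verdict: Optional[str]) -> str:
        if flag != "potentially fraudulent":
            return "no action"
        if examiner_verdict is None:
            # The automated flag alone never triggers consequences
            # (right to human intervention).
            return "pending human review; statement on file: " + student_statement
        return examiner_verdict  # the examiner may confirm or overturn the flag

    print(final_decision("potentially fraudulent", "I looked away to think.", None))
    print(final_decision("potentially fraudulent", "I looked away to think.",
                         "flag overturned"))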

AUTOMATIC TEST REVIEW

AI can also be used in education to review test answers. It will be some time before this happens in the Netherlands, but automated test review is already being used in the US. An AI system that assesses the content of tests, not just multiple-choice answers, falls into the category of high-risk systems: Annex III of the AI Act classifies AI systems that assess students as high-risk. Because of this qualification, developers of these systems will have to meet all the obligations for high-risk systems, including having the system certified and tracking its performance.
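
To illustrate one such obligation, the sketch below shows, in purely hypothetical form, the kind of record-keeping that performance tracking implies: every automated assessment is logged with enough context to be monitored and audited later. The record fields and file format are assumptions for illustration, not requirements taken from the AI Act text.

    import json
    import time

    # Hypothetical audit log for an automated grading system:
    # each assessment is recorded so performance can be tracked over time.

    def log_assessment(student_id: str, question_id: str, score: float,
                       model_version: str,
                       logfile: str = "assessments.jsonl") -> None:
        record = {
            "timestamp": time.time(),
            "student_id": student_id,
            "question_id": question_id,
            "score": score,
            # recording the version makes it possible to trace which
            # system produced a given grade
            "model_version": model_version,
        }
        with open(logfile, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    log_assessment("s-0042", "q-7", score=6.5, model_version="grader-0.1-demo")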

Assessment of a test can also be a form of automated decision-making. Suppose a degree programme has only one test after three years of study; if that test is assessed automatically, the controller will have to comply with the rules of Article 22 of the GDPR.

CONCLUSION

In the Netherlands, AI is currently used mainly for proctoring and not for reviewing tests. With proctoring, transparency requirements must be met, so students must be informed about how the system works. If an AI system were used to grade open-ended questions, the developer of that system would have to meet all the requirements for high-risk systems.

Rules already exist for AI systems whose outcomes can have major consequences for the individual. If the outcomes are adopted blindly, Article 22 of the GDPR imposes additional requirements on the use of such systems; this could apply to a proctoring system as well as an automatic review system.

Details
  • Created 18-07-2023
  • Last Edited 18-07-2023
  • Subject Using AI