Swedish schools are introducing AI systems at an increasing rate. Many see them as tools to make teaching more equitable and fair, but researchers at the University of Gothenburg argue that the technology is not always as unbiased as we think and can even contribute to injustices if it is not adapted to the context in which it is used.
A research group at the University of Gothenburg that studies teachers' use of artificial intelligence in education argues that the guidelines and policy documents governing how these systems are used in schools need to focus more on how AI can be applied fairly. Such a focus is necessary if teachers are to get good support in their work and if teaching is to remain equitable.
Terms like fairness and equity are frequently used in discussions of AI and education, but it is not always clear what they mean in practice.
Adjusting the systems
The research group argues that there is often an over-reliance on creating so-called fair algorithms and on cleansing data of biases and prejudices. Just as important, in their view, is how teachers can adjust the tools themselves so that they work for their particular students.
"If we talk about AI systems being fair and equitable, we can not view them as generic systems or believe that the technology itself should be fair. We miss important factors if we do not look at what happens when they are applied in the classrooms and in schools," says Marie Utterberg Modén, postdoctoral researcher of IT and learning, and one of the authors of the article When fairness is an abstraction: equity and AI in Swedish compulsory education.
Systems perceived as objective
The article describes how a protest movement under the slogan "F*ck the algorithm" formed in England during the COVID-19 pandemic, after an AI system was used to replace the traditional exams that determine university admission. The system's algorithms processed inputs such as students' past performance and grade suggestions from their teachers to arrive at a result. To counteract expected grade inflation, it also factored in the average results from previous years at each student's school, all in an effort to make the system as fair as possible. The outcome showed that students from socially underprivileged areas were disadvantaged when the grades were set, and the method faced heavy criticism and sparked an extensive global debate about AI and fairness.
"It's easy to trust such systems because they are perceived as objective," says Marie Utterberg Modén.
"But they work in ways that reinforce, categorise, and contain biases that stem from ourselves. We see things in a certain way and integrate that into the systems. In England, they thought the system they developed would take into account the disparities that existed, but in the end, they had to realise that it did not work well and that especially students with good results in lower socio-economic areas were affected."
Equal access to technology
Regarding Swedish schools, Marie Utterberg Modén sees a risk that AI tools will not be available to students on equal terms. Private schools, unlike public ones, operate for profit. They have the means to invest in the systems and are better placed to give students and teachers the opportunity to learn to use them.
In education, generative AI could, for example, be linked to digital learning materials and act as a resource that talks to the student, offers advice and guidance, and helps with understanding homework. If only some schools implement these systems, or if teachers' competence in using them varies, teaching will be adapted unevenly to students' different conditions, which in turn widens the disparities in the support students can get.
"We see that equity in schools has decreased. That is why it is important that, if we introduce these kinds of systems, they should ideally improve equity and at least not make it even worse," says Marie Utterberg Modén.