2024 has been a year of AI proliferation. As the New Year approaches, amid controversies such as Coca-Cola’s entirely AI-generated Christmas advert, this embryonic technology’s growing presence in almost every corner of the modern world cannot be ignored. Higher education is no exception.
According to the Digital Education Council, 86% of students use generative AI programs (such as Google Gemini or ChatGPT) in some capacity.
Dr Andrew Heath, a lecturer in the Department of History at the University of Sheffield, said: ‘AI presents us with a series of challenges and opportunities… we have an obligation to equip students for an AI future.’
Dr Heath is also the co-author of the department’s AI guidance. In an interview with Forge, he was asked whether AI is undermining the integrity of subjects like History.
‘It’s a risk,’ he said. ‘But I’d want to hope that we could spot bland, generic writing which really shouldn’t be doing very well.’
‘There are words that are appearing more frequently. Words like delved, or nuanced, or dynamic interplay,’ he quipped; even to the puritanical among us, this vocabulary might sound very familiar. Incidentally, the University of Sheffield does not employ AI detection tools.
‘We don’t want a degree that deskills students. If they approach an assignment by putting a prompt into a generative AI, they’re not learning anything.’
‘On the other hand, it’s perfectly possible for students to use AI to enable them to grapple with the intellectual challenges that a humanities degree should pose, in a way that enriches their understanding,’ Dr Heath said.
Asked whether he was an optimist or a pessimist about generative AI in the long run, Dr Heath gave another balanced answer. ‘Banning it, returning to invigilated exams, it feels like a war on drugs analogy. If you prohibit, you have to build a disciplinary apparatus, or accept that some students are going to cheat. Even with invigilated exams, students will use AI to revise, and I wouldn’t discourage that.’
He was pressed for a starker stance. In a subject that strives for objectivity, academic integrity and creative source analysis, surely the foibles of current AI pose a more serious threat?
‘My greatest fear is that AI puts people off studying humanities subjects,’ he answered. ‘If there’s a belief that large language models can reason and analyse better than we can, then we’re in trouble. Even more acutely in the creative arts.’
There is also the question of fairness. Are students who opt out of using AI disadvantaged compared to their peers? ‘There are very good reasons for conscientious objection, not least the environmental cost. Students need to be trained to recognise the biases, the problems, the tendency of AI to hallucinate.’
‘But, for example, a student with a learning disability being able to run prose through what is tantamount to a clever grammar checker is a good thing,’ he pointed out.
Overall, then, Dr Heath is no doomsayer on AI in the humanities.
‘Is this going to make us stupider, or can it make us better historians?’ he asked, demonstrating NotebookLM, Google’s AI note-taking software. ‘A profound challenge is managing information. Too much has been written! We look for analogue shortcuts, but I think AI has the potential to help students with the sheer volume of information. New AI tools allow us to interrogate books much quicker than indexes.’
It is clear that AI isn’t going anywhere. The Guardian reports that a group of academics ‘predicted that the dawn of consciousness in AI systems was likely by 2025’, and even the rudimentary programs available to us today have had a transformative influence on the ways people read, write and learn. The influence AI could have beyond its current nascent form is uncertain, and to many alarming. At present, though, Dr Heath believes it needn’t make us stupider… if used well.
Image Credit: Pexels