Last month Cambridge University announced that its students will be allowed to use ChatGPT during their studies. Professor Vira, Cambridge’s pro-vice-chancellor for education, believes that bans on ChatGPT and other AI software are not ‘sensible’, but the university has stated that it must not be used for coursework, exams, or other assessed material.
The emergence of ChatGPT has caused many institutions to clarify their stance on its use, with Oxford banning ChatGPT for assessed work and Edinburgh signalling that using it for assessed work would breach conduct rules. On March 30th 2023, students across all departments and faculties at the University of Sheffield received an email from Mary Vincent, our vice-president for education. The email stated that the use of ChatGPT or other AI-based services “would be using unfair means as defined by the university policy,” as such work is not produced by the student submitting it and therefore is not “reflective of your skills and knowledge.” In addition, some students received an announcement via Blackboard that “from 4th April Turnitin will be turning on its AI writing detection capabilities.” But is the use of AI-based services during studies unethical, or are university authorities standing in the way of a more technological future for education?
Using AI software allows students to gain a broad understanding of a field quickly and effectively. A single place where students can pose a question and receive a direct response is valuable for clearing up elements of a course or module they don’t quite understand. Whilst asking lecturers directly is another way to achieve this, some students don’t always feel comfortable approaching lecturers with questions, especially if they feel they are far behind or that the answer is ‘obvious’ or common knowledge to everyone else. This could also ease the burden on lecturers, who are already overworked and underpaid, allowing them to focus on delivering content rather than fielding individual questions on many aspects of their lectures.
As well as this, AI software could give users the most up-to-date answers possible. In many fields, the forefront of discovery, the very area students are often tasked to write about, is highly contested, with a vast body of research on either side of an argument, leaving students unsure how to position their writing. Moreover, not all sources, such as journals, are open access or subscribed to by the university. AI would allow students to bypass this obstacle and obtain answers that may previously have been difficult to find or access.
But as with all things, AI software is not perfect. ChatGPT was asked an exam-style history essay question, with an academic expert grading the response. They found the answer accurate but not fully explorative of the areas implied by the question, awarding it a grade 4 pass. This issue may be remedied as similar questions are put to it, since such models improve with further training and feedback. Until then, however, students may find that answers produced by AI aren’t as good as their own.
From a broader perspective, the use of essay-writing software calls the entire point of university study into question. Employers and industries rely on degrees and other qualifications to understand the content and skills that students have gained from their studies. If large amounts of this were actually achieved by AI programmes, the potential employee may not have the skills that are actually required for the job. And if these achievements were made by AI alone, it lends further weight to arguments that AI-enabled systems may be better placed than humans for certain jobs. If students are leaving university without proof that they have the skills their degrees claim, are they really best suited for jobs that require them? Whilst assessed content can be stressful to produce, it is necessary (at least in our current higher education system) to ensure that students can be fairly selected from the pool of applicants to jobs or further study after their degrees.
Many lecturers don’t seem too fond of the idea of students submitting work created by AI. Lecturers across departments and the country feel that submitting work generated by ChatGPT defeats the point of setting the work in the first place, as well as wasting the time spent marking, grading, and giving feedback on it. James, a computer science student at the University of Sheffield, said AI software “makes research for people with attention issues like [them] much easier,” but that submitting its answers verbatim should not be allowed, and that AIs still “confidently state lies.” Across the pond, a survey conducted by BestColleges.com found that 51% of students believed using AI to complete assessed work is cheating, but just over a third also believed that AI tools shouldn’t be fully prohibited in educational settings.
ChatGPT may have its uses in higher education and may be important for research and analysis purposes, but it could also lead to an increase in submitted work that wasn’t written by its ‘author’. This risks students gaining qualifications without having studied as originally intended, and lessens the value of a university education if many graduates turn out not to have developed the skills their degrees are meant to certify. Although it should not be banned outright, as it has many benefits for a range of different students, we must be cautious in how we move forward.