AI guidelines issued to help students avoid misuse

The University has released guidelines on the use of Generative Artificial Intelligence (GenAI) to ensure students use the tool “positively and ethically” in their studies. 

GenAI is a type of software which can produce various forms of media – text, images, videos, etc. – based on written prompts input by the user, ChatGPT being a notable example.

In an email circulated to students, Professor Mary Vincent, the Vice-President of Education, spoke of the opportunities presented by GenAI:

“AI has the potential to revolutionise the way we live, work and learn. Responsible and ethical use of GenAI is a skill that has the potential to be transformative across all academic disciplines.”  

Professor Vincent attached to the message newly-issued guidance for students which offered further positive sentiment:

“GenAI is here to stay. It is already having, and will continue to have, a far-reaching effect on what we need to learn and how we learn it.”

The guidance describes literacy in the use of the technology as “a skill that is increasingly valued by employers within a wide range of professional contexts.”

However, the guidance went on to warn against misuse of GenAI.

On a practical level, it warns that these models are “not always correct and can produce ‘hallucinations’” – the term for when AI generates false or nonsensical information.

Adam Dangerfield, a second-year Engineering student, believes the University is right to exercise a degree of caution:

“AI is a relatively new technology in the world and as such it seems to me to be relatively unpredictable and unknown.”

He believes that, whilst AI “should be accepted”, these guidelines are a step in the right direction towards necessary “heavy regulation”.

The University also warned against academic cheating through AI, saying:

“Creating something using GenAI tools then presenting it as your own work contravenes academic integrity and is considered unfair means.”

As AI requires a lot of computing power and, in turn, electricity, the University also has environmental “concerns about the carbon footprint of AI”.

It promises to “continue to review and update this guidance as we learn together about the implications and potential of GenAI”.

How AI will interact with academia in the future remains uncertain. Regardless, the University is cautiously embracing what could be the future.

Image credits: Palatinate