Since the release of ChatGPT in November 2022, there’s been an influx of articles and social media posts expressing concern over the impact of AI writing tools.
Currently, large language models such as ChatGPT are advanced enough to write high-level essays, articles, and even poetry in response to prompts. Many teachers across the world have spoken about students using ChatGPT to cheat; a common example is students inputting their essay prompt into the tool and then tweaking the resulting draft slightly to avoid detection. This isn’t impossible to detect, but it is very difficult.
So, what exactly is ChatGPT?
ChatGPT was released in November by OpenAI, the company that also produced the digital image generator DALL·E. The ‘large language model tool’ is based on GPT-3, an AI model with 175 billion ‘parameters’ (the learned weights of the model). It has a huge amount of data and vocabulary at its disposal and can interpret words in context, which helps it mimic human writing patterns.
However, despite all this, ChatGPT cannot ‘think’ as people do, so it can only present facts or other people’s opinions. Furthermore, the knowledge cut-off for the software is 2021, so when asked about current events it can only extrapolate from this past data and whatever context the user supplies. Its answers will never be 100% accurate or reliable, so it’s important to cross-reference them with current sources.
Do AI writing tools pose a serious threat to the journalism industry?
AI tools like ChatGPT can produce high-quality articles on events already written about on the internet, but they cannot create original journalism on things that haven’t been written about before.
At the moment, ChatGPT is simply a language model taking data from various sources and presenting it to you – it still relies on being able to access data on the topic it’s asked about. AI writing tools cannot create their own thoughts or go out into the world investigating potential stories which are new and as-yet undocumented.
In some ways, AI tools can enhance the job of journalists. If a journalist has a particular opinion on an issue, they can use AI to reference other similar or contrasting opinions. Also, due to the vast and varied data available to them, tools like ChatGPT can make connections between seemingly unrelated topics or ideas. These tools are incredibly logical, so they could find similarities that people simply don’t have the time or range of knowledge to find.
These tools are there to assist journalists in their jobs, not to do the entire job without them.
To demonstrate this, I asked ChatGPT about how humans can avoid the negative consequences of AI tools such as itself and instead use them to our advantage. It answered:
“As with any technology, AI tools like ChatGPT can have both positive and negative consequences, depending on how they are developed, deployed, and used. To ensure that we can use AI tools to our advantage and avoid negative consequences, we can consider the following strategies:
- Develop AI tools with transparency and ethical considerations. This means that developers should consider the potential impact of AI tools on society, including issues related to bias, privacy, and fairness.
- Ensure that AI tools are designed for specific applications: AI tools are not a one-size-fits-all solution. They should be designed to address specific tasks or applications with appropriate training and data points.
- Regularly assess the performance of AI tools to ensure they are working as intended and that any unintended consequences are addressed promptly.
- Use AI tools to augment human decision-making, not replace it. This means that humans should still have the final say in making decisions, and AI tools should be used to provide insights and recommendations that can inform those decisions.
- Educate the public about the potential benefits and risks of AI. This will help build trust in AI tools and ensure they are used in a responsible and ethical manner.
In summary, by developing AI tools with transparency and ethical considerations, ensuring that they are designed for specific applications, regularly assessing their performance, using them to augment human decision-making, and educating the public about their benefits and risks, we can avoid negative consequences and use AI tools to our advantage.”
If I were writing a whole article specifically on how to avoid the consequences of AI and instead use it to our advantage, I could ask ChatGPT to elaborate on its answer, change the focus of its answer, correct or challenge its answer, or input new information and ask it to reassess its answer.
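This back-and-forth is just a multi-turn conversation: each follow-up (“elaborate”, “challenge”, “reassess”) gets appended to the running message history, and the whole history is sent back to the model so it can see its own previous answer. Here is a minimal sketch in Python of how that looks, assuming OpenAI’s chat API message format (the `add_turn` helper and the example prompts are illustrative, not part of any official library):

```python
# Sketch of iteratively refining a ChatGPT answer.
# The helper below only builds the conversation history, so it runs offline;
# the actual API call (commented out) would need the `openai` package and a key.

def add_turn(history, role, content):
    """Return a new history with one message appended."""
    return history + [{"role": role, "content": content}]

history = add_turn([], "user",
                   "How can humans avoid the negative consequences of AI tools?")
history = add_turn(history, "assistant",
                   "Develop AI tools with transparency and ethical considerations...")
# A follow-up re-sends the whole history, so the model can refine its own answer:
history = add_turn(history, "user",
                   "Elaborate on the point about transparency.")

# The real request would look roughly like this (requires network and an API key):
# import openai
# reply = openai.chat.completions.create(model="gpt-3.5-turbo", messages=history)
# print(reply.choices[0].message.content)
```

The point of keeping the full history is that the model is stateless between requests; everything it should “remember”, including its own earlier draft, has to be resent each time.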
This is just one example of how journalists can use ChatGPT to their advantage: by cutting down on research time, they free themselves to focus on investigating original stories.