Three Laws of ChatGPT

By Louise Skelding Tattle

The use of AI in academic research is not new, but the release of ChatGPT last November almost broke the internet. Much has been written in the short time since about the development of generative AI tools and the use of ChatGPT in teaching, research and academic publishing (some selected further reading is listed below). The release has also sparked plenty of philosophical conversation within the Research Integrity Group at SAGE: about the ethics of using generative AI to ‘write’ (in quote marks) articles, about the potential risks of publishing articles not written (not in quote marks) by a human, and about whether bots qualify as authors. And what even is an author?

Can generative AI be a force for good, or is this the end of civilisation as we know it? It has been argued that generative AI will become just another regular tool we use to help us solve problems, like any other piece of software. Nevertheless, ChatGPT (in its current form at least) undoubtedly gives publishers and editors pause for thought on the question of research integrity. Is a piece of work produced by an AI fraudulent? What if the AI can produce scientifically sound content? Will we soon be publishing articles written by robots? The future is here: ChatGPT already has a publications CV. The World Association of Medical Editors (WAME) recently released Recommendations on ChatGPT and Chatbots in Relation to Scholarly Publications, which discuss the limitations of ChatGPT and its potential threat to research integrity. But AI is here to stay, so how do we develop policies that support authors in using these tools while also protecting research integrity?

We may be having different discussions in a few years, but at the moment we feel that AIs shouldn’t be included on the byline, since they are tools used in the production of the article and cannot fulfil all the criteria for authorship laid out by the ICMJE (International Committee of Medical Journal Editors). This view is also shared by WAME in the Recommendations mentioned above.

And how should we treat content generated by ChatGPT? The Research Integrity Group at SAGE discussed whether we would be comfortable publishing work generated by ChatGPT where ChatGPT is not the subject of the article. Ultimately, we didn’t want to restrict authors in this way, given that ChatGPT and its uses will evolve over time. We have therefore introduced our first – but not likely our last – generative AI policy for our authors, which stipulates that any use of generative AI must be made clear in the text and acknowledged in the Acknowledgements section. This policy can be found on our Journal Gateway here.

We’ll be keeping a close eye on how the use of ChatGPT and generative AI evolves and will adapt our policies accordingly. We are about to experience what is beyond the Outer Limits.


Further Reading

AI-Generated Texts' Implications for Academic Writing - Social Science Space

Guest Post – AI and Scholarly Publishing: A View from Three Experts - The Scholarly Kitchen

Preparing for Academic AI - The Bookseller (https://www.thebookseller.com/comment/preparing-for-academic-ai)

Thoughts on AI's Impact on Scholarly Communications? An Interview with ChatGPT - The Scholarly Kitchen
