Editors, researchers, and publishers are debating the place of artificial-intelligence (AI) tools such as ChatGPT in the published literature, and whether it is appropriate to cite the chatbot as an author. Publishers are now racing to create policies for the free-to-use tool, which the San Francisco, California-based technology company OpenAI released in November.
ChatGPT is a large language model that generates sentences by mimicking the statistical patterns of language found in a huge database of text. Researchers have found it useful in various sectors, including academia, but publishers argue that it cannot be considered an author of scientific papers because it cannot take responsibility for their content and integrity. Some publishers have said, however, that an AI's contribution can be acknowledged in sections other than the author list. In at least one case, a journal says that ChatGPT was cited as a co-author in error and that it will correct this.
ChatGPT is one of the listed authors of a preprint about using the tool for medical education, which was posted on the medical repository medRxiv in December last year. The team behind the repository and its sister site, bioRxiv, is currently discussing whether it is appropriate to use and credit AI tools such as ChatGPT when writing studies, according to Richard Sever, co-founder of the repositories and assistant director of Cold Spring Harbor Laboratory Press in New York.
Meanwhile, an editorial in the journal Nurse Education in Practice this month credits ChatGPT as a co-author, alongside Siobhan O'Connor, a health-technology researcher at the University of Manchester, UK. The journal's editor-in-chief, Roger Watson, says that this credit was an error and will soon be corrected.
Additionally, Alex Zhavoronkov, chief executive of Insilico Medicine, an AI-powered drug-discovery company in Hong Kong, credited ChatGPT as a co-author of a perspective article published in the journal Oncoscience last month. He says that his company has published more than 80 papers produced by generative AI tools. The latest paper discusses the pros and cons of taking the drug rapamycin, framed through Pascal's wager, a philosophical argument. According to Zhavoronkov, ChatGPT wrote a much better article than earlier generations of generative AI tools had. He adds that the paper underwent peer review, arranged at his request by the editor of Oncoscience.
According to neurobiologist Almira Osmanovic Thunström of Sahlgrenska University Hospital in Gothenburg, Sweden, a fourth article, co-written with the language model GPT-3, will soon be published in a peer-reviewed journal. The article was initially posted on the French preprint server HAL in June 2022. After an initial rejection from one journal, the paper was accepted by a second journal, with GPT-3 listed as an author, once Thunström had revised it in response to reviewer requests.
The editors-in-chief of Nature and Science have stated that ChatGPT does not meet the standards for authorship. They suggest that authors who use large language models (LLMs) while developing a paper should document their use in the methods or acknowledgements sections.
The publisher Taylor & Francis in London is currently reviewing its policy on the matter, according to Sabina Alam, its director of publishing ethics and integrity. She agrees that authors are responsible for the validity and integrity of their work, and should cite any use of LLMs in the acknowledgements section.
The ethics of generative AI
According to Matt Hodgkinson, a research-integrity manager at the UK Research Integrity Office in London, established authorship guidelines already rule out crediting AI tools such as ChatGPT as co-authors. These guidelines state that a co-author must make a significant scholarly contribution to the article, be able to consent to being a co-author, and take responsibility for the study, or at least for the part they contributed. It is this last requirement, Hodgkinson says, that poses a problem for AI tools, because they cannot take responsibility for their output.
Zhavoronkov also expresses concerns about the misuse of the system in academia, as it may lead to individuals without domain expertise attempting to write scientific papers.
Image: Erikona/Getty Images