The use of large language models (LLMs) in scholarly publications has grown dramatically since the launch of ChatGPT in late 2022. This usage is often undisclosed, and it can be challenging for readers and reviewers to identify text that is human-written but LLM-revised or LLM-translated, or text that is predominantly LLM-generated. Given the known quality and reliability issues associated with LLM-generated text, its continued growth poses an increasing problem for research integrity and for public trust in research. This study presents a simple and easily reproducible methodology for measuring the growth of LLM usage in the full text of published papers, across the full range of research indexed in the Dimensions database. Based on the disproportionate use of specific indicative words, it demonstrates that LLM tools are likely to have been involved in the production of more than 10% of all papers published in 2024, and draws together earlier studies to confirm that this is a plausible overall estimate. It then discusses the implications of this for the integrity of scholarly publishing, highlighting evidence that use of LLMs for text generation is still being concealed or downplayed by authors, and argues that more comprehensive disclosure requirements are urgently needed to address this.
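To make the indicative-word approach concrete, the following is a minimal sketch of how such an estimate could be computed. The marker words shown (`delve`, `intricate`, `underscore`, and their inflections) are illustrative assumptions, not the study's actual curated list, and the corpora would in practice be drawn from full-text records in Dimensions rather than inline strings.

```python
import re

# Hypothetical marker words: terms whose frequency in published papers
# rose sharply after ChatGPT's release. The study uses its own list.
MARKER_WORDS = {"delve", "delves", "intricate", "underscore", "underscores"}

def marker_rate(texts):
    """Fraction of papers whose full text contains at least one marker word."""
    if not texts:
        return 0.0
    hits = sum(
        1 for t in texts
        if MARKER_WORDS & set(re.findall(r"[a-z]+", t.lower()))
    )
    return hits / len(texts)

def excess_usage(pre_llm_texts, post_llm_texts):
    """Excess marker-word rate relative to a pre-ChatGPT baseline.

    Treats the pre-2023 rate as the natural background frequency of
    these words; the difference serves as a lower-bound proxy for the
    share of papers produced with LLM involvement.
    """
    baseline = marker_rate(pre_llm_texts)
    current = marker_rate(post_llm_texts)
    return max(current - baseline, 0.0)

# Toy example; real corpora would be year-partitioned full texts.
papers_2022 = ["We analyse the data and report the results in Table 1."]
papers_2024 = ["We delve into the intricate dynamics that underscore ..."]
print(f"Estimated excess LLM usage: {excess_usage(papers_2022, papers_2024):.1%}")
```

This is a lower-bound estimator by design: papers that used an LLM without emitting any marker word are not counted, which is consistent with the study's framing of 10% as a plausible, conservative overall figure.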