Large Language Models (LLMs) are increasingly embedded in academic writing practices. Although numerous studies have explored how researchers employ these tools for scientific writing, their concrete use, limitations, and design challenges within the literature review process remain underexplored. In this paper, we report a user study with researchers across multiple disciplines that characterizes current practices, benefits, and \textit{pain points} in using LLMs to investigate related work. We identified three recurring gaps: (i) lack of trust in outputs, (ii) a persistent verification burden, and (iii) fragmentation across multiple tools. These findings motivate six design goals and a high-level framework that operationalizes them through improved visualization of related papers, verification at every step, and alignment with human feedback via generation-guided explanations. Overall, by grounding our work in the practical, day-to-day needs of researchers, we designed a framework that addresses these limitations and models real-world LLM-assisted writing, advancing trust through verifiable actions and fostering practical collaboration between researchers and AI systems.