Systematic reviews are a key component of evidence-based medicine, playing a critical role in synthesizing existing research evidence and guiding clinical decisions. However, with the rapid growth of research publications, conducting systematic reviews has become increasingly burdensome, with title and abstract screening being one of the most time-consuming and resource-intensive steps. To mitigate this issue, we designed a two-stage dynamic few-shot learning (DFSL) approach aimed at improving the efficiency and performance of large language models (LLMs) in the title and abstract screening task. Specifically, this approach first uses a low-cost LLM for initial screening, then re-evaluates low-confidence instances using a high-performance LLM, thereby enhancing screening performance while controlling computational costs. We evaluated this approach across 10 systematic reviews, and the results demonstrate its strong generalizability and cost-effectiveness, with potential to reduce manual screening burden and accelerate the systematic review process in practical applications.
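The two-stage cascade described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the functions `cheap_llm_screen` and `strong_llm_screen` are hypothetical stand-ins for the actual LLM calls (here replaced by trivial keyword heuristics so the sketch runs), and the confidence threshold of 0.6 is an assumed value, not one reported in the study.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    include: bool      # screening verdict for the record
    confidence: float  # model's self-reported confidence in [0, 1]

def cheap_llm_screen(title_abstract: str) -> Decision:
    # Placeholder for the low-cost LLM call; a trivial
    # keyword heuristic keeps the sketch runnable.
    hit = "randomized" in title_abstract.lower()
    return Decision(include=hit, confidence=0.9 if hit else 0.4)

def strong_llm_screen(title_abstract: str) -> Decision:
    # Placeholder for the high-performance LLM re-evaluation.
    hit = "trial" in title_abstract.lower()
    return Decision(include=hit, confidence=0.95)

def two_stage_screen(records: list[str], threshold: float = 0.6) -> list[bool]:
    """Stage 1: the low-cost model screens every record.
    Stage 2: records whose confidence falls below `threshold`
    are re-evaluated by the stronger model."""
    verdicts = []
    for text in records:
        first = cheap_llm_screen(text)
        final = first if first.confidence >= threshold else strong_llm_screen(text)
        verdicts.append(final.include)
    return verdicts
```

Only the low-confidence subset ever reaches the expensive model, which is what keeps the cascade cheaper than screening everything with the high-performance LLM while preserving most of its accuracy on the hard cases.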