Large Language Models (LLMs) have greatly advanced knowledge graph question answering (KGQA), yet existing systems are typically optimized for returning highly relevant but predictable answers. A missing yet desired capability is to exploit LLMs to suggest surprising and novel ("serendipitous") answers. In this paper, we formally define the serendipity-aware KGQA task and propose the SerenQA framework to evaluate LLMs' ability to uncover unexpected insights in scientific KGQA tasks. SerenQA includes a rigorous serendipity metric based on relevance, novelty, and surprise, along with an expert-annotated benchmark derived from the Clinical Knowledge Graph, focused on drug repurposing. Additionally, it features a structured evaluation pipeline encompassing three subtasks: knowledge retrieval, subgraph reasoning, and serendipity exploration. Our experiments reveal that while state-of-the-art LLMs perform well on retrieval, they still struggle to identify genuinely surprising and valuable discoveries, underscoring significant room for future improvement. Our curated resources and extended version are released at: https://cwru-db-group.github.io/serenQA.