SWE-Bench-Verified, a dataset of 500 issues, serves as the de facto benchmark for evaluating large language models (LLMs) on their ability to resolve GitHub issues. However, the benchmark may overlap with model training data; if so, scores may reflect training recall rather than issue-solving skill. To study this, we test two Claude models that frequently appear in top-performing agents submitted to the benchmark. We ask them to identify the files relevant to an issue using only the issue text, and then the issue text plus the repository's file paths, and we run the same setup on BeetleBox and SWE-rebench. Although all three benchmarks draw on popular open-source Python projects, the models performed 3 times better on SWE-Bench-Verified, and were 6 times better at identifying the edited files when given no additional context about the projects themselves. This gap suggests the models may have seen many SWE-Bench-Verified tasks during training. As a result, scores on this benchmark may not reflect an agent's ability to handle real software issues, yet it continues to be used in ways that can misrepresent progress and steer choices toward agents built on particular models rather than toward strong agent design. Our setup probes the file-localization step with so little context that the task should, in principle, be impossible to solve. These results highlight the risk of relying on older, popular benchmarks and support the shift toward newer datasets built with contamination in mind.
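The sketch below illustrates, under stated assumptions, the shape of the two localization probes described above: one prompt built from the issue text alone and one that also includes the repository's file paths, scored by whether any gold-edited file appears among the model's guesses. It is not the paper's actual harness; names such as `query_model`, `tasks`, and the dictionary keys are hypothetical placeholders.

```python
"""Minimal sketch of the two file-localization probes (issue text only,
and issue text plus file paths). All names here are illustrative."""

from typing import Callable, Iterable, List, Optional


def build_prompt(issue_text: str, file_paths: Optional[Iterable[str]] = None) -> str:
    """Probe 1 passes only the issue; probe 2 also appends the repo's file listing."""
    prompt = (
        "Below is a GitHub issue. List the repository file paths most likely "
        "to need editing to resolve it, one per line.\n\n"
        f"Issue:\n{issue_text}\n"
    )
    if file_paths is not None:
        prompt += "\nRepository files:\n" + "\n".join(file_paths) + "\n"
    return prompt


def hit_at_k(predicted: List[str], gold_edited: Iterable[str], k: int = 5) -> bool:
    """True if any file edited by the gold patch appears in the top-k guesses."""
    top_k = {p.strip() for p in predicted[:k]}
    return any(g in top_k for g in gold_edited)


def run_probe(tasks, query_model: Callable[[str], List[str]], with_paths: bool) -> float:
    """Fraction of tasks where the model localizes at least one gold-edited file.

    `tasks` is assumed to be an iterable of dicts with keys `issue`,
    `file_paths`, and `gold_files`; `query_model` wraps whichever LLM is tested
    and returns its predicted file paths in ranked order.
    """
    hits, total = 0, 0
    for task in tasks:
        prompt = build_prompt(task["issue"], task["file_paths"] if with_paths else None)
        predictions = query_model(prompt)
        hits += hit_at_k(predictions, task["gold_files"])
        total += 1
    return hits / max(total, 1)
```

Running `run_probe` with `with_paths=False` and `with_paths=True` on each benchmark would yield the per-condition localization rates whose ratios are compared in the abstract.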