Optimizing the performance of large-scale software repositories demands expertise in code reasoning and software engineering (SWE) to reduce runtime while preserving program correctness. However, most benchmarks emphasize what to fix rather than how to fix it. We introduce \textsc{SWE-fficiency}, a benchmark for evaluating repository-level performance optimization on real workloads. Our suite contains 498 tasks across nine widely used data-science, machine-learning, and HPC repositories (e.g., numpy, pandas, scipy): given a complete codebase and a slow workload, an agent must investigate code semantics, localize bottlenecks and relevant tests, and produce a patch that matches or exceeds the expert speedup while passing the same unit tests. To enable this how-to-fix evaluation, our automated pipeline scrapes GitHub pull requests for performance-improving edits, combining keyword filtering, static analysis, coverage tooling, and execution validation to both confirm expert speedup baselines and identify the relevant repository unit tests. Empirical evaluation of state-of-the-art agents reveals significant underperformance: on average, agents achieve less than 0.15x the expert speedup, struggling to localize optimization opportunities, reason about execution across functions, and maintain correctness in their proposed edits. We release the benchmark and accompanying data pipeline to facilitate research on automated performance engineering and long-horizon software reasoning.
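As an illustrative sketch of the scoring described above (the function names, timing protocol, and repeat count are assumptions for exposition, not the benchmark's exact harness), an agent patch can be scored by normalizing its speedup on the workload against the expert's speedup:

\begin{verbatim}
import time

def measure_runtime(workload, repeats=5):
    """Time the slow workload; keep the best of several runs to reduce noise."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        workload()
        best = min(best, time.perf_counter() - start)
    return best

def speedup_vs_expert(baseline_s, patched_s, expert_s):
    """Illustrative score: agent speedup divided by expert speedup.

    speedup = baseline runtime / patched runtime, so a score of 1.0 means
    the agent patch matches the expert patch and >1.0 means it exceeds it.
    """
    agent_speedup = baseline_s / patched_s
    expert_speedup = baseline_s / expert_s
    return agent_speedup / expert_speedup
\end{verbatim}

Under this reading, the reported result that agents achieve less than 0.15x the expert speedup corresponds to a score below 0.15, and the score only counts when the patched repository still passes the task's unit tests.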