Deep research systems represent an emerging class of agentic information retrieval methods that generate comprehensive, well-supported reports in response to complex queries. However, most existing frameworks rely on dynamic commercial search APIs, which raise reproducibility and transparency concerns in addition to their cost. To address these limitations, we introduce \textsc{DeepResearchGym}, an open-source sandbox that combines a reproducible search API with a rigorous evaluation protocol for benchmarking deep research systems. The API indexes large-scale public web corpora, namely ClueWeb22 and FineWeb, using a state-of-the-art dense retriever and approximate nearest neighbor search via DiskANN. It achieves lower latency than popular commercial search APIs while ensuring stable document rankings across runs, and is free for research use. To evaluate deep research systems' outputs, we extend the Researchy Questions benchmark with automatic LLM-as-a-judge metrics that measure alignment with users' information needs, retrieval faithfulness, and report quality. Experimental results show that systems integrated with~\textsc{DeepResearchGym} achieve performance comparable to those using commercial APIs, with performance rankings remaining consistent across evaluation metrics. A case study on short-answer search agents further demonstrates the sandbox's utility for cost-effective training, showing that models trained within the sandbox generalize to commercial search.