LLM-based web agents have recently made significant progress, but much of it has occurred in closed-source systems, widening the gap with open-source alternatives. Progress has been held back by two key challenges: first, a narrow focus on single-step tasks that overlooks the complexity of multi-step web interactions; and second, the high compute costs required to post-train LLM-based web agents. To address these challenges, we present the first statistically grounded study of compute allocation for LLM web-agent post-training. Our approach uses a two-stage pipeline, training a Llama 3.1 8B student to imitate a Llama 3.3 70B teacher via supervised fine-tuning (SFT), followed by on-policy reinforcement learning (RL). We find this process highly sensitive to hyperparameter choices, making exhaustive sweeps impractical. To spare others from expensive trial-and-error, we sample 1,370 configurations and use bootstrapping to estimate effective hyperparameters. Our results show that combining SFT with on-policy RL consistently outperforms either approach alone on both WorkArena and MiniWob++. Further, this strategy requires only 55% of the compute to match the peak performance of pure SFT on MiniWob++, effectively pushing the compute-performance Pareto frontier, and it is the only strategy that closes the gap with closed-source models.
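To make the bootstrapping step concrete, the sketch below shows one way to estimate effective hyperparameters from a pool of scored configurations: resample the configurations with replacement and compare hyperparameter values by bootstrapped confidence intervals on their mean success rate, rather than by single best runs. This is a minimal illustration only; the synthetic data, the learning-rate grid, and the `bootstrap_mean_ci` helper are hypothetical placeholders and do not reproduce the paper's actual experimental setup or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: one entry per sampled training configuration,
# pairing a hyperparameter value with the benchmark success rate that
# configuration reached. The real study samples 1,370 such configurations.
n_configs = 1370
learning_rates = rng.choice([1e-6, 3e-6, 1e-5, 3e-5], size=n_configs)
scores = rng.beta(2.0, 5.0, size=n_configs)  # placeholder success rates in [0, 1]


def bootstrap_mean_ci(values: np.ndarray, n_boot: int = 10_000, alpha: float = 0.05):
    """Percentile-bootstrap estimate and confidence interval for the mean of `values`."""
    # Draw n_boot resamples (with replacement) and compute each resample's mean.
    idx = rng.integers(0, values.size, size=(n_boot, values.size))
    boot_means = values[idx].mean(axis=1)
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    return values.mean(), (lo, hi)


# Compare hyperparameter values by the bootstrapped mean score of the
# configurations that used them, instead of relying on a single lucky run.
for lr in np.unique(learning_rates):
    mean, (lo, hi) = bootstrap_mean_ci(scores[learning_rates == lr])
    print(f"lr={lr:.0e}: mean success {mean:.3f} (95% CI [{lo:.3f}, {hi:.3f}])")
```

The same resampling idea extends to any hyperparameter axis (batch size, RL mixing ratio, etc.): overlapping confidence intervals indicate that the observed differences could be explained by sampling noise alone.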