Large Language Models (LLMs) have shown significant promise in automated theorem proving, yet progress is often constrained by the scarcity of diverse, high-quality formal-language data. To address this issue, we introduce Spark-Prover-X1, a 7B-parameter model trained via a three-stage framework designed to unlock the reasoning potential of more accessible, moderately sized LLMs. The first stage infuses deep knowledge through continuous pre-training on a broad mathematical corpus, enhanced by a suite of novel data tasks; a key innovation is a "CoT-augmented state prediction" task for achieving fine-grained reasoning. The second stage employs Supervised Fine-Tuning (SFT) within an expert-iteration loop to specialize both the Spark-Prover-X1-7B and Spark-Formalizer-X1-7B models. Finally, a targeted round of Group Relative Policy Optimization (GRPO) sharpens the prover's capabilities on the most challenging problems. To facilitate robust evaluation, particularly on problems drawn from real-world examinations, we also introduce ExamFormal-Bench, a new benchmark dataset of 402 formal problems. Experimental results demonstrate that Spark-Prover-X1-7B achieves state-of-the-art performance among similarly sized open-source models, attaining a 37.0\% average pass rate (pass@32). It performs especially well on difficult competition benchmarks, notably solving 27 problems on PutnamBench (pass@32) and achieving 24.0\% on CombiBench (pass@32). Our work validates that diverse training data combined with a progressively refined training pipeline provides an effective path for enhancing the formal reasoning capabilities of lightweight LLMs. Both Spark-Prover-X1-7B and Spark-Formalizer-X1-7B, along with the ExamFormal-Bench dataset, are publicly available at: https://www.modelscope.cn/organization/iflytek, https://gitcode.com/ifly_opensource.
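For readers less familiar with GRPO, the sketch below gives the standard group-relative formulation from the GRPO literature; the targeted round used for Spark-Prover-X1 may differ in details such as reward design and hyperparameters. For each statement $x$, a group of $G$ proof attempts $\{o_1, \dots, o_G\}$ is sampled from the old policy and scored (e.g., by verifier success), and each attempt's advantage is its reward normalized within the group:
\[
\hat{A}_i = \frac{r_i - \operatorname{mean}(\{r_1, \dots, r_G\})}{\operatorname{std}(\{r_1, \dots, r_G\})},
\qquad
\mathcal{J}(\theta) = \mathbb{E}\!\left[\frac{1}{G}\sum_{i=1}^{G} \min\!\Big(\rho_i \hat{A}_i,\; \operatorname{clip}\!\big(\rho_i,\, 1-\epsilon,\, 1+\epsilon\big)\, \hat{A}_i\Big)\right] - \beta\, \mathbb{D}_{\mathrm{KL}}\!\left[\pi_\theta \,\|\, \pi_{\mathrm{ref}}\right],
\]
where $\rho_i = \pi_\theta(o_i \mid x) / \pi_{\theta_{\mathrm{old}}}(o_i \mid x)$ is the importance ratio, $\epsilon$ the clipping range, and $\beta$ the coefficient of the KL penalty against a reference policy $\pi_{\mathrm{ref}}$. Because the advantage is computed relative to the group rather than a learned value model, no critic network is needed, which keeps the final training stage lightweight.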