As large reasoning models (LRMs) grow more capable, chain-of-thought (CoT) reasoning introduces new safety challenges. Existing SFT-based safety alignment studies have focused predominantly on filtering for prompts with safe, high-quality responses, while overlooking hard prompts that consistently elicit harmful outputs. To fill this gap, we introduce UnsafeChain, a safety alignment dataset constructed from hard prompts drawn from diverse sources, in which unsafe completions are identified and explicitly corrected into safe responses. By exposing models to unsafe behaviors and guiding their correction, UnsafeChain enhances safety while preserving general reasoning ability. We fine-tune three LRMs on UnsafeChain and compare them against the recent SafeChain and STAR-1 datasets across six out-of-distribution and five in-distribution benchmarks. UnsafeChain consistently outperforms prior datasets, with even a 1K subset matching or surpassing baseline performance, demonstrating the effectiveness and generalizability of correction-based supervision. We release our dataset and code at https://github.com/mbzuai-nlp/UnsafeChain.
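At a high level, the correction-based supervision described above amounts to: sample a completion for each hard prompt, flag unsafe completions with a safety judge, and replace them with explicitly corrected safe responses before fine-tuning. The following is a minimal Python sketch of that loop under those assumptions; `generate`, `is_unsafe`, and `correct` are hypothetical placeholders for an LRM decoding call, a safety judge, and a corrector model, not the paper's released implementation.

```python
# Hedged sketch of correction-based supervision; all helper names are assumptions,
# not the authors' released pipeline.
from dataclasses import dataclass


@dataclass
class SFTExample:
    prompt: str
    response: str  # safe target used for supervised fine-tuning


def generate(prompt: str) -> str:
    """Placeholder for sampling a CoT completion from the base LRM."""
    return "<model completion>"


def is_unsafe(prompt: str, completion: str) -> bool:
    """Placeholder for a safety judge (e.g., a guard model or rubric-based classifier)."""
    return True  # hard prompts are assumed to elicit unsafe completions


def correct(prompt: str, unsafe_completion: str) -> str:
    """Placeholder for rewriting the unsafe completion into a safe, helpful response."""
    return "<corrected safe response>"


def build_dataset(hard_prompts: list[str]) -> list[SFTExample]:
    examples = []
    for prompt in hard_prompts:
        completion = generate(prompt)
        if is_unsafe(prompt, completion):
            # Keep the hard prompt, but supervise on the explicitly corrected response.
            completion = correct(prompt, completion)
        examples.append(SFTExample(prompt=prompt, response=completion))
    return examples


if __name__ == "__main__":
    print(build_dataset(["<hard prompt>"]))
```

The key design choice this sketch illustrates is that hard prompts are retained rather than filtered out: only the unsafe completion is replaced, so the model is still exposed to the difficult input during fine-tuning.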