Safety alignment instills in Large Language Models (LLMs) a critical capacity to refuse malicious requests. Prior work has modeled this refusal mechanism as a single linear direction in the activation space. We posit that this is an oversimplification that conflates two functionally distinct neural processes: the detection of harm and the execution of a refusal. In this work, we deconstruct this single representation into a Harm Detection Direction and a Refusal Execution Direction. Leveraging this fine-grained model, we introduce Differentiated Bi-Directional Intervention (DBDI), a new white-box framework that precisely neutralizes safety alignment at critical layers. DBDI applies adaptive projection nullification to the refusal execution direction while suppressing the harm detection direction via direct steering. Extensive experiments demonstrate that DBDI outperforms prominent jailbreaking methods, achieving up to a 97.88\% attack success rate on models such as Llama-2. By providing a more granular and mechanistic framework, our work offers a new direction for a deeper understanding of LLM safety alignment.
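To make the bi-directional intervention concrete, the following is a minimal sketch of how hidden activations at an intervened layer could be edited: projecting out the component along the refusal execution direction and steering against the harm detection direction. The fixed coefficient \texttt{alpha} stands in for the adaptive scaling described above and is an illustrative assumption, not the paper's exact formulation.

\begin{verbatim}
import torch

def dbdi_intervene(h: torch.Tensor,
                   refusal_dir: torch.Tensor,
                   harm_dir: torch.Tensor,
                   alpha: float = 1.0) -> torch.Tensor:
    """Sketch of the DBDI-style intervention on layer activations.

    h:           hidden activations at the intervened layer, shape (..., d)
    refusal_dir: refusal execution direction, shape (d,)
    harm_dir:    harm detection direction, shape (d,)
    alpha:       steering strength (placeholder for the adaptive term)
    """
    r = refusal_dir / refusal_dir.norm()
    d = harm_dir / harm_dir.norm()
    # Projection nullification: remove the component of h along the
    # refusal execution direction.
    h = h - (h @ r).unsqueeze(-1) * r
    # Direct steering: shift h against the harm detection direction
    # to suppress the model's harm signal.
    h = h - alpha * d
    return h
\end{verbatim}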