Black-box distillation creates student large language models (LLMs) by learning from a proprietary teacher model's text outputs alone, without access to its internal logits or parameters. In this work, we introduce Generative Adversarial Distillation (GAD), which enables on-policy training in the black-box distillation setting. GAD frames the student LLM as a generator and trains a discriminator to distinguish its responses from the teacher LLM's, setting up a minimax game. The discriminator acts as an on-policy reward model that co-evolves with the student, providing stable, adaptive feedback. Experimental results show that GAD consistently surpasses commonly used sequence-level knowledge distillation. In particular, Qwen2.5-14B-Instruct (student) trained with GAD becomes comparable to its teacher, GPT-5-Chat, on the LMSYS-Chat automatic evaluation. These results establish GAD as a promising and effective paradigm for black-box LLM distillation.
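The adversarial setup described above can be sketched in toy form. This is a minimal illustration under stated assumptions, not the paper's implementation: responses are short token sequences, the student is a learnable categorical policy, the discriminator is a logistic regression over token counts, and the student is updated with a REINFORCE-style gradient that uses the discriminator's probability-of-teacher score as the reward. All variable names and the toy data setup are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = 8      # toy vocabulary size (assumption)
RESP_LEN = 6   # tokens per "response" (assumption)

# Stand-in for the black-box teacher: a fixed categorical distribution
# over tokens; we only observe its sampled outputs, never its internals.
teacher_probs = np.array([0.30, 0.25, 0.20, 0.10, 0.05, 0.05, 0.03, 0.02])

# Student "generator": learnable logits over the same vocabulary.
student_logits = np.zeros(VOCAB)

# Discriminator: logistic regression on bag-of-token features.
disc_w = np.zeros(VOCAB)
disc_b = 0.0

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def sample_response(probs):
    # Draw an on-policy response as a sequence of tokens.
    return rng.choice(VOCAB, size=RESP_LEN, p=probs)

def features(resp):
    # Bag-of-token count vector for the discriminator.
    return np.bincount(resp, minlength=VOCAB).astype(float)

def disc_score(resp):
    # Discriminator's estimate of P(response came from the teacher).
    return 1.0 / (1.0 + np.exp(-(disc_w @ features(resp) + disc_b)))

def train_step(lr_d=0.05, lr_g=0.05):
    global disc_w, disc_b, student_logits
    s_probs = softmax(student_logits)
    teacher_resp = sample_response(teacher_probs)
    student_resp = sample_response(s_probs)  # on-policy sample

    # Discriminator step: ascend log D(teacher) + log(1 - D(student)).
    for resp, label in ((teacher_resp, 1.0), (student_resp, 0.0)):
        p = disc_score(resp)
        grad = label - p  # log-likelihood gradient w.r.t. the logit
        disc_w += lr_d * grad * features(resp)
        disc_b += lr_d * grad

    # Generator step: REINFORCE with the co-evolving discriminator
    # score as the reward signal for the student's own samples.
    reward = disc_score(student_resp)
    counts = features(student_resp)
    # grad of log pi(resp) for a categorical policy: counts - len * probs
    student_logits += lr_g * reward * (counts - RESP_LEN * s_probs)
    return reward

for _ in range(500):
    reward = train_step()
```

The key structural point the sketch preserves is that the reward is produced by a discriminator trained jointly with the student on the student's own (on-policy) samples, rather than by a fixed reward model.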