Knowledge distillation has been widely used to compress existing deep learning models while preserving performance across a wide range of applications. In the specific context of Automatic Speech Recognition (ASR), distillation from ensembles of acoustic models has recently shown promising results in improving recognition performance. In this paper, we propose an extension of multi-teacher distillation methods to joint CTC-attention end-to-end ASR systems. We also introduce three novel distillation strategies. The core intuition behind them is to integrate the error rate metric into the teacher selection rather than relying solely on the observed losses. In this way, we directly distill and optimize the student toward the metric of interest for speech recognition. We evaluate these strategies under a selection of training procedures on different datasets (TIMIT, Librispeech, Common Voice) and various languages (English, French, Italian). In particular, state-of-the-art error rates are reported on the Common Voice French, Italian and TIMIT datasets.
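To illustrate the core idea of selecting teachers by error rate rather than by loss, the following minimal sketch shows one possible realization: for each utterance, the teacher whose transcript has the lowest word error rate (WER) is chosen, and the student is distilled toward that teacher's output distribution. This is not the paper's implementation; the helper names (`select_teacher_by_wer`, `distillation_loss`), the toy tensors, and the temperature value are assumptions made for illustration only.

```python
# Illustrative sketch only (not the paper's code): per-utterance teacher selection
# based on WER rather than loss, for multi-teacher distillation.
import torch
import torch.nn.functional as F


def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)]


def wer(reference, hypothesis):
    """Word error rate of a hypothesis transcript against the reference."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / max(len(ref_words), 1)


def select_teacher_by_wer(reference, teacher_hypotheses):
    """Index of the teacher whose decoded transcript has the lowest WER."""
    errors = [wer(reference, hyp) for hyp in teacher_hypotheses]
    return min(range(len(errors)), key=errors.__getitem__)


def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened student and selected-teacher distributions."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2


# Toy usage: two teachers, one utterance, a vocabulary of 5 output units.
reference = "hello world"
teacher_hypotheses = ["hello word", "hello world"]
best = select_teacher_by_wer(reference, teacher_hypotheses)   # -> 1
student_logits = torch.randn(4, 5)                            # (frames, vocab)
teacher_logits = [torch.randn(4, 5), torch.randn(4, 5)]
loss = distillation_loss(student_logits, teacher_logits[best])
print(f"selected teacher: {best}, distillation loss: {loss.item():.4f}")
```

In practice, such a selection could be combined with the CTC and attention losses of the joint end-to-end system; the sketch only isolates the error-rate-driven teacher choice described in the abstract.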