Mixture-of-experts (MoE) architectures have expanded from language modeling to automatic speech recognition (ASR). Traditional MoE methods, such as the Switch Transformer, route experts independently within each layer. Our analysis shows that the expert choices made by routers in most layers are only weakly correlated with those made by routers in other layers. To increase cooperation between experts across layers and encourage greater specialization, we share a single router across all MoE layers; we call this model the Omni-router Transformer. Extensive experiments on a large-scale pseudo-labeled dataset and evaluations on 10 diverse, out-of-domain ASR benchmarks demonstrate that the Omni-router Transformer achieves lower training loss and consistently outperforms dense and Switch Transformer models, reducing average word error rates by 11.2% and 8.2%, respectively, while providing structured expert usage and improved robustness to diverse data.
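To make the shared-router idea concrete, the following is a minimal NumPy sketch, not the paper's actual implementation: a single router weight matrix produces top-1 expert assignments that are reused by every MoE layer, instead of each layer owning its own router as in the Switch Transformer. All names, dimensions, and weights here are hypothetical toy choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, n_layers, n_tokens = 16, 4, 3, 8

# Hypothetical toy parameters: one router shared by all layers,
# but a separate set of expert FFN weights per layer.
shared_router = rng.standard_normal((d_model, n_experts)) * 0.1
experts = rng.standard_normal((n_layers, n_experts, d_model, d_model)) * 0.1

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def moe_layer(x, layer_idx):
    """Top-1 MoE layer whose routing logits come from the shared router."""
    gate = softmax(x @ shared_router)      # (n_tokens, n_experts)
    choice = gate.argmax(axis=-1)          # same routing parameters in every layer
    out = np.zeros_like(x)
    for e in range(n_experts):
        mask = choice == e
        if mask.any():
            # Standard top-1 gating: scale the expert output by its gate value.
            out[mask] = (x[mask] @ experts[layer_idx, e]) * gate[mask, e:e + 1]
    return x + out, choice                 # residual connection

x = rng.standard_normal((n_tokens, d_model))
choices = []
for layer in range(n_layers):
    x, c = moe_layer(x, layer)
    choices.append(c)

# Because the router weights are shared, per-token expert choices tend to
# stay aligned across layers, unlike with independent per-layer routers.
print(np.stack(choices))
```

In this sketch, coupling across layers comes only from reusing `shared_router`; each layer still applies its own expert weights, which is the division of labor the abstract attributes to the Omni-router Transformer.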