In this paper, we propose Mixture of Layer-Wise Tokens (MoLT), a parameter- and memory-efficient adaptation framework for audio-visual learning. The key idea of MoLT is to replace the conventional, computationally heavy sequential adaptation at every transformer layer with a parallel, lightweight scheme that extracts and fuses layer-wise tokens only from the late layers. We adopt two types of adapters that distill modality-specific information and cross-modal interactions into compact latent tokens in a layer-wise manner. A token fusion module then dynamically fuses these layer-wise tokens according to their relative significance. To prevent redundancy among latent tokens, we apply an orthogonality regularization between them during training. Through a systematic analysis of where adaptation should be placed in pre-trained transformers, we extract latent tokens only from the late layers. This strategic placement avoids error propagation from volatile early-layer features, thereby maximizing adaptation performance while maintaining parameter and memory efficiency. Extensive experiments demonstrate that MoLT outperforms existing methods on diverse audio-visual benchmarks, including Audio-Visual Question Answering, Audio-Visual Segmentation, and Audio-Visual Event Localization.
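Two mechanisms named above, significance-weighted fusion of layer-wise latent tokens and the orthogonality regularization between them, admit a compact illustration. The following is a minimal sketch in PyTorch, assuming latent tokens are stacked as a (batch, layers, tokens, dim) tensor; the names LayerTokenFusion and orthogonality_loss are hypothetical and do not reflect the authors' implementation.

    # Minimal sketch (assumed realization, not the released MoLT code):
    # fuse late-layer latent tokens with learned significance weights and
    # penalize redundancy between layers with an orthogonality term.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LayerTokenFusion(nn.Module):
        """Fuse latent tokens from several late layers using
        input-dependent relative-significance weights."""

        def __init__(self, dim: int):
            super().__init__()
            self.score = nn.Linear(dim, 1)  # scores each layer's token summary

        def forward(self, latent_tokens: torch.Tensor) -> torch.Tensor:
            # latent_tokens: (batch, num_layers, num_tokens, dim)
            summary = latent_tokens.mean(dim=2)              # (B, L, D) per-layer summary
            weights = F.softmax(self.score(summary), dim=1)  # (B, L, 1) relative significance
            fused = (latent_tokens * weights.unsqueeze(2)).sum(dim=1)  # (B, T, D)
            return fused

    def orthogonality_loss(latent_tokens: torch.Tensor) -> torch.Tensor:
        """Penalize overlap between latent tokens of different layers:
        off-diagonal entries of the layer-wise cosine-similarity matrix."""
        b, l, t, d = latent_tokens.shape
        flat = F.normalize(latent_tokens.reshape(b, l, t * d), dim=-1)  # (B, L, T*D)
        gram = flat @ flat.transpose(1, 2)                              # (B, L, L)
        off_diag = gram - torch.eye(l, device=gram.device)
        return off_diag.pow(2).mean()

    if __name__ == "__main__":
        tokens = torch.randn(2, 4, 8, 256)      # 4 late layers, 8 latent tokens each
        fusion = LayerTokenFusion(dim=256)
        fused = fusion(tokens)                  # (2, 8, 256) fused latent tokens
        loss = orthogonality_loss(tokens)       # scalar regularization term
        print(fused.shape, loss.item())

In this sketch the regularizer drives the per-layer token sets toward mutual orthogonality, which is one plausible way to realize the stated goal of non-redundant latent tokens; the paper's exact formulation may differ.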