Multimodal sentiment analysis aims to extract and integrate semantic information collected from multiple modalities to recognize the expressed emotions and sentiment in multimodal data. The major concern of this research area lies in developing an effective fusion scheme that can extract and integrate key information from the various modalities. However, one issue that may restrict previous work from achieving better performance is the lack of proper modeling of the dynamic competition between independence and relevance among modalities, which can degrade fusion outcomes by causing the collapse of modality-specific feature spaces or by introducing extra noise. To mitigate this, we propose the Bi-Bimodal Fusion Network (BBFN), a novel end-to-end network that performs fusion (relevance increment) and separation (difference increment) on pairwise modality representations. The two parts are trained simultaneously so that the competition between them is simulated. The model takes two bimodal pairs as input because of the known information imbalance among modalities. In addition, we leverage a gated control mechanism in the Transformer architecture to further improve the final output. Experimental results on three datasets (CMU-MOSI, CMU-MOSEI, and UR-FUNNY) verify that our model significantly outperforms the state of the art (SOTA). The implementation of this work is available at https://github.com/declare-lab/BBFN.
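To make the bi-bimodal idea concrete, the following is a minimal PyTorch sketch, not the authors' implementation (see the linked repository for that). It assumes text is paired separately with the visual and acoustic modalities, fuses each pair with cross-modal attention, and applies a learned sigmoid gate to the fused signal; the separation (difference-increment) branch and all module names and dimensions are illustrative assumptions.

```python
# Illustrative sketch of bi-bimodal gated fusion; not the official BBFN code.
import torch
import torch.nn as nn


class GatedCrossModalBlock(nn.Module):
    """Cross-attention from a source modality into a target modality,
    followed by a learned gate controlling how much fused signal passes."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.norm = nn.LayerNorm(dim)

    def forward(self, tgt, src):
        fused, _ = self.attn(tgt, src, src)              # relevance increment
        g = self.gate(torch.cat([tgt, fused], dim=-1))   # gated control
        return self.norm(tgt + g * fused)


class BiBimodalFusionSketch(nn.Module):
    """Text is paired with visual and acoustic features separately
    (two bimodal pairs), reflecting the information imbalance among modalities."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.text_visual = GatedCrossModalBlock(dim)
        self.text_acoustic = GatedCrossModalBlock(dim)
        self.head = nn.Linear(2 * dim, 1)                # sentiment regression head

    def forward(self, text, visual, acoustic):
        h_tv = self.text_visual(text, visual).mean(dim=1)     # text attending to visual
        h_ta = self.text_acoustic(text, acoustic).mean(dim=1) # text attending to acoustic
        return self.head(torch.cat([h_tv, h_ta], dim=-1))


# Usage with toy tensors: batch of 2, sequence length 20, feature dim 64.
model = BiBimodalFusionSketch(dim=64)
t, v, a = (torch.randn(2, 20, 64) for _ in range(3))
print(model(t, v, a).shape)  # torch.Size([2, 1])
```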