Understanding how visual content communicates sentiment is critical in an era where online interaction on social platforms is increasingly dominated by visual media. This remains a challenging problem, however, as sentiment perception is closely tied to complex, scene-level semantics. In this paper, we propose an original framework, MLLMsent, to investigate the sentiment reasoning capabilities of Multimodal Large Language Models (MLLMs) from three perspectives: (1) using MLLMs for direct sentiment classification from images; (2) pairing them with pre-trained LLMs for sentiment analysis of automatically generated image descriptions; and (3) fine-tuning the LLMs on sentiment-labeled image descriptions. Experiments on a recent and established benchmark demonstrate that our proposal, particularly the fine-tuned approach, achieves state-of-the-art results, outperforming Lexicon-, CNN-, and Transformer-based baselines by up to 30.9%, 64.8%, and 42.4%, respectively, across different levels of evaluator agreement and sentiment polarity categories. Remarkably, in a cross-dataset test, our model, without any training on these new data, still outperforms the best runner-up, which was trained directly on them, by up to 8.26%. These results highlight the potential of the proposed visual reasoning scheme for advancing affective computing, while also establishing new benchmarks for future research.
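To make the description-then-classify setting (perspective 2) concrete, the sketch below chains an off-the-shelf image-captioning model with a pre-trained text sentiment classifier using the Hugging Face `transformers` pipelines. The specific checkpoints (`Salesforce/blip-image-captioning-base`, `distilbert-base-uncased-finetuned-sst-2-english`) and the two-way positive/negative label set are illustrative assumptions for this sketch, not the MLLMs, LLMs, or polarity scheme evaluated in the paper.

```python
# Minimal sketch of the "describe with an MLLM, then classify the description" setting
# (perspective 2). Checkpoints are illustrative assumptions, not the paper's models.
from transformers import pipeline

# Image -> textual description (stand-in for the MLLM description step).
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

# Description -> sentiment polarity (stand-in for the pre-trained LLM classifier).
classifier = pipeline("sentiment-analysis",
                      model="distilbert-base-uncased-finetuned-sst-2-english")

def image_sentiment(image_path: str) -> dict:
    """Predict an image's sentiment from its automatically generated description."""
    description = captioner(image_path)[0]["generated_text"]
    prediction = classifier(description)[0]  # e.g. {"label": "POSITIVE", "score": 0.98}
    return {"description": description,
            "label": prediction["label"],
            "score": prediction["score"]}

# Example usage (hypothetical file path):
# print(image_sentiment("example.jpg"))
```

Perspective 3 would differ only in the second stage: instead of using the text classifier zero-shot, it would be fine-tuned on the sentiment-labeled image descriptions before inference.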