Affective artificial intelligence has made substantial advances in recent years, yet two critical issues persist, particularly in sensitive applications. First, these systems frequently operate as 'black boxes', leaving their decision-making processes opaque. Second, audit logs are often unreliable, since the entity operating the system can alter them. In this work, we introduce the concept of Immutable Explainability, an architecture designed to address both challenges simultaneously. Our approach combines an interpretable inference engine, implemented with fuzzy logic to produce a transparent trace of each decision, with a cryptographic anchoring mechanism that records this trace on a blockchain, making it tamper-evident and independently verifiable. To validate the approach, we implemented a heuristic pipeline that integrates lexical and prosodic analysis within an explicit Mamdani-type multimodal fusion engine. Each inference generates an auditable record that is subsequently anchored on a public blockchain (Sepolia Testnet). We evaluated the system on the Spanish MEACorpus 2023, using both the original corpus transcriptions and transcriptions generated by Whisper. The results show that our fuzzy-fusion approach outperforms baseline methods (linear and unimodal fusion). Beyond these quantitative outcomes, our primary objective is to establish a foundation for affective AI systems that offer transparent explanations, trustworthy audit trails, and greater user control over personal data.
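As a minimal, hypothetical illustration (not the paper's implementation), the sketch below shows how a Mamdani-style rule base could fuse a lexical and a prosodic emotion score and how the resulting inference trace could be hashed for later on-chain anchoring. The fuzzy sets, rule base, trace format, and input values are all assumptions, and the actual Sepolia transaction (e.g., via web3.py) is omitted.

```python
# Illustrative sketch only: a tiny Mamdani-style fusion of two normalized scores,
# followed by SHA-256 hashing of the inference trace. Sets, rules, and the trace
# schema are hypothetical placeholders, not the authors' configuration.
import hashlib
import json

def tri(x, a, b, c):
    """Triangular membership on [a, c] with peak at b (shoulders when a == b or b == c)."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical fuzzy sets over scores in [0, 1].
SETS = {
    "low":    (0.0, 0.0, 0.5),
    "medium": (0.0, 0.5, 1.0),
    "high":   (0.5, 1.0, 1.0),
}

# Hypothetical rule base: (lexical set, prosodic set) -> output set.
RULES = [
    (("high", "high"), "high"),
    (("high", "medium"), "high"),
    (("medium", "medium"), "medium"),
    (("low", "low"), "low"),
]

def mamdani_fuse(lexical: float, prosodic: float) -> tuple[float, list[dict]]:
    """Return a crisp fused score plus the rule-by-rule trace that produced it."""
    trace = []
    activations: dict[str, float] = {}
    for (lex_set, pro_set), out_set in RULES:
        # Rule evaluation: AND as min (standard Mamdani).
        strength = min(tri(lexical, *SETS[lex_set]), tri(prosodic, *SETS[pro_set]))
        activations[out_set] = max(activations.get(out_set, 0.0), strength)
        trace.append({"if": [lex_set, pro_set], "then": out_set, "strength": round(strength, 4)})
    # Aggregation (max over clipped output sets) and centroid defuzzification on a coarse grid.
    xs = [i / 100 for i in range(101)]
    agg = [max((min(w, tri(x, *SETS[s])) for s, w in activations.items()), default=0.0) for x in xs]
    total = sum(agg)
    crisp = sum(x * m for x, m in zip(xs, agg)) / total if total else 0.0
    return crisp, trace

score, trace = mamdani_fuse(lexical=0.82, prosodic=0.64)
record = {"inputs": {"lexical": 0.82, "prosodic": 0.64}, "trace": trace, "output": round(score, 4)}
# The digest of the canonical record is what a contract on Sepolia could store on-chain.
digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
print(score, digest)
```

In such a scheme only the digest would need to be published on the public chain, which keeps the full trace (and any personal data it references) off the ledger while still making later tampering with the stored record detectable.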