This paper addresses the problem of Bangla hate speech identification, a socially impactful yet linguistically challenging task. As part of the "Bangla Multi-task Hate Speech Identification" shared task at the BLP Workshop, IJCNLP-AACL 2025, our team "Retriv" participated in all three subtasks: (1A) hate type classification, (1B) target group identification, and (1C) joint detection of type, severity, and target. For subtasks 1A and 1B, we employed a soft-voting ensemble of transformer models (BanglaBERT, MuRIL, IndicBERTv2). For subtask 1C, we trained three multitask variants and aggregated their predictions through a weighted voting ensemble. Our systems achieved micro-F1 scores of 72.75% (1A) and 72.69% (1B), and a weighted micro-F1 score of 72.62% (1C). On the shared task leaderboard, these corresponded to 9th, 10th, and 7th positions, respectively. These results highlight the promise of transformer ensembles and weighted multitask frameworks for advancing Bangla hate speech detection in low-resource contexts. We have made our experimental scripts publicly available to the community.
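The soft-voting and weighted-voting aggregation described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the probability arrays and model weights are placeholders, and the real systems produce per-class probabilities from fine-tuned BanglaBERT, MuRIL, and IndicBERTv2 checkpoints.

```python
import numpy as np

def vote(prob_matrices, weights=None):
    """Combine per-model class probabilities into final predictions.

    prob_matrices: list of (n_samples, n_classes) probability arrays,
                   one per model.
    weights:       optional per-model weights; None gives plain
                   soft voting (unweighted average).
    """
    stacked = np.stack(prob_matrices)                    # (n_models, n_samples, n_classes)
    avg = np.average(stacked, axis=0, weights=weights)   # (weighted) mean over models
    return np.argmax(avg, axis=1)                        # predicted class per sample

# Mock probabilities from three models over 2 samples and 3 classes
p1 = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
p2 = np.array([[0.5, 0.4, 0.1], [0.1, 0.6, 0.3]])
p3 = np.array([[0.7, 0.2, 0.1], [0.3, 0.3, 0.4]])

soft_preds = vote([p1, p2, p3])                       # soft voting
weighted_preds = vote([p1, p2, p3], weights=[2, 1, 1])  # hypothetical weights
```

In the weighted variant, the weights would typically reflect each multitask model's validation performance; the values above are illustrative only.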


