Pre-training has proven effective for learning transferable features in sign language understanding (SLU) tasks. Skeleton-based methods have recently gained increasing attention because they robustly handle variations in subjects and backgrounds, unaffected by appearance or environmental factors. However, current SLU methods still face three key limitations: 1) weak semantic grounding, as models often capture low-level motion patterns from skeletal data but struggle to relate them to linguistic meaning; 2) an imbalance between local details and global context, with models either focusing too narrowly on fine-grained cues or overlooking them in favour of broader context; and 3) inefficient cross-modal learning, as constructing semantically aligned representations across modalities remains difficult. To address these issues, we propose Sigma, a unified skeleton-based SLU framework featuring: 1) a sign-aware early fusion mechanism that facilitates deep interaction between visual and textual modalities, enriching visual features with linguistic context; 2) a hierarchical alignment learning strategy that jointly maximises agreement across different levels of paired features from the two modalities, effectively capturing both fine-grained details and high-level semantic relationships; and 3) a unified pre-training framework that combines contrastive learning, text matching, and language modelling to promote semantic consistency and generalisation. Sigma achieves new state-of-the-art results on isolated sign language recognition, continuous sign language recognition, and gloss-free sign language translation across multiple benchmarks spanning different sign and spoken languages, demonstrating the impact of semantically informative pre-training and the effectiveness of skeletal data as a stand-alone solution for SLU.
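The abstract names the alignment and pre-training components but not their exact formulation. As a rough illustration only, the sketch below shows one common way such a hierarchical alignment objective could be instantiated: a symmetric InfoNCE contrastive loss applied at a local (fine-grained) and a global (sequence-level) feature level. The function names, the two-level split, and the equal weighting are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def info_nce(x, y, temperature=0.07):
    """Symmetric InfoNCE between paired embeddings.

    x, y: (B, D) feature batches whose i-th rows form a positive pair;
    all other rows in the batch serve as negatives.
    """
    x = F.normalize(x, dim=-1)
    y = F.normalize(y, dim=-1)
    logits = x @ y.t() / temperature                      # (B, B) similarities
    targets = torch.arange(x.size(0), device=x.device)    # positives on the diagonal
    # Average the skeleton-to-text and text-to-skeleton directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def hierarchical_alignment_loss(local_sk, local_tx, global_sk, global_tx):
    """Illustrative two-level alignment: fine-grained features (e.g. pooled
    frame/word representations) plus sequence-level features, aligned jointly."""
    return info_nce(local_sk, local_tx) + info_nce(global_sk, global_tx)

# Toy usage with random tensors standing in for skeleton/text encoder outputs.
B, D = 8, 256
loss = hierarchical_alignment_loss(torch.randn(B, D), torch.randn(B, D),
                                   torch.randn(B, D), torch.randn(B, D))
print(loss.item())
```

In a full pre-training objective of the kind the abstract describes, a loss like this would be summed with text-matching and language-modelling terms; the relative weights are not specified in the abstract.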