Cell-level radiomics features provide fine-grained insights into tumor phenotypes and have the potential to significantly enhance diagnostic accuracy on hematoxylin and eosin (H&E) images. By capturing micro-level morphological and intensity patterns, these features support more precise tumor identification and improve AI interpretability by highlighting diagnostically relevant cells for pathologist review. However, most existing studies focus on slide-level or patch-level tumor classification, leaving cell-level radiomics analysis largely unexplored. Moreover, there is currently no dedicated backbone specifically designed for radiomics data. Inspired by the recent success of the Mamba architecture in vision and language domains, we introduce a Unified Attention-Mamba (UAM) backbone for cell-level classification using radiomics features. Unlike previous hybrid approaches that integrate Attention and Mamba modules in fixed proportions, our unified design flexibly combines their capabilities within a single cohesive architecture, eliminating the need for manual ratio tuning and improving encoding capability. We develop two UAM variants to comprehensively evaluate the benefits of this unified structure. Building on this backbone, we further propose a multimodal UAM framework that jointly performs cell-level classification and image segmentation. Experimental results demonstrate that UAM achieves state-of-the-art performance on both tasks across public benchmarks, surpassing leading image-based foundation models. It improves cell classification accuracy from 74% to 78% ($n$=349,882 cells) and tumor segmentation precision from 75% to 80% ($n$=406 patches). These findings highlight the effectiveness and promise of UAM as a unified and extensible multimodal foundation for radiomics-driven cancer diagnosis.