We propose a prompt-conditioned framework built on MedSigLIP that injects textual priors via Feature-wise Linear Modulation (FiLM) and multi-scale pooling. Text prompts condition patch-token features on clinical intent, enabling data-efficient learning and rapid adaptation. The architecture combines global, local, and texture-aware pooling through separate regression heads fused by a lightweight MLP, and is trained with a pairwise ranking loss. On LDCTIQA2023, a public low-dose CT (LDCT) image quality assessment challenge with 1,000 training images, our method achieves PLCC = 0.9575, SROCC = 0.9561, and KROCC = 0.8301, surpassing the top-ranked published challenge submissions and demonstrating the effectiveness of our prompt-guided approach.
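The core conditioning mechanism can be illustrated with a minimal FiLM sketch: a text embedding predicts a per-channel scale (gamma) and shift (beta) that modulate the patch-token features. All names, dimensions, and the NumPy implementation below are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not the paper's): text-embedding dim,
# feature dim, and number of patch tokens.
D_TEXT, D_FEAT, N_PATCH = 8, 16, 4

# Linear maps from the prompt embedding to FiLM parameters.
W_gamma = rng.standard_normal((D_FEAT, D_TEXT)) * 0.1
W_beta = rng.standard_normal((D_FEAT, D_TEXT)) * 0.1

def film(patch_tokens: np.ndarray, text_emb: np.ndarray) -> np.ndarray:
    """FiLM: out = gamma * x + beta, applied per feature channel."""
    gamma = 1.0 + W_gamma @ text_emb  # centre the scale around identity
    beta = W_beta @ text_emb
    return patch_tokens * gamma + beta  # broadcast over all patches

x = rng.standard_normal((N_PATCH, D_FEAT))  # patch-token features
t = rng.standard_normal(D_TEXT)             # prompt embedding
y = film(x, t)
print(y.shape)  # (4, 16)
```

Note that with a zero text embedding the modulation reduces to the identity, so the prompt only perturbs features when it carries signal.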
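The training objective mentioned above can be sketched as a standard pairwise margin ranking loss over predicted quality scores; the margin value and all-pairs scheme here are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def pairwise_ranking_loss(pred: np.ndarray, target: np.ndarray,
                          margin: float = 0.1) -> float:
    """Hinge loss over ordered pairs: penalise pairs whose predicted
    scores violate the ground-truth quality ordering by the margin."""
    losses = []
    n = len(pred)
    for i in range(n):
        for j in range(n):
            if target[i] > target[j]:  # i should outrank j
                losses.append(max(0.0, margin - (pred[i] - pred[j])))
    return float(np.mean(losses)) if losses else 0.0

# Correctly ordered predictions with gaps above the margin incur no loss.
pred = np.array([0.9, 0.5, 0.1])
target = np.array([3.0, 2.0, 1.0])
print(pairwise_ranking_loss(pred, target))  # 0.0
```

A ranking objective of this kind supervises only the relative ordering of image quality, which is what rank correlations such as SROCC and KROCC measure.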