Few-shot segmentation aims to segment unseen object categories from just a handful of annotated examples. This requires mechanisms that can both identify semantically related objects across images and accurately produce segmentation masks. We note that Segment Anything 2 (SAM2), with its prompt-and-propagate mechanism, offers both strong segmentation capabilities and a built-in feature matching process. However, we show that its representations are entangled with task-specific cues optimized for object tracking, which impairs its use for tasks requiring higher-level semantic understanding. Our key insight is that, despite its class-agnostic pretraining, SAM2 already encodes rich semantic structure in its features. We propose SANSA (Semantically AligNed Segment Anything 2), a framework that makes this latent structure explicit and repurposes SAM2 for few-shot segmentation through minimal task-specific modifications. SANSA achieves state-of-the-art performance on few-shot segmentation benchmarks specifically designed to assess generalization, outperforms generalist methods in the popular in-context setting, supports flexible interaction via diverse prompts such as points, boxes, or scribbles, and remains significantly faster and more compact than prior approaches. Code is available at https://github.com/ClaudiaCuttano/SANSA.