Composed Image Retrieval (CIR) is a cross-modal task that aims to retrieve target images from large-scale databases using a reference image together with a modification text. Most existing methods rely on a single model to perform feature fusion and similarity matching. However, this paradigm faces two major challenges. First, a single model cannot capture global semantics and fine-grained details simultaneously; forced to serve heterogeneous sub-tasks with one set of weights, it often misses subtle but critical correspondences between image and text. Second, the absence of dynamic weight allocation prevents adaptive exploitation of complementary model strengths, so the resulting embedding drifts away from the target and misleads the nearest-neighbor search in CIR. To address these limitations, we propose Dynamic Adaptive Fusion (DAFM) for multi-model collaboration in CIR. Rather than optimizing a single method in isolation, DAFM exploits the complementary strengths of heterogeneous models and adaptively rebalances their contributions. This not only maximizes retrieval accuracy but also ensures that the performance gains are independent of the fusion order, highlighting the robustness of our approach. Experiments on the CIRR and FashionIQ benchmarks demonstrate consistent improvements: our method achieves a Recall@10 of 93.21 and an Rmean of 84.43 on CIRR, and an average Rmean of 67.48 on FashionIQ, surpassing recent strong baselines by up to 4.5%. These results confirm that dynamic multi-model collaboration provides an effective and general solution for CIR.
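The abstract does not specify the fusion mechanism, so the following is only a minimal sketch of what per-query dynamic weighting over heterogeneous models could look like: a small gate scores each model's composed-query embedding and a softmax-weighted average produces the fused embedding used for nearest-neighbor retrieval. The class name `DynamicAdaptiveFusion`, the linear gate, and all shapes are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of dynamic adaptive fusion for CIR: per-query softmax
# gating over the embeddings produced by K heterogeneous retrieval models.
# The gating design is an assumption for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicAdaptiveFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # A small gate that scores each model's embedding for each query.
        self.gate = nn.Linear(dim, 1)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, num_models, dim), one composed-query embedding
        # per upstream model, assumed projected to a shared dimension.
        embeddings = F.normalize(embeddings, dim=-1)
        scores = self.gate(embeddings).squeeze(-1)         # (batch, num_models)
        weights = F.softmax(scores, dim=-1)                # per-query weights
        fused = (weights.unsqueeze(-1) * embeddings).sum(dim=1)
        return F.normalize(fused, dim=-1)                  # (batch, dim)

# Retrieval: rank gallery images by cosine similarity to the fused embedding.
if __name__ == "__main__":
    daf = DynamicAdaptiveFusion(dim=512)
    queries = torch.randn(8, 3, 512)       # 8 queries, 3 models, 512-d each
    gallery = F.normalize(torch.randn(1000, 512), dim=-1)
    fused = daf(queries)                   # (8, 512)
    sims = fused @ gallery.T               # cosine similarity (unit vectors)
    top10 = sims.topk(10, dim=-1).indices  # Recall@10 candidates
```

One property of this formulation is consistent with the abstract's claim of order independence: the gated weighted sum is permutation-invariant along the model axis, so the fused embedding does not depend on the order in which model outputs are combined.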