Composed Image Retrieval (CIR) is a cross-modal task that aims to retrieve target images from a large-scale database given a reference image and a modification text. Most existing methods rely on a single model to perform feature fusion and similarity matching. However, this paradigm faces two major challenges. First, a single model cannot capture global semantics and fine-grained details simultaneously; because it must serve different sub-tasks with the same weights, it often misses subtle but important correspondences between image and text. Second, the absence of dynamic weight allocation prevents the adaptive exploitation of complementary model strengths, so the resulting embedding drifts away from the target and misleads the nearest-neighbor search in CIR. To address these limitations, we propose Dynamic Adaptive Fusion (DAFM) for multi-model collaboration in CIR. Rather than optimizing a single method in isolation, DAFM exploits the complementary strengths of heterogeneous models and adaptively rebalances their contributions. This not only maximizes retrieval accuracy but also ensures that the performance gains are independent of the fusion order, highlighting the robustness of our approach. Experiments on the CIRR and FashionIQ benchmarks demonstrate consistent improvements: our method achieves a Recall@10 of 93.21 and an Rmean of 84.43 on CIRR, and an average Rmean of 67.48 on FashionIQ, surpassing recent strong baselines by up to 4.5%. These results confirm that dynamic multi-model collaboration offers an effective and general solution for CIR.