Vision transformers in vision-language models apply uniform computational effort across all images, expending 175.33 GFLOPs (ViT-L/14) whether analysing a straightforward product photograph or a complex street scene. We propose ICAR (Image Complexity-Aware Retrieval), which enables vision transformers to spend less compute on simple images whilst processing complex images through their full network depth. The key challenge is maintaining cross-modal alignment: embeddings from different processing depths must remain compatible for text matching. ICAR solves this through dual-path training, which produces compatible embeddings from both reduced-compute and full-compute processing, so image representations land in the same semantic space as the text embeddings whether an image exits early or is processed to full depth. Unlike existing two-stage approaches that require expensive reranking, ICAR enables direct image-text matching without additional overhead. To decide how much compute each image needs, we develop ConvNeXt-IC, which treats image complexity assessment as a classification task. By applying a modern classifier backbone rather than a specialised architecture, ConvNeXt-IC achieves state-of-the-art performance, reaching a Pearson correlation of 0.959 with human judgement at a 4.4x speedup. Evaluated on standard benchmarks augmented with real-world web data, ICAR delivers a 20% practical speedup while maintaining category-level performance and 95% of instance-level performance, enabling sustainable scaling of vision-language systems.
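To make the dual-path training idea concrete, here is a minimal sketch in which an early-exit projection and the full-depth head are both trained against the same text embeddings with a symmetric contrastive loss. Everything in it is an illustrative assumption rather than the paper's implementation: the toy encoder, its width, depth, exit position, and the `clip_loss` form are stand-ins.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def clip_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a matched image-text batch."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, labels)
            + F.cross_entropy(logits.t(), labels)) / 2

class DualPathViT(nn.Module):
    """Toy ViT-like encoder with one early exit. Width, depth, and the
    exit position are placeholders, not the paper's configuration."""
    def __init__(self, dim=64, depth=8, exit_layer=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
            for _ in range(depth))
        self.exit_layer = exit_layer
        self.exit_head = nn.Linear(dim, dim)  # projects early features into the shared space
        self.full_head = nn.Linear(dim, dim)  # standard full-depth embedding head

    def forward(self, x):
        early = None
        for i, blk in enumerate(self.blocks):
            x = blk(x)
            if i + 1 == self.exit_layer:
                early = self.exit_head(x[:, 0])  # early-exit CLS embedding
        full = self.full_head(x[:, 0])           # full-depth CLS embedding
        return early, full

# One training step: both exit paths are aligned to the SAME text targets,
# so an early-exit embedding stays compatible with text matching.
model = DualPathViT()
patches = torch.randn(8, 16, 64)   # stand-in for embedded image patches (CLS at index 0)
txt_emb = torch.randn(8, 64)       # stand-in for text-encoder output
early, full = model(patches)
loss = clip_loss(full, txt_emb) + clip_loss(early, txt_emb)
loss.backward()
```

Because both paths share the text targets, an early-exit embedding and a full-depth embedding occupy the same semantic space, which is what lets retrieval proceed without a reranking stage.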
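At inference time, the complexity score would select the exit path per image. The sketch below, which reuses the toy `DualPathViT` and `patches` from the previous sketch, shows such routing with a stand-in scorer in place of ConvNeXt-IC, whose interface the abstract does not specify. For brevity it computes both paths and selects afterwards; a real implementation would stop at the exit layer for simple images, which is where the compute saving comes from.

```python
import torch

@torch.no_grad()
def encode_adaptive(model, complexity_net, patches, images, threshold=0.5):
    """Route each image by predicted complexity. This sketch runs both
    paths and selects afterwards; a real implementation would skip the
    remaining transformer blocks for simple images."""
    scores = complexity_net(images)              # assumed in [0, 1]; higher = more complex
    early, full = model(patches)                 # DualPathViT from the sketch above
    keep_full = (scores > threshold).unsqueeze(-1)
    return torch.where(keep_full, full, early)   # per-image choice of embedding

# Usage with a toy scorer standing in for ConvNeXt-IC.
toy_scorer = lambda imgs: imgs.flatten(1).std(dim=1).clamp(0, 1)
images = torch.randn(8, 3, 224, 224)             # raw pixels for the scorer
embeddings = encode_adaptive(model, toy_scorer, patches, images)
```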