We introduce VL-JEPA, a vision-language model built on a Joint Embedding Predictive Architecture (JEPA). Instead of autoregressively generating tokens as in classical VLMs, VL-JEPA predicts continuous embeddings of the target texts. By learning in an abstract representation space, the model focuses on task-relevant semantics while abstracting away surface-level linguistic variability. In a strictly controlled comparison against standard token-space VLM training with the same vision encoder and training data, VL-JEPA achieves stronger performance with 50% fewer trainable parameters. At inference time, a lightweight text decoder is invoked only when needed to translate VL-JEPA's predicted embeddings into text. We show that VL-JEPA natively supports selective decoding, reducing the number of decoding operations by 2.85x compared to non-adaptive uniform decoding while maintaining similar performance. Beyond generation, VL-JEPA's embedding space naturally supports open-vocabulary classification, text-to-video retrieval, and discriminative VQA without any architectural modification. Across eight video classification and eight video retrieval datasets, the average performance of VL-JEPA surpasses that of CLIP, SigLIP2, and Perception Encoder. At the same time, the model achieves performance comparable to classical VLMs (InstructBLIP, QwenVL) on four VQA datasets (GQA, TallyQA, POPE, and POPEv2) despite having only 1.6B parameters.
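To make the core idea concrete, the sketch below illustrates, under our own assumptions, what a JEPA-style vision-language objective and selective decoding gate could look like: a predictor maps visual features into a text-embedding space and is trained against targets from a frozen text encoder, and the text decoder is invoked only when the predicted embedding changes sufficiently. This is a minimal conceptual sketch, not the authors' implementation; all module sizes, the cosine-based loss, and the drift threshold are illustrative assumptions.

```python
# Conceptual sketch only: a JEPA-style setup that predicts a continuous text
# embedding rather than autoregressively generating tokens. Sizes, loss, and
# the selective-decoding threshold are assumptions, not the paper's spec.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyVLJEPA(nn.Module):
    def __init__(self, vis_dim=768, txt_dim=512):
        super().__init__()
        # Stand-ins for a pretrained vision encoder and a frozen text encoder.
        self.vision_encoder = nn.Linear(vis_dim, vis_dim)
        self.text_encoder = nn.Linear(txt_dim, txt_dim)  # provides target embeddings
        # Predictor maps pooled visual features into the text-embedding space.
        self.predictor = nn.Sequential(
            nn.Linear(vis_dim, vis_dim), nn.GELU(), nn.Linear(vis_dim, txt_dim)
        )

    def forward(self, video_feats, target_text_feats):
        z_v = self.vision_encoder(video_feats).mean(dim=1)   # (B, vis_dim), pooled over tokens
        pred = self.predictor(z_v)                            # (B, txt_dim), predicted text embedding
        with torch.no_grad():
            target = self.text_encoder(target_text_feats)     # (B, txt_dim), frozen target
        # Embedding-space objective: negative cosine similarity (an assumed choice).
        loss = 1.0 - F.cosine_similarity(pred, target, dim=-1).mean()
        return pred, loss


def selective_decode_mask(pred_embs, anchor_embs, threshold=0.1):
    """Return a mask of predictions whose embedding drifted far enough from the
    previous decoded anchor to justify calling the text decoder.
    `threshold` is a hypothetical hyperparameter for illustration."""
    drift = 1.0 - F.cosine_similarity(pred_embs, anchor_embs, dim=-1)
    return drift > threshold  # True -> send to the lightweight text decoder


# Usage sketch with random tensors standing in for real features.
model = ToyVLJEPA()
video = torch.randn(4, 16, 768)    # batch of 4 clips, 16 visual tokens each
text_feats = torch.randn(4, 512)   # stand-in features of the target texts
pred, loss = model(video, text_feats)
mask = selective_decode_mask(pred, anchor_embs=pred.roll(1, dims=0))
print(loss.item(), mask)
```

In this reading, training never touches token space: the supervision signal lives entirely in the embedding space, and decoding to text is an optional, separately gated step at inference time.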