Learning manipulable representations of the world and its dynamics is central to AI. Joint-Embedding Predictive Architectures (JEPAs) offer a promising blueprint, but the lack of practical guidance and theory has led to ad hoc R&D. We present a comprehensive theory of JEPAs and instantiate it in {\bf LeJEPA}, a lean, scalable, and theoretically grounded training objective. First, we identify the isotropic Gaussian as the optimal distribution for JEPA embeddings to follow in order to minimize downstream prediction risk. Second, we introduce {\bf Sketched Isotropic Gaussian Regularization} (SIGReg), a novel objective that constrains embeddings to reach that ideal distribution. Combining the JEPA predictive loss with SIGReg yields LeJEPA, with numerous theoretical and practical benefits: (i) a single trade-off hyperparameter, (ii) linear time and memory complexity, (iii) stability across hyperparameters, architectures (ResNets, ViTs, ConvNets), and domains, (iv) freedom from heuristics, e.g., no stop-gradient, no teacher--student setup, no hyperparameter schedulers, and (v) a distributed-training-friendly implementation requiring only $\approx$50 lines of code. Our empirical validation covers 10+ datasets and 60+ architectures spanning varying scales and domains. As an example, pretraining on ImageNet-1k and performing linear evaluation with a frozen backbone, LeJEPA reaches 79\% with a ViT-H/14. We hope that the simplicity and theory-friendly ecosystem offered by LeJEPA will reestablish self-supervised pre-training as a core pillar of AI research (\href{https://github.com/rbalestr-lab/lejepa}{GitHub repo}).
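The SIGReg objective described above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes only that "sketching" means projecting embeddings onto random unit directions and that each 1D marginal is compared against a standard Gaussian via an Epps--Pulley-style characteristic-function discrepancy; the function name, parameters, and grid of evaluation points are all illustrative choices.

```python
import numpy as np

def sigreg_sketch(embeddings, num_directions=64, seed=0):
    """Illustrative SIGReg-style penalty (assumption, not the official code):
    project embeddings onto random unit directions and penalize the squared
    distance between each 1D marginal's empirical characteristic function
    and the characteristic function of N(0, 1)."""
    rng = np.random.default_rng(seed)
    n, d = embeddings.shape
    # Random unit-norm projection directions ("sketches").
    dirs = rng.standard_normal((d, num_directions))
    dirs /= np.linalg.norm(dirs, axis=0, keepdims=True)
    proj = embeddings @ dirs                        # shape (n, num_directions)
    # Evaluate characteristic functions on a fixed grid of frequencies t.
    t = np.linspace(-3.0, 3.0, 17)
    ecf = np.exp(1j * proj[:, :, None] * t).mean(axis=0)  # empirical CF
    gcf = np.exp(-0.5 * t**2)                             # CF of N(0, 1)
    # Average squared discrepancy over directions and frequencies.
    return float(np.mean(np.abs(ecf - gcf) ** 2))

# The full LeJEPA loss would then combine this penalty with the JEPA
# predictive loss through the single trade-off hyperparameter lambda:
#   loss = prediction_loss + lam * sigreg_sketch(embeddings)
```

Standard-Gaussian embeddings drive this penalty toward zero, while degenerate (e.g., constant) embeddings keep it large, which matches the role the abstract assigns to SIGReg: pulling embeddings toward the isotropic Gaussian without stop-gradients or teacher--student machinery.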