The capacity of foundation models allows for their application to new, unseen tasks. Adapting a model to such tasks is called transfer learning. Imprinting is an efficient transfer learning method that circumvents parameter optimization. The conceptual differences between existing studies on imprinting form the basis of our systematic investigation. In this work, we propose the general \texttt{IMPRINT} framework, identifying three main components: generation, normalization, and aggregation. Through the lens of this framework, we conduct an in-depth analysis and comparison of existing methods. Our findings reveal the benefits of representing novel data with multiple proxies in the generation step and show the importance of proper normalization. Beyond this extensive analytical grounding, our framework enables us to propose a novel variant of imprinting that outperforms previous work on transfer learning tasks by 4\%. This variant determines proxies through clustering, a choice motivated by the neural collapse phenomenon, a connection that we draw for the first time. We publicly release our code at https://github.com/DATEXIS/IMPRINT.
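To make the three components concrete, the following is a minimal Python sketch of imprinting with multiple proxies per class, under stated assumptions: a frozen encoder has already produced the embeddings, the function names (\texttt{imprint\_proxies}, \texttt{predict}), the use of scikit-learn's k-means for the generation step, L2 normalization, and max-similarity aggregation are illustrative choices, not the exact pipeline released in the repository.

\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def imprint_proxies(embeddings, labels, n_proxies=3):
    """Generation + normalization (hypothetical sketch).

    Generation: cluster each class's embeddings into up to
    n_proxies centroids, so a class is represented by multiple
    proxies. Normalization: L2-normalize each proxy so scoring
    reduces to cosine similarity.
    """
    proxies, proxy_labels = [], []
    for c in np.unique(labels):
        class_emb = embeddings[labels == c]
        k = min(n_proxies, len(class_emb))
        centroids = KMeans(n_clusters=k, n_init=10).fit(
            class_emb).cluster_centers_
        centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)
        proxies.append(centroids)
        proxy_labels.extend([c] * k)
    return np.vstack(proxies), np.array(proxy_labels)

def predict(query_emb, proxies, proxy_labels):
    """Aggregation: take the label of the proxy with the highest
    cosine similarity to the query (max aggregation)."""
    q = query_emb / np.linalg.norm(query_emb)
    return proxy_labels[np.argmax(proxies @ q)]
\end{verbatim}

With \texttt{n\_proxies=1} this reduces to classical imprinting with class-mean proxies; increasing it yields the multi-proxy generation whose benefits the abstract highlights.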