Tabular foundation models such as TabPFN have revolutionized predictive machine learning for tabular data. At the same time, the driving factors behind this revolution are hard to understand. Existing open-source tabular foundation models are implemented in complicated pipelines spanning over 10,000 lines of code, lack architecture documentation, and suffer from low code quality. In short, these implementations are hard to understand, not beginner-friendly, and complicated to adapt for new experiments. We introduce nanoTabPFN, a simplified and lightweight implementation of the TabPFN v2 architecture, together with a corresponding training loop that uses pre-generated training data. nanoTabPFN makes tabular foundation models more accessible to students and researchers alike. For example, when restricted to a small-data setting, it achieves performance comparable to traditional machine learning baselines after one minute of pre-training on a single GPU (160,000x faster than TabPFN v2 pre-training). By removing the need for large computational resources, nanoTabPFN makes pre-training tabular foundation models accessible for educational purposes. Our code is available at https://github.com/automl/nanoTabPFN.