This work explores the challenge of building ``Machines that Can Remember'', framing long-term memory as the problem of efficient ultra-long context modeling. We argue that this requires three key properties: \textbf{sparsity}, \textbf{random-access flexibility}, and \textbf{length generalization}. To satisfy all three, we leverage Hierarchical Sparse Attention (HSA), a novel attention mechanism, and integrate it into Transformers to build HSA-UltraLong, an 8B-parameter MoE model trained on over 8 trillion tokens. The model is rigorously evaluated on diverse tasks at both in-domain and out-of-domain context lengths to demonstrate its capability in handling ultra-long contexts. Results show that it performs comparably to full-attention baselines at in-domain lengths while achieving over 90\% accuracy on most in-context retrieval tasks with contexts of up to 16M tokens. This report outlines our experimental insights and open problems, providing a foundation for future research in ultra-long context modeling.