The rise of disaggregated AI GPUs has exposed a critical bottleneck in large-scale attention workloads: non-uniform memory access (NUMA). As multi-chiplet designs become the norm for scaling compute capabilities, memory latency and bandwidth vary sharply across compute regions, undermining traditional GPU kernel scheduling strategies that assume uniform memory access. We identify how these NUMA effects distort locality in multi-head attention (MHA) and present Swizzled Head-first Mapping, a spatially-aware scheduling strategy that aligns attention heads with GPU NUMA domains to exploit intra-chiplet cache reuse. On AMD's MI300X architecture, our method achieves up to 50% higher performance than state-of-the-art attention algorithms that use conventional scheduling techniques, and sustains consistently high L2 cache hit rates of 80-97%. These results demonstrate that NUMA-aware scheduling is now fundamental to achieving full efficiency on next-generation disaggregated GPUs, offering a path forward for scalable AI training and inference.
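As a rough illustration of the idea, and not the paper's actual kernel, the sketch below shows one way a swizzled head-first mapping could confine each attention head to a single XCD (NUMA domain), assuming MI300X-style round-robin dispatch of workgroups across its 8 XCDs; the function name, tile counts, and divisibility assumptions are hypothetical.

```python
# Hypothetical sketch of a swizzled head-first workgroup mapping.
# Assumption: the hardware dispatches flat workgroup i to XCD (i % NUM_XCDS)
# in round-robin order, as on MI300X-class multi-chiplet GPUs.

NUM_XCDS = 8  # MI300X exposes 8 accelerator complex dies (XCDs)

def swizzled_head_first(wg_id: int, num_wgs: int, tiles_per_head: int):
    """Map a flat workgroup ID to a (head, tile) pair.

    The swizzle regroups IDs so that IDs landing on the same XCD form a
    contiguous range; filling heads within that range keeps every tile of a
    head on one XCD, so its K/V blocks can be reused from that XCD's L2.
    """
    xcd = wg_id % NUM_XCDS                # XCD this workgroup is dispatched to
    slot = wg_id // NUM_XCDS              # position within that XCD's dispatch order
    wgs_per_xcd = num_wgs // NUM_XCDS     # assumes num_wgs is divisible by NUM_XCDS
    swizzled = xcd * wgs_per_xcd + slot   # contiguous ID range per XCD
    head = swizzled // tiles_per_head     # fill all tiles of a head ("head-first")
    tile = swizzled % tiles_per_head      # ... before advancing to the next head
    return head, tile

if __name__ == "__main__":
    # Example: 32 heads x 16 query tiles = 512 workgroups.
    heads, tiles = 32, 16
    placement = {}
    for wg in range(heads * tiles):
        h, _ = swizzled_head_first(wg, heads * tiles, tiles)
        placement.setdefault(h, set()).add(wg % NUM_XCDS)
    # Each head is served by exactly one XCD, e.g. {0: [0], 1: [0], 2: [0], 3: [0]}
    print({h: sorted(x) for h, x in sorted(placement.items())[:4]})
```

Under a naive row-major mapping, consecutive workgroups of the same head would be scattered across all eight XCDs by the round-robin dispatcher, defeating intra-chiplet L2 reuse; the swizzle above inverts that dispatch pattern so head locality and chiplet locality coincide.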