Training large language models (LLMs) is fundamentally constrained by limited device memory and costly inter-device communication. Although pipeline parallelism alleviates memory pressure by partitioning models across devices, it incurs activation communication overhead that scales linearly with sequence length, limiting efficiency in long-context training. Recent weight-passing approaches (e.g., WeiPipe) mitigate this by transmitting model weights instead of activations, but suffer from redundant peer-to-peer (P2P) transfers and underutilized intra-node bandwidth. We propose TawPipe, a topology-aware weight pipeline parallelism scheme that exploits the hierarchical bandwidth of distributed clusters for improved communication efficiency. TawPipe: (i) groups devices based on topology to optimize intra-node collective and inter-node P2P communication; (ii) assigns each device a fixed shard of model weights and gradients, avoiding redundant transfers; and (iii) overlaps communication with computation to hide latency. Unlike the global collective operations used in fully sharded data parallelism (FSDP), TawPipe confines most communication within node boundaries, significantly reducing cross-node traffic. Extensive experiments on up to 24 GPUs with LLaMA-style models show that TawPipe achieves superior throughput and scalability compared to state-of-the-art baselines.
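To make the topology-aware grouping in (i) concrete, the sketch below shows one way such grouping could be set up with PyTorch's `torch.distributed` API: one process group per node for intra-node collectives over weight shards, plus an inter-node P2P peer for the weight pipeline. This is an illustrative sketch under assumed conventions (one process per GPU, a fixed `gpus_per_node`, a ring over nodes), not the authors' implementation.

```python
# Illustrative sketch only (not TawPipe's actual code): group ranks by node so that
# shard collectives stay intra-node and only pipeline P2P crosses node boundaries.
# Assumes one process per GPU and an NCCL backend; all names here are hypothetical.
import torch.distributed as dist


def build_topology_groups(gpus_per_node: int):
    """Return (intra-node process group, inter-node P2P peer rank) for this rank."""
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    world_size = dist.get_world_size()
    num_nodes = world_size // gpus_per_node
    node_id = rank // gpus_per_node
    local_rank = rank % gpus_per_node

    # new_group is collective: every rank must create every group in the same order.
    intra_node_group = None
    for n in range(num_nodes):
        ranks = list(range(n * gpus_per_node, (n + 1) * gpus_per_node))
        group = dist.new_group(ranks=ranks)
        if n == node_id:
            intra_node_group = group

    # Inter-node P2P partner: the same local rank on the next node (ring over nodes),
    # so weight shards circulate across nodes without redundant transfers.
    p2p_peer = ((node_id + 1) % num_nodes) * gpus_per_node + local_rank
    return intra_node_group, p2p_peer
```

In this arrangement, high-volume shard collectives (e.g., all-gathers of weight shards) would run over `intra_node_group` on fast intra-node links, while only the pipeline's weight-passing traffic uses the cross-node `p2p_peer` link, matching the abstract's claim of confining most communication within node boundaries.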