Communication between agents is often the main bottleneck in distributed learning. One of the most common mitigation strategies is to compress the exchanged information, thereby reducing communication overhead. To counteract the convergence degradation caused by compressed communication, error feedback schemes -- most notably $\mathrm{EF}$ and $\mathrm{EF}^{21}$ -- were introduced. In this work, we provide a tight analysis of both methods. Specifically, for each method we find the Lyapunov function that yields the best possible convergence rate, together with matching lower bounds. This principled approach yields sharp performance guarantees and enables a rigorous, apples-to-apples comparison between $\mathrm{EF}$, $\mathrm{EF}^{21}$, and compressed gradient descent. Our analysis is carried out in the simplified single-agent setting, which allows for clean theoretical insights and a fair comparison of the underlying mechanisms.
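For orientation, here is a minimal sketch of the single-agent updates being compared, written in their standard single-node form with a compressor $\mathcal{C}$ and stepsize $\gamma$ (this notation is assumed here and need not match the paper's exact formulation): compressed gradient descent applies the compressor to the gradient directly, $\mathrm{EF}$ compresses the error-corrected step and accumulates the residual, and $\mathrm{EF}^{21}$ compresses the difference between the fresh gradient and a maintained gradient estimate.
\begin{align*}
\text{CGD:}\quad & x^{t+1} = x^t - \gamma\, \mathcal{C}\!\left(\nabla f(x^t)\right), \\
\mathrm{EF}:\quad & g^t = \mathcal{C}\!\left(e^t + \gamma \nabla f(x^t)\right), \qquad x^{t+1} = x^t - g^t, \qquad e^{t+1} = e^t + \gamma \nabla f(x^t) - g^t, \\
\mathrm{EF}^{21}:\quad & x^{t+1} = x^t - \gamma\, g^t, \qquad g^{t+1} = g^t + \mathcal{C}\!\left(\nabla f(x^{t+1}) - g^t\right).
\end{align*}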