Most studies in cross-device federated learning focus on small models, due to the server-client communication and on-device computation bottlenecks. In this work, we leverage various techniques for mitigating these bottlenecks to train larger language models in cross-device federated learning. Through systematic application of partial model training, quantization, efficient transfer learning, and communication-efficient optimizers, we are able to train a $21$M parameter Transformer and a $20.2$M parameter Conformer that achieve the same or better perplexity than a similarly sized LSTM with $\sim10\times$ smaller client-to-server communication cost, and $11\%$ lower perplexity than the smaller LSTMs commonly studied in the literature.
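Among the techniques named above, quantization of client-to-server updates is the most self-contained to illustrate. The sketch below is a minimal, hypothetical example of stochastic uniform quantization: a client compresses its model delta to 8-bit integer levels (plus a scalar offset and step size) before upload, cutting communication roughly $32/8 = 4\times$ versus 32-bit floats; the function names and values are illustrative assumptions, not the paper's implementation.

```python
import random

def quantize(values, num_bits=8, seed=0):
    """Stochastically quantize floats to 2**num_bits uniform levels.

    The client sends integer levels plus (lo, step) instead of 32-bit
    floats, reducing upload cost roughly (32 / num_bits)-fold.
    """
    rng = random.Random(seed)
    lo, hi = min(values), max(values)
    levels = 2 ** num_bits - 1
    step = (hi - lo) / levels if hi > lo else 1.0
    quantized = []
    for v in values:
        scaled = (v - lo) / step
        floor = int(scaled)
        frac = scaled - floor
        # Round up with probability equal to the fractional part, so the
        # quantization is unbiased in expectation; clamp for float safety.
        q = floor + (1 if rng.random() < frac else 0)
        quantized.append(min(q, levels))
    return quantized, lo, step

def dequantize(levels, lo, step):
    """Reconstruct approximate floats on the server side."""
    return [lo + q * step for q in levels]

# Stand-in for a client model update (hypothetical values).
delta = [i / 1000 - 0.5 for i in range(1000)]
q, lo, step = quantize(delta, num_bits=8)
recon = dequantize(q, lo, step)
max_err = max(abs(a - b) for a, b in zip(recon, delta))
```

Because each coordinate is rounded to an adjacent quantization level, the per-coordinate reconstruction error is bounded by one step, and the stochastic rounding keeps the aggregated update unbiased across many clients.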