Federated Learning: June 2019 Paper Review

2019-08-11 17:00


Adaptive Federated Learning in Resource Constrained Edge Computing Systems

S. Wang is with the IBM Thomas J. Watson Research Center, Yorktown Heights, NY 10598 USA

Commentary: Communications research is increasingly converging with machine learning, and edge computing and federated learning are current research hotspots in the communications and AI communities, respectively. The paper studies model training based on gradient descent and provides a convergence analysis. It was published in IEEE Journal on Selected Areas in Communications, vol. 37, no. 6, June 2019, and is worth reading in depth.

Appendix: paper abstract

Emerging technologies and applications including Internet of Things, social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning models are often built from the collected data, to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimentation results show that our proposed approach performs near to the optimum with various machine learning models and different data distributions.
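The local update / global aggregation tradeoff described in the abstract can be illustrated with a minimal sketch of distributed gradient descent: each edge node runs tau local gradient steps on its own data between global aggregations. This is only an illustrative toy (synthetic linear-regression data, names like `tau` and `federated_gd` are my own, and it omits the paper's adaptive control of tau under a resource budget):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data split across 3 edge nodes.
true_w = np.array([2.0, -1.0])
nodes = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    y = X @ true_w + 0.01 * rng.normal(size=40)
    nodes.append((X, y))

def local_grad(w, X, y):
    # Gradient of the mean squared error loss on one node's local data.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def federated_gd(nodes, tau=5, rounds=40, lr=0.1):
    # tau local gradient steps per node between global aggregations;
    # the paper's control algorithm would choose tau adaptively instead.
    w_global = np.zeros(2)
    sizes = np.array([len(y) for _, y in nodes], dtype=float)
    for _ in range(rounds):
        local_ws = []
        for X, y in nodes:
            w = w_global.copy()
            for _ in range(tau):  # local updates, no communication
                w -= lr * local_grad(w, X, y)
            local_ws.append(w)
        # Global aggregation: average weighted by local dataset size.
        w_global = np.average(local_ws, axis=0, weights=sizes)
    return w_global

w = federated_gd(nodes)
print(w)
```

Raising `tau` reduces communication rounds at the cost of local models drifting apart between aggregations, which is exactly the tradeoff the convergence bound quantifies.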

