TY - JOUR
T1 - A survey
T2 - Distributed Machine Learning for 5G and beyond
AU - Nassef, Omar
AU - Sun, Wenting
AU - Purmehdi, Hakimeh
AU - Tatipamula, Mallik
AU - Mahmoodi, Toktam
N1 - Funding Information:
Mallik Tatipamula is the CTO at Ericsson, leading the evolution of Ericsson’s technology and championing the company’s next phase of innovation and growth driven by 5G Distributed Multi-Cloud Deployments. He also leads O-RAN and 6G research efforts. Prior to Ericsson, he held several leadership positions at F5 Networks, Juniper, Cisco, Motorola, Nortel and IIT Chennai. Since 2011, he has been a visiting professor at King’s College London. He is a Fellow of the Canadian Academy of Engineering (CAE) and The Institution of Engineering and Technology (IET). He received the “UC Berkeley Garwood Center for Corporate Innovation Award,” the “CTO/Technologist of the Year” award (sponsored by NTT) at the World Communications Awards (WCA), the “IEEE ComSoc Distinguished Industry Leader Award,” the “IET Achievement Medal in Telecommunications,” and “CTO of the Year” from the Silicon Valley Business Journal (2019–2020). He received his Ph.D., Master’s, and bachelor’s degrees from the University of Tokyo, IIT (Chennai), and the NIT, Warangal, India, respectively.
Funding Information:
The authors would like to acknowledge the many valuable discussions and suggestions provided by Arthur Brisebois and Bassant Selim, who contributed to this work.
Publisher Copyright:
© 2022 Elsevier B.V.
PY - 2022/4/22
Y1 - 2022/4/22
N2 - 5G is the fifth generation of cellular networks. It enables billions of connected devices to gather and share information in real time, a key facilitator of Industrial Internet of Things (IoT) applications. It offers greater capabilities in terms of bandwidth, latency, processing power, and the flexibility to utilize either edge or cloud resources. Furthermore, 6G is expected to be equipped with the new capability to converge ubiquitous communication, computation, sensing and control for a variety of sectors, which heightens complexity in an increasingly heterogeneous environment. This increased complexity, combined with energy-efficiency and Service Level Agreement (SLA) requirements, makes the application of Machine Learning (ML), and distributed ML in particular, necessary. A decentralized approach stemming from distributed learning is a very attractive option compared with a centralized architecture for model learning and inference. Distributed ML exploits recent Artificial Intelligence (AI) technology advancements to allow collaborative ML whilst safeguarding private data, minimizing both communication and computation overhead, and addressing ultra-low-latency requirements. In this paper, we review a number of distributed ML architectures and designs that focus on optimizing communication, computation and resource distribution. Privacy, information security and compute frameworks are also analyzed and compared with respect to different distributed ML approaches. We summarize the major contributions and trends in this area and highlight the potential of distributed ML to help researchers and practitioners make informed decisions on selecting the right ML approach for AI applications in 5G and Beyond. To enable distributed ML for 5G and Beyond, communication, security, and computing platforms often counterbalance one another; consideration and optimization of these aspects at an overall system level is therefore crucial to realize the full potential of AI for 5G and Beyond.
These aspects pertain not only to 5G; they will also enable the careful design of distributed machine learning architectures that circumvent the hurdles which will inevitably burden network generations beyond 5G. This is the first survey paper to bring together all these aspects for distributed ML.
AB - 5G is the fifth generation of cellular networks. It enables billions of connected devices to gather and share information in real time, a key facilitator of Industrial Internet of Things (IoT) applications. It offers greater capabilities in terms of bandwidth, latency, processing power, and the flexibility to utilize either edge or cloud resources. Furthermore, 6G is expected to be equipped with the new capability to converge ubiquitous communication, computation, sensing and control for a variety of sectors, which heightens complexity in an increasingly heterogeneous environment. This increased complexity, combined with energy-efficiency and Service Level Agreement (SLA) requirements, makes the application of Machine Learning (ML), and distributed ML in particular, necessary. A decentralized approach stemming from distributed learning is a very attractive option compared with a centralized architecture for model learning and inference. Distributed ML exploits recent Artificial Intelligence (AI) technology advancements to allow collaborative ML whilst safeguarding private data, minimizing both communication and computation overhead, and addressing ultra-low-latency requirements. In this paper, we review a number of distributed ML architectures and designs that focus on optimizing communication, computation and resource distribution. Privacy, information security and compute frameworks are also analyzed and compared with respect to different distributed ML approaches. We summarize the major contributions and trends in this area and highlight the potential of distributed ML to help researchers and practitioners make informed decisions on selecting the right ML approach for AI applications in 5G and Beyond. To enable distributed ML for 5G and Beyond, communication, security, and computing platforms often counterbalance one another; consideration and optimization of these aspects at an overall system level is therefore crucial to realize the full potential of AI for 5G and Beyond.
These aspects pertain not only to 5G; they will also enable the careful design of distributed machine learning architectures that circumvent the hurdles which will inevitably burden network generations beyond 5G. This is the first survey paper to bring together all these aspects for distributed ML.
KW - 5G networks
KW - Distributed inference
KW - Distributed machine learning
KW - Latency
KW - Machine Learning
UR - http://www.scopus.com/inward/record.url?scp=85125137800&partnerID=8YFLogxK
U2 - 10.1016/j.comnet.2022.108820
DO - 10.1016/j.comnet.2022.108820
M3 - Article
AN - SCOPUS:85125137800
SN - 1389-1286
VL - 207
JO - COMPUTER NETWORKS
JF - COMPUTER NETWORKS
M1 - 108820
ER -