Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/143868
Title: Deep reinforcement learning for mobile 5G and beyond: fundamentals, applications, and challenges
Authors: Xiong, Zehui; Zhang, Yang; Niyato, Dusit; Deng, Ruilong; Wang, Ping; Wang, Li-Chun
Keywords: Engineering::Computer science and engineering
Issue Date: 2019
Source: Xiong, Z., Zhang, Y., Niyato, D., Deng, R., Wang, P., & Wang, L.-C. (2019). Deep Reinforcement Learning for Mobile 5G and Beyond: Fundamentals, Applications, and Challenges. IEEE Vehicular Technology Magazine, 14(2), 44–52. doi:10.1109/mvt.2019.2903655
Journal: IEEE Vehicular Technology Magazine
Abstract: Future-generation wireless networks (5G and beyond) must accommodate surging growth in mobile data traffic and support an increasingly high density of mobile users involving a variety of services and applications. Meanwhile, the networks become increasingly dense, heterogeneous, decentralized, and ad hoc in nature, and they encompass numerous and diverse network entities. Consequently, different objectives, such as high throughput and low latency, need to be achieved in terms of service, and resource allocation must be designed and optimized accordingly. However, considering the dynamics and uncertainty that inherently exist in wireless network environments, conventional approaches for service and resource management that require complete and perfect knowledge of the systems are inefficient or even inapplicable. Inspired by the success of machine learning in solving complicated control and decision-making problems, in this article we focus on deep reinforcement learning (DRL)-based approaches that allow network entities to learn and build knowledge about the networks and thus make optimal decisions locally and independently. We first overview fundamental concepts of DRL and then review related works that use DRL to address various issues in 5G networks. Finally, we present an application of DRL for 5G network slicing optimization. The numerical results demonstrate that the proposed approach achieves superior performance compared with baseline solutions.
URI: https://hdl.handle.net/10356/143868
ISSN: 1556-6072
DOI: 10.1109/MVT.2019.2903655
Rights: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/MVT.2019.2903655.
Fulltext Permission: open
Fulltext Availability: With Fulltext
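The abstract above refers to DRL agents that learn resource-allocation decisions from interaction with the network rather than from a complete system model. As a rough illustration only (not the authors' method, environment, or parameters), the sketch below applies tabular Q-learning to a hypothetical two-slice bandwidth-allocation toy problem; every state, action, reward definition, and hyperparameter is an assumption made for this example.

```python
# Minimal, hypothetical sketch of tabular Q-learning for a toy two-slice
# bandwidth-allocation problem. All names, state/reward definitions, and
# hyperparameters are illustrative assumptions, not the paper's model.
import random

ACTIONS = [2, 4, 6, 8]  # bandwidth units assigned to slice A (out of 10 total)
STATES = [(a, b) for a in range(3) for b in range(3)]  # discretized demand levels

def step(state, action):
    """Toy environment: reward favors allocations that match current demand."""
    demand_a, demand_b = state
    alloc_a, alloc_b = action, 10 - action
    # Served traffic on each slice is capped by its (assumed) demand.
    reward = min(alloc_a, 3 * demand_a + 1) + min(alloc_b, 3 * demand_b + 1)
    next_state = (random.randint(0, 2), random.randint(0, 2))  # new random demands
    return next_state, reward

# Q-table and hyperparameters (assumed values).
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

state = random.choice(STATES)
for _ in range(20000):
    # Epsilon-greedy action selection.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    # Standard Q-learning update.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

# Inspect the learned allocation for each demand state.
for s in STATES:
    print(s, max(ACTIONS, key=lambda a: Q[(s, a)]))
```

A deep RL variant would replace the Q-table with a neural-network approximator, which is what allows the approach to scale to the large, continuous state spaces of the 5G scenarios discussed in the article.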
Appears in Collections: SCSE Journal Articles
Files in This Item:
File | Description | Size | Format
---|---|---|---
Deep reinforcement learning for mobile 5G and beyond fundamentals applications and challenges.pdf | | 780.83 kB | Adobe PDF
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.