Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/150746
Title: VLC and D2D heterogeneous network optimization: a reinforcement learning approach based on equilibrium problems with equilibrium constraints

Authors: Raveendran, Neetu

Keywords: Engineering::Computer science and engineering

Issue Date: 2019

Source: Raveendran, N., Zhang, H., Niyato, D., Yang, F., Song, J. & Han, Z. (2019). VLC and D2D heterogeneous network optimization: a reinforcement learning approach based on equilibrium problems with equilibrium constraints. IEEE Transactions on Wireless Communications, 18(2), 1115-1127. https://dx.doi.org/10.1109/TWC.2018.2890057

Project: M4082187 (4080)

Journal: IEEE Transactions on Wireless Communications

Abstract: The radio frequency spectrum crunch has triggered the harnessing of other sources of bandwidth, for which visible light is a promising candidate. Even though visible light communication (VLC) ensures high capacity, its coverage is limited. This necessitates the integration of VLC and device-to-device (D2D) technologies into heterogeneous networks. In particular, mobile users that are accessible by the VLC transmitters can relay data, by means of D2D communication, to mobile users that are not. However, due to the distributed behaviors of mobile users, determining optimal data transmission routes from VLC transmitters to end mobile devices is a major challenge. In this paper, we propose a reinforcement learning (RL)-based approach to determine multi-hop data transmission routes in an indoor VLC-D2D heterogeneous network. We obtain the rewards for the RL-based method dynamically, by formulating the interactions between the mobile users relaying the data as an equilibrium problem with equilibrium constraints and using the alternating direction method of multipliers to solve it. The proposed technique can achieve optimal data transmission routes in a distributed manner. The simulation results demonstrate the effectiveness of the proposed approach, showing that transmission routes with low delays and high capacities can be achieved through the learning algorithm.

URI: https://hdl.handle.net/10356/150746

ISSN: 1536-1276

DOI: 10.1109/TWC.2018.2890057

Rights: © 2019 IEEE. All rights reserved.

Fulltext Permission: none

Fulltext Availability: No Fulltext

Appears in Collections: SCSE Journal Articles
Updated on Nov 30, 2021
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.