Title: Toward Wi-Fi AP-assisted content prefetching for an on-demand TV series: a learning-based approach
Authors: Hu, Wen
Keywords: Engineering::Computer science and engineering
Issue Date: 2017
Source: Hu, W., Jin, Y., Wen, Y., Wang, Z., & Sun, L. (2018). Toward Wi-Fi AP-assisted content prefetching for an on-demand TV series: a learning-based approach. IEEE Transactions on Circuits and Systems for Video Technology, 28(7), 1665-1676. doi:10.1109/TCSVT.2017.2684302
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Abstract: The emergence of smart Wi-Fi access points (APs) equipped with large storage opens a new research area: how to use these edge-network resources to improve users' quality of experience (e.g., short startup delay and smooth playback). One important research interest in this area is content prefetching, which predicts and fetches content ahead of users' requests to shift traffic away from peak periods. In practice, however, users' differing video-watching patterns and varying network connection status lead to a time-varying server load, which makes the content prefetching problem challenging. To understand this challenge, this paper first performs a large-scale measurement study of users' AP connection and TV series watching patterns using real traces. Based on the obtained insights, we formulate content prefetching as a Markov decision process whose objective is to strike a balance between the prefetching and storage cost incurred by incorrect predictions and the content download delay reduced by successful ones. A learning-based approach is proposed to solve this problem, and three other algorithms are adopted as baselines: a random algorithm that gives the performance lower bound, an ideal offline approach that gives the upper bound, and a heuristic algorithm. Finally, we design a reinforcement learning algorithm that is more practical for online operation. Through extensive trace-based experiments, we demonstrate the performance gain of our design. Remarkably, our learning-based algorithm achieves better precision and hit ratio (e.g., 80%) with about 70% (resp. 50%) cost savings compared with the random (resp. heuristic) algorithm.
URI: https://hdl.handle.net/10356/142290
ISSN: 1051-8215
DOI: 10.1109/TCSVT.2017.2684302
Rights: © 2017 IEEE. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
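The abstract above formulates prefetching as a Markov decision process solved with reinforcement learning. As a rough illustration of that idea only (this is not the paper's algorithm, and its constants, state definition, and toy environment are all invented for the sketch), the snippet below trains a tabular epsilon-greedy Q-learner whose state is (episodes already watched, network busy) and whose action is whether the AP prefetches the next episode; the reward trades an assumed prefetch cost against an assumed cache-miss delay penalty.

```python
import random

# Hypothetical constants for the sketch (not from the paper).
PREFETCH_COST = 1.0   # assumed bandwidth/storage cost per prefetch
DELAY_PENALTY = 3.0   # assumed penalty when a request misses the AP cache
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def simulate_step(state, action, rng):
    """Toy environment: the more episodes a user has watched, the more
    likely they request the next one (a made-up watching pattern)."""
    watched, busy = state
    p_request = min(0.9, 0.2 + 0.1 * watched)
    requested = rng.random() < p_request
    reward = 0.0
    if action == 1:                       # action 1 = prefetch
        reward -= PREFETCH_COST * (2.0 if busy else 1.0)
    if requested and action == 0:         # action 0 = skip -> cache miss
        reward -= DELAY_PENALTY
    next_state = (min(watched + requested, 5), rng.random() < 0.3)
    return next_state, reward

def train(episodes=2000, seed=0):
    """Tabular Q-learning over (state, action) pairs."""
    rng = random.Random(seed)
    q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        state = (0, False)
        for _ in range(10):               # ten decision slots per session
            if rng.random() < EPSILON:    # explore
                action = rng.randrange(2)
            else:                         # exploit current estimate
                action = max((0, 1), key=lambda a: q.get((state, a), 0.0))
            next_state, reward = simulate_step(state, action, rng)
            best_next = max(q.get((next_state, a), 0.0) for a in (0, 1))
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
            state = next_state
    return q

q = train()
```

The epsilon-greedy loop mirrors the online setting the abstract mentions: the AP keeps acting on its current estimates while occasionally exploring, so no offline oracle of future requests is needed.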
|Appears in Collections:||SCSE Journal Articles|
Updated on Mar 10, 2021
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.