Title: Fast media caching for geo-distributed data centers
Authors: Zhang, Wei
Wen, Yonggang
Liu, Fang
Chen, Yiqiang
Fan, Rui
Keywords: Engineering::Computer science and engineering
Issue Date: 2018
Source: Zhang, W., Wen, Y., Liu, F., Chen, Y., & Fan, R. (2018). Fast media caching for geo-distributed data centers. Computer Communications, 120, 46-57. doi:10.1016/j.comcom.2018.02.005
Journal: Computer Communications
Abstract: Recent years have witnessed a phenomenal increase in video traffic. Virtual content delivery networks (vCDNs) coordinate video content delivery using computing and storage resources from the cloud, distributing content to edge nodes near consumers to reduce network traffic and improve service experience. An important objective of vCDNs is operating-cost minimization. Since cloud data centers are geo-distributed, content transfer costs vary significantly across data centers: the cost is high for retrieval from distant data centers and low for nearby retrievals. Many popular caching algorithms in use today, such as LRU, do not consider cost when making caching decisions and, as a result, suffer from high data transfer costs and increased network congestion. On the other hand, cost-aware caching algorithms such as LANDLORD [1] are computationally inefficient, with time complexity scaling linearly with the amount of content in the vCDN. Such algorithms are unable to keep pace with the exponential growth of video content over time. In this paper, we propose FMC (fast media caching), a cost-aware and highly efficient caching algorithm for vCDN delivery over geo-distributed data centers. The load cost of each content item is determined by both the item's size and the distance to the data center it is loaded from. We first prove that FMC is [Formula presented] competitive under the resource augmentation paradigm, where FMC and the optimal offline adversary have k and h amounts of cache, respectively, and k ≥ h. We also show that our algorithm is straightforward and efficient, requiring only O(log m) time per cache access, where m is the number of data centers and is a small constant in practice. We conduct experimental studies on FMC using both synthetic and YouTube traces. Our results show that FMC has on average 50% and up to 66.7% lower cost than LRU. Moreover, FMC is much faster than LANDLORD, and the speedup scales linearly with cache size.
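The abstract contrasts cost-oblivious eviction (LRU) with cost-aware schemes such as LANDLORD, where the cost of reloading an item depends on its size and the data center it is fetched from. The sketch below is not the paper's FMC algorithm; it is a minimal, simplified LANDLORD-style credit cache, included only to illustrate the cost-aware eviction idea the abstract describes. All names, sizes, and costs are illustrative assumptions.

```python
# Simplified LANDLORD-style cost-aware cache (illustrative sketch, not FMC).
# Each cached item carries a "credit" initialized to its load cost; on a
# miss that needs space, every item's credit is charged in proportion to
# its size, and items whose credit hits zero are evicted. Expensive items
# (e.g., fetched from a distant data center) therefore survive longer.

class CostAwareCache:
    def __init__(self, capacity):
        self.capacity = capacity       # total cache size (arbitrary units)
        self.items = {}                # item_id -> (size, credit, cost)
        self.used = 0

    def _evict_until(self, need):
        # Charge all items at a rate proportional to size, then evict
        # those whose credit reaches zero, until `need` space is free.
        while self.used + need > self.capacity:
            delta = min(credit / size
                        for (size, credit, _) in self.items.values())
            evicted = []
            for item_id, (size, credit, cost) in self.items.items():
                credit -= delta * size
                if credit <= 1e-9:
                    evicted.append(item_id)
                else:
                    self.items[item_id] = (size, credit, cost)
            for item_id in evicted:
                size, _, _ = self.items.pop(item_id)
                self.used -= size

    def access(self, item_id, size, cost):
        """Return True on a hit; on a miss, pay `cost` and cache the item."""
        if item_id in self.items:
            s, _, c = self.items[item_id]
            self.items[item_id] = (s, c, c)   # refresh credit on a hit
            return True
        if size <= self.capacity:
            self._evict_until(size)
            self.items[item_id] = (size, cost, cost)
            self.used += size
        return False
```

Note that this naive version touches every cached item on eviction, which is exactly the linear-time behavior the abstract attributes to LANDLORD; the paper's contribution is achieving the cost-aware effect in only O(log m) time per access.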
ISSN: 0140-3664
DOI: 10.1016/j.comcom.2018.02.005
Rights: © 2018 Elsevier B.V. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections:SCSE Journal Articles

Citations 50

Updated on Mar 10, 2021

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.