Full metadata record
DC Field | Value | Language
dc.contributor.author | Li, Yuanlong | en_US
dc.contributor.author | Wen, Yonggang | en_US
dc.contributor.author | Tao, Dacheng | en_US
dc.contributor.author | Guan, Kyle | en_US
dc.identifier.citation | Li, Y., Wen, Y., Tao, D. & Guan, K. (2020). Transforming cooling optimization for green data center via deep reinforcement learning. IEEE Transactions on Cybernetics, 50(5), 2002-2013. | -
dc.description.abstract | Data centers (DCs) play an important role in supporting services such as e-commerce and cloud computing. The energy consumption of this growing market has drawn significant attention, and notably almost half of the energy cost goes to cooling the DC to a target temperature. It is thus a critical operational challenge to curb the cooling energy cost without sacrificing the thermal safety of a DC. Existing solutions typically follow a two-step approach, in which the system is first modeled based on expert knowledge and the operational actions are then determined with heuristics and/or best practices. These approaches are often hard to generalize and may yield suboptimal performance due to intrinsic model errors in large-scale systems. In this paper, we propose optimizing DC cooling control via the emerging deep reinforcement learning (DRL) framework. Compared with existing approaches, our solution offers an end-to-end cooling control algorithm (CCA) via an off-policy, offline version of the deep deterministic policy gradient (DDPG) algorithm, in which an evaluation network is trained to predict the DC energy cost along with the resulting cooling effects, and a policy network is trained to produce optimized control settings. Moreover, we introduce a de-underestimation (DUE) validation mechanism for the critic network to reduce the potential underestimation of risk caused by neural approximation. Our proposed algorithm is evaluated on an EnergyPlus simulation platform and on a real data trace collected from the National Super Computing Centre (NSCC) of Singapore. The numerical results show that the proposed CCA achieves up to 11% cooling cost reduction on the simulation platform compared with a manually configured baseline control algorithm. In the conservative trace-based study, the proposed algorithm achieves about 15% cooling energy savings on the NSCC data trace. Our pioneering approach can shed new light on applying DRL to optimize and automate DC operations and management, potentially bringing intelligence to digital infrastructure management. | en_US
dc.relation.ispartof | IEEE Transactions on Cybernetics | en_US
dc.rights | © 2019 IEEE. All rights reserved. | en_US
dc.subject | Engineering::Computer science and engineering | en_US
dc.title | Transforming cooling optimization for green data center via deep reinforcement learning | en_US
dc.type | Journal Article | en
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.subject.keywords | Data Center (DC) Cooling Optimization | en_US
dc.subject.keywords | Deep Learning | en_US
dc.description.acknowledgement | date of current version April 15, 2020. This work was supported in part by the Green Data Centre Research Project, administered by the Singapore Infocomm and Media Development Authority. This paper was recommended by Associate Editor Y. Zhang. | en_US
item.fulltext | No Fulltext | -
Appears in Collections: SCSE Journal Articles
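The abstract's two-network setup (an evaluation network trained offline to predict cost, then a policy network trained against it) can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: the linear-in-features critic, the simulated quadratic cooling cost, the state/action dimensions, and the learning rates are all assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: state = 4 DC sensor readings, action = 2 cooling setpoints.
S_DIM, A_DIM = 4, 2

def features(s, a):
    # Quadratic features so the critic can represent a cost with a minimum in a.
    return np.concatenate([s ** 2, a ** 2, a * s[:A_DIM], a])

w = np.zeros(S_DIM + 3 * A_DIM)   # critic ("evaluation network") weights
P = np.zeros((A_DIM, S_DIM))      # deterministic policy weights, a = P @ s

def critic(s, a):
    return w @ features(s, a)

def policy(s):
    return P @ s

# Hypothetical offline log of (state, action) pairs standing in for DC traces;
# the observed cooling cost (lower is better) is simulated here for the demo.
batch = [(rng.normal(size=S_DIM), rng.normal(size=A_DIM)) for _ in range(256)]

def observed_cost(s, a):
    return np.sum((a - 0.5 * s[:A_DIM]) ** 2)

# 1) Train the critic offline: regress predicted cost onto observed cost (LMS).
lr = 0.005
for _ in range(300):
    for s, a in batch:
        err = critic(s, a) - observed_cost(s, a)
        w -= lr * err * features(s, a)

# 2) Train the policy against the frozen critic: step its actions downhill on
#    the critic's cost estimate (deterministic policy gradient, minimizing cost).
for _ in range(100):
    for s, _ in batch:
        a = policy(s)
        dq_da = (2 * w[S_DIM:S_DIM + A_DIM] * a
                 + w[S_DIM + A_DIM:S_DIM + 2 * A_DIM] * s[:A_DIM]
                 + w[-A_DIM:])            # dQ/da for the quadratic features above
        P -= 0.02 * np.outer(dq_da, s)    # chain rule through a = P @ s
```

After training, the policy's actions incur a much lower simulated cooling cost on the logged states than the random logged actions did; the paper's actual method replaces both linear models with deep networks and adds the DUE validation step for the critic.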

Citations 10

Updated on Dec 23, 2021




Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.