Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/178253
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ye, Zhisheng | en_US |
dc.contributor.author | Gao, Wei | en_US |
dc.contributor.author | Hu, Qinghao | en_US |
dc.contributor.author | Sun, Peng | en_US |
dc.contributor.author | Wang, Xiaolin | en_US |
dc.contributor.author | Luo, Yingwei | en_US |
dc.contributor.author | Zhang, Tianwei | en_US |
dc.contributor.author | Wen, Yonggang | en_US |
dc.date.accessioned | 2024-06-10T00:51:10Z | - |
dc.date.available | 2024-06-10T00:51:10Z | - |
dc.date.issued | 2024 | - |
dc.identifier.citation | Ye, Z., Gao, W., Hu, Q., Sun, P., Wang, X., Luo, Y., Zhang, T. & Wen, Y. (2024). Deep learning workload scheduling in GPU datacenters: a survey. ACM Computing Surveys, 56(6), 146-. https://dx.doi.org/10.1145/3638757 | en_US |
dc.identifier.issn | 0360-0300 | en_US |
dc.identifier.uri | https://hdl.handle.net/10356/178253 | - |
dc.description.abstract | Deep learning (DL) has demonstrated remarkable success in a wide variety of fields. The development of a DL model is a time-consuming and resource-intensive procedure. Hence, dedicated GPU accelerators have been collectively constructed into GPU datacenters. An efficient scheduler design for a GPU datacenter is crucially important to reduce operational cost and improve resource utilization. However, traditional approaches designed for big data or high-performance computing workloads cannot enable DL workloads to fully utilize GPU resources. Recently, many schedulers have been proposed specifically for DL workloads in GPU datacenters. This article surveys existing research efforts for both training and inference workloads. We primarily present how existing schedulers facilitate the respective workloads in terms of scheduling objectives and resource utilization. Finally, we discuss several promising future research directions, including emerging DL workloads, advanced scheduling decision making, and underlying hardware resources. A more detailed summary of the surveyed papers and code links can be found at our project website: https://github.com/S-Lab-System-Group/Awesome-DL-Scheduling-Papers | en_US |
dc.description.sponsorship | Agency for Science, Technology and Research (A*STAR) | en_US |
dc.language.iso | en | en_US |
dc.relation | IAF-ICP | en_US |
dc.relation.ispartof | ACM Computing Surveys | en_US |
dc.rights | © 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM. All rights reserved. | en_US |
dc.subject | Computer and Information Science | en_US |
dc.title | Deep learning workload scheduling in GPU datacenters: a survey | en_US |
dc.type | Journal Article | en |
dc.contributor.school | College of Computing and Data Science | en_US |
dc.contributor.research | S-Lab | en_US |
dc.identifier.doi | 10.1145/3638757 | - |
dc.identifier.scopus | 2-s2.0-85188808018 | - |
dc.identifier.issue | 6 | en_US |
dc.identifier.volume | 56 | en_US |
dc.identifier.spage | 146 | en_US |
dc.subject.keywords | Deep learning systems | en_US |
dc.subject.keywords | Datacenter scheduling | en_US |
dc.description.acknowledgement | The research is supported under the National Key R&D Program of China under Grant No. 2022YFB4500701 and the RIE2020 Industry Alignment Fund - Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in kind contributions from the industry partner(s). It is also supported by the National Science Foundation of China (Nos. 62032001, 62032008, 62372011). | en_US |
item.grantfulltext | none | - |
item.fulltext | No Fulltext | - |
Appears in Collections: CCDS Journal Articles
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.