Please use this identifier to cite or link to this item:
Title: JALAD: joint accuracy- and latency-aware deep structure decoupling for edge-cloud execution
Authors: Li, Hongshan
Hu, Chenghao
Jiang, Jingyan
Wang, Zhi
Wen, Yonggang
Zhu, Wenwu
Keywords: Engineering::Computer science and engineering
Issue Date: 2019
Source: Li, H., Hu, C., Jiang, J., Wang, Z., Wen, Y., & Zhu, W. (2018). JALAD : joint accuracy- and latency-aware deep structure decoupling for edge-cloud execution. Proceedings of the 2018 IEEE 24th International Conference on Parallel and Distributed Systems (ICPADS), 671-678. doi:10.1109/PADSW.2018.8645013
Abstract: Recent years have witnessed rapid growth in deep-network-based services and applications. A practical and critical problem has thus emerged: how to deploy deep neural network models so that they can be executed efficiently. Conventional cloud-based approaches run the deep models in data center servers, incurring large latency because a significant amount of data must be transferred from the network edge to the data center. In this paper, we propose JALAD, a joint accuracy- and latency-aware execution framework that decouples a deep neural network so that one part runs on edge devices and the other part in the conventional cloud, while only a minimal amount of data is transferred between them. Though the idea seems straightforward, we face several challenges: i) how to find the best partition of a deep structure; ii) how to deploy the component at an edge device that has only limited computation power; and iii) how to minimize the overall execution latency. Our answers to these questions are a set of strategies in JALAD, including 1) a normalization-based in-layer data compression strategy that jointly considers compression rate and model accuracy; 2) a latency-aware deep decoupling strategy that minimizes the overall execution latency; and 3) an edge-cloud structure adaptation strategy that dynamically changes the decoupling under different network conditions. Experiments demonstrate that our solution significantly reduces execution latency: it speeds up the overall inference while bounding the loss in model accuracy.
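To make the latency-aware decoupling idea concrete, here is a minimal sketch of choosing a split point in a layered network. It assumes a hypothetical per-layer profile (edge execution time, cloud execution time, and compressed output size per layer) and a measured uplink bandwidth; the function names and the simple sum-based latency model are illustrative, not the paper's exact formulation.

```python
def best_split(edge_ms, cloud_ms, out_mb, input_mb, mbps):
    """Pick the layer index after which to hand execution off to the cloud.

    edge_ms[i]  : edge execution time of layer i (ms)
    cloud_ms[i] : cloud execution time of layer i (ms)
    out_mb[i]   : compressed output size of layer i (MB)
    input_mb    : size of the raw input (MB)
    mbps        : uplink bandwidth (megabits per second)

    Split k means layers 0..k run on the edge and layers k+1.. in the
    cloud; k = -1 means the raw input is sent to the cloud (pure cloud
    execution). Returns (k, total_latency_ms).
    """
    n = len(edge_ms)
    # Baseline: run everything in the cloud (upload the raw input).
    best_k = -1
    best_lat = input_mb * 8 / mbps * 1000 + sum(cloud_ms)
    for k in range(n):
        lat = (sum(edge_ms[:k + 1])              # edge compute
               + out_mb[k] * 8 / mbps * 1000     # transfer of layer k's output
               + sum(cloud_ms[k + 1:]))          # remaining cloud compute
        if lat < best_lat:
            best_k, best_lat = k, lat
    return best_k, best_lat
```

For example, with a layer whose output compresses well, the optimal split lands right after it; re-running this search whenever bandwidth changes mirrors the paper's dynamic edge-cloud structure adaptation.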
ISBN: 978-1-5386-7308-9
DOI: 10.1109/PADSW.2018.8645013
Rights: © 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at:
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Conference Papers

Files in This Item:
File: Joint Accuracy-And Latency-Aware Deep Structure Decoupling.pdf
Size: 4.25 MB
Format: Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.