Please use this identifier to cite or link to this item:
Title: Bringing AI to edge : from deep learning's perspective
Authors: Liu, Di
Kong, Hao
Luo, Xiangzhong
Liu, Weichen
Subramaniam, Ravi
Keywords: Computer Science - Learning
Computer Science - Artificial Intelligence
Issue Date: 2022
Source: Liu, D., Kong, H., Luo, X., Liu, W. & Subramaniam, R. (2022). Bringing AI to edge : from deep learning's perspective. Neurocomputing, 485, 297-320.
Journal: Neurocomputing
Abstract: Edge computing and artificial intelligence (AI), especially deep learning, are gradually intersecting to build a novel system, called edge intelligence. However, the development of edge intelligence systems encounters some challenges, and one of these challenges is the "computational gap" between computation-intensive deep learning algorithms and less-capable edge systems. Due to the computational gap, many edge intelligence systems cannot meet the expected performance requirements. To bridge the gap, a plethora of deep learning techniques and optimization methods have been proposed in recent years: light-weight deep learning models, network compression, and efficient neural architecture search. Although some reviews and surveys have partially covered this large body of literature, we lack a systematic and comprehensive review that discusses all aspects of these deep learning techniques critical for edge intelligence implementation. As various and diverse methods applicable to edge systems are being proposed intensively, a holistic review would enable edge computing engineers and the community to learn about state-of-the-art deep learning techniques instrumental for edge intelligence and would facilitate the development of edge intelligence systems. This paper surveys the representative and latest deep learning techniques that are useful for edge intelligence systems, including hand-crafted models, model compression, hardware-aware neural architecture search, and adaptive deep learning models. Finally, based on our observations and simple experiments we conducted, we discuss some future directions.
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2021.04.141
Rights: © 2021 Elsevier B.V. All rights reserved. This paper was published in Neurocomputing and is made available with permission of Elsevier B.V.
Fulltext Permission: embargo_20240514
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Journal Articles

Files in This Item:
File: Adobe PDF, 1.33 MB — under embargo until May 14, 2024


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.