Title: Computer vision optimization on embedded GPU board
Authors: Li, Ziyang
Keywords: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Issue Date: 2022
Publisher: Nanyang Technological University
Source: Li, Z. (2022). Computer vision optimization on embedded GPU board. Final Year Project (FYP), Nanyang Technological University, Singapore.
Project: SCSE21-0325
Abstract: Computer vision tasks such as image classification are in prevalent use and have been greatly aided by the development of deep learning techniques, in particular convolutional neural networks (CNNs). Performing such tasks on specialized embedded GPU boards offers intriguing prospects for edge computing. In this study, popular CNN architectures including GoogLeNet, ResNet and VGG were implemented on the new Jetson Xavier NX Developer Kit. The models were implemented using several deep learning frameworks, including PyTorch, TensorFlow and Caffe, the last in combination with TensorRT, Nvidia's optimization tool for inference models. The implementations were evaluated and compared on metrics including inference timing and resource utilization. The study concludes that DL-based computer vision tasks are compute-bound even on more powerful embedded GPU devices, and that the choice of framework has a significant effect on inference performance. In particular, TensorRT yields very significant improvements in inference timing and scales well across model architectures and depths.
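The timing evaluation the abstract refers to can be sketched as a simple benchmarking harness. This is an illustrative sketch, not code from the thesis: `benchmark` and `dummy_infer` are hypothetical names, and the dummy workload stands in for a real forward pass (e.g. a ResNet inference in PyTorch or a TensorRT engine execution).

```python
import time
import statistics

def benchmark(infer, n_warmup=5, n_runs=50):
    """Time a single-input inference callable.

    Warm-up iterations are excluded so one-off costs (lazy
    initialization, GPU kernel caching, TensorRT engine setup)
    do not skew the reported latency.
    """
    for _ in range(n_warmup):
        infer()
    samples = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        infer()
        samples.append((time.perf_counter() - t0) * 1e3)  # milliseconds
    return statistics.mean(samples), statistics.stdev(samples)

# Hypothetical stand-in for a model forward pass; a trivial CPU
# workload keeps the sketch self-contained and runnable anywhere.
def dummy_infer():
    sum(i * i for i in range(10_000))

mean_ms, std_ms = benchmark(dummy_infer)
print(f"latency: {mean_ms:.3f} ms +/- {std_ms:.3f} ms")
```

Averaging many timed runs after a warm-up phase is the usual way to get stable per-inference latency figures when comparing frameworks on the same hardware.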
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: Restricted Access · Size: 1.61 MB · Format: Adobe PDF


Updated on May 20, 2022




Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.