Title: Optimization of neural networks through high level synthesis
Authors: Liem, Jonathan Zhuan Kim
Keywords: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Issue Date: 2018
Abstract: With the increasing popularity of machine learning, coupled with growing computing power, the field of machine learning algorithms has become a dynamic and fast-growing one. The effectiveness of such applications has led to concerted efforts to embed them into other systems. However, a major drawback of machine learning algorithms is their enormous computational and space complexity, requiring large amounts of power and/or physical area to run. In embedded systems, these issues pose a problem, as size and performance are key constraints. Moreover, optimizing such solutions requires engineering at the Register Transfer Level (RTL), which is time-consuming and error-prone. In such implementations, it may be acceptable to adopt a solution that performs well enough, instead of one that is optimized down to the last bit through hand-crafted RTL designs. In this report, we implement a small-scale machine learning model, a Convolutional Neural Network (CNN) trained offline in Python, on a Field-Programmable Gate Array (FPGA), the Zedboard. This report explores combinations of compiler directives, or pragmas, which are interpreted by the High-Level Synthesis (HLS) compiler. Through these directives, the designer can influence how the solution is implemented in hardware, improving its space and computational complexity.
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: Main FYP Report (Restricted Access)
Size: 1.84 MB
Format: Adobe PDF



Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.