Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/149869
Title: Auxiliary network design for local learning in deep neural networks
Authors: Peng, Jiawei
Keywords: Engineering::Electrical and electronic engineering
Issue Date: 2021
Publisher: Nanyang Technological University
Source: Peng, J. (2021). Auxiliary network design for local learning in deep neural networks. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/149869
Project: A3135-201
Abstract: The training of deep neural networks relies on the backpropagation algorithm, which consists of a forward pass, a backward pass, and a parameter update. The output of each layer is produced from the outputs of its lower layers in a sequential manner, and gradients can only flow back layer by layer. This forces the majority of the network to sit idle during training and hence leads to inefficiency, a problem known as the forward, backward, and update lockings. To break these lockings, various methods of decoupled learning have been investigated; currently, these methods either cause a significant drop in accuracy or suffer a dramatic increase in memory usage. To remove these limitations, this Final Year Project proposes a new form of decoupled learning, named the decoupled neural network training scheme with re-computation and weight prediction (DTRP). The proposed method splits a neural network into several modules and trains them synchronously on different workers. In particular, re-computation is adopted to solve the memory explosion problem, and a weight prediction scheme, realized by several proposed weight predictors, deals with the weight delay that re-computation causes. A batch compensation scheme is also explored, which allows DTRP to run faster. Experiments on various convolutional neural networks for image classification show accuracy comparable to or better than state-of-the-art methods and backpropagation. The experiments also confirm that the memory explosion problem is effectively solved and that a significant acceleration is achieved. Moreover, DTRP can be applied to train very wide as well as extremely deep networks.
URI: https://hdl.handle.net/10356/149869
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
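Since the full text is restricted, the sketch below is only one plausible reading of the abstract, not the report's actual DTRP implementation. It splits a toy network into two locally trained modules, keeps only each module's raw input so that activations can be re-computed at update time (re-computation), and compensates the resulting weight delay with a momentum-based predictor, w_hat = w - lr * D * v, which is an assumed form; the report proposes several predictors whose details are not given here. The module names, the delay D, and the helper functions are all illustrative.

```python
import collections

import torch
import torch.nn as nn

torch.manual_seed(0)

# Two locally trained modules standing in for the "several modules" of DTRP.
m1 = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
m2 = nn.Linear(256, 10)
opt1 = torch.optim.SGD(m1.parameters(), lr=0.1, momentum=0.9)
opt2 = torch.optim.SGD(m2.parameters(), lr=0.1, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

D = 1  # assumed staleness, in batches, between m1's forward and its update


def predict_weights(module, opt, steps):
    """Temporarily replace weights with a prediction of their future values.

    Assumed predictor: w_hat = w - lr * steps * momentum_buffer. Returns the
    saved weights so the true ones can be restored after the forward pass.
    """
    saved = [p.detach().clone() for p in module.parameters()]
    lr = opt.param_groups[0]["lr"]
    with torch.no_grad():
        for p in module.parameters():
            buf = opt.state.get(p, {}).get("momentum_buffer")
            if buf is not None:
                p.sub_(lr * steps * buf)
    return saved


def restore_weights(module, saved):
    with torch.no_grad():
        for p, s in zip(module.parameters(), saved):
            p.copy_(s)


pending = collections.deque()  # inputs and boundary gradients awaiting m1

for step in range(20):
    x = torch.randn(32, 1, 28, 28)  # stand-in batch of MNIST-sized images
    y = torch.randint(0, 10, (32,))

    # m1's forward runs under predicted weights; only the raw input x is
    # kept, so activations need not be stored (re-computation saves memory).
    saved = predict_weights(m1, opt1, D)
    with torch.no_grad():
        h = m1(x)
    restore_weights(m1, saved)

    # m2 trains immediately on the detached activations (local learning) and
    # records the gradient at the module boundary for m1's delayed update.
    h = h.requires_grad_(True)
    opt2.zero_grad()
    loss_fn(m2(h), y).backward()
    opt2.step()
    pending.append((x, h.grad.detach()))

    # D batches later, m1 re-computes its forward pass from the stored input
    # and applies the boundary gradient it received from m2.
    if len(pending) > D:
        x_old, g_old = pending.popleft()
        opt1.zero_grad()
        m1(x_old).backward(g_old)
        opt1.step()
```

Tying the predictor's horizon to the same staleness D used by the delayed update is the point of the pairing: the activations m2 trains on are produced under, approximately, the weights m1 will actually hold when it applies the returned gradient.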
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)
Updated on May 25, 2022
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.