Title: Algorithms and circuits for low-power machine learning IC
Authors: Thosani, Tejas Hemant
Keywords: DRNTU::Engineering::Electrical and electronic engineering
Issue Date: 2017
Abstract: The world of artificial neural networks is a fascinating field inspired by the biological model of learning. Multi-layered feed-forward networks require significant human intervention for tuning and exhibit slow processing speeds. An alternative model, a single-hidden-layer feedforward neural network with randomized input weights and hidden-layer biases, has been proposed to improve efficiency and reduce processing time by almost a thousandfold. We examine extreme learning machines, proposed by Prof. Guang-Bin Huang, in which the input weights and hidden-layer biases can be randomly assigned provided the activation functions are infinitely differentiable. We test different datasets to generate models using noisy parameters for regression, medical classification applications such as diabetes diagnosis, and speech recognition on sound data extracted from cochlear implants. We study techniques to generalize the data and optimize the hidden layer and output of the machine by tuning parameters to our needs. We also examine circuit implementations of sub-blocks of the neural network, focusing on the activation thresholding functions after optimizing them for the datasets of interest. Future research on implementing the entire neural network in hardware, and the implications of the resulting non-idealities, is discussed.
URI: http://hdl.handle.net/10356/72045
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
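The extreme learning machine described in the abstract assigns input weights and hidden biases at random and solves for the output weights in closed form, rather than iteratively back-propagating. A minimal sketch of that training procedure (not the thesis code; function names, the sigmoid activation, and the pseudoinverse solve are illustrative assumptions consistent with Huang's formulation):

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=None):
    """Train a single-hidden-layer ELM.

    Input weights W and hidden biases b are drawn at random and never
    tuned; output weights beta are found by least squares using the
    Moore-Penrose pseudoinverse of the hidden-layer output matrix H.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    # Sigmoid activation: infinitely differentiable, as the theory requires.
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    beta = np.linalg.pinv(H) @ T                     # analytic solve, no backprop
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass through the trained ELM."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

Because the only "training" is a single linear solve, fitting is orders of magnitude faster than gradient-based tuning of a comparable multi-layer network, which is the speedup the abstract refers to.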
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.