Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/145298
Full metadata record
DC Field | Value | Language
dc.contributor.author | Nur Azila Azman | en_US
dc.date.accessioned | 2020-12-16T12:52:47Z | -
dc.date.available | 2020-12-16T12:52:47Z | -
dc.date.issued | 2020 | -
dc.identifier.uri | https://hdl.handle.net/10356/145298 | -
dc.description.abstract | Machine learning and deep learning have become ubiquitous and dominant in our society, ingraining themselves in our day-to-day lives whether we realise it or not. From smartphones to smart TVs and smart watches, everyday devices rely on forms of artificial intelligence that are easily overlooked as mere technology. In reality, the technological sphere is vastly broad, and AI is only the tip of the iceberg; deep learning is a branch of AI growing at an accelerating rate in the tech industry. In this paper, we train a Convolutional Neural Network (CNN), focusing on a single pre-trained network, MobileNet: a popular, robust and lightweight pre-trained model available in Keras. We aim to study the parameters that allow us to increase the accuracy of the pre-trained model through a process called 'fine-tuning'. From our experiments, we infer whether these parameters affect the accuracy of the output model and, if so, how significant each parameter's effect on the model's accuracy is. We also describe how the dataset and collected samples were prepared and processed to aid our study. Our experiments show that fine-tuning by retraining the last 5 layers of the pre-trained model yielded the best result, with an accuracy of 99%. We also measured that increasing the learning rate tenfold and increasing the trainable layers to 20, in two separate experiments with all other parameters held constant, both yielded poor performance of similar accuracy, approximately 56%. | en_US
dc.language.iso | en | en_US
dc.publisher | Nanyang Technological University | en_US
dc.relation | A2333-192 | en_US
dc.subject | Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision | en_US
dc.subject | Engineering::Electrical and electronic engineering::Electronic systems | en_US
dc.title | Study of fine-tuning the pre-trained deep convolutional neural networks for image recognition | en_US
dc.type | Final Year Project (FYP) | en_US
dc.contributor.supervisor | Jiang Xudong | en_US
dc.contributor.school | School of Electrical and Electronic Engineering | en_US
dc.description.degree | Bachelor of Engineering (Information Engineering and Media) | en_US
dc.contributor.supervisoremail | EXDJiang@ntu.edu.sg | en_US
item.grantfulltext | restricted | -
item.fulltext | With Fulltext | -
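The fine-tuning setup the abstract describes, freezing all but the last N layers of a pre-trained network and retraining only those, can be sketched as follows. This is a minimal illustration, not the report's code: the freezing logic is shown with plain stand-in objects so it runs without TensorFlow, and the Keras calls in the trailing comments are an assumed usage of the standard `tf.keras.applications.MobileNet` API.

```python
class Layer:
    """Stand-in for a Keras layer: each layer carries a `trainable` flag."""
    def __init__(self, name):
        self.name = name
        self.trainable = True

def freeze_all_but_last(layers, n_trainable):
    """Freeze every layer except the last `n_trainable` ones."""
    frozen = layers[:-n_trainable] if n_trainable > 0 else layers
    for layer in frozen:
        layer.trainable = False
    return layers

# Example: a 10-layer stand-in model, retraining only the last 5 layers
# (the configuration the abstract reports as best, ~99% accuracy).
model_layers = [Layer(f"conv_{i}") for i in range(10)]
freeze_all_but_last(model_layers, 5)
print([layer.trainable for layer in model_layers])
# -> [False, False, False, False, False, True, True, True, True, True]

# With Keras installed, the same idea would look like (assumed usage):
#   base = tf.keras.applications.MobileNet(weights="imagenet", include_top=False)
#   for layer in base.layers[:-5]:
#       layer.trainable = False
#   # ...then add a classification head and compile with a small learning rate.
```

Per the abstract's findings, pushing the learning rate tenfold higher or unfreezing 20 layers instead of 5 both degraded accuracy to roughly 56%, so the number of trainable layers and the learning rate are the parameters to tune conservatively.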
Appears in Collections:EEE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format
FYP Final Report.pdf (Restricted Access) | - | 2.77 MB | Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.