Title: Study of fine-tuning the pre-trained deep convolutional neural networks for image recognition
Authors: Nur Azila Azman
Keywords: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision; Engineering::Electrical and electronic engineering::Electronic systems
Issue Date: 2020
Publisher: Nanyang Technological University
Project: A2333-192
Abstract: In this era, machine learning and deep learning have become ubiquitous and dominant in society, ingraining themselves in our day-to-day lives whether we realise it or not. From smartphones to smart TVs and smart watches, everyday items employ some form of artificial intelligence that is easily overlooked as mere technology. In reality, the technological sphere is vastly broad, and AI is only the tip of the iceberg. Deep learning is a branch of AI growing at an accelerating rate in the tech industry. In this paper, we ride the trend of training a Convolutional Neural Network (CNN); specifically, we focus on a single pre-trained network called MobileNet, a popular, robust and lightweight pre-trained model available in Keras. We aim to study the parameters that allow us to increase the accuracy of the pre-trained MobileNet model through a process called 'fine-tuning'. From our experiments we infer whether these parameters affect the accuracy of the output model and, if so, the degree of significance each parameter holds. Furthermore, we describe how the dataset and samples collected were prepared and processed to aid our study. Our experiments show that fine-tuning by retraining only the last 5 layers of the pre-trained model yielded the best result, with an accuracy of 99%. We also found that increasing the learning rate tenfold and increasing the trainable layers to 20, in two separate experiments with all other parameters held constant, both yielded similarly poor performance of approximately 56% accuracy.
URI: https://hdl.handle.net/10356/145298
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)
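The fine-tuning setup described in the abstract (freezing all but the last 5 layers of MobileNet and retraining them on a new classification head) can be sketched in Keras roughly as follows. This is a minimal illustration, not the study's actual code: the class count, image size, optimizer and learning rate are assumptions, and `weights=None` is used so the sketch builds offline (the study would use `weights="imagenet"` to start from the pre-trained model).

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 2  # assumption: set to the dataset's actual class count

# Build the MobileNet base without its ImageNet classification head.
# weights=None keeps this sketch offline; use weights="imagenet" to
# fine-tune from the pre-trained model as in the study.
base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, weights=None)

# Freeze every layer except the last 5, which are retrained.
for layer in base.layers[:-5]:
    layer.trainable = False
for layer in base.layers[-5:]:
    layer.trainable = True

# Attach a new classification head for the target dataset.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

# Illustrative optimizer settings; the abstract notes that raising the
# learning rate tenfold degraded accuracy to roughly 56%.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Training would then proceed with `model.fit(...)` on the prepared dataset; varying the number of unfrozen layers (e.g. 5 vs. 20) or the learning rate reproduces the kind of parameter study the abstract reports.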
Updated on Jul 2, 2022
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.