Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/78706
Full metadata record
DC Field | Value | Language
dc.contributor.author | Zheng, Weihua | -
dc.date.accessioned | 2019-06-26T01:53:13Z | -
dc.date.available | 2019-06-26T01:53:13Z | -
dc.date.issued | 2019 | -
dc.identifier.uri | http://hdl.handle.net/10356/78706 | -
dc.description.abstract | Image description is currently an active research field. Most image description networks are trained on a single dataset and then used to describe input images. However, because different datasets follow different distributions, a network trained on one dataset rarely performs well on another. The main goal of this project is to improve description quality by supplying the network with additional information beyond the training dataset, so that a locally trained network performs better across datasets. We also add an adaptive attention mechanism to the LSTM network: whenever the network is about to generate a word, this mechanism decides whether or not to consider the image features. As a result, the sentences generated by the network are more reasonable and conform better to the image content. | en_US
dc.format.extent | 89 p. | en_US
dc.language.iso | en | en_US
dc.subject | Engineering::Electrical and electronic engineering | en_US
dc.title | Open resource-aided image analytics | en_US
dc.type | Thesis | -
dc.contributor.supervisor | Mao Kezhi | en_US
dc.contributor.school | School of Electrical and Electronic Engineering | en_US
dc.description.degree | Master of Science (Signal Processing) | en_US
item.fulltext | With Fulltext | -
item.grantfulltext | restricted | -
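The abstract's adaptive attention mechanism (deciding, for each generated word, whether to attend to image features or rely on the language model's own state) can be illustrated with a "visual sentinel" gate. The sketch below is a simplified illustration under stated assumptions, not the thesis's actual implementation: all function names are hypothetical, and the gate is reduced to a single scalar logit acting elementwise on toy vectors.

```python
import math

def sigmoid(x: float) -> float:
    """Standard logistic function."""
    return 1.0 / (1.0 + math.exp(-x))

def adaptive_context(visual_ctx, sentinel, gate_logit):
    """Blend the attended image context with a 'visual sentinel' vector.

    beta = sigmoid(gate_logit) is the sentinel gate: beta near 1 means the
    decoder relies on its language state (the sentinel) for the next word;
    beta near 0 means it attends to the image features instead.
    Names and the scalar-gate simplification are illustrative only.
    """
    beta = sigmoid(gate_logit)
    return [beta * s + (1.0 - beta) * v for v, s in zip(visual_ctx, sentinel)]

# Gate undecided (logit 0 -> beta = 0.5): an equal mix of image and sentinel.
print(adaptive_context([1.0, 0.0], [0.0, 1.0], 0.0))  # -> [0.5, 0.5]
```

In a full model the gate logit would be produced from the LSTM hidden state per time step, so the network learns when a word (e.g. "the", "of") needs no visual grounding.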
Appears in Collections: EEE Theses
Files in This Item:
File | Description | Size | Format
dissertation.pdf (Restricted Access) | - | 3 MB | Adobe PDF

Page view(s): 240 (updated on Jul 15, 2024)
Download(s): 2 (updated on Jul 15, 2024)


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.