Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/139869
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Guo, Zechuan (en_US)
dc.date.accessioned: 2020-05-22T05:47:18Z
dc.date.available: 2020-05-22T05:47:18Z
dc.date.issued: 2020
dc.identifier.uri: https://hdl.handle.net/10356/139869
dc.description.abstract: This project aims to develop an automated system for recommending thesis reviewers using Natural Language Processing (NLP) tools and state-of-the-art language models proposed in recent years. The review of theses or dissertations is crucial to assessing the research outcomes of Doctor of Philosophy (PhD) students. However, the allocation of reviewers is often challenging because of the project scope, expertise requirements and reviewer availability. In addition, key details may be overlooked when matching a thesis to reviewers, resulting in a mismatch between the research areas of the PhD student and certain reviewers, which can affect the accuracy of the final assessment. There is therefore a need for a system that recommends reviewers based on the similarity of their research fields to the thesis. This project is based on the concept of semantic text matching, which measures the semantic similarity between source and target text documents. Starting from this idea, various word embedding techniques and deep learning models for comparing document semantics were explored and tested on a dataset containing information on the research topics of PhD students and reviewers. The results of the Siamese Network served as the implementation benchmark for the dataset, and the performance of the other models was compared against this benchmark using four evaluation measures. Subsequently, ensemble learning and genetic algorithms were incorporated into the Siamese Network; the resulting model outperformed the previous Siamese Networks. This significant improvement highlights the importance of considering various learning and optimization algorithms during the modelling process. In addition, careful tuning of hyperparameters is essential for obtaining high-performing and robust language representation models. Finally, the Transformer-based language representation models BERT and ALBERT were implemented and adapted to suit the dataset. These deep bidirectional architectures outperformed all previous models and achieved state-of-the-art results on the dataset. (en_US)
dc.language.iso: en (en_US)
dc.publisher: Nanyang Technological University (en_US)
dc.relation: A3050-191 (en_US)
dc.subject: Engineering::Electrical and electronic engineering (en_US)
dc.title: Recommendation of reviewers based on text analysis and machine learning : part b (en_US)
dc.type: Final Year Project (FYP) (en_US)
dc.contributor.supervisor: Lihui CHEN (en_US)
dc.contributor.school: School of Electrical and Electronic Engineering (en_US)
dc.description.degree: Bachelor of Engineering (Electrical and Electronic Engineering) (en_US)
dc.contributor.supervisoremail: elhchen@ntu.edu.sg (en_US)
item.grantfulltext: restricted
item.fulltext: With Fulltext
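
As a reading aid for the abstract above, the sketch below illustrates the semantic text matching idea it describes: the thesis summary and each reviewer's research-topic profile are mapped to vectors, and reviewers are ranked by cosine similarity. This is only a minimal sketch, not the implementation from the report: embed_text is a hypothetical placeholder standing in for the word-embedding or BERT/ALBERT encoders mentioned in the abstract, and the reviewer profiles are invented examples.

from typing import Dict, List, Tuple
import numpy as np

def embed_text(text: str) -> np.ndarray:
    # Hypothetical placeholder encoder: deterministic random vector per text.
    # A real system would use a trained word-embedding or Transformer model.
    rng = np.random.default_rng(abs(hash(text)) % (2 ** 32))
    return rng.standard_normal(384)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Standard cosine similarity between two document vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend_reviewers(thesis_summary: str,
                        reviewer_profiles: Dict[str, str],
                        top_k: int = 3) -> List[Tuple[str, float]]:
    # Rank reviewers by semantic similarity between the thesis summary
    # (source document) and each reviewer's research-topic profile (target).
    thesis_vec = embed_text(thesis_summary)
    scores = [(name, cosine_similarity(thesis_vec, embed_text(profile)))
              for name, profile in reviewer_profiles.items()]
    return sorted(scores, key=lambda item: item[1], reverse=True)[:top_k]

if __name__ == "__main__":
    reviewers = {  # invented example profiles, not data from the report
        "Reviewer A": "natural language processing, text mining and semantic matching",
        "Reviewer B": "power electronics and electric motor drives",
    }
    print(recommend_reviewers("deep language models for semantic text matching", reviewers))

A trained Siamese Network of the kind the abstract describes would typically replace the fixed cosine score here with a similarity function learned from pairs of matched and unmatched documents.
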
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File: FYP_Report_Final_Guo_Zechuan.pdf (Restricted Access)
Size: 2.23 MB
Format: Adobe PDF
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.