Title: An efficient and accurate deep learning architecture for text sentiment prediction
Authors: Tan, Jun En
Keywords: Engineering::Computer science and engineering
Issue Date: 2020
Publisher: Nanyang Technological University
Project: SCSE19-0279
Abstract: Sentiment analysis is an important process for learning individual opinions on a topic, product or service. It offers many insights into a product or service based on the polarity of its reviews, into public sentiment such as consumer confidence in an economy, and into public opinion on a political candidate or issue. Accurate and efficient sentiment analysis is important because it allows companies to create new marketing strategies, or modify existing ones, based on public opinion. The ability to gauge the sentiment of a topic means being better able to strategize and react appropriately to changes in sentiment. However, human language is complex and is often layered with figurative language, sarcasm and polysemy, the capacity of a word or phrase to have multiple meanings in different contexts. It is therefore difficult to effectively predict the sentiment of a sentence or document. We investigate the use of different deep learning approaches to the problem of sentiment classification, especially polarity detection, where a review or opinion is classified as either positive or negative. In this project, we study the different approaches used for sentiment polarity detection and create a novel deep learning architecture that attempts to maximize the accuracy of sentiment classification. We also explore new approaches that can reduce the memory footprint of the proposed model to ensure a faster and more efficient method of sentiment analysis.
Several deep learning approaches have been studied and applied in this project: transfer learning approaches in Natural Language Processing (NLP) and Transformer model blocks and their architectures, such as Bidirectional Encoder Representations from Transformers (BERT) language representation models. This project studies the use of Transformer blocks together with previous state-of-the-art deep learning components such as pre-trained word embeddings and Recurrent Neural Network blocks. An architecture combining embeddings, Transformer blocks and Long Short-Term Memory networks is proposed and compared with existing state-of-the-art deep learning approaches to sentiment prediction. Our proposed model achieved over 90% accuracy on the sentiment classification tasks without fine-tuning of hyperparameters. Although we do not achieve state-of-the-art results with the proposed model architecture, it achieves a high level of sentiment classification accuracy with a smaller memory footprint, making it more viable for commercial applications than the more memory-intensive state-of-the-art deep learning models. We further investigated the potential of the architecture by including pre-trained weights and using a larger model size; the result obtained was comparable to the current state-of-the-art accuracies for text sentiment prediction.
URI: https://hdl.handle.net/10356/138157
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
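The abstract describes a pipeline of word embeddings feeding Transformer blocks whose outputs are summarized by an LSTM before classification. A minimal PyTorch sketch of that kind of hybrid is shown below; it is an illustration of the general idea only, not the author's actual model, and all layer sizes (`vocab_size`, `embed_dim`, `nhead`, `lstm_hidden`) are assumed values.

```python
import torch
import torch.nn as nn

class HybridSentimentClassifier(nn.Module):
    """Hypothetical embedding -> Transformer -> LSTM polarity classifier.

    Layer sizes are illustrative assumptions, not values from the report.
    """
    def __init__(self, vocab_size=10000, embed_dim=128, nhead=4, lstm_hidden=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=nhead, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=1)
        self.lstm = nn.LSTM(embed_dim, lstm_hidden, batch_first=True)
        self.classifier = nn.Linear(lstm_hidden, 2)  # positive / negative logits

    def forward(self, token_ids):
        x = self.embedding(token_ids)      # (batch, seq_len, embed_dim)
        x = self.transformer(x)            # contextualize each token
        _, (h_n, _) = self.lstm(x)         # final LSTM hidden state
        return self.classifier(h_n[-1])    # (batch, 2)

model = HybridSentimentClassifier()
logits = model(torch.randint(0, 10000, (2, 16)))  # batch of 2 token sequences
print(logits.shape)
```

Keeping the Transformer stack shallow and funneling it through a small LSTM, as sketched here, is one way such a model can stay far lighter in memory than a full BERT-sized network while still modeling token context.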
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.