Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/146701
Title: When Siri knows how you feel : study of machine learning in automatic sentiment recognition from human speech
Authors: Zhang, Liu; Ng, Eddie Yin Kwee
Keywords: Engineering::Mechanical engineering
Issue Date: 2018
Source: Zhang, L., & Ng, E. Y. K. (2019). When Siri knows how you feel : study of machine learning in automatic sentiment recognition from human speech. Advances in Information and Communication Networks: Proceedings of the 2018 Future of Information and Communication Conference (FICC), 2, 591-602. doi:10.1007/978-3-030-03405-4_41
Conference: 2018 Future of Information and Communication Conference (FICC)
Abstract: Opinions and sentiments are essential to human activities and have a wide variety of applications. As many decision makers turn to social media because of the large volume of opinion data available there, efficient and accurate sentiment analysis is needed to extract that data. Text sentiment analysis has therefore become a popular field and has attracted many researchers. Extracting sentiment from audio speech, however, remains a challenge. This project explored the possibility of applying supervised machine learning to recognize sentiment in English utterances at the sentence level. In addition, the project examined the effect of combining acoustic and linguistic features on classification accuracy. Six audio tracks were randomly selected as training data from 40 YouTube monologue videos with a strong presence of sentiment; the speakers expressed sentiments towards products, films, or political events. These sentiments were manually labelled as negative or positive based on the independent judgment of three experimenters. A wide range of acoustic and linguistic features were then analyzed and extracted using sound-editing and text-mining tools, respectively. A novel approach was proposed that uses a simplified sentiment score to integrate linguistic features and estimate sentiment valence; this approach improved negation analysis and hence increased overall accuracy.
Results showed that accuracy of sentiment recognition improved significantly when both linguistic and acoustic features were used, and that each of the four classifiers trained (kNN, SVM, Neural Network, and Naïve Bayes) achieved excellent prediction. Possible sources of error and inherent challenges of audio sentiment analysis are discussed to suggest directions for future research.
URI: https://hdl.handle.net/10356/146701
ISBN: 9783030034047
DOI: 10.1007/978-3-030-03405-4_41
Schools: School of Mechanical and Aerospace Engineering
Rights: © 2019 Springer Nature Switzerland AG. This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
Fulltext Permission: open
Fulltext Availability: With Fulltext
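The abstract describes a simplified sentiment score that aggregates linguistic cues and flips polarity under negation. As a rough illustration of that general idea (not the paper's actual implementation: the word lists, scoring rule, and function names below are assumptions), a minimal lexicon-based scorer with negation flipping might look like this:

```python
# Illustrative lexicon-based sentiment scorer with simple negation flipping.
# The lexicons and scoring rule here are assumptions for illustration only.

POSITIVE = {"good", "great", "excellent", "love", "amazing"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "boring"}
NEGATORS = {"not", "never", "no"}

def sentiment_score(sentence: str) -> int:
    """Sum +1/-1 per opinion word, flipping polarity after a negator."""
    score = 0
    negate = False
    for token in sentence.lower().split():
        word = token.strip(".,!?")
        if word in NEGATORS:
            negate = True          # flip polarity of the next opinion word
            continue
        polarity = 0
        if word in POSITIVE:
            polarity = 1
        elif word in NEGATIVE:
            polarity = -1
        if polarity:
            score += -polarity if negate else polarity
            negate = False         # negation applies to one opinion word
    return score

def classify(sentence: str) -> str:
    """Map the integer score to a binary sentiment label."""
    return "positive" if sentiment_score(sentence) >= 0 else "negative"
```

In the paper's pipeline, a valence estimate of this kind would serve as one linguistic feature alongside acoustic features when training the classifiers.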
Appears in Collections: MAE Conference Papers
Files in This Item:
File | Description | Size | Format
---|---|---|---
When siri knows how you feel study of machine learnging in automatic sentiment recognition from human speech.pdf | | 804.1 kB | Adobe PDF
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.