Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/73067
Title: Non-verbal speech analysis of parent child dialog
Authors: Balasubramanian Sandeep
Keywords: DRNTU::Engineering::Electrical and electronic engineering
Issue Date: 2017
Abstract: In recent years, human emotion recognition has been a top priority for researchers across various domains. Although various physio-psychological parameters serve as indices of human emotion, the speech signal is considered an important parameter that reflects a person's emotional state. The importance of automated emotion recognition models can be attributed to the growing demand for socially intelligent systems. This dissertation analyses speech signals by extracting non-verbal speech features in order to recognize and classify emotions. The research was carried out on audio data recorded from different parent-child conversations, with visual stimuli provided in the form of pictures. Features were extracted from the audio data using MATLAB and the OpenSMILE toolbox, and were classified into the five classes labelled in the experiment using the WEKA tool. To achieve higher classification accuracy, different pairs of classes were chosen based on the K-means clustering algorithm and binary classification was performed. Scatter plots are presented visually, classification accuracies for the various classifier algorithms are tabulated, and the classifiers are ranked by classification accuracy.
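The pair-wise classification step described in the abstract can be sketched as follows. This is a minimal illustration, not the thesis's actual pipeline: it uses synthetic feature vectors in place of the OpenSMILE/MATLAB features, scikit-learn in place of WEKA, and an SVM as one example classifier. K-means centroids (one cluster per labelled class) suggest which pair of classes is most separable, and a binary classifier is then trained on that pair.

```python
# Hedged sketch: K-means-guided class-pair selection followed by binary
# classification. All data here are synthetic stand-ins; the real work
# used OpenSMILE/MATLAB features and WEKA classifiers.
from itertools import combinations

import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_classes, n_per_class, n_features = 5, 40, 6

# Synthetic "non-verbal speech features": one Gaussian blob per emotion class.
X = np.vstack([
    rng.normal(loc=3 * c, scale=1.0, size=(n_per_class, n_features))
    for c in range(n_classes)
])
y = np.repeat(np.arange(n_classes), n_per_class)

# Fit K-means with one cluster per labelled class.
km = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(X)

# Map each cluster to the majority true label among its members
# (cluster indices do not automatically match class labels).
cluster_label = {c: int(np.bincount(y[km.labels_ == c]).argmax())
                 for c in range(n_classes)}

# Choose the pair of clusters with the largest centroid distance,
# i.e. the most separable pair of classes.
best_clusters = max(
    combinations(range(n_classes), 2),
    key=lambda p: np.linalg.norm(km.cluster_centers_[p[0]] - km.cluster_centers_[p[1]]),
)
best_pair = tuple(sorted(cluster_label[c] for c in best_clusters))

# Binary classification restricted to the chosen pair of classes.
mask = np.isin(y, best_pair)
Xtr, Xte, ytr, yte = train_test_split(
    X[mask], y[mask], test_size=0.3, random_state=0
)
acc = SVC().fit(Xtr, ytr).score(Xte, yte)
print(f"most separable class pair: {best_pair}, accuracy: {acc:.2f}")
```

In practice one would repeat the final step for several candidate pairs and for each classifier algorithm, then tabulate and rank the accuracies as the abstract describes.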
URI: http://hdl.handle.net/10356/73067
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:EEE Theses

Files in This Item:
File: BALASUBRAMANIAN SANDEEP_ G1601824J.pdf (Restricted Access)
Description: Main article
Size: 2.09 MB
Format: Adobe PDF

Page view(s): 94 (checked on Oct 25, 2020)
Download(s): 11 (checked on Oct 25, 2020)


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.