Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/139583
Title: Multimodal sentiment analysis using hierarchical fusion with context modeling
Authors: Majumder, Navonil
Hazarika, Devamanyu
Gelbukh, Alexander
Cambria, Erik
Poria, Soujanya
Keywords: Engineering::Computer science and engineering
Issue Date: 2018
Source: Majumder, N., Hazarika, D., Gelbukh, A., Cambria, E., & Poria, S. (2018). Multimodal sentiment analysis using hierarchical fusion with context modeling. Knowledge-Based Systems, 161, 124-133. doi:10.1016/j.knosys.2018.07.041
Journal: Knowledge-Based Systems
Abstract: Multimodal sentiment analysis is a rapidly growing field of research. A promising area of opportunity in this field is to improve the multimodal fusion mechanism. We present a novel feature fusion strategy that proceeds in a hierarchical fashion, first fusing the modalities pairwise, two at a time, and only then fusing all three modalities. On multimodal sentiment analysis of individual utterances, our strategy outperforms conventional concatenation of features by 1%, which amounts to a 5% reduction in error rate. On utterance-level multimodal sentiment analysis of multi-utterance video clips, for which current state-of-the-art techniques incorporate contextual information from other utterances of the same clip, our hierarchical fusion gives up to 2.4% improvement (almost 10% error rate reduction) over currently used concatenation. The implementation of our method is publicly available in the form of open-source code.
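The two-stage scheme described in the abstract can be sketched roughly as follows. This is a minimal illustrative toy, not the authors' architecture: the dimensionality, the fixed averaging weights, and the tanh projection are all assumptions standing in for learned network layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-utterance feature vectors for the three modalities.
D = 8
text, audio, video = (rng.standard_normal(D) for _ in range(3))

def fuse(*feats, w=None):
    """Toy fusion step: project the concatenated inputs back to D dimensions."""
    x = np.concatenate(feats)
    if w is None:
        # Stand-in for a learned weight matrix (here: simple averaging).
        w = np.ones((D, x.size)) / x.size
    return np.tanh(w @ x)

# Stage 1: fuse the modalities pairwise, two at a time.
ta = fuse(text, audio)
tv = fuse(text, video)
av = fuse(audio, video)

# Stage 2: fuse the three bimodal vectors into one trimodal representation.
trimodal = fuse(ta, tv, av)
print(trimodal.shape)  # (8,)
```

The point of the hierarchy is that each pairwise step can learn cross-modal interactions before the final trimodal step, rather than forcing a single concatenation layer to model all interactions at once.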
URI: https://hdl.handle.net/10356/139583
ISSN: 0950-7051
DOI: 10.1016/j.knosys.2018.07.041
Rights: © 2018 Elsevier B.V. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections:SCSE Journal Articles

Citation metrics:
SCOPUS™: 168 citations (updated on Mar 12, 2023)
Web of Science™: 133 citations (updated on Mar 17, 2023)
Page view(s): 212 (updated on Mar 21, 2023)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.