Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/149202
Title: Syntactically meaningful and transferable recursive neural networks for aspect and opinion extraction
Authors: Wang, Wenya
Pan, Sinno Jialin
Keywords: Engineering::Computer science and engineering
Issue Date: 2020
Source: Wang, W. & Pan, S. J. (2020). Syntactically meaningful and transferable recursive neural networks for aspect and opinion extraction. Computational Linguistics, 45(4), 705-736. https://dx.doi.org/10.1162/coli_a_00362
Project: M4081532.020 
MOE2016-T2-2-060 
Journal: Computational Linguistics 
Abstract: In fine-grained opinion mining, extracting aspect terms (a.k.a. opinion targets) and opinion terms (a.k.a. opinion expressions) from user-generated texts is the most fundamental task for generating structured opinion summaries. Existing studies have shown that the syntactic relations between aspect and opinion words play an important role in aspect and opinion term extraction. However, most of these works either relied on predefined rules or separated relation mining from feature learning. Moreover, they focused only on single-domain extraction and thus failed to adapt well to other domains of interest where only unlabeled data are available. In real-world scenarios, annotated resources are extremely scarce for many domains, motivating knowledge-transfer strategies from labeled source domain(s) to any unlabeled target domain. We observe that the syntactic relations among the target words to be extracted are not only crucial for single-domain extraction, but also serve as invariant “pivot” information to bridge the gap between different domains. In this article, we explore the construction of recursive neural networks based on the dependency tree of each sentence to associate syntactic structure with feature learning. Furthermore, we construct transferable recursive neural networks to automatically learn domain-invariant, fine-grained interactions among aspect words and opinion words. The transferability is built on an auxiliary task and a conditional domain adversarial network that together reduce domain distribution differences in the hidden space effectively at the word level through syntactic relations. Specifically, the auxiliary task builds structural correspondences across domains by predicting the dependency relation for each path of the dependency tree in the recursive neural network. The conditional domain adversarial network helps to learn a domain-invariant hidden representation for each word conditioned on the syntactic structure. Finally, we integrate the recursive neural network with a sequence labeling classifier on top that models contextual influence in the final predictions. Extensive experiments and analyses are conducted to demonstrate the effectiveness of the proposed model and of each component on three benchmark data sets.
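As a rough illustration of the architecture the abstract describes (not the authors' released code), the minimal PyTorch sketch below shows two of its central ideas: composing hidden states upward along dependency-tree edges, and predicting each edge's dependency relation as an auxiliary task. All class and parameter names (DepTreeRecNN, compose, rel_clf, etc.) are hypothetical, and the conditional domain adversarial component is omitted.

```python
# Illustrative sketch only: a dependency-tree recursive network with an
# auxiliary dependency-relation classifier, loosely following the abstract.
# Names and design choices here are assumptions, not the authors' code.
import torch
import torch.nn as nn

class DepTreeRecNN(nn.Module):
    def __init__(self, vocab_size, emb_dim, hid_dim, num_relations, num_tags):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.word_proj = nn.Linear(emb_dim, hid_dim)
        # One composition matrix shared over all dependency edges
        # (relation-specific matrices would be a natural refinement).
        self.compose = nn.Linear(hid_dim, hid_dim, bias=False)
        # Auxiliary task: predict the dependency relation of the edge
        # above each word, giving a cross-domain structural signal.
        self.rel_clf = nn.Linear(hid_dim, num_relations)
        # Word-level tagger (e.g., BIO tags for aspect/opinion terms);
        # the paper stacks a sequence-labeling classifier here instead.
        self.tag_clf = nn.Linear(hid_dim, num_tags)

    def forward(self, token_ids, heads):
        # token_ids: LongTensor [n]; heads[i] is the index of word i's
        # head in the dependency tree, or -1 for the root.
        states = list(torch.tanh(self.word_proj(self.embed(token_ids))))
        for i in post_order(heads):          # children before their heads
            if heads[i] >= 0:                # pass child state up the tree
                states[heads[i]] = states[heads[i]] + torch.tanh(
                    self.compose(states[i]))
        hidden = torch.stack(states)         # [n, hid_dim]
        return self.tag_clf(hidden), self.rel_clf(hidden)

def post_order(heads):
    # Order node indices so that every child precedes its head.
    children = [[] for _ in heads]
    root = 0
    for i, h in enumerate(heads):
        if h < 0:
            root = i
        else:
            children[h].append(i)
    out, stack = [], [root]
    while stack:
        node = stack.pop()
        out.append(node)
        stack.extend(children[node])
    return out[::-1]                         # leaves first, root last
```

In the full model as summarized above, the hidden states would additionally feed a conditional domain adversarial discriminator (to align source- and target-domain representations) and a sequence-labeling classifier that models contextual influence over the final tag predictions.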
URI: https://hdl.handle.net/10356/149202
ISSN: 0891-2017
DOI: 10.1162/coli_a_00362
Schools: School of Computer Science and Engineering 
Rights: © 2019 Association for Computational Linguistics. Published under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Journal Articles

Files in This Item:
File: coli_a_00362.pdf (910.1 kB, Adobe PDF)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.