Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/163145
Full metadata record
DC Field | Value | Language
dc.contributor.author | He, Kai | en_US
dc.contributor.author | Mao, Rui | en_US
dc.contributor.author | Gong, Tieliang | en_US
dc.contributor.author | Li, Chen | en_US
dc.contributor.author | Cambria, Erik | en_US
dc.date.accessioned | 2022-11-25T02:13:47Z | -
dc.date.available | 2022-11-25T02:13:47Z | -
dc.date.issued | 2022 | -
dc.identifier.citation | He, K., Mao, R., Gong, T., Li, C. & Cambria, E. (2022). Meta-based self-training and re-weighting for aspect-based sentiment analysis. IEEE Transactions on Affective Computing, 3202831. https://dx.doi.org/10.1109/TAFFC.2022.3202831 | en_US
dc.identifier.issn | 1949-3045 | en_US
dc.identifier.uri | https://hdl.handle.net/10356/163145 | -
dc.description.abstract | Aspect-based sentiment analysis (ABSA) aims to identify fine-grained aspects, opinions, and sentiment polarities. Recent ABSA research focuses on utilizing multi-task learning (MTL) to achieve lower computational costs and better performance. However, MTL-based ABSA has certain limitations. For example, unbalanced labels and differing sub-task learning difficulties may introduce biases, such that some labels and sub-tasks overfit while others underfit. To address these issues, inspired by neuro-symbolic learning systems, we propose a meta-based self-training method with a meta-weighter (MSM). We believe that a generalizable model can be achieved through appropriate symbolic representation selection (in-domain knowledge) and effective learning control (regulation) in a neural system. Thus, MSM trains a teacher model to generate in-domain knowledge (e.g., unlabeled data selection and pseudo-label generation), where the generated pseudo-labels are used by a student model for supervised learning. Then, the meta-weighter of MSM is jointly trained with the student model to provide each instance with sub-task-specific weights that coordinate convergence rates, balance class labels, and alleviate the noise introduced by self-training. Experiments indicate that MSM can use 50% of the labeled data to achieve results comparable to state-of-the-art ABSA models, and can outperform them when all labeled data are used. | en_US
dc.language.iso | en | en_US
dc.relation.ispartof | IEEE Transactions on Affective Computing | en_US
dc.rights | © 2022 IEEE. All rights reserved. | en_US
dc.subject | Engineering::Computer science and engineering | en_US
dc.title | Meta-based self-training and re-weighting for aspect-based sentiment analysis | en_US
dc.type | Journal Article | en
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.identifier.doi | 10.1109/TAFFC.2022.3202831 | -
dc.identifier.scopus | 2-s2.0-85137543873 | -
dc.identifier.spage | 3202831 | en_US
dc.subject.keywords | Aspect-Based Sentiment Analysis | en_US
dc.subject.keywords | Meta Learning | en_US
dc.description.acknowledgement | This work has been supported by the Key Research and Development Program of Ningxia Hui Nationality Autonomous Region (2022BEG02025); the Key Research and Development Program of Shaanxi Province (2021GXLH-Z095); the RIE2020 Industry Alignment Fund – Industry Collaboration Projects (IAF-ICP) Funding Initiative; grant 61721002 from the Innovative Research Group of the National Natural Science Foundation of China; and grant IRT 17R86 from the Innovation Research Team of the Ministry of Education, Project of China Knowledge Centre for Engineering Science and Technology. | en_US
item.grantfulltext | none | -
item.fulltext | No Fulltext | -
Appears in Collections: SCSE Journal Articles
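
Illustrative note: the abstract above describes a teacher-student self-training loop in which a meta-weighter assigns each instance sub-task-specific loss weights. Below is a minimal sketch of that idea, assuming PyTorch, toy regression-style per-sub-task losses, and invented names (MetaWeighter, self_training_step); it is not the authors' MSM implementation, and the weighter's own meta-update (performed jointly with the student in the paper) is omitted.

import torch
import torch.nn as nn


class MetaWeighter(nn.Module):
    """Maps per-instance, per-sub-task losses to non-negative weights (illustrative)."""

    def __init__(self, n_subtasks: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_subtasks, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_subtasks),
            nn.Softplus(),  # keep weights positive
        )

    def forward(self, losses: torch.Tensor) -> torch.Tensor:
        # losses: (batch, n_subtasks) -> weights: (batch, n_subtasks)
        return self.net(losses)


def self_training_step(teacher, student, weighter, unlabeled_x, student_opt):
    """One simplified self-training step: the teacher pseudo-labels a batch of
    unlabeled data, and the student is trained on those pseudo-labels with
    per-instance, per-sub-task weights from the meta-weighter."""
    teacher.eval()
    with torch.no_grad():
        pseudo = teacher(unlabeled_x)          # pseudo-labels, (batch, n_subtasks)

    preds = student(unlabeled_x)               # student predictions, same shape
    per_task = (preds - pseudo) ** 2           # placeholder per-sub-task loss
    with torch.no_grad():
        weights = weighter(per_task)           # instance/sub-task weights

    loss = (weights * per_task).mean()         # weighted student objective
    student_opt.zero_grad()
    loss.backward()
    student_opt.step()
    return loss.item()


# Hypothetical usage with toy linear "models" over 3 sub-tasks:
torch.manual_seed(0)
n_subtasks, dim = 3, 16
teacher, student = nn.Linear(dim, n_subtasks), nn.Linear(dim, n_subtasks)
weighter = MetaWeighter(n_subtasks)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
print(self_training_step(teacher, student, weighter, torch.randn(8, dim), opt))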

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.