Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/153544
Full metadata record
DC Field | Value | Language
dc.contributor.author | Guo, Xu | en_US
dc.contributor.author | Li, Boyang | en_US
dc.contributor.author | Yu, Han | en_US
dc.contributor.author | Miao, Chunyan | en_US
dc.date.accessioned | 2021-12-12T07:12:13Z | -
dc.date.available | 2021-12-12T07:12:13Z | -
dc.date.issued | 2021 | -
dc.identifier.citation | Guo, X., Li, B., Yu, H. & Miao, C. (2021). Latent-optimized adversarial neural transfer for sarcasm detection. Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 5394-5407. | en_US
dc.identifier.other | https://aclanthology.org/volumes/2021.naacl-main/ | -
dc.identifier.uri | https://hdl.handle.net/10356/153544 | -
dc.description.abstract | The existence of multiple datasets for sarcasm detection prompts us to apply transfer learning to exploit their commonality. The adversarial neural transfer (ANT) framework utilizes multiple loss terms that encourage the source-domain and the target-domain feature distributions to be similar while optimizing for domain-specific performance. However, these objectives may be in conflict, which can lead to optimization difficulties and sometimes diminished transfer. We propose a generalized latent optimization strategy that allows different losses to accommodate each other and improves training dynamics. The proposed method outperforms transfer learning and meta-learning baselines. In particular, we achieve 10.02% absolute performance gain over the previous state of the art on the iSarcasm dataset. | en_US
dc.description.sponsorship | AI Singapore | en_US
dc.description.sponsorship | Nanyang Technological University | en_US
dc.description.sponsorship | National Research Foundation (NRF) | en_US
dc.language.iso | en | en_US
dc.relation | AISG2-RP-2020-019 | en_US
dc.relation | NRF-NRFI05-2019-0002 | en_US
dc.relation | NRF-NRFF13-2021-0006 | en_US
dc.relation | NWJ2020-008 | en_US
dc.relation | A20G8b0102 | en_US
dc.relation | NSC-2019-011 | en_US
dc.rights | © 2021 Association for Computational Linguistics. This is an open-access article distributed under the terms of the Creative Commons Attribution License. | en_US
dc.subject | Engineering::Computer science and engineering | en_US
dc.title | Latent-optimized adversarial neural transfer for sarcasm detection | en_US
dc.type | Conference Paper | en
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.contributor.conference | Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies | en_US
dc.contributor.research | Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly (LILY) | en_US
dc.description.version | Published version | en_US
dc.identifier.spage | 5394 | en_US
dc.identifier.epage | 5407 | en_US
dc.subject.keywords | Transfer Learning | en_US
dc.subject.keywords | Deep Learning Optimization | en_US
dc.subject.keywords | Sarcasm Detection | en_US
dc.citation.conferencelocation | Online | en_US
dc.description.acknowledgement | This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG2-RP-2020-019), NRF Investigatorship (NRF-NRFI05-2019-0002), and NRF Fellowship (NRF-NRFF13-2021-0006); the Joint NTU-WeBank Research Centre on Fintech (NWJ-2020-008); the Nanyang Assistant/Associate Professorships (NAP); the RIE 2020 Advanced Manufacturing and Engineering Programmatic Fund (A20G8b0102), Singapore; NTU-SDU-CFAIR (NSC-2019-011). | en_US
item.fulltext | With Fulltext | -
item.grantfulltext | open | -
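
The abstract above describes adversarial neural transfer (ANT), in which domain-specific task losses and a domain-confusion loss can pull the shared features in conflicting directions, and a latent optimization strategy that lets these losses accommodate each other. The sketch below is a minimal, hypothetical PyTorch illustration of that general idea only: a shared encoder trained with a gradient-reversal domain discriminator, plus a one-step look-ahead update on the latent features before the adversarial loss is evaluated. All module names, dimensions, and the specific update rule are assumptions made for illustration; this is not the authors' LOANT implementation.

```python
# Illustrative sketch only: generic adversarial transfer with a one-step
# look-ahead ("latent optimization") on the shared features. Names, sizes,
# and the update rule are assumptions, not the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients backward."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class SharedEncoder(nn.Module):
    """Maps input feature vectors (e.g. sentence encodings) to a shared latent space."""

    def __init__(self, in_dim=768, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.ReLU())

    def forward(self, x):
        return self.net(x)


def make_head(latent_dim=128, n_classes=2):
    """Linear classification head (sarcasm label or domain label)."""
    return nn.Linear(latent_dim, n_classes)


encoder = SharedEncoder()
src_head, tgt_head, dom_head = make_head(), make_head(), make_head()
params = (list(encoder.parameters()) + list(src_head.parameters())
          + list(tgt_head.parameters()) + list(dom_head.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)


def training_step(x_src, y_src, x_tgt, y_tgt, eta=0.1, lam=0.1):
    """One adversarial-transfer step with a look-ahead on the latent codes.

    The task losses are computed first; the latents are then nudged by one
    explicit gradient step that reduces those losses, and the adversarial
    (domain-confusion) loss is evaluated at the nudged latents, so the
    competing objectives take each other into account within one update.
    """
    z_src, z_tgt = encoder(x_src), encoder(x_tgt)

    # Domain-specific task losses at the current latents.
    task_loss = (F.cross_entropy(src_head(z_src), y_src)
                 + F.cross_entropy(tgt_head(z_tgt), y_tgt))

    # Look-ahead: one gradient step in latent space (not on the weights).
    g_src, g_tgt = torch.autograd.grad(task_loss, [z_src, z_tgt], create_graph=True)
    z_src_la, z_tgt_la = z_src - eta * g_src, z_tgt - eta * g_tgt

    # Adversarial loss via gradient reversal: the domain head learns to tell
    # domains apart, while the encoder receives reversed gradients that push
    # the two feature distributions together.
    z_all = GradReverse.apply(torch.cat([z_src_la, z_tgt_la]), lam)
    d_labels = torch.cat([torch.zeros(len(x_src)), torch.ones(len(x_tgt))]).long()
    adv_loss = F.cross_entropy(dom_head(z_all), d_labels)

    loss = task_loss + adv_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


# Toy usage with random tensors standing in for pre-computed sentence encodings.
x_s, y_s = torch.randn(8, 768), torch.randint(0, 2, (8,))
x_t, y_t = torch.randn(8, 768), torch.randint(0, 2, (8,))
print(training_step(x_s, y_s, x_t, y_t))
```

In a realistic setup the encoder would be a pretrained language model and the look-ahead step size and adversarial weight would be tuned per dataset; the sketch is only meant to show how a look-ahead on the latents lets the adversarial loss react to the task losses instead of being optimized independently.
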
Appears in Collections: SCSE Conference Papers
Files in This Item:
File | Description | Size | Format
2021.naacl-main.425.pdf | - | 1.01 MB | Adobe PDF

Page view(s): 55 (updated on May 24, 2022)
Download(s): 15 (updated on May 24, 2022)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.