Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/146260
Full metadata record
DC Field | Value | Language
dc.contributor.author | Tan, Zhi-Wei | en_US
dc.contributor.author | Nguyen, Anh Hai Trieu | en_US
dc.contributor.author | Tran, Linh T. T. | en_US
dc.contributor.author | Khong, Andy Wai Hoong | en_US
dc.date.accessioned | 2021-02-04T06:18:41Z | -
dc.date.available | 2021-02-04T06:18:41Z | -
dc.date.issued | 2020 | -
dc.identifier.citation | Tan, Z.-W., Nguyen, A. H. T., Tran, L. T. T., & Khong, A. W. H. (2020). A joint-loss approach for speech enhancement via single-channel neural network and MVDR beamformer. Proceedings of the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 841-849. | en_US
dc.identifier.uri | https://hdl.handle.net/10356/146260 | -
dc.description.abstract | Recent developments in noise reduction involve the use of neural beamforming. While some success has been achieved, these algorithms rely solely on the gain of the beamformer to enhance the noisy signals. We propose a framework comprising two stages, where the first-stage neural network aims to provide a good estimate of the signal and noise to the second-stage beamformer. We also introduce an objective function that reduces the distortion of the speech component in each stage. This objective function improves the accuracy of the second-stage beamformer by enhancing the first-stage output and, in the second stage, enhances the training of the network by propagating the gradient through the beamforming operation. A parameter is introduced to control the trade-off between optimizing these two stages. Simulation results on the CHiME-3 dataset at low SNR show that the proposed algorithm is able to exploit the enhancement gains from both the neural network and the beamformer, with improvements over baseline algorithms in terms of speech distortion, quality, and intelligibility. (An illustrative sketch of this two-stage pipeline follows the metadata record below.) | en_US
dc.description.sponsorship | National Research Foundation (NRF) | en_US
dc.language.iso | en | en_US
dc.relation | MRP14 | en_US
dc.rights | © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US
dc.subject | Engineering | en_US
dc.title | A joint-loss approach for speech enhancement via single-channel neural network and MVDR beamformer | en_US
dc.type | Conference Paper | en
dc.contributor.school | School of Electrical and Electronic Engineering | en_US
dc.contributor.conference | 2020 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC) | en_US
dc.contributor.research | ST Engineering-NTU Corporate Lab | en_US
dc.description.version | Accepted version | en_US
dc.identifier.spage | 841 | en_US
dc.identifier.epage | 849 | en_US
dc.subject.keywords | Neural Beamforming | en_US
dc.subject.keywords | Complex Spectral Mapping | en_US
dc.citation.conferencelocation | Auckland, New Zealand | en_US
dc.description.acknowledgement | This work was supported within the STE-NTU Corporate Lab with funding support from ST Engineering and the National Research Foundation (NRF) Singapore under the Corp Lab@University Scheme (Ref. MRP14) at Nanyang Technological University, Singapore. | en_US
item.grantfulltext | open | -
item.fulltext | With Fulltext | -
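The abstract above describes a two-stage pipeline: a single-channel neural network first estimates the speech and noise components, an MVDR beamformer then uses those estimates, and a joint loss with a trade-off parameter is trained through the beamforming operation. The paper itself is not reproduced in this record, so the following Python sketch is only an assumed illustration: the function names, the reference-channel MVDR formulation, the covariance shapes, and the alpha weighting are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumed): MVDR weights from estimated speech/noise spatial
# covariances, plus a weighted combination of two stage losses. Names, shapes,
# and the alpha trade-off are illustrative only, not taken from the paper.
import numpy as np

def mvdr_weights(phi_s, phi_n, ref_mic=0):
    """MVDR beamformer weights per frequency bin (reference-channel formulation).

    phi_s : (F, M, M) estimated speech spatial covariance matrices
    phi_n : (F, M, M) estimated noise spatial covariance matrices
    Returns w : (F, M) complex weights.
    """
    F, M, _ = phi_n.shape
    w = np.zeros((F, M), dtype=complex)
    u = np.zeros(M)
    u[ref_mic] = 1.0  # one-hot selector for the reference microphone
    for f in range(F):
        num = np.linalg.solve(phi_n[f], phi_s[f])   # Phi_n^{-1} Phi_s
        w[f] = (num @ u) / (np.trace(num) + 1e-8)   # normalize by the trace
    return w

def joint_loss(loss_stage1, loss_stage2, alpha=0.5):
    """Weighted sum of the two stage losses; alpha controls the trade-off."""
    return alpha * loss_stage1 + (1.0 - alpha) * loss_stage2

# Toy usage: random covariance estimates for F=4 frequency bins, M=2 microphones.
rng = np.random.default_rng(0)
F, M, T = 4, 2, 8
X = rng.standard_normal((F, M, T)) + 1j * rng.standard_normal((F, M, T))
N = rng.standard_normal((F, M, T)) + 1j * rng.standard_normal((F, M, T))
phi_s = np.einsum('fmt,fnt->fmn', X, X.conj()) / T
phi_n = np.einsum('fmt,fnt->fmn', N, N.conj()) / T + 1e-3 * np.eye(M)
w = mvdr_weights(phi_s, phi_n)
print(w.shape)                          # (4, 2)
print(joint_loss(0.8, 0.3, alpha=0.7))  # scalar combined loss
```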
Appears in Collections: EEE Conference Papers
Files in This Item:
File | Description | Size | Format
APSIPA2020_revised_from_reviewer.pdf | | 3.07 MB | Adobe PDF

Page view(s): 218 (updated on Sep 29, 2023)
Download(s): 60 (updated on Sep 29, 2023)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.