Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/146260
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Tan, Zhi-Wei | en_US |
dc.contributor.author | Nguyen, Anh Hai Trieu | en_US |
dc.contributor.author | Tran, Linh T. T. | en_US |
dc.contributor.author | Khong, Andy Wai Hoong | en_US |
dc.date.accessioned | 2021-02-04T06:18:41Z | - |
dc.date.available | 2021-02-04T06:18:41Z | - |
dc.date.issued | 2020 | - |
dc.identifier.citation | Tan, Z.-W., Nguyen, A. H. T., Tran, L. T. T., & Khong, A. W. H. (2020). A joint-loss approach for speech enhancement via single-channel neural network and MVDR beamformer. Proceedings of the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 841-849. | en_US |
dc.identifier.uri | https://hdl.handle.net/10356/146260 | - |
dc.description.abstract | Recent developments in noise reduction involve the use of neural beamforming. While some success has been achieved, these algorithms rely solely on the gain of the beamformer to enhance noisy signals. We propose a framework that comprises two stages, where the first-stage neural network aims to provide good estimates of the signal and noise to the second-stage beamformer. We also introduce an objective function that reduces the distortion of the speech component in each stage. This objective function improves the accuracy of the second-stage beamformer by enhancing the first-stage output and, in the second stage, improves the training of the network by propagating the gradient through the beamforming operation. A parameter is introduced to control the trade-off between optimizing these two stages. Simulation results on the CHiME-3 dataset at low SNR show that the proposed algorithm is able to exploit the enhancement gains from the neural network and the beamformer, with improvement over other baseline algorithms in terms of speech distortion, quality, and intelligibility. (An illustrative sketch of the MVDR stage follows the metadata table below.) | en_US |
dc.description.sponsorship | National Research Foundation (NRF) | en_US |
dc.language.iso | en | en_US |
dc.relation | MRP14 | en_US |
dc.rights | © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. | en_US |
dc.subject | Engineering | en_US |
dc.title | A joint-loss approach for speech enhancement via single-channel neural network and MVDR beamformer | en_US |
dc.type | Conference Paper | en |
dc.contributor.school | School of Electrical and Electronic Engineering | en_US |
dc.contributor.conference | 2020 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC) | en_US |
dc.contributor.research | ST Engineering-NTU Corporate Lab | en_US |
dc.description.version | Accepted version | en_US |
dc.identifier.spage | 841 | en_US |
dc.identifier.epage | 849 | en_US |
dc.subject.keywords | Neural Beamforming | en_US |
dc.subject.keywords | Complex Spectral Mapping | en_US |
dc.citation.conferencelocation | Auckland, New Zealand | en_US |
dc.description.acknowledgement | This work was supported within the STE-NTU Corporate Lab with funding support from ST Engineering and the National Research Foundation (NRF) Singapore under the Corp Lab@University Scheme (Ref. MRP14) at Nanyang Technological University, Singapore. | en_US |
item.grantfulltext | open | - |
item.fulltext | With Fulltext | - |
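The two-stage pipeline described in the abstract can be sketched as follows: the first-stage network supplies speech and noise estimates, from which per-frequency spatial covariance matrices are formed, and the second stage derives MVDR weights from them, with a single parameter trading off the two stage losses. This is a minimal illustration only, assuming the Souden-style MVDR formulation common in neural beamforming; the function names, the `alpha` trade-off weight, and the NumPy implementation are assumptions rather than the paper's code (the paper propagates gradients through the beamformer, which would require an autodiff framework).

```python
import numpy as np

def mvdr_weights(phi_s, phi_n, ref_mic=0):
    """Per-frequency MVDR weights from estimated covariances (illustrative).

    phi_s, phi_n: (F, M, M) complex speech/noise spatial covariance
    matrices for F frequency bins and M microphones, e.g. accumulated
    from the first-stage network's signal and noise estimates.
    """
    F, M, _ = phi_s.shape
    u = np.zeros(M)
    u[ref_mic] = 1.0                                # reference-mic selector
    w = np.zeros((F, M), dtype=complex)
    for f in range(F):
        num = np.linalg.solve(phi_n[f], phi_s[f])   # Phi_n^{-1} Phi_s
        w[f] = (num @ u) / np.trace(num)            # Souden MVDR weights
    return w

def beamform(w, x):
    """Apply weights w (F, M) to the multichannel STFT x (F, M, T)."""
    return np.einsum('fm,fmt->ft', w.conj(), x)

def joint_loss(loss_stage1, loss_stage2, alpha=0.5):
    """Hypothetical joint objective: alpha trades off the first-stage
    (network output) loss against the second-stage (beamformed) loss."""
    return alpha * loss_stage1 + (1.0 - alpha) * loss_stage2
```

In a training setup along these lines, both loss terms would be distortion-aware errors computed against the clean reference, and `alpha` plays the role of the trade-off parameter mentioned in the abstract.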
Appears in Collections: | EEE Conference Papers |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
APSIPA2020_revised_from_reviewer.pdf | | 3.07 MB | Adobe PDF | View/Open |
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.