Causal speech enhancement using dynamical-weighted loss and attention encoder-decoder recurrent neural network
dc.contributor.author | Salem, Nema | |
dc.contributor.author | Peracha, Fahad Khalil | |
dc.contributor.author | Irfan Khattak, Muhammad | |
dc.contributor.author | Saleem, Nasir | |
dc.date.accessioned | 2023-06-10T06:29:38Z | |
dc.date.available | 2023-06-10T06:29:38Z | |
dc.date.issued | 2023-05-11 | |
dc.identifier.doi | https://doi.org/10.1371/journal.pone.0285629 | en_US |
dc.identifier.uri | http://hdl.handle.net/20.500.14131/908 | |
dc.description.abstract | Speech enhancement (SE) reduces background noise in target speech and is applied at the front end of various real-world applications, including robust automatic speech recognition (ASR) and real-time mobile phone communications. SE systems are commonly integrated into mobile phones to improve speech quality and intelligibility, so a low-latency system is required for real-world operation. At the same time, these systems require efficient optimization. This research focuses on single-microphone SE operating in real-time systems with improved optimization. We propose a causal, data-driven model that uses an attention encoder-decoder long short-term memory (LSTM) network to estimate a time-frequency mask from noisy speech and recover clean speech in real-time applications that require low-latency causal processing. The proposed model combines an encoder-decoder LSTM with a causal attention mechanism. Furthermore, a dynamical-weighted (DW) loss function is proposed to improve model learning by varying the loss weights. Experiments demonstrated that the proposed model consistently improves speech quality, intelligibility, and noise suppression. In the causal processing mode, the LSTM-based estimated suppression time-frequency mask outperforms the baseline model for unseen noise types. The proposed SE improved STOI by 2.64% over the baseline LSTM-IRM, 6.6% over LSTM-KF, 4.18% over DeepXi-KF, and 3.58% over DeepResGRU-KF. In addition, we examined word error rates (WERs) using Google’s Automatic Speech Recognition (ASR). The ASR results show that error rates decreased from 46.33% (noisy signals) to 13.11% (proposed), 15.73% (LSTM), and 14.97% (LSTM-KF). | en_US |
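The abstract describes the model only at a high level. As a rough illustration of the architecture it names, the following is a minimal PyTorch sketch of a causal attention encoder-decoder LSTM that estimates a time-frequency mask, paired with a dynamically weighted loss. The class name, layer sizes, and the exact weighting function are assumptions made for illustration; this is not the authors' released implementation.

    # Hypothetical sketch, not the paper's code: a causal encoder-decoder LSTM
    # that maps a noisy magnitude spectrogram to a time-frequency mask, with
    # attention restricted to past and current frames (causal / low-latency).
    import torch
    import torch.nn as nn

    class CausalAttnMaskEstimator(nn.Module):
        def __init__(self, n_freq=257, hidden=256):
            super().__init__()
            # Unidirectional LSTMs keep the model causal frame by frame.
            self.encoder = nn.LSTM(n_freq, hidden, batch_first=True)
            self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
            self.attn = nn.Linear(hidden, hidden)      # attention projection
            self.out = nn.Linear(2 * hidden, n_freq)   # mask output layer

        def forward(self, noisy_mag):                  # (batch, frames, n_freq)
            enc, _ = self.encoder(noisy_mag)
            dec, _ = self.decoder(enc)
            # Causal attention: each frame attends only to itself and earlier
            # frames, enforced with a lower-triangular mask on the scores.
            scores = torch.matmul(self.attn(dec), enc.transpose(1, 2))
            T = scores.size(-1)
            causal = torch.tril(torch.ones(T, T, device=scores.device)).bool()
            scores = scores.masked_fill(~causal, float("-inf"))
            context = torch.matmul(torch.softmax(scores, dim=-1), enc)
            mask = torch.sigmoid(self.out(torch.cat([dec, context], dim=-1)))
            return mask                                # T-F mask in [0, 1]

    def dynamical_weighted_loss(mask_hat, mask_ref):
        # Assumed weighting: emphasize speech-dominated T-F bins by scaling
        # each bin's squared error with its target mask value. The paper's
        # actual DW loss may use a different schedule or weighting.
        w = 1.0 + mask_ref
        return torch.mean(w * (mask_hat - mask_ref) ** 2)

    # Usage on a dummy utterance (1 clip, 100 frames, 257 frequency bins):
    # model = CausalAttnMaskEstimator()
    # mask = model(torch.randn(1, 100, 257).abs())

Because the attention scores are masked to the lower triangle, the estimated mask at frame t depends only on frames up to t, which is what permits the low-latency causal processing the abstract emphasizes.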
dc.subject | Speech signal processing, Speech, Deep learning, Background noise | en_US |
dc.title | Causal speech enhancement using dynamical-weighted loss and attention encoder-decoder recurrent neural network | en_US |
dc.source.journal | PLOS ONE | en_US |
dc.source.volume | 18 | en_US |
dc.source.issue | 5 | en_US |
refterms.dateFOA | 2023-06-10T06:29:38Z | |
dc.contributor.researcher | External Collaboration | en_US |
dc.contributor.lab | NA | en_US |
dc.subject.KSA | HEALTH | en_US |
dc.contributor.ugstudent | 0 | en_US |
dc.contributor.alumnae | 0 | en_US |
dc.source.index | Scopus | en_US |
dc.source.index | WoS | en_US |
dc.contributor.department | Electrical and Computer Engineering | en_US |
dc.contributor.pgstudent | Muhammad Irfan Khattak | en_US |
dc.contributor.firstauthor | Peracha, Fahad Khalil |