A master's thesis from Aalborg University


Robustness of Keyword Spotting Using Self-Supervised Deep Learning

Author

Term

4th term

Publication year

2023

Submitted on

Pages

55

Abstract


Keyword spotting (KWS) is a subtask of automatic speech recognition that detects a small set of target words in audio on devices with limited computational resources. This thesis investigates how to improve noise robustness for the Keyword Transformer (KWT), a recent model whose robustness has not previously been studied. It combines supervised and self-supervised approaches: multi-style training (noise augmentation) and adversarial training, together with self-supervised pretraining using data2vec. Models are trained and evaluated on a reduced Google Speech Commands dataset with noise from CHiME-3 and compared against baselines with and without self-supervised pretraining. The results show that multi-style supervised training yields a significant accuracy gain under noisy conditions, whereas adversarial training offers only a marginal improvement. Moreover, data2vec pretraining on clean speech further increases robustness, especially when combined with multi-style training. The thesis provides practical training recipes and an empirical assessment of KWT noise robustness.
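Multi-style training of the kind described above typically means mixing recorded noise (e.g. CHiME-3 clips) into clean training utterances at controlled signal-to-noise ratios. A minimal sketch of that augmentation step is shown below; the function name and the power-based SNR convention are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix a noise clip into a speech clip at a target SNR in dB.

    Both inputs are 1-D float waveforms; the noise is tiled or
    trimmed to match the speech length before mixing.
    """
    if len(noise) < len(speech):
        reps = int(np.ceil(len(speech) / len(noise)))
        noise = np.tile(noise, reps)
    noise = noise[: len(speech)]

    # Scale the noise so that speech power / noise power == 10^(snr_db/10).
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise
```

During training, a noise clip and an SNR (often drawn at random from a range such as 0–20 dB) would be sampled per utterance, so the model sees each keyword under many noise conditions.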

[This summary has been generated with the help of AI directly from the project (PDF)]