A master's thesis from Aalborg University


Environment dependent personalized hearing aid configuration

Author

Term

4th term

Publication year

2019

Submitted on

Abstract


Hearing aid users often fine‑tune their devices to improve speech understanding or reduce wind noise, but manual adjustments are time‑consuming, not easily reusable, and challenging for many users. This thesis investigates how to automate environment‑dependent, personalized configuration by learning an individual user’s preferences in specific sound environments and reapplying them when similar conditions occur. The project evaluates multiple Convolutional Neural Network (CNN) architectures and reinforcement learning algorithms and develops a combined solution in which a branched multi‑task CNN analyzes and classifies the environment while also predicting general adjustments, and Sarsa(λ) is used to interactively learn user preferences with relatively few interactions. The solution is implemented as a prototype comprising a smartphone app and a cloud component that support training sessions linking preferences to an environment representation. Targeted parameters include speech focus, noise reduction, wind noise reduction, and bass, mid, and treble gain (±6 dB), building on the ReSound Smart 3D application. Prototype tests indicate that the CNN component performs best, while the system’s overall ability to adapt to and recall user preferences needs further work. The design is considered promising, and concrete improvements are outlined as next steps toward acceptable performance.
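The abstract names Sarsa(λ) as the method for interactively learning user preferences from relatively few interactions. As a rough illustration of how eligibility traces let a single piece of user feedback update many recent state-action values at once, here is a minimal tabular Sarsa(λ) sketch. The states, actions, and reward signal are hypothetical stand-ins, not taken from the thesis: states are assumed environment labels, actions are assumed gain steps, and the reward simulates a user preferring a +1 dB adjustment in speech environments.

```python
import random

# Minimal tabular Sarsa(lambda) sketch with accumulating eligibility
# traces. All domain specifics below (state labels, action set, reward
# rule) are illustrative assumptions, not the thesis's actual design.

ALPHA, GAMMA, LAM, EPS = 0.1, 0.9, 0.8, 0.1
STATES = ["speech", "wind", "quiet"]   # assumed environment classes
ACTIONS = [-1, 0, +1]                  # assumed gain step in dB

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
E = {k: 0.0 for k in Q}                # eligibility traces

def choose(state):
    """Epsilon-greedy action selection over the assumed action set."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(s, a, r, s2, a2):
    """One Sarsa(lambda) step after observing reward r in (s, a)."""
    delta = r + GAMMA * Q[(s2, a2)] - Q[(s, a)]
    E[(s, a)] += 1.0                   # accumulating trace for (s, a)
    for k in Q:
        Q[k] += ALPHA * delta * E[k]   # credit all recently visited pairs
        E[k] *= GAMMA * LAM            # decay every trace

# Toy interaction loop: the simulated user rewards +1 dB in "speech".
random.seed(0)
s, a = "speech", +1
for _ in range(50):
    a2 = choose("speech")
    update(s, a, 1.0 if a == +1 else 0.0, "speech", a2)
    a = a2

print(max(ACTIONS, key=lambda a: Q[("speech", a)]))
```

After the loop, the learned value of the +1 dB action in the "speech" state should dominate, so the greedy choice reflects the simulated preference. In the thesis's setting the state would instead come from the CNN's environment representation and the reward from explicit user feedback during a training session.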

[This summary has been generated with the help of AI directly from the project (PDF)]