• Peter Guld Leth
4th term, Medialogy, Master (Master Programme)
This work presents a multimodal ensemble Sign Language Recognition (SLR) model that combines an n-gram linear classifier for Natural Language Processing, a vector encoding based on Euclidean distances for gesture recognition, and a fusion approach that merges the two. Furthermore, this work proposes a Virtual Reality (VR) User Interface (UI) based on prevailing usability heuristics. The SLR model achieved a mean classification accuracy of 41.5%, meaningfully below the state of the art, while the VR UI was found not to allow for a sufficient level of Adaptability. Still, there are many ways in which the components of the SLR model could be improved, and it is hoped that derivative works can build on the findings presented here to do so.
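The abstract names two core components, a Euclidean-distance vector encoding for gestures and a fusion of its output with the n-gram classifier. The snippet below is a minimal sketch of how such pieces are commonly realised, assuming gestures are given as landmark coordinates and that fusion is a weighted average of per-class probabilities; the function names, shapes, and weighting are illustrative assumptions, not taken from the report.

```python
# Hypothetical sketch (not the author's code): pairwise-distance gesture
# encoding and score-level fusion with a text classifier's probabilities.
import numpy as np

def euclidean_feature_vector(landmarks: np.ndarray) -> np.ndarray:
    """Encode one gesture frame as the pairwise Euclidean distances
    between its landmarks (e.g. 21 hand keypoints of shape (21, 3))."""
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(landmarks), k=1)  # upper triangle, no diagonal
    return dists[iu]                           # fixed-length feature vector

def late_fusion(p_ngram: np.ndarray, p_gesture: np.ndarray, w: float = 0.5) -> int:
    """Fuse class probabilities from the n-gram linear classifier and the
    gesture classifier by a weighted average and return the argmax class."""
    return int(np.argmax(w * p_ngram + (1.0 - w) * p_gesture))

# Usage example: 21 random 3-D landmarks and two mock probability vectors over 5 signs.
rng = np.random.default_rng(0)
features = euclidean_feature_vector(rng.random((21, 3)))
print(features.shape)                              # (210,) pairwise distances
print(late_fusion(np.full(5, 0.2), np.eye(5)[3]))  # gesture evidence wins -> 3
```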
Specialisation: Interaction
Language: English
Publication date: 2023
Number of pages: 23
ID: 530997839