A master's thesis from Aalborg University


Systematic user evaluation method - for Human-Robot Interaction

Author

Term

4th term

Publication year

2022

Abstract


This thesis examines how systematic user evaluation methods can inform robot design and improve human-robot interaction. A literature review first maps the data-collection methods commonly used in HRI and finds that the accompanying analyses are frequently underreported. Guided by these insights, a user evaluation in a collaborative beer-pong scenario compares three families of data-collection methods: subjective measures (e.g., observations, interviews, questionnaires), psychophysical measures, and quantitative performance measures. The datasets were analyzed separately using thematic analysis and standard paired statistical tests, then compared to assess what the methods have in common, where they differ, how they complement one another, and the time required for preparation, data collection, and analysis. Across methods, several overlapping indicators of the interaction emerged, and the methods complemented each other well; questionnaires required the most preparation time, data-collection times were roughly similar, and analysis of subjective data was the most time-consuming. The thesis discusses implications for designing HRI evaluations and offers practical guidelines for selecting and combining methods.

[This summary has been generated with the help of AI directly from the project PDF.]