A master's thesis from Aalborg University


Testing, Testing - A comparative study of usability evaluation methods for mobile systems

Authors


Term

4th term (INF10 - Master Thesis)

Publication year

2007

Abstract


This thesis examines how and why researchers in Human–Computer Interaction (HCI)—the study of how people and digital systems interact—evaluate mobile systems. A review of published work shows four approaches are used most often: expert evaluations, laboratory evaluations, field evaluations, and longitudinal evaluations. Expert and lab studies usually take place outside the real context of use, while field and longitudinal studies include real-world use and, in the latter case, extend over longer periods. Guided by this review, the thesis ran evaluation sessions using all four methods on a mobile system to gather first-hand evidence and compare what each method reveals. The comparison shows that every method has different strengths and weaknesses, and that no single “best” method exists. To explore whether strengths could be combined, the thesis proposes a new approach inspired by cultural probes—simple materials that invite participants to capture aspects of their everyday use—and by insights from contextual and long-term studies. This “video probe evaluation” was designed, applied, and compared with the four established methods. It solved some known problems but introduced new ones, and is therefore useful yet not superior to existing approaches. The thesis does not recommend a single method; instead, it clarifies the trade-offs involved in choosing how to evaluate mobile systems.

[This abstract was generated with the help of AI]