A master's thesis from Aalborg University


RECON: Et framework til remote automatiseret felttest af mobile systemer

Translated title

RECON: A framework for remote automated field evaluation of mobile systems

Authors

;

Term

10th term

Publication year

2007

Pages

114

Abstract

Mobile devices such as smartphones and PDAs are increasingly common, yet they are often evaluated with laboratory methods that do not reflect the changing context of mobile use. Field studies are more contextually grounded but can be time-consuming with uncertain added value. This thesis investigates whether automating field evaluation can improve efficiency while maintaining (or improving) effectiveness. We developed RECON, a framework for remote, automated field evaluation of mobile systems that captures usage data (via hooks in the tested application), context (through operating system services and third-party APIs), and user attitudes (via on-device surveys triggered by specific events). The framework was tested in a series of experiments addressing research questions on detecting usability problems, use patterns, attitude, distraction, position, and efficiency. Findings indicate that automation can make field evaluation more efficient, while further research is needed to determine the framework’s effectiveness.
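The abstract describes a pattern of logging usage through hooks in the tested application and triggering on-device questionnaires on specific events. The following is a minimal illustrative sketch of that pattern only; all class, method, and event names here are hypothetical, and RECON's actual API is not shown in this summary.

```python
import time

class FieldLogger:
    """Illustrative stand-in for remote field-evaluation instrumentation."""

    def __init__(self):
        self.records = []   # collected usage records with context
        self.triggers = {}  # event name -> questionnaire callback

    def log_usage(self, event, **context):
        """Hook called from the tested application on each user action."""
        self.records.append({"time": time.time(), "event": event, **context})
        callback = self.triggers.get(event)
        if callback:
            callback()  # e.g. show a short on-device survey

    def on_event(self, event, callback):
        """Register a questionnaire to fire when a given event occurs."""
        self.triggers[event] = callback

# Usage: log two actions; the second one triggers a questionnaire.
answers = []
logger = FieldLogger()
logger.on_event("task_completed",
                lambda: answers.append("How difficult was this task?"))

logger.log_usage("app_started", location="unknown")
logger.log_usage("task_completed", location="bus")

print(len(logger.records))  # → 2
print(answers)              # → ['How difficult was this task?']
```

The design choice mirrored here is that data collection is passive (hooks record usage and context) while subjective data collection is reactive (a survey fires only when a chosen event occurs on the device).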

[This summary has been generated with the help of AI directly from the project (PDF)]