AAU Student Projects - Aalborg University's student projects portal
A master's thesis from Aalborg University


Usability evaluation in open source development

Authors


Term

4th term (INF10 - Master's Thesis)

Publication year

2006

Abstract

Open source software has become a popular alternative to commercial solutions but is often criticized for poor usability. This thesis examines how usability in open source projects can be strengthened through remote usability evaluation: letting users test the software online from their own surroundings rather than in a laboratory. We first conducted a questionnaire survey and interviews with open source developers and usability specialists to map attitudes and current practice. The developers rated usability as important, but the concrete work often relied on common sense rather than systematic testing. The biggest barriers to better usability were found in the open source development model itself and its distributed organization. We therefore compared three remote methods with a laboratory test. The comparison showed that a synchronous remote evaluation (conducted in real time together with an evaluator) yielded results on par with the laboratory test. An asynchronous remote evaluation (where participants test on their own and report afterwards) found markedly fewer problems than the laboratory test, but the problems it did find were critical. Both remote methods can serve as alternatives or supplements to usability evaluation in the open source community.

Open source software has become a popular alternative to commercial products, but it is often criticized for being hard to use. This thesis explores how to improve usability in open source projects through remote usability evaluation—letting people test software online from their own location instead of in a lab. We began with a survey and interviews with open source developers and usability professionals to understand attitudes and current practices. Developers rated usability as important, yet most efforts relied on common sense rather than structured testing. Major obstacles to better usability work were found in the open source development model and its distributed organization. We therefore compared three remote methods against a laboratory evaluation. The comparison showed that a synchronous remote evaluation (real-time sessions with an evaluator) produced results comparable to the lab. An asynchronous remote evaluation (participants test on their own and report later) identified significantly fewer issues than the lab, but the issues found were critical. We consider both remote approaches viable alternatives or supplements to usability evaluation in the open source community.

[This abstract was generated with the help of AI]