Performance Evaluation of Crowdsourced HCI User Studies

Student project: Master's thesis and HD final project

  • Allan Kærgaard Christensen
  • Simon André Pedersen
4th semester, Medialogy, Master's programme
In this study, we examined the viability of using crowdsourcing
to recruit participants for HCI user studies. Crowdsourcing could
potentially yield higher attendance and reduce the reliance on
subjects classified as WEIRD (Western, Educated, Industrialized,
Rich, and Democratic). We conducted a preliminary study in a lab
environment to determine whether touch or tilt controls performed
better in a game, and found that touch outperformed tilt. Following
this study, we examined whether an informed crowd (aware of being
in an experiment) and an uninformed crowd could perform equivalently
to participants in a controlled lab environment. The study showed
that in our first level, a touch-controlled game, the lab
environment outperformed both crowds, while the informed crowd
performed better than the uninformed one. The second level featured
a device/human resolution experiment based on Fitts' law, to
determine the smallest target selectable with little effort. The
data revealed that the lab group consistently produced fewer errors,
and we saw a significant increase in errors between Fitts' IDs of
3.70 and 4.64. For the informed crowd, we saw a spike in errors
between Fitts' IDs of 2.81 and 3.70. The uninformed crowd generally
produced too many errors to determine a significant increase. The
smallest selectable target across all three groups was between
2 mm and 4 mm on touch devices.
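The index of difficulty (ID) values cited above come from Fitts' law. As a minimal sketch of how such values arise, the snippet below computes ID in the common Shannon formulation, ID = log2(D/W + 1), where D is the distance to the target and W its width; the specific distance and widths used here are hypothetical examples, not the geometries from the thesis:

```python
import math

def fitts_id(distance_mm: float, width_mm: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance_mm / width_mm + 1)

# Hypothetical target geometries: at a fixed distance, smaller
# targets yield a higher ID and, per the study, more errors.
for width in (32, 16, 8, 4):
    print(f"D=200 mm, W={width} mm -> ID = {fitts_id(200, width):.2f} bits")
```

Halving the target width at a fixed distance raises the ID by roughly one bit, which is why the error spikes reported between neighbouring ID levels correspond to targets shrinking toward the 2–4 mm limit found in the study.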
Publication date: 27 May 2015
Number of pages: 77
ID: 212966692