Quality Assurance in Crowdsourcing

  • Bauer, Christine (Project lead)

Project details

Funding bodies

WU Wien (third-party funds administration)

Description

Recently, outsourcing tasks to an undefined crowd (i.e., crowdsourcing) has gained popularity in industry (e.g., Threadless, Lego, Microsoft, Amazon Mechanical Turk) and is also a compelling topic in many scientific disciplines (e.g., marketing research, user behaviour research, psychology, and the elicitation of users’ preferences and requirements). Having experienced answers of poor quality from the crowd in prior work (e.g., Cunin & Elsen, 2014; Häggman, Tsai, Elsen, Honda, & Yang, 2014, in press), we want to delve into detail and find ways to raise the quality of the results of crowdsourced tasks.

In this study, we want to conduct an experiment comparing data quality across several crowdsourcing settings. We will analyse the results of real-world tasks in the field of eliciting user preferences and behaviour, a field in which crowds are considered a robust channel for eliciting consumers’ preferences, perceptions, and similar kinds of feedback (Kittur, Chi, & Suh, 2008). Using the crowd to elicit consumers’ preferences appears particularly powerful; at the same time, however, it is challenging in terms of quality and soundness assessment, since there is no unique way to distinguish between “good” and “bad” answers concerning subjective preferences. When seeking high-quality answers, crowdsourcers have to develop quality control strategies that go beyond simple redundancy techniques or majority voting.
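
For context, the redundancy baseline mentioned above can be sketched in a few lines (a minimal, illustrative Python example only; it is not taken from the project, and the data are invented): majority voting keeps the most frequent of several redundant answers to the same task and uses the level of agreement as a rough quality signal. Subjective preference questions are exactly where this baseline falls short, because disagreement between workers need not indicate low quality.

    from collections import Counter

    def majority_vote(answers):
        # Most frequent answer among redundant responses to the same task,
        # plus the share of workers agreeing with it (a crude quality signal).
        counts = Counter(answers)
        winner, votes = counts.most_common(1)[0]
        return winner, votes / len(answers)

    # Hypothetical example: five workers label the same item.
    label, agreement = majority_vote(["blue", "blue", "green", "blue", "red"])
    print(label, agreement)  # -> blue 0.6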
Status: Completed
Actual start/end: 22/12/14 – 30/11/15

Project partners