This paper deals with the problem of evaluating submissions to crowdsourcing websites, where data is increasing rapidly in both volume and complexity. Usually, expert committees are appointed to rate submissions, select winners, and assign monetary rewards. With an increasing number of submissions, this process becomes more complex, more time-consuming, and hence more expensive. We propose a text-mining methodology, primarily similarity measures and clustering algorithms, to evaluate the quality of submissions to crowdsourcing contests semi-automatically. We evaluate our approach by comparing text-mining-based measurements on more than 40,000 submissions with the real-world decisions made by expert committees, using precision, recall, and F1-score.
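The paper's full pipeline is not reproduced here, but two building blocks the abstract names, text similarity measurement and the precision/recall/F1 comparison against expert decisions, can be sketched in plain Python. This is a minimal illustration, not the authors' implementation; all function names and the bag-of-words representation are assumptions.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words term-frequency vectors
    of two submission texts (a simple stand-in for richer features)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def precision_recall_f1(predicted: set, actual: set) -> tuple:
    """Compare the set of submissions flagged by the text-mining approach
    (predicted) against those selected by the expert committee (actual)."""
    tp = len(predicted & actual)                # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For example, if the automated approach selects submissions {1, 2, 3} and the committee selected {2, 3, 4}, both precision and recall are 2/3, and so is the F1-score.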
Published in: Proceedings of the 46th Hawaii International Conference on System Sciences (HICSS), 2013, IEEE Computer Society, Los Alamitos, CA.