6. Conclusion, limitations and future work
This paper has discussed generic issues related to the crowdsourcing of design tasks. After establishing a general framework for the design of crowdsourced design tasks, the paper investigated the sensitivity of the crowd's response to different payment levels. Although the quantity of results generated at various payment levels is easily measured, the impact of payment on the quality of the crowd's design work is harder to judge. Indeed, effective assessment of design quality was seen to be key to the success of almost all approaches to crowdsourced design (e.g., HBGA, competition, multi-stage competition). For this reason, the selection of a design quality assessment method is identified as an explicit activity in the cDesign framework, and was also the subject of an experimental investigation to establish whether the crowd could match the judgement of human experts. The evaluation of the same set of designs was crowdsourced twice: first on the basis of purely individual subjective judgements (free evaluation), and then again against an explicit set of criteria proposed by the crowd itself (cDEC). The cDEC process used qualitative research methods to determine evaluation criteria that the crowd was able to apply to make collective judgements on design quality, and these judgements correlated strongly with those of an expert panel. In other words, before crowdsourced workers are used to evaluate designs, it is appropriate to elicit the evaluation criteria from the crowd itself, and then use those crowdsourced design evaluation criteria (cDEC) to evaluate the designs.

The statistical analysis of the cDEC framework is based on a relatively small sample, which limits the accuracy of any conclusions drawn. Consequently, future work will investigate the effectiveness of the approach using different design tasks and a larger sample size.
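The degree of agreement between crowd and expert judgements of the kind reported above is typically quantified with a rank correlation coefficient such as Spearman's rho. The sketch below (using invented scores, not the study's data) illustrates the computation: both sets of design scores are converted to ranks, and Pearson correlation is taken over the ranks.

```python
def ranks(scores):
    """Assign 1-based ranks to scores, averaging ranks for tied values."""
    s = sorted(scores)
    return [(s.index(v) + 1 + (len(s) - s[::-1].index(v))) / 2 for v in scores]

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical mean scores for five designs from the crowd and an expert panel.
crowd_scores = [4.2, 3.1, 4.8, 2.5, 3.9]
expert_scores = [4.0, 3.6, 4.5, 2.8, 3.3]
print(spearman_rho(crowd_scores, expert_scores))  # 0.9, i.e. strong agreement
```

A rho near 1 indicates that the crowd ranks the designs in nearly the same order as the experts, which is the sense in which the cDEC judgements "correlated strongly" with the expert panel's.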