In recent years, the “power of the crowd” has been repeatedly demonstrated, and various Internet platforms have been used to support applications of collaborative intelligence in tasks ranging from open innovation to image analysis. However, crowdsourcing applications in the fields of design research and creative innovation have been much slower to emerge. So, although there have been reports of systems and researchers using Internet crowdsourcing to carry out generative design, there are still many gaps in knowledge about the capabilities and limitations of the technology. Indeed, the process models developed to support traditional commercial design (e.g. Pugh’s Total Design, Agile, Double-Diamond, etc.) have yet to be established for Crowdsourced Design (cDesign). As a contribution to the development of such a general model, this paper proposes a cDesign framework to support the creation of crowdsourced design activities. Within the cDesign framework, the effective evaluation of design quality is identified as a key component that not only enables the leveraging of a large, virtual workforce’s creative activities but is also fundamental to almost all iterative optimisation processes. This paper reports an experimental investigation into two different crowdsourced design evaluation approaches: free evaluation and ‘Crowdsourced Design Evaluation Criteria’ (cDEC). The results are benchmarked against a ‘manual’ evaluation carried out by a panel of experienced designers. The results suggest that the cDEC approach produces design rankings that correlate strongly with the judgements of an “expert panel”. The paper concludes that the cDEC assessment methodology demonstrates how crowdsourcing can be effectively used to evaluate, as well as generate, new design solutions.
6. Conclusion, limitations and future work
This paper has discussed generic issues related to the crowdsourcing of design tasks. After establishing a general framework for the design of crowdsourced design tasks, the paper investigated the sensitivity of the crowd’s response to different payment levels. Although the quantity of results generated at various payment levels is easily measured, the impact on the quality of the crowd’s design work is harder to judge. Indeed, the effective assessment of design quality was seen to be key to the success of almost all approaches to crowdsourced design (i.e., HBGA, Competition, Multi-stage competition, etc.). Because of this, the selection of a method of design quality assessment is identified as an explicit activity in the cDesign framework, and it was also the subject of an experimental investigation to establish whether the crowd could match the judgement of human experts. The evaluation of the same set of designs was crowdsourced first on the basis of purely individual subjective judgements (free evaluation) and then again against an explicit set of criteria (cDEC) proposed by the crowd. The cDEC process used qualitative research methods to determine evaluation criteria, which the crowd was able to apply to make collective judgements on design quality that correlated strongly with those of an expert panel. In other words, before crowdsourced workers are used to evaluate designs, it is appropriate to collect the evaluation criteria from the crowd itself, and then use those crowdsourced evaluation criteria (called cDEC) to evaluate designs. The statistical analysis of the cDEC framework is based on a relatively small sample, which limits the accuracy of any analysis. Consequently, future work will investigate the effectiveness of the approach using different design tasks and a larger sample size.