5. Conclusions
The results reveal three intriguing patterns in the learners' participation, motives, and subject knowledge in relation to their performance in the MOOC. First, there was a correlational relationship between learners' patterns of participation and their MOOC performance: learners who demonstrated active engagement tended to outperform those who did not. Active engagement was evidenced by learners submitting at least one course assignment and by participating in the discussion forum through posting and responding to others. Active engagement has been proposed as a strong indicator of MOOC quality and student satisfaction (Ho et al., 2014; Jordan, 2014; Koller, Ng, Do, & Chen, 2013) and thus of the success of a MOOC. To encourage more learner participation, the design team plans to make pedagogical modifications for the next DS MOOC launch by making the discussion forums a more responsive and user-friendly place. This may be achieved through strategies such as increasing human interaction via synchronous sessions and creating or encouraging forum discussions among subgroups based on geographical location or language background. It can also be achieved by strengthening managerial practices, including the management of the teaching staff: assigning course teaching staff to monitor and respond to students' questions in scheduled shifts so that responsiveness is assured across global time zones (Haavind & Chandler, 2015). As for students' participation in the assignments, from the design perspective, the complexity, difficulty, and time demands of the assignments should be further investigated to determine whether they contributed to the decreasing participation in the MOOC. However, the decreasing participation, and perhaps the pass/fail rate, could also be attributed to peer assessment, which presents natural pitfalls and poses challenges for MOOC design (Kulkarni et al., 2015).
For the DS MOOC, learners' performance on each assignment depended heavily on the assessment of their peers using a rubric. The final score for an assignment was the median of four assessment outcomes (three peer assessments and one self-assessment). The quality of the peer feedback is unknown and needs further examination. The Coursera platform offers a mathematical solution for peer assessment by randomly assigning three peers, rather than one, to grade each assignment. To enable more accurate assessment, the course design team created a rubric for the peer assessors to use and added a self-assessment by the learner to further balance the performance grade. After the first DS MOOC launch, the design team decided to provide examples of peer assessment through sample grading, by the instructor and course teaching staff, of learners' digital story submissions of different quality.
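The median-of-four scoring scheme described above can be sketched as follows. This is an illustrative sketch only: the function name and the 0–10 rubric scale are assumptions for the example, not details of the Coursera platform.

```python
import statistics

def assignment_score(peer_scores, self_score):
    """Final assignment score as the median of four assessment
    outcomes: three peer assessments plus one self-assessment."""
    if len(peer_scores) != 3:
        raise ValueError("expected exactly three peer assessments")
    return statistics.median(list(peer_scores) + [self_score])

# Example: three peer scores and one self-assessment on an
# assumed 0-10 rubric scale; the median of [6, 7, 8, 9] is 7.5.
print(assignment_score([7, 8, 6], 9))  # -> 7.5
```

With an even number of scores, the median averages the two middle values, so a single inflated self-assessment (9 above) shifts the final score only modestly, which is one reason a median is more robust here than a mean.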