4. Discussion
Despite a long research tradition investigating the educational effectiveness of both static and dynamic pictures, their relative instructional value remains difficult to assess. One key factor to consider when comparing the two visualization formats is the instructional task at hand: our evolved mind seems better suited to learning animated primary tasks and static secondary tasks. However, clear conclusions are not yet easy to draw, as many of the studies comparing static and animated formats (for both primary and secondary tasks) have contained uncontrolled biases. We discussed appeal, variety, media, realism, number, size, and interaction biases as examples of seven confounding variables. Over a decade ago, Tversky et al. (2002) observed that comparisons between static and dynamic images were affected by such biases. Our review indicates that researchers are still designing experiments that contain them, and are thus to some extent ignoring the message of the review by Tversky and colleagues. For example, we reported two meta-analyses (Berney & Bétrancourt, 2016; Höffler & Leutner, 2007) that included several of the studies reviewed here (e.g., Lewalter, 2003; Mayer et al., 2005; Ryoo & Linn, 2012; Wu & Chiang, 2013; Yang et al., 2003) which did not control for these bias factors, suggesting some loss of validity. Clearly, much greater attention to biases is needed in future meta-analyses, as well as in individual studies, to take the field forward. We believe that the present review, by categorizing and giving examples of these problematic comparisons, was necessary to re-emphasize the original message of Tversky et al. (2002). In addition, we adopted a practical approach by providing guidelines for avoiding and controlling these problems in future investigations.