6. Conclusion
In this study, we have proposed a novel ensemble combination scheme based on a latent consensus function that relates the individual predictors. Our basic idea is that the output of each individual predictor consists of a reflection of the true function value plus a predictor-specific error term. Under this assumption, we determine the weights for the ensemble combination using a separate training algorithm. We have presented a comprehensive evaluation of the proposed method on a simulated data set as well as real-world data sets, using neural networks and decision trees as base models. The experimental results show that the proposed method further improves prediction performance through self-correction of malfunctioning base learners. By analyzing the results on corrupted toy data sets, we have shown that the ensemble can adjust its weights by detecting corrupt predictors during the learning process. The proposed method can therefore improve its performance even when the outputs of a number of individual predictors are corrupted.

Future research will investigate two aspects. First, the predictor selection ability of the proposed method can be enhanced. Second, the effectiveness of the proposed method can be examined with different base learners, such as models that apply concepts from unsupervised learning to supervised tasks [13,16,15,22,21,23], and with varying data sizes and distributions.
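As a rough illustration of the weighting idea (not the paper's actual training algorithm), the following sketch learns combination weights by ridge-regularized least squares on held-out targets, so that a corrupted base predictor naturally receives a near-zero weight. All function names, the regularizer, and the toy data are assumptions made for this example:

```python
import numpy as np

def fit_ensemble_weights(base_preds, y, ridge=1e-6):
    """Learn combination weights by least squares on held-out targets.

    base_preds: (n_samples, n_predictors) matrix of base-learner outputs.
    y: (n_samples,) true target values.
    A corrupted predictor produces large residuals, so its weight shrinks.
    (Illustrative stand-in, not the paper's consensus-based algorithm.)
    """
    P = np.asarray(base_preds, dtype=float)
    # Ridge-regularized normal equations: w = argmin ||Pw - y||^2 + ridge*||w||^2
    A = P.T @ P + ridge * np.eye(P.shape[1])
    return np.linalg.solve(A, P.T @ y)

def combine(base_preds, w):
    # Weighted ensemble output: linear combination of base predictions.
    return np.asarray(base_preds) @ w

# Toy demo: three base predictors of f(x) = x, one of them corrupted.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
preds = np.column_stack([
    x + 0.05 * rng.normal(size=200),   # good predictor
    x + 0.05 * rng.normal(size=200),   # good predictor
    rng.normal(size=200),              # corrupted predictor (pure noise)
])
w = fit_ensemble_weights(preds, x)
# The corrupted predictor is detected implicitly: its weight is near zero,
# while the two good predictors share a total weight close to one.
```

In this toy setting the fitted weights down-weight the noise-only predictor automatically, which mirrors the self-correction behavior described above, albeit with a much simpler weighting rule.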