Artificial neural networks (ANNs) aim to simulate biological neural activity. Interestingly, many "engineering" advances in ANNs have drawn on motivations from cognition and psychology studies. So far, two important learning theories that have been the subject of active research are the prototype- and adaptive-learning theories. The learning rules employed for ANNs can be related to adaptive-learning theory, in which several examples from the different classes in a task are supplied to the network for adjusting its internal parameters. Conversely, prototype-learning theory uses prototypes (representative examples), usually one prototype per class among the different classes contained in the task. These prototypes are supplied for systematic matching with new examples so that class association can be achieved. In this paper, we propose and implement a novel neural network algorithm that modifies the emotional neural network (EmNN) model to unify the prototype- and adaptive-learning theories. We refer to our new model as the "prototype-incorporated EmNN". Furthermore, we apply the proposed model to two challenging real-life tasks, namely, static hand-gesture recognition and face recognition, and compare the results to those obtained using the popular back-propagation neural network (BPNN), emotional BPNN (EmNN), deep networks, an exemplar classification model, and k-nearest neighbor.
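To make the prototype-learning idea concrete, the following minimal sketch (not the authors' prototype-incorporated EmNN, which additionally modifies the emotional network's learning rules) builds one prototype per class as the class mean and assigns new examples to the class of the nearest prototype. The toy 2-D data and function names are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy data: two classes in a 2-D feature space.
X = np.array([[0.9, 1.1], [1.2, 0.8], [1.0, 1.0],   # class 0
              [3.0, 3.2], [2.8, 3.1], [3.1, 2.9]])  # class 1
y = np.array([0, 0, 0, 1, 1, 1])

def class_prototypes(X, y):
    """One prototype per class: the mean of that class's examples."""
    classes = np.unique(y)
    protos = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, protos

def predict(X_new, classes, protos):
    """Match each new example to its nearest prototype (Euclidean distance)."""
    dists = np.linalg.norm(X_new[:, None, :] - protos[None, :, :], axis=2)
    return classes[dists.argmin(axis=1)]

classes, protos = class_prototypes(X, y)
queries = np.array([[1.1, 0.9], [2.9, 3.0]])
print(predict(queries, classes, protos))  # -> [0 1]
```

Adaptive learning, by contrast, iteratively adjusts internal parameters (e.g., connection weights via back-propagation) over many training examples; the paper's contribution is combining both styles in one model.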
This work builds on two fundamental learning theories for explaining category generalization in humans, namely, the prototype- and adaptive-learning theories. We share the view that both the prototype and adaptive theories are valid accounts of learning. Therefore, we propose that incorporating prototype knowledge into neural networks can improve the overall learning of such networks. The proposed neural network model has been applied to two challenging tasks in machine vision, namely, static hand-gesture recognition and face recognition. All the experiments performed in this work show that incorporating prototype learning into the EmNN improves overall learning and generalization. In addition, it is interesting that our proposed model, which employs only one hidden layer and no convolution operations for feature learning, achieves competitive performance against models with many hidden layers of feature abstraction and convolution operations, i.e., deep neural networks. Future work includes the application of the proposed model to a wider domain of machine vision and pattern recognition problems. The authors believe that, over an expanded domain of problems, it is possible to investigate further whether the optimum number of prototypes per class obtained in this work holds for a broad range of visual tasks. Furthermore, the connection between prototype learning and semi-supervised learning as realized in deep networks is an interesting direction for future research.