Free download of the English paper: Non-monotonic convergence of online learning algorithms for perceptrons with noisy teacher - Elsevier 2018

English Title
Non-monotonic convergence of online learning algorithms for perceptrons with noisy teacher
Persian Paper Pages
0
English Paper Pages
6
Publication Year
2018
Publisher
Elsevier
English Paper Format
PDF
Product Code
E6241
Related Disciplines
Computer and Information Technology
Related Specializations
Algorithm and Computation Engineering, Artificial Intelligence, Computer Networks
Journal
Neural Networks
University
Nara Institute of Science and Technology - Ikoma - Nara - Japan
Keywords
Learning curve, Perceptron, Online learning, Statistical mechanics, Overlap analysis
Abstract


Learning curves of the simple perceptron are derived here. The learning curve of perceptron learning with a noisy teacher is shown to be non-monotonic, a behavior that had never been reported even though learning curves have been analyzed for half a century. In this paper, we show how this phenomenon occurs by analyzing the asymptotic properties of perceptron learning with a method from systems science, namely, calculating the eigenvalues of the system matrix and the corresponding eigenvectors. We also analyze AdaTron learning and Hebbian learning in the same way and find that the learning curve of AdaTron learning is non-monotonic whereas that of Hebbian learning is monotonic.
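The overshoot is easy to reproduce in simulation. The following is a minimal sketch, not the authors' code: it assumes a teacher perceptron whose labels are flipped with fixed probability (the noisy teacher) and the standard error-driven perceptron update; the dimension N, learning rate eta, and noise level are illustrative choices.

import numpy as np

rng = np.random.default_rng(0)

N = 1000        # input dimension (illustrative)
eta = 0.1       # learning rate (illustrative)
p_flip = 0.1    # probability that the teacher's label is flipped
steps = 20000

B = rng.standard_normal(N)
B /= np.linalg.norm(B)               # teacher weight vector, unit norm
J = 1e-3 * rng.standard_normal(N)    # student weight vector, small random start

R_history = []
for _ in range(steps):
    x = rng.standard_normal(N)            # random input pattern
    label = np.sign(B @ x)                # clean teacher output
    if rng.random() < p_flip:
        label = -label                    # noisy teacher: flip the label
    if np.sign(J @ x) != label:
        J += eta * label * x / np.sqrt(N) # perceptron rule: update on error
    R_history.append(B @ J / np.linalg.norm(J))

# R(t) should rise above its asymptotic value and then relax back down,
# which is the non-monotonic behavior described above.
print([round(R, 3) for R in R_history[::2000]])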

Conclusion

6. Conclusions


In this paper, we analyzed the convergence properties of perceptron learning, AdaTron learning, and Hebbian learning when the teacher is noisy. The learning curves in these cases were derived analytically using a statistical-mechanical method and were consistent with the results of our simulation experiments. Our analyses showed that the learning curves of perceptron learning and AdaTron learning have an overshoot, that is, the overlap R between the teacher and the student exceeds its convergence value once before settling; Hebbian learning does not have this property. Using an asymptotic analysis of dynamical systems, we showed that these phenomena result from differences in the eigenvalues and eigenvectors of the system matrix. In the future, this result may yield a method for controlling the learning coefficient η to achieve faster convergence and a lower residual error.
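The eigenvector argument can be illustrated numerically. Below is a minimal sketch under invented assumptions: the 2x2 matrix A standing in for the linearized order-parameter dynamics near the fixed point, and the initial deviation delta0, are hypothetical values chosen for illustration, not quantities from the paper. When the observed coordinate receives contributions of opposite sign from two modes decaying at different rates, the deviation changes sign once, which is exactly an overshoot; a monotonic curve, as for Hebbian learning, corresponds to same-sign contributions.

import numpy as np

# Hypothetical linearized dynamics near the fixed point of the order
# parameters: d(delta)/dt = A @ delta.  A and delta0 are invented for
# illustration; they are not the paper's actual system matrix.
A = np.array([[-1.0,  0.8],
              [ 0.0, -3.0]])
delta0 = np.array([0.5, -2.5])   # initial deviation from the fixed point

eigvals, eigvecs = np.linalg.eig(A)        # columns of eigvecs are eigenvectors
coeffs = np.linalg.solve(eigvecs, delta0)  # expand delta0 in the eigenbasis

# delta(t) = sum_k coeffs[k] * exp(eigvals[k] * t) * eigvecs[:, k].
# Track the first coordinate, playing the role of R's deviation from its limit:
t = np.linspace(0.0, 2.0, 9)
R_dev = (eigvecs[0, :] * coeffs) @ np.exp(np.outer(eigvals, t))

# Two modes with different decay rates contribute with opposite signs,
# so the deviation crosses zero once: R overshoots its limit.
print(np.round(R_dev.real, 3))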

