1 Introduction
2 Statistical theory of classification
2.1 Loss and risk functions
2.2 Bayes optimal decision rule
2.3 Fisher consistency
3 The Correntropy-induced loss function
4 Training with the C-loss function
5 Experiments and results
6 Discussion
6.1 Application
6.2 Kernel size selection
6.3 Choice of switching scheme
7 Conclusion
This paper presents a new loss function for neural network classification, inspired by the recently proposed similarity measure called Correntropy. We show that this function essentially behaves like the conventional square loss for samples that lie well within the decision boundary and have small errors, and like the L0 (counting) norm for samples that are outliers or are difficult to classify. Depending on the value of the kernel size parameter, the proposed loss function moves smoothly from convex to non-convex and becomes a close approximation to the misclassification loss (the ideal 0-1 loss). We show that the discriminant function obtained by optimizing the proposed loss function in the neighborhood of the ideal 0-1 loss is immune to overfitting, more robust to outliers, and achieves consistent and better generalization performance than other commonly used loss functions, even after prolonged training. The results also show that it is a close competitor to the SVM. Since the proposed method is compatible with simple gradient-based online learning, it offers a practical way of improving the performance of neural network classifiers.
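The behavior the abstract describes (quadratic for small errors, saturating like a counting loss for outliers, with the kernel size controlling the transition) can be sketched as follows. This is an illustrative implementation only: the parameterization in terms of the error `e` and the normalization constant `beta` are assumptions made for this sketch, not the paper's exact formulation.

```python
import numpy as np

def c_loss(e, sigma=0.5):
    """Sketch of a Correntropy-induced loss on the error e.

    For small |e| the loss is approximately quadratic (square-loss-like);
    for large |e| it saturates toward the constant beta, so outliers
    contribute only a bounded penalty, as with the L0 / counting norm.
    The kernel size sigma controls where this transition happens.
    """
    # beta normalizes the loss to equal 1 at e = 1 (an assumed convention).
    beta = 1.0 / (1.0 - np.exp(-1.0 / (2.0 * sigma**2)))
    return beta * (1.0 - np.exp(-np.square(e) / (2.0 * sigma**2)))
```

A small kernel size makes the loss saturate quickly (closer to the 0-1 loss, non-convex); a large kernel size keeps it nearly quadratic over a wide range of errors (closer to the convex square loss), which matches the smooth convex-to-non-convex transition described above.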