
Free Download of the English Article: Improving Efficiency in Convolutional Neural Networks with Multilinear Filters - Elsevier 2018

Persian Title
Improving Efficiency in Convolutional Neural Networks with Multilinear Filters
English Title
Improving Efficiency in Convolutional Neural Networks with Multilinear Filters
Persian Article Pages
0
English Article Pages
33
Publication Year
2018
Publisher
Elsevier
English Article Format
PDF
Product Code
E8626
Related Fields
Information Technology and Computer Engineering
Related Specializations
Computer Networks and Artificial Intelligence
Journal
Neural Networks
University
Laboratory of Signal Processing - Tampere University of Technology - Finland
Keywords
Convolutional neural networks, multilinear projection, network compression
Abstract

The excellent performance of deep neural networks has enabled us to solve several automation problems, opening an era of autonomous devices. However, current deep network architectures are heavy, with millions of parameters, and require billions of floating-point operations. Several works have been developed to compress a pre-trained deep network in order to reduce the memory footprint and, possibly, the computation. Instead of compressing a pre-trained network, in this work we propose a generic neural network layer structure that employs multilinear projection as the primary feature extractor. The proposed architecture requires several times less memory than traditional Convolutional Neural Networks (CNNs), while inheriting similar design principles. In addition, the proposed architecture is equipped with two computation schemes that enable either computation reduction or scalability. Experimental results show the effectiveness of our compact projection, which outperforms a traditional CNN while requiring far fewer parameters.
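To see where the memory saving comes from, consider an illustrative calculation (the numbers are ours, not taken from the paper): a dense 3 x 3 convolution filter over 64 input channels stores 3 * 3 * 64 = 576 weights, whereas a multilinear filter built from R rank-1 terms stores only R * (3 + 3 + 64) weights per filter, i.e. 140 weights for R = 2, roughly a 4x reduction.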

5. Conclusions

In this paper, we proposed a multilinear mapping to replace the conventional convolution filter in Convolutional Neural Networks. The complexity of the resulting structure can be flexibly controlled by adjusting the number of projections in each mode through a hyper-parameter R. The proposed mapping comes with two computation schemes, which allow either memory and computation reduction when R is small or scalability when R is large. Numerical results showed that, with far fewer parameters, architectures employing our mapping can outperform standard CNNs. These are promising results that open future research directions focused on optimizing the parameter R for individual convolution layers to achieve the most compact structure and the best performance.
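To make the role of R concrete, below is a minimal PyTorch sketch of a convolutional layer whose kernels are sums of R rank-1 (separable) terms, one mode vector per spatial height, spatial width, and input-channel mode. The module name MultilinearConv2d, the parameter layout, and the initialization are our assumptions for illustration, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultilinearConv2d(nn.Module):
        """A convolution whose kernels are sums of R rank-1 terms
        instead of dense kernel_size x kernel_size x in_channels tensors."""

        def __init__(self, in_channels, out_channels, kernel_size, rank_R):
            super().__init__()
            # One mode vector per output filter and per rank term:
            # w_h spans the height mode, w_w the width mode, w_c the channel mode.
            self.w_h = nn.Parameter(torch.randn(out_channels, rank_R, kernel_size))
            self.w_w = nn.Parameter(torch.randn(out_channels, rank_R, kernel_size))
            self.w_c = nn.Parameter(torch.randn(out_channels, rank_R, in_channels))

        def forward(self, x):
            # Reconstruct each filter as the sum over r of the outer product
            # w_c[r] x w_h[r] x w_w[r]; einsum sums over r and yields a kernel
            # of shape (out_channels, in_channels, kernel_size, kernel_size).
            kernel = torch.einsum('ora,orb,orc->ocab', self.w_h, self.w_w, self.w_c)
            return F.conv2d(x, kernel, padding='same')

    # Usage: a drop-in replacement for a 3x3 convolution with R = 4.
    layer = MultilinearConv2d(64, 128, kernel_size=3, rank_R=4)
    y = layer(torch.randn(1, 64, 32, 32))   # output shape: (1, 128, 32, 32)
    # 128 * 4 * (3 + 3 + 64) = 35,840 weights vs. 128 * 64 * 3 * 3 = 73,728 dense

Reconstructing the dense kernel before convolving, as above, plausibly corresponds to the scheme that scales to large R; for small R, the same map could instead be evaluated as R separable convolutions per filter without materializing the dense kernel, which is where the computation reduction described in the paper would come from.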

