Abstract
Previous studies have shown that spike-timing-dependent plasticity (STDP) can be used in spiking neural networks (SNN) to extract visual features of low or intermediate complexity in an unsupervised manner. These studies, however, used relatively shallow architectures, and only one layer was trainable. Another line of research has demonstrated – using rate-based neural networks trained with back-propagation – that having many layers increases the recognition robustness, an approach known as deep learning. We thus designed a deep SNN, comprising several convolutional (trainable with STDP) and pooling layers. We used a temporal coding scheme where the most strongly activated neurons fire first, and less activated neurons fire later or not at all. The network was exposed to natural images. Thanks to STDP, neurons progressively learned features corresponding to prototypical patterns that were both salient and frequent. Only a few tens of examples per category were required and no label was needed. After learning, the complexity of the extracted features increased along the hierarchy, from edge detectors in the first layer to object prototypes in the last layer. Coding was very sparse, with only a few thousand spikes per image, and in some cases the object category could be reasonably well inferred from the activity of a single higher-order neuron. More generally, the activity of a few hundred such neurons contained robust category information, as demonstrated using a classifier on the Caltech 101, ETH-80, and MNIST databases. We also demonstrate the superiority of STDP over other unsupervised techniques such as random crops (HMAX) or auto-encoders. Taken together, our results suggest that the combination of STDP with latency coding may be a key to understanding the way that the primate visual system learns, its remarkable processing speed, and its low energy consumption.
These mechanisms are also interesting for artificial vision systems, particularly for hardware solutions.
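The temporal coding scheme described above — the most strongly activated neurons fire first, and weakly activated ones fire later or not at all — can be sketched as a rank-order intensity-to-latency conversion. This is a minimal illustrative sketch, not the paper's exact implementation; the function name, the zero threshold, and the `t_max` discretization are assumptions.

```python
import numpy as np

def intensity_to_latency(activations, t_max=10):
    """Rank-order latency coding: stronger activations fire earlier.

    Returns one spike time per unit in [0, t_max); units whose activation
    is not above threshold never fire (encoded here as np.inf).
    Threshold and t_max are illustrative assumptions.
    """
    threshold = 0.0
    times = np.full(activations.shape, np.inf)
    active = activations > threshold
    n = active.sum()
    if n:
        # Double argsort gives each active unit its rank (0 = strongest).
        ranks = np.argsort(np.argsort(-activations[active]))
        # Spread ranks over the discrete time window [0, t_max).
        times[active] = np.floor(ranks * t_max / n)
    return times
```

With this code, only spike times — not analog values — are passed to the next layer, which is what lets STDP operate on first-spike latencies.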
Supporting information
Video 1. We prepared a video available at https://youtu.be/u32Xnz2hDkE showing the learning progress and neural activity over the Caltech face and motorbike task. Here we presented the face and motorbike training examples, propagated the corresponding spike waves, and applied the STDP rule. The input image is presented at the top-left corner of the screen. The output spikes of the input layer (i.e., the DoG layer) at each time step are presented in the top-middle panel, and the accumulation of these spikes is shown in the top-right panel. For each of the subsequent convolutional layers, the preferred features, the output spikes at each time step, and the accumulation of the output spikes are presented in the corresponding panels. Note that 4, 8, and 2 features from the first, second, and third convolutional layers, respectively, are selected and shown. As mentioned, learning occurs layer by layer; thus, the label of the layer currently being trained is highlighted in red. As seen, the first layer learns to detect edges, the second layer learns intermediate features, and finally the third layer learns face and motorbike prototype features.
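The STDP rule applied in the video can be illustrated with a common simplified form used with latency codes: synapses whose presynaptic spike arrives at or before the postsynaptic spike are potentiated, and all others are depressed, with a multiplicative term that keeps weights bounded in (0, 1). This is a hedged sketch of that general scheme; the function name and the learning-rate values are assumptions, not the paper's exact parameters.

```python
import numpy as np

def stdp_update(weights, pre_times, post_time, a_plus=0.004, a_minus=0.003):
    """Simplified latency-based STDP step.

    Synapses with a causal pre spike (pre_times <= post_time) are
    potentiated; the rest are depressed. The w*(1-w) factor is a soft
    bound that keeps weights in (0, 1). Rates a_plus/a_minus are assumed.
    """
    causal = pre_times <= post_time
    dw = np.where(causal, a_plus, -a_minus) * weights * (1.0 - weights)
    return np.clip(weights + dw, 0.0, 1.0)
```

Repeated over many input presentations, such a rule drives weights of consistently early (causal) inputs toward 1 and the rest toward 0, which is how the convolutional layers in the video gradually converge to edge, intermediate, and prototype features.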