Abstract
We introduce a purely feedforward architecture for semantic segmentation. We map small image elements (superpixels) to rich feature representations extracted from a sequence of nested regions of increasing extent. These regions are obtained by “zooming out” from the superpixel all the way to scene-level resolution. This approach exploits statistical structure in the image and in the label space without setting up explicit structured prediction mechanisms, and thus avoids complex and expensive inference. Instead, superpixels are classified by a feedforward multilayer network. Our architecture achieves 69.6% average accuracy on the PASCAL VOC 2012 test set.
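To make the zoom-out construction concrete, the following is a minimal sketch in Python. Everything here is illustrative rather than the paper's implementation: the function names, the region side lengths, and the stub feature extractor (a mean color, standing in for the convnet features the actual system would extract from each region) are all assumptions.

```python
import numpy as np

def zoom_out_features(image, sp_mask, zoom_sides=(15, 29, 57, 113), feat_fn=None):
    """Sketch of zoom-out feature construction for one superpixel.

    `sp_mask` is a boolean mask of the superpixel; `zoom_sides` are
    hypothetical side lengths of the nested regions; `feat_fn` stands in
    for a feature extractor (in a real system, a crop/resize followed by
    a pretrained convnet).
    """
    if feat_fn is None:
        # Stub extractor: mean color of the region, for illustration only.
        feat_fn = lambda region: region.reshape(-1, region.shape[-1]).mean(0)

    ys, xs = np.nonzero(sp_mask)
    cy, cx = int(ys.mean()), int(xs.mean())    # superpixel center
    H, W = sp_mask.shape

    feats = []
    for side in zoom_sides:                    # nested regions of growing extent
        half = side // 2
        y0, y1 = max(0, cy - half), min(H, cy + half + 1)
        x0, x1 = max(0, cx - half), min(W, cx + half + 1)
        feats.append(feat_fn(image[y0:y1, x0:x1]))
    feats.append(feat_fn(image))               # scene level: fully "zoomed out"
    return np.concatenate(feats)               # one feature vector per superpixel
```

The concatenated vector for each superpixel would then be fed to the feedforward multilayer network that performs the classification.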
1. Introduction
We consider one of the central vision tasks, semantic segmentation: assigning to each pixel in an image a category-level label. Despite the attention it has received, it remains challenging, largely due to complex interactions between neighboring as well as distant image elements, the importance of global context, and the interplay between semantic labeling and instance-level detection. A widely accepted conventional wisdom, followed in much of the modern segmentation literature, is that segmentation should be treated as a structured prediction task, which most often means using a random field or structured support vector machine model of considerable complexity.
5. Conclusions
The main point of this paper is to explore how far we can push feedforward semantic labeling of superpixels when we use multilevel, zoom-out feature construction and train nonlinear classifiers (multilayer neural networks) with an asymmetric loss. The results are perhaps surprising: we can far surpass the previous state of the art, despite the apparent simplicity of our method and its lack of an explicit representation of the structured nature of the segmentation task. Another important conclusion is that segmentation, just like image classification, detection, and other recognition tasks, can benefit from the advent of deep convolutional networks.
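As an illustration of the asymmetric loss mentioned above, the sketch below weights each superpixel's cross-entropy term in inverse proportion to the empirical frequency of its true class, so that rare classes are not drowned out by frequent ones such as background. The function name, signature, and exact weighting scheme are assumptions made for illustration; the paper's formulation may differ in detail.

```python
import numpy as np

def asymmetric_cross_entropy(logits, labels, class_freq, eps=1e-12):
    """Class-frequency-weighted ("asymmetric") cross-entropy, a sketch.

    logits: [N, C] scores, labels: [N] integer class ids,
    class_freq: [C] empirical frequency of each class among
    training superpixels.
    """
    # Softmax over classes, computed stably.
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)

    # Inverse-frequency weight for each example, via its true label.
    weights = 1.0 / np.maximum(class_freq[labels], eps)

    # Negative log-likelihood of the true class, reweighted and normalized.
    nll = -np.log(probs[np.arange(len(labels)), labels] + eps)
    return (weights * nll).sum() / weights.sum()
```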