7. Conclusion
Progress in applying Deep Neural Networks to texture recognition has been impeded by the lack of a thorough understanding of the limitations of existing neural architectures. In this paper, we provide theoretical bounds on the use of Deep Neural Networks for texture classification. First, using the theory of VC-dimension, we establish the relevance of handcrafted feature extraction. As a corollary to this analysis, we derive, for the first time, upper bounds on the VC dimension of Convolutional Neural Networks as well as Dropout and DropConnect networks, together with the relation between their excess error rates. Then, we use the concept of Intrinsic Dimension to show that texture datasets have a higher dimensionality than color- or shape-based data. Finally, we derive a result on Relative Contrast that generalizes the one proposed in Aggarwal et al. (2001). From the theoretical and empirical analysis, we conclude that, for texture data, neural architectures must be redesigned and new learning algorithms devised that can learn features similar to GLCM and other spatial-correlation-based texture features from the input data.
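To make the kind of feature referred to above concrete, the following is a minimal NumPy sketch of a Grey-Level Co-occurrence Matrix (GLCM) and its contrast statistic for a single pixel offset; the function names and the toy image are illustrative, not part of the paper's method:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Normalized co-occurrence matrix P[i, j]: frequency with which
    grey level i is followed by grey level j at offset (dx, dy)."""
    h, w = img.shape
    P = np.zeros((levels, levels), dtype=np.float64)
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def glcm_contrast(P):
    """Haralick contrast: sum of P[i, j] * (i - j)^2 over all pairs."""
    i, j = np.indices(P.shape)
    return float(np.sum(P * (i - j) ** 2))

# Toy 4-level image; real texture features would pool such statistics
# over several offsets (dx, dy) and angles.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
P = glcm(img, dx=1, dy=0)
print(glcm_contrast(P))
```

Statistics of this kind capture second-order spatial correlations between grey levels, which is precisely the structure the conclusion argues current convolutional architectures do not readily learn from raw input.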