Abstract
A system for approximate number discrimination has been shown to arise in at least two types of hierarchical neural network models: a generative Deep Belief Network (DBN) and a Hierarchical Convolutional Neural Network (HCNN) trained to classify natural objects. Here, we investigate whether the same two network architectures can also learn to recognise exact numerosity. The two networks differed clearly in performance, and this difference could be traced to the specificity of the unit responses that emerged in the last hidden layer of each network. In the DBN, the emergence of a layer of monotonic ‘summation units’ was sufficient to produce classification behaviour consistent with the behavioural signature of the approximate number system. In the HCNN, a layer of units uniquely tuned to the transitions between particular numerosities effectively encoded a thermometer-like ‘numerosity code’ that ensured near-perfect classification accuracy. These results support the notion that parallel pattern-recognition mechanisms may give rise to exact and approximate number concepts, both of which may contribute to the learning of symbolic numbers and arithmetic.
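To make this coding distinction concrete, the toy sketch below (not taken from either model; the response functions, noise levels, and decoding rules are all illustrative assumptions) contrasts a single monotonic summation unit, whose scalar variability yields approximate-number-like errors, with a bank of threshold-tuned units that together form a thermometer code supporting exact readout.

```python
# Toy illustration (not from the models reported here): how a monotonic
# 'summation' code and a thermometer-like code behave when read out for exact
# numerosity. All response functions and noise levels are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 1000

def summation_code(n, noise=0.15):
    # One monotonic unit whose response grows with numerosity, corrupted by
    # multiplicative (scalar) noise, i.e. coarse, ANS-like information.
    return n * (1 + noise * rng.standard_normal())

def thermometer_code(n, n_units=10, noise=0.15):
    # One unit per numerosity 'transition': unit k switches on once the
    # numerosity reaches k, yielding a step-like (thermometer) pattern.
    drive = n - np.arange(1, n_units + 1) + 0.5        # positive once n >= k
    return (drive + noise * rng.standard_normal(n_units)) > 0

def decode_summation(resp):
    return int(np.clip(np.round(resp), 1, 10))

def decode_thermometer(resp):
    return max(int(resp.sum()), 1)                     # count the active units

acc_sum = acc_therm = 0
for _ in range(n_trials):
    n = int(rng.integers(1, 11))                       # numerosities 1..10
    acc_sum += decode_summation(summation_code(n)) == n
    acc_therm += decode_thermometer(thermometer_code(n)) == n

print(f"summation-code accuracy:   {acc_sum / n_trials:.2f}")
print(f"thermometer-code accuracy: {acc_therm / n_trials:.2f}")
```

In this toy setting the summation readout degrades for larger numerosities while the thermometer readout stays near-perfect, mirroring the qualitative difference between the two networks.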
1. Introduction
What is the foundation for the conceptual development of natural numbers and elementary arithmetic? Although counting is our only procedure for exactly determining the size of large sets of items, both humans and non-human animals have a natural ‘number sense’ that consists of two components. For sets smaller than five, we can directly perceive the exact number of items in a process called ‘subitizing’ (Agrillo, Piffer, Bisazza, & Butterworth, 2012; Clements, Sarama, & Macdonald, 2019; Jevons, 1871; Tomonaga & Matsuzawa, 2002). Beyond this ‘subitizing range’, we can make approximate judgements about (i) the numerosity of a single set of items (estimation task) and (ii) the relative size of two sets of items (discrimination task), with an accuracy that decreases logarithmically with the size of the set or the difference between the sets (Dehaene, 2011; Izard, Sann, Spelke, & Streri, 2009; Rugani, Regolin, & Vallortigara, 2008). Recently, our understanding of how this approximate number sense may be grounded in perception has been substantially advanced through neurophysiological experiments and computational modelling (Nieder & Dehaene, 2009). However, it remains unclear which cognitive mechanisms underlie and differentiate approximate estimation and exact enumeration.
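The behavioural signature described above is commonly captured by a log-Gaussian model of the approximate number system, in which each numerosity is represented on a noisy logarithmic scale. The sketch below is a minimal illustration under that standard assumption; the Weber fraction and the numerosity pairs are arbitrary choices made for illustration, not values taken from the cited studies.

```python
# Minimal simulation of two-set numerosity discrimination under a log-Gaussian
# model of the approximate number system. The Weber fraction w is an assumption.
import numpy as np

rng = np.random.default_rng(1)
w = 0.2                       # assumed internal noise (Weber fraction)
n_trials = 20_000

def discriminate(n1, n2):
    # Each set is encoded as log(n) plus Gaussian noise of fixed width w;
    # the larger internal sample is reported as the larger set.
    x1 = np.log(n1) + w * rng.standard_normal(n_trials)
    x2 = np.log(n2) + w * rng.standard_normal(n_trials)
    return np.mean((x2 > x1) == (n2 > n1))

# For a fixed numerical difference, accuracy declines as the sets grow larger:
for n1, n2 in [(4, 6), (8, 10), (16, 18), (32, 34)]:
    print(f"{n1:2d} vs {n2:2d}: accuracy {discriminate(n1, n2):.2f}")
```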
4. Discussion
In this paper, we investigated whether and how the learning of small symbolic numbers can be supported by the general learning principles of hierarchical neural networks. We trained two networks with different architectures and learning algorithms to classify the same input dataset of dot-pattern images using the same output classifier.
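As a concrete, purely hypothetical illustration of the kind of shared dot-pattern dataset described here, the snippet below generates binary images containing a given number of non-overlapping dots; the image size, dot radius, and sampling procedure are assumptions made for illustration and do not reproduce the stimulus-generation pipeline used in the experiments.

```python
# Hypothetical dot-pattern generator: binary images with n non-overlapping dots.
# Image size, dot radius, and numerosity range are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def dot_pattern(n_dots, size=100, radius=4):
    """Return a size x size binary image containing n_dots non-overlapping dots."""
    img = np.zeros((size, size), dtype=np.uint8)
    yy, xx = np.mgrid[:size, :size]
    centres = []
    while len(centres) < n_dots:
        cy, cx = rng.integers(radius, size - radius, size=2)
        # Reject candidate centres that would overlap an existing dot.
        if all((cy - y) ** 2 + (cx - x) ** 2 > (2 * radius) ** 2 for y, x in centres):
            centres.append((cy, cx))
            img[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2] = 1
    return img

# A small labelled dataset: numerosities 1..10, 100 exemplars each.
images = np.stack([dot_pattern(n) for n in range(1, 11) for _ in range(100)])
labels = np.repeat(np.arange(1, 11), 100)
print(images.shape, labels.shape)   # (1000, 100, 100) (1000,)
```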