The Convolutional Neural Network (CNN) classifier is widely used to solve many problems, including image classification and object recognition. CNN classifiers are usually improved by designing deeper and larger networks, which require more memory and computational power to run. In this paper, we analyze and optimize the use of a small and shallow CNN classifier on the CIFAR datasets. The Karpathy ConvNetJS CIFAR10 model was used as the base network of our classifier and extended by adding a max-min pooling method. Max-min pooling exploits both the positive and negative responses of the convolution process, which in theory allows the classifier to be trained more effectively. We evaluate several configurations that combine the training algorithm, batch normalization, weight initialization methods, dropout regularization, and heavy data augmentation. To ensure that the classifier remains a small and shallow CNN, we limit it to a maximum of 15 layers. Experiments on the CIFAR10 and CIFAR100 datasets show that, by compacting the kernels on each layer, the classifier achieves good accuracy, comparable to other state-of-the-art classifiers with a similar number of layers, with an error rate of 6.99% on CIFAR10 and 29.41% on CIFAR100.
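As a minimal sketch of the max-min pooling idea described above (the exact layer placement and implementation in the paper may differ), the operation can be expressed as concatenating a standard max pool of the input with a max pool of the negated input, so that both strongly positive and strongly negative convolution responses survive pooling. The function name and NumPy formulation below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def max_min_pool2d(x, size=2):
    """Max-min pooling sketch: concatenates the max-pooled response with
    the max of the negated input (i.e. minus the min), keeping both the
    positive and negative activations of a convolution.
    x: array of shape (channels, height, width); height and width are
    assumed divisible by `size`.
    Returns an array of shape (2 * channels, height // size, width // size).
    """
    c, h, w = x.shape
    # Split each feature map into non-overlapping size x size blocks.
    blocks = x.reshape(c, h // size, size, w // size, size)
    pos = blocks.max(axis=(2, 4))      # standard max pooling
    neg = (-blocks).max(axis=(2, 4))   # max of negated input = -min
    # Stack along the channel axis, doubling the channel count.
    return np.concatenate([pos, neg], axis=0)
```

Because the output has twice the channels of the input, a network using this layer can keep the same downstream width only by halving the kernel count of the preceding convolution, which is consistent with the "compacting the kernel" idea in the abstract.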
Number of pages: 12
Journal: IAENG International Journal of Computer Science
Publication status: Published - 1 May 2019
- CIFAR dataset
- Deep convolutional neural network
- Max-min pooling
- Shallow CNN classifier